Anti-IL5 Monoclonal Antibodies Reduce Asthma Exacerbations and Corticosteroids Dose in Three Eosinophilic Granulomatosis with Polyangiitis Case Reports
Eosinophilic Granulomatosis with Polyangiitis (EGPA) is a rare multisystem necrotizing vasculitis of small and medium-sized vessels with eosinophilic adult-onset asthma as part of its spectrum. Therapeutic choices include corticosteroids, immunosuppressive agents, and novel immunomodulators. Mepolizumab and benralizumab are monoclonal antibodies targeting Interleukin-5 (IL-5), which plays a leading role in every stage of eosinophil production and maturation; they are currently under evaluation and administered in steroid-dependent, relapsing, and/or refractory EGPA. Herein we describe the cases of three patients with a prior EGPA diagnosis who experienced frequent asthmatic exacerbations despite oral and inhaled corticosteroid treatment (two patients) or adverse effects of corticosteroids (one patient). Two patients are under treatment with mepolizumab and one patient with benralizumab as an add-on supplemental regimen. In our case series, anti-IL5 monoclonal antibodies proved to be efficient asthma-controlling and corticosteroid-sparing agents.
INTRODUCTION
Eosinophilic Granulomatosis with Polyangiitis (EGPA) is an Antineutrophil Cytoplasmic Antibody (ANCA)-associated vasculitis [1] with a prevalence of 15.27 cases per million people worldwide. [2] The spectrum of manifestations is variable and involves multiple systems. Predominant features include adult-onset asthma, peripheral and tissue eosinophilia, and extravascular eosinophil granulomata. Extrapulmonary clinical presentation may be marked by involvement of the cardiac, renal, gastrointestinal, and neural systems, the skin, and/or generalised constitutional symptoms. [1,3] The armamentarium of EGPA treatment comprises mainly corticosteroids and immunosuppressive agents, with biological therapy being the newest option. [3] Circulating interleukin-5 (IL-5) is increased in active EGPA and possibly interferes with production, maturation, activation, and tissue tropism of eosinophils. [4] Anti-IL5 antibodies are under investigation for their potential therapeutic role in different disease settings. Mepolizumab, reslizumab, and benralizumab target the IL-5 axis and are currently undergoing efficacy evaluation in steroid-dependent, relapsing, and/or refractory EGPA. [3] In view of the rarity of EGPA, the aim of this case series is to report three cases diagnosed with EGPA according to the American College of Rheumatology (ACR)/European Alliance of Associations for Rheumatology (EULAR) criteria [5] (Table 1), successfully treated with an anti-IL5 agent as an asthma-controlling and corticosteroid-sparing supplemental regimen.
CASE SERIES DESCRIPTION
Case 1
We report the case of a 36-year-old Caucasian female with a medical history compatible with EGPA (diagnosed nine years before) and difficult-to-treat asthma [6] (diagnosed two years prior to the EGPA diagnosis). In October 2013 she presented to the emergency department with worsening dyspnoea and a non-productive cough that had started four days earlier. She had tachycardia (150 beats/min), bilateral crackles on auscultation, a chest X-ray showing bilateral infiltrates, and type I acute respiratory failure. Laboratory analysis demonstrated elevated white blood cells with neutrophilia, elevated liver enzymes, high troponin, and elevated Pro-B-type natriuretic peptide. An emergency cardiology consultation highlighted severe left ventricular systolic dysfunction with an ejection fraction (EF) of 25% and a Pulmonary Arterial Systolic Pressure (PASP) of ≥65-70 mmHg. The patient was admitted to the Intensive Care Unit (ICU). Further investigation revealed significantly elevated IgE (2800 mg/dl), blood eosinophilia (6520 cells/μl), and bilateral ground-glass opacities, pulmonary infiltrates, and fibrotic bands on chest computed tomography (CT). ANCA, rheumatoid factor, and antinuclear antibodies proved negative. Cardiac Magnetic Resonance Imaging (MRI) revealed diffuse subendocardial enhancement. Bronchoalveolar lavage biopsies obtained through bronchoscopy were not suggestive of eosinophilic inflammation, given that the patient had been on systemic corticosteroids for several days. The patient was diagnosed with EGPA and received induction therapy as shown in Table 2. She gradually improved (eosinophilia resolved, EF 45-50%, PASP 25-30 mmHg) and was discharged from hospital receiving the treatment shown in Table 2. An episode of herpes zoster was reported, which may be ascribed to mepolizumab and/or corticosteroid treatment, but it was without severe clinical impact.
Case 2
We herein describe the case of a 44-year-old Caucasian female diagnosed with EGPA one year ago. During the last year she had two emergency hospitalisations for angioedema with concomitant eosinophilia, which subsided after prolonged per os corticosteroid treatment. She also developed polyarthritis and asthma. Her baseline prescribed medication (excluding acute-phase treatment), addressing the full spectrum of disease manifestations, is shown in Table 2. A follow-up ophthalmology examination seven months prior revealed a cataract, which is being closely monitored. As the cataract was diagnosed after two long courses of systemic corticosteroids, it was attributed to their use, and an immunomodulatory agent was considered as an add-on, steroid-sparing therapy. Mepolizumab at 300 mg by subcutaneous injection every four weeks was initiated. The patient had monthly re-examinations for six months, during which period she achieved complete withdrawal of corticosteroids.
No exacerbations of asthma, polyarthritis, or angioedema have been reported, adverse effects have not occurred, and the eosinophil count remained at 30 cells/mm3 throughout mepolizumab treatment.
Case 3
A 30-year-old Caucasian male diagnosed with EGPA three years ago was considered for anti-IL5 treatment with benralizumab (30 mg subcutaneously every four weeks for the first eight weeks and 30 mg subcutaneously every eight weeks thereafter) due to severe eosinophilic asthma with frequent exacerbations. The initial presentation of EGPA was diverse and involved multiple systems. General symptoms included fatigue and weight loss, and musculoskeletal symptoms consisted of thoracic and lower back pain. Eosinophilia and persistent hypokalaemia were prominent. The spectrum of pulmonary features was characterized by lung infiltrates and asthma. Neurological involvement manifested as sensorimotor neuropathy affecting the lower limbs with fluctuating symptom severity. Lower-limb vasculitis and purpura also occurred. Furthermore, thrombosis of the left testicle and of multiple sections of the renal cortex bilaterally was diagnosed. Digestive tract symptomatology, mainly abdominal pain, was frequent and led to upper and lower gastrointestinal endoscopy, which revealed gastric ulcers, gastritis, and oedematous mucosa of the rectosigmoid colon, respectively. The patient was treated with methylprednisolone 32 mg qd, esomeprazole 40 mg bid, spironolactone 25 mg qd, and azathioprine 50 mg qd as indicated. In March 2020 (approximately one and a half years after the initial diagnosis) the patient experienced a life-threatening vasculitis flare, which presented with worsening dyspnoea, orthopnoea, an elevated heart rate (125 beats/min), and oedematous lower limbs. Cardiology examination with heart ultrasound showed an EF of 20%. Spirometry results showed severe restriction (Table 3). The patient was immediately admitted to hospital and induction therapy was initiated as shown in Table 2. In 2021, benralizumab was administered as supplemental therapy for asthma and proved successful in reducing asthma exacerbations and improving spirometry results. At the latest follow-up examination (November 2022) the patient had excellent asthma control and normal spirometry values (Table 3), while achieving an effective tapering of methylprednisolone from 16 mg to 4 mg qd and a reduction of the eosinophil count to zero (Table 2).
DISCUSSION
EGPA is a necrotising small and medium-sized vessel vasculitis with different disease phenotypes according to ANCA status. [3,7] Symptomatology is heterogeneous, with considerable cross-sectional intra- and inter-patient variability. The hallmark of EGPA is prominent blood and tissue hypereosinophilia (>10% or >1500/mm3), often forming granulomata and implicated in cytotoxicity. The vast majority of patients are diagnosed with difficult-to-treat, [6] steroid-dependent asthma with frequent exacerbations despite optimal inhaled therapy. Other respiratory signs are rhinitis, sinusitis, lung infiltrates, pleural effusion, and alveolar haemorrhage. Generalised constitutional features of weight loss, fever, and myalgias are also commonly described. Eosinophil toxicity may affect the cardiac muscle and the digestive tract, causing cardiomyopathy and/or pericarditis and mucosal inflammation, respectively. Neural system involvement may take the form of sensorimotor mononeuritis multiplex or peripheral neuropathy, and skin manifestations include mainly palpable purpura and, less frequently, ulcers, nodules, or an urticarial rash. The extent of kidney involvement ranges from mild urinary sediment abnormalities to renal insufficiency due to glomerulonephritis. [1,3,7] The therapeutic array of EGPA comprises traditional agents (corticosteroids and immunosuppressants), novel agents (monoclonal antibodies), [8] and inhaled therapies for respiratory symptoms (bronchodilators and ICS). [9] Biologic treatment is designed to target a specific pathophysiologic route implicated in disease pathogenesis. Rituximab is a monoclonal antibody that binds to the CD20 antigen, uniquely present on antibody-producing B lymphocytes, leading to their elimination and presumably reducing ANCA toxicity. [9] Both mepolizumab, a humanised monoclonal antibody that targets IL-5, and benralizumab, an IL-5 receptor antagonist, neutralise the IL-5 axis, which participates in the eosinophil production and maturation process. [9] In the DREAM study, mepolizumab was initially investigated for severe eosinophilic asthma and proved to significantly reduce asthmatic exacerbations. [10] M. E. Wechsler et al. administered 300 mg of mepolizumab or placebo subcutaneously every 4 weeks in relapsing or refractory EGPA. This stand-alone randomised placebo-controlled trial highlighted increased collective remission rates and emphasized the clinician's ability to de-escalate the corticosteroid dose. [11] Mepolizumab was initially licensed to treat EGPA in the USA, and in September 2021 it received formal approval to control asthmatic symptomatology in EGPA patients in Europe. Benralizumab has been approved for eosinophilic asthma; however, its off-label use in mitigating EGPA asthmatic exacerbations has lately been acknowledged in a series of open-label trials and case reports. [12,13] The MANDARA trial, a randomised, double-blind, active-controlled 52-week study with an open-label extension to evaluate the efficacy and safety of benralizumab compared to mepolizumab, and the BITE open-label study of benralizumab are two ongoing studies that may bring new, high-quality insights into implementing benralizumab in the treatment of EGPA. [14,15]
The ACR/Vasculitis Foundation proposes a guideline for the management of ANCA-associated vasculitis: remission may be managed with corticosteroids in both life- or organ-threatening and non-threatening vasculitis manifestations, as high-dose monotherapy or in combination with methotrexate, azathioprine, mycophenolate mofetil, rituximab, or mepolizumab, respectively. Another preferred option for induction in severe EGPA is cyclophosphamide or rituximab over mepolizumab. Tapering of corticosteroids after successful remission is individualised [16] and should be considered because of the adverse effects implicated in their use. [8] Remission is preferentially maintained with methotrexate, azathioprine, or mycophenolate mofetil. [16] The above recommendations are supported by low-quality evidence, and considerable variability is witnessed in real-life settings. Our EGPA case series presents a diversity of symptomatology, with evident eosinophilic asthma in all patients. Cases 1 and 2 were treated with mepolizumab, while Case 3 received benralizumab, added to their standard-of-care medication. Notwithstanding their heterogeneity, all patients were naïve to any biologic therapy and responded remarkably to an anti-IL-5 agent. Anti-IL5 treatment was particularly effective in reducing asthmatic exacerbations (Cases 1 and 3) and had a corticosteroid-sparing effect (partial in Cases 1 and 3, complete withdrawal in Case 2). Furthermore, patients were followed for a minimum of one year to a maximum of five years with continuous administration of monoclonal antibodies and no major side effects apart from one episode of herpes zoster in Case 1. In our case series, mepolizumab was administered subcutaneously at 300 mg every four weeks; however, a mepolizumab dose of 100 mg every four weeks may also be suggested as an alternative. The above is supported by a large European multicentre observational cohort including 203 patients with EGPA divided into two subgroups receiving mepolizumab at 100 mg or 300 mg every four weeks. The two subgroups not only achieved similar response and relapse rates but also effective control of respiratory symptoms and reduction in oral corticosteroid dose. [17] Possible beneficial outcomes of mepolizumab treatment in EGPA's neurologic manifestations were recently investigated, with a single-arm observational study by Nakamura et al. and a case report by Kai et al. demonstrating improvement in neuropathic pain and paresthesia. [18,19] In a unique case report, a patient with EGPA and mononeuritis multiplex who did not initially respond to mepolizumab experienced sustained resolution of mononeuritis multiplex after being treated with benralizumab. [20]
Case 3, receiving benralizumab, was the only patient in our case series with lower-limb sensorimotor neuropathy; however, his neuropathy had slowly subsided while he was treated with corticosteroids, cyclophosphamide, and mycophenolate mofetil before anti-IL5 agent initiation. Sustained neurologic remission was observed during benralizumab treatment. In accordance with previous results, both mepolizumab and benralizumab proved efficacious and should be considered as steroid-sparing and asthma-control agents in EGPA. The holistic pathogenesis and management of EGPA remain elusive; however, discovering specific disease-driving routes (e.g., the IL-5 axis) and targeting their significant components represents a major breakthrough in understanding the disease and its treatment, respectively. More research is required to elucidate the indications, contraindications, precautions, dosage, and interactions of monoclonal antibodies when used to treat EGPA manifestations safely and effectively.
CONCLUSION
In EGPA, targeted blockade of the IL-5 axis with mepolizumab or benralizumab is a viable option for mitigating asthmatic exacerbations and achieving an oral corticosteroid (OCS) dose reduction, though its specific context of use needs to be further ascertained with high-quality randomised controlled trials.
AUTHOR CONTRIBUTIONS
ES: study concept and design, data acquisition, analysis and interpretation, manuscript drafting and critical revision for important intellectual content; MK, DZ: study concept and design, acquisition, analysis and interpretation of data, and critical revision for important intellectual content; PV, KK: study concept, analysis and interpretation of data, and critical revision for important intellectual content. All the authors have read and approved the final version of the manuscript and agreed to take full responsibility for the integrity and accuracy of all aspects of the work.
Red Flags and Adversities on the Way to the Robust CE-ICP-MS/MS Quantitative Monitoring of Self-Synthesized Magnetic Iron Oxide(II, III)-Based Nanoparticle Interactions with Human Serum Proteins
The growing interest in superparamagnetic iron oxide nanoparticles (SPIONs) as potential theranostic agents is related to their unique properties and the broad range of possibilities for their surface functionalization. However, despite the rapidly expanding list of novel SPIONs with potential biomedical applications, there is still a lack of methodologies that would allow in-depth investigation of the interactions of those nanoparticles with biological compounds in human serum. Herein, we present attempts to employ capillary electrophoresis-inductively coupled plasma tandem mass spectrometry (CE-ICP-MS/MS) for this purpose, together with the various obstacles and limitations noticed during the research. The CE and ICP-MS/MS parameters were optimized, and the developed method was used to study the interactions of two different proteins (albumin and transferrin) with various synthesized SPIONs. While satisfactory resolution between the proteins was obtained and the method was applied to examine individual reagents, it was revealed that the conjugates formed during incubation of the proteins with SPIONs were not stable under the conditions of electrophoretic separation.
Introduction
Despite the COVID-19 outbreak in late 2019, cancer is still the second leading cause of death worldwide and a significant problem in public health care [1,2]. Traditional cancer therapies such as chemotherapy or radiation therapy have been associated with adverse side effects. In order to improve selectivity toward cancer cells and thus limit systemic toxicity, new strategies are sought. Of great interest are nanomaterials and their use as potential theranostic agents [3]. Cancer theranostics is a relatively new medical approach that combines diagnosis and therapy to remove a solid tumor in a non-invasive way. Integration of those elements in a single nanoplatform allows for real-time monitoring of treatment progress and efficiency [4].
There are many nanomaterials with the potential to be used in cancer treatment, and among them, superparamagnetic iron oxide nanoparticles (SPIONs) are particularly noteworthy due to their properties. They consist of iron oxide in the form of magnetite (Fe3O4). As a consequence of their size being below 20-30 nm (with only one magnetic domain within one nanoparticle), they exhibit superparamagnetic properties: they can reversibly change their magnetic moment after exposure to an external magnetic field [5,6]. In our previous work, an ICP-MS/MS detection method was elaborated to overcome the spectral interferences occurring for iron. That optimized method was primarily used to investigate commercially available, differently charged SPIONs. In addition, simple tests were also carried out to examine the method's applicability in matrices containing proteins, namely albumin. However, while the optimized ICP-MS/MS methodology can be effectively used to monitor signals from both SPIONs and human serum proteins, the separation method was insufficient to observe changes in the synthesized nanoparticles (NPs) in the presence of whole human serum. The main limitation was the poor resolution of signals.
For the abovementioned reasons, we decided to develop a novel CE-ICP-MS/MS method in the frame of this study. During the optimization process, the emphasis was placed on improving the method's resolution to obtain acceptable separation of the signals from the two most abundant human serum transport proteins: albumin and transferrin. The optimized methodology was then used to portray the interactions of self-synthesized SPIONs with single serum proteins. Why self-synthesized? While their quality and size distribution do not always match those of commercially available SPIONs, this study aimed to provide a basis for methodologies that could be used to investigate newly designed nanoparticles with various surface modifications and their interactions with biological compounds in laboratory environments. This work describes the successive stages of the research and the multiple problems noticed along the investigation pathway.
ICP-MS/MS Detection Method Optimization
A previously elaborated ICP-MS/MS methodology was used as the basis for the current research; the original method can be found in the previous study [29]. Some of the detection parameters were marginally changed in order to obtain higher signal intensity. Moreover, additional instrument tuning was performed, with more detailed optimization of the collision/reaction gas flow rate. The main criteria for choosing a specific parameter value were the signal intensity and the signal-to-noise ratio for the sulfur (32S16O+) and iron (56Fe16O+) signals. The best results were obtained for oxygen at 30% of the maximum flow rate (0.45 mL/min). The effect of evaluating the oxygen flow rate in the collision/reaction cell (CRC) is presented in ESM Figure S1, and the final operational parameters are summarized in Table 1.
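To make the tuning criterion concrete, the sketch below shows how such a flow-rate screen could be summarized numerically: for each mass-shift signal, compute the signal-to-noise ratio at each candidate O2 flow and keep the best. All flow rates and count values are hypothetical placeholders, not the measured data behind ESM Figure S1.

```python
# Sketch: pick the collision/reaction-cell O2 flow rate that maximizes
# signal-to-noise for the 56Fe16O+ and 32S16O+ mass-shift signals.
# All numbers below are hypothetical placeholders, not measured data.

flow_rates = [0.15, 0.30, 0.45]            # mL/min (0.45 = 30% of max)
signals = {                                 # mean on-peak counts (cps)
    "56Fe16O+": [1.2e4, 2.8e4, 4.1e4],
    "32S16O+":  [3.0e3, 7.5e3, 1.1e4],
}
noise = {                                   # baseline standard deviation (cps)
    "56Fe16O+": [120, 150, 160],
    "32S16O+":  [90, 100, 105],
}

def best_flow(flows, sig, nse):
    """Return the flow rate with the highest S/N ratio."""
    sn = [s / n for s, n in zip(sig, nse)]
    return max(zip(sn, flows))[1]

for isotope in signals:
    flow = best_flow(flow_rates, signals[isotope], noise[isotope])
    print(f"{isotope}: best O2 flow = {flow} mL/min")
```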
CE Separation Method Optimization
The main objective of the CE method optimization was to obtain the highest separation efficiency and resolution between the signals corresponding to the specified reagents and products. Without distinctly separated signals corresponding to the proteins, it would be impossible to clearly determine the composition of the conjugates formed in a mixture of SPIONs and proteins. The separation conditions were optimized using two samples: one with albumin and transferrin (1 mg/mL each) and the other containing commercially available SPIONs with carboxyl-functionalized groups (Fe3O4@COOH, 15 µg/mL Fe), all diluted with incubation buffer.
Various combinations were investigated throughout the research, e.g., different background electrolyte (BGE) configurations, sample stacking techniques, or elements of isotachophoretic separation. Among those, Tris hydrochloride as the BGE in standard CE mode generated the best results. While good signal quality (shape, width at the base) was desired, the choice was made based on the resolution between the proteins and the stability of the interface; reagents that could cause aspiration problems in the long run had to be discarded. Next, three different concentrations of Tris hydrochloride buffer (containing 5, 10, and 20 mM Tris hydrochloride, with pH adjusted to 7.4 using NaOH) were tested, but it was noted that an increase in this parameter was associated with significant deterioration in signal quality and an unstable baseline. For this reason, 5 mM Tris hydrochloride was selected as the BGE. NaOH solution was then added to adjust the pH to 7.4 and simulate the physiological environment. For the same reason, the capillary temperature was set to 37 °C during all analyses.
In the next step, the effect of the sample injection volume was examined. As expected, reducing the sample loading improved the albumin and transferrin separation factors (ESM, Figure S2). Hydrodynamic injection at 20 mbar for 5 s was chosen for further analyses. Afterward, the voltage applied to the capillary was selected. Lowering the voltage, despite significantly improving the separation of the protein signals in relation to the SPIONs, also reduced signal quality: an increase in peak width resulted in a lower resolution between albumin and transferrin. A voltage of +15 kV was chosen as a compromise between the quality of the monitored signals and the separation between the proteins (ESM, Figure S3).
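The resolution criterion driving these choices can be quantified with the standard chromatographic definition Rs = 2(t2 − t1)/(w1 + w2). A minimal sketch, with illustrative migration times and baseline peak widths rather than the paper's measured values:

```python
def resolution(t1: float, t2: float, w1: float, w2: float) -> float:
    """Baseline resolution between two peaks: Rs = 2*(t2 - t1)/(w1 + w2).
    t1, t2: migration times; w1, w2: peak widths at the base (same units)."""
    return 2.0 * (t2 - t1) / (w1 + w2)

# Hypothetical albumin/transferrin peaks (minutes); Rs >= 1.5 is the usual
# threshold for baseline-separated signals.
rs = resolution(t1=8.2, t2=9.6, w1=0.7, w2=0.8)
print(f"Rs = {rs:.2f}")  # -> Rs = 1.87
```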
The optimized parameters are presented in Table 2, while the electropherograms for the separations of the samples containing the SPIONs or proteins obtained under those conditions are shown in ESM, Figure S4.
Evaluation of the Optimized Method
During the subsequent experimental step, the method's analytical performance was verified. For each analyte, a calibration curve was determined. Additionally, capillary recovery and inter- and intra-day repeatability parameters for peak area and migration time were calculated. Standard solutions of Fe3O4@COOH SPIONs (Fe concentrations ranging from 1 to 50 µg/mL), albumin (0.25 to 5 mg/mL), and transferrin (0.25 to 4 mg/mL) were analyzed under the optimized conditions (ESM, Figures S5-S7). The linearity of the obtained curves was at a satisfactory level (R² ≥ 0.995), as were the recovery (≥80%) and the intra-day (relative standard deviation, RSD ≤ 5%) and inter-day (RSD ≤ 8%) repeatability for the migration times (Table 3). Unfortunately, unacceptable values were obtained for the inter-day peak areas, proving the necessity of same-day calibration when carrying out quantitative analysis.
The possible explanation of this phenomenon relates to at least a few aspects. Firstly, the interface between CE and ICP-MS/MS is a very delicate construction susceptible to various factors. After each experimental session, it is deconstructed due to the need to rinse individual elements and capillaries. Perfect reconstruction of the previous position during the next session may prove a challenge. Next, the nebulizer setup affects the draw rate of the sheath liquid and the flow rate of the sample, and thus the signal intensity for the monitored masses. On the other hand, the inner surface of the capillary may undergo undesirable modifications by the compounds present in the samples or the BGE configurations during subsequent analyses, affecting the complete migration of the reagents. An adverse effect of the analytes on the capillary was observed during the research on SPIONs and proteins. Despite frequent rinsing, the signal quality, resolution, and capillary recoveries gradually deteriorated. For this reason, regular capillary replacement was required (every 2 weeks). The nebulizing capillary can also become clogged, even with systematic rinsing. Because of all these factors, the method may not be suitable for quantifying proteins and SPIONs in a routine manner unless the calibration curves are prepared during the same experimental session.
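As an illustration of how the figures of merit reported above (R², RSD of replicate migration times) can be computed, a short sketch follows; the concentration levels echo the text, but the peak areas and replicate times are invented placeholders, not the study's data.

```python
import numpy as np

def calibration_metrics(conc, peak_areas):
    """Least-squares calibration line and its coefficient of determination R^2."""
    slope, intercept = np.polyfit(conc, peak_areas, 1)
    pred = slope * np.array(conc) + intercept
    ss_res = np.sum((np.array(peak_areas) - pred) ** 2)
    ss_tot = np.sum((np.array(peak_areas) - np.mean(peak_areas)) ** 2)
    return slope, intercept, 1 - ss_res / ss_tot

def rsd(values):
    """Relative standard deviation in percent (the repeatability criterion)."""
    return 100.0 * np.std(values, ddof=1) / np.mean(values)

# Hypothetical Fe3O4@COOH calibration (Fe in ug/mL vs peak area) and
# replicate migration times; acceptance: R^2 >= 0.995, intra-day RSD <= 5%.
conc = [1, 5, 10, 25, 50]
areas = [2100, 10300, 20650, 51800, 103200]
slope, intercept, r2 = calibration_metrics(conc, areas)
print(f"R^2 = {r2:.4f}")
print(f"intra-day migration-time RSD = {rsd([7.02, 7.05, 6.98, 7.10]):.2f} %")
```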
Investigation of Synthesized SPIONs
Before the SPION-protein interactions were investigated, the method was used to scrutinize individual NPs and their stability during electrophoretic separation. Six different types of SPIONs were tested: commercially available (Fe3O4@COOH, Sigma-Aldrich, St. Louis, MO, USA) and synthesized: non-modified (Fe3O4), doped with Au (Fe3O4@Au), and stabilized with polyethylene glycol (Fe3O4@PEG), polyethyleneimine (Fe3O4@PEI), or sodium citrate (Fe3O4@Citr). Initially, the research was conducted using nanoparticle cores obtained as described in the previous study [20]. However, due to their large size distribution, intense interactions with the capillary occurred, resulting in poor-quality signals. At that point it was clear that more stringent conditions were needed during the SPIONs' synthesis, and only the newly synthesized types of NPs were employed in the following experiments. Moreover, to allow brief comparison, the cores of the non-doped synthesized nanoparticles were obtained in the same way, while the different stabilizers used for surface modification determined the charge of the nanomaterial and its electrophoretic mobility. This resulted in a group of nanoparticles with identical superparamagnetic core morphology but substantially different surface characteristics. Table 4 shows the ζ-potential (zeta potential) of the synthesized SPIONs with different stabilizers. The ζ-potential is a parameter that provides information about the charge of the nanoparticles and their tendency to form aggregates or remain discrete (regarding electrostatic stabilization). Based on the obtained values, Fe3O4@PEI and Fe3O4@Citr should be the most stable. Table 4. ζ-Potential of synthesized SPIONs (n = 3).
As seen in Figure 1, surface modification is not the only factor determining the values of the SPIONs' migration times. In some cases, it rules out the possibility of detecting their signal within a reasonable analysis time. The best-quality signals among the synthesized NPs were registered for Fe3O4@Citr. The strong signal disturbance for non-modified Fe3O4 and Fe3O4@PEG was caused by their agglomeration. Despite this phenomenon, the obtained results allow their average migration times to be established. On the other hand, the lack of a distinctive signal in the case of Fe3O4@PEI suggests the presence of sufficiently strong interactions between the inner capillary surface (-SiO− groups) and polyethyleneimine (-NH3+ groups), which resulted in the NPs remaining in the capillary during the measurement. It is worth mentioning that analysis of the commercially available Fe3O4@COOH SPIONs (with an amphiphilic polymer coating) yielded improved signal parameters. Still, the modification enormously reduced their magnetic properties, which can well explain the ease of their analysis and signal detection. Finally, doping the nanoparticles with gold significantly impacted their structure, which resulted in both a reduction in magnetic properties and reduced signal quality.
Synthesized SPIONs were also characterized: scanning-transmission electron microscopy (STEM) images for Fe3O4@Citr and Fe3O4@PEI were obtained (Figure S8). The diameter of the Fe3O4 nanoparticles in both cases does not exceed 10 nm, and we expect that other types of nanoparticles will have similar sizes. This can be explained by the fact that the entire family of nanoparticles was synthesized according to the well-known co-precipitation methodology [30,31]. Irrespective of the type of ligand, the same volume of ammonia was used in each case, ensuring the medium's pH stability at the stage of precipitation of iron hydroxides and nucleation of nanostructures. The different types of ligands were added 10 min after the precipitation, when the nanoparticles were already mainly formed, and the presence of the stabilizer had no significant effect on the diameter of the Fe3O4 cores obtained. In this case, the ligand only determines the nature of the surface and protects against aggregation.
Interactions of SPIONs with Albumin and Transferrin
Despite the low stability and polydispersity due to the agglomeration of the SPIONs in buffer solutions, in some cases the influence of proteinaceous media improves these parameters [20]. Consequently, in the last step of the examination, the applicability of the method for studying the SPIONs' behavior in proteinaceous media was examined. Two different proteins were chosen as potential reagents: albumin (the most abundant protein in human serum) and transferrin (a potential tumor-targeting ligand [32]). Two different samples were prepared for each type of NP, and their separation results were compared after six hours of incubation. Figure 2 shows Fe3O4@Citr SPIONs incubated with transferrin (a) and albumin (b). The obtained results suggest the absence of nano-protein conjugates, as there is no visible overlap of the sulfur and iron signals' migration times, such an overlap being what could be assigned as proof of protein corona formation. In the case of the transferrin sample, it was confirmed that a small peak (signal no. 3) corresponds to iron naturally present in the structure of this protein (25% of total transferrin is present in the bloodstream in the form of iron-saturated holo-transferrin) [33]. However, the differences in the migration times of two iron signals (no. 4 and 6) compared to the previously established time for Fe species (signal no. 7), and the visible correlation between the times of signals no. 4 and 6 and those corresponding to the proteins (which contain sulfur, no. 2 and 5), support the hypothesis of a possible interaction between the reagents in both samples. The presence of proteins was notably the reason for the shift in the SPIONs' migration times. As the eventual explanation for the correlation between the changes in the signals' migration times, we point to protein corona formation in the sample and its breakdown during the separation of the magnetic nanoobjects in the electric field. Similar to surface modification, proteins coating the NP surface determine the value of their migration times. The mentioned phenomenon is probably caused by the SPIONs' interaction with the inner capillary wall due to their magnetism. To confirm that the change in the SPIONs' migration time was related to protein corona formation, the sample of Fe3O4@Citr incubated with albumin was subjected to ultrafiltration in order to remove the excess of free proteins. Then the >100 kDa fraction was subjected to analysis under the optimized conditions. Despite the decrease in signal intensity (caused by the reduced recovery of conjugates from the filter), a similar observation (in comparison to the samples before ultrafiltration) was noted, confirming that the conjugates were present in the sample just before the electrophoretic analysis (ESM, Figure S9). Moreover, the lack of signals corresponding to the SPIONs in the filtrate (<100 kDa) was also noticed, suggesting that only free proteins passed through the filter.
Analogous tests were carried out for the other synthesized SPIONs, and similar phenomena occurred in the case of Fe3O4@PEG (and non-modified Fe3O4, see ESM, Figure S10). As can be seen in Figure 3, the SPION signals (no. 3 and 5) have migration times longer than the transferrin (no. 2) and albumin (no. 4) signals, and shifted to shorter times than before the interaction with proteins. However, the signal quality for these types of SPIONs is worse due to their tendency to agglomerate. While PEG stabilization is primarily related to the strong steric effect (independent of the ζ-potential value and the charge of the NPs), the agglomeration of Fe3O4@PEG may be caused by the vulnerability of the PEG coating to substitution due to its low affinity for the SPIONs. Concerning Fe3O4@Au, despite the inferior quality of the nanomaterial signals, incubation with albumin allowed stabilization of the nanomaterial. Moreover, analogous to previous analyses, a visible peak without disturbance appeared following the signal originating from albumin (ESM, Figure S11). In the case of Fe3O4@PEI, no iron signals were detected on the electropherograms despite incubation with proteins. As with the other NPs, the reason can be attributed to the breakdown of the already-formed protein corona during electrophoretic separation and, thus, the retention of the SPIONs in the capillary. By contrast, in the case of Fe3O4@Au, the presence of albumin on the surface in all likelihood stabilized the nanomaterial structure, and the signal was noticed on the electropherogram. Investigation of the SPIONs in more complex samples was also carried out. Fe3O4@Citr was incubated for 6 h with 10-times-diluted human serum in order to study the behavior of the synthesized nanoparticles in the presence of other matrix compounds. As albumin constitutes about 50-60% of the total plasma proteins, the obtained electropherogram (Figure S12) was similar to Figure 2b. Unfortunately, the examination of the interactions between SPIONs and other human serum proteins is laborious in real samples due to the still-insufficient resolution between many compounds and, on top of that, the high concentration of albumin (38-50 g/L, in relation to, e.g., 2-3 g/L of transferrin), which can obscure other signals.
Considering previous research [29], in which protein corona formation was confirmed based on the overlap of albumin and nanoparticle signals, the question arises: what is the reason for the difference in the obtained results between the commercially available and self-synthesized NPs? The answer can be strictly related to the different magnetism of the investigated nanomaterials (strong for the synthesized ones, poor in the case of the commercial product), as, undoubtedly shown above, its presence strongly affects the processes taking place in the separation capillary. In summary, the purchased SPIONs displayed poor magnetic properties, so their migration in the capillary during electrophoretic separation was predictable, and the biologically formed species were stable. In contrast, the currently used SPIONs exhibit remarkable magnetic properties, and thus their behavior in the capillary after applying the voltage resulted in the decomposition of the nano-bio forms or their further in-capillary changes. On the other hand, the mentioned issues can be treated as a premise that strongly superparamagnetic nanomaterials are not able to form durable interactions (bonds?) with proteins. This observation sheds new light on the necessity of changing the procedures for discerning investigation of the protein corona.
Chemicals
All reagents used throughout this study were at least of analytical grade. Iron(II) chloride tetrahydrate, iron(III) chloride hexahydrate, polyethyleneimine (branched, Mw = 25,000 Da), sodium chloride, sodium hydroxide, sodium hydrogen phosphate, sodium dihydrogen phosphate, and gold(III) chloride trihydrate were obtained from Sigma-Aldrich (St. Louis, MO, USA). The 25% ammonium hydroxide was the product of Chempur (Piekary Śląskie, Poland). Tris hydrochloride (tris(hydroxymethyl)aminomethane hydrochloride), sodium citrate dihydrate, and iron and sulfur ICP standard solutions were purchased from Merck Millipore (Darmstadt, Germany). Vanadium standard solution was obtained from Fluka (Buchs, Switzerland). Poly(ethylene glycol) (Mw = 800 Da) was the product of Thermo Scientific (Geel, Belgium). Albumin and transferrin from human serum and superparamagnetic iron oxide nanoparticles with an amphiphilic polymer coating terminated with carboxyl groups (25 nm core size, Fe3O4@COOH) were obtained from Sigma-Aldrich (St. Louis, MO, USA). Methanol (LC-MS grade) was the product of POCh (Gliwice, Poland). The nitrogen of purity ≥99.999% applied during the SPIONs' synthesis and the oxygen of purity ≥99.999% used for the collision/reaction cell in ICP-MS/MS were purchased from Messer (Bad Soden, Germany). Ultrapure Milli-Q water was obtained from a Millipore Elix 3 system (Merck Millipore) and used throughout this study.
SPIONs Synthesis
The reaction was carried out at room temperature, under an inert gas (nitrogen) atmosphere, with constant stirring at a speed of 2000 rpm. In a 250 mL three-neck flask, 4.886 g FeCl3·6H2O and 2.982 g FeCl2·4H2O were dissolved in 120 mL of water. The solution was deoxygenated by nitrogen purging for 10 min and heated to 90 °C. Then, after the iron salts had dissolved entirely, 15 mL of a 25% aqueous ammonia solution was quickly injected into the flask. After another 10 min, 20 mL of one of the selected stabilizers was added as an aqueous solution. Three different stabilizers were used: branched polyethyleneimine (20 mg/mL), polyethylene glycol (20 mg/mL), and sodium citrate (20 mg/mL). In synthesizing non-stabilized nanoparticles, the above-described step was omitted. Stirring and heating under the inert gas atmosphere were continued for 2 h. After the end of the synthesis, the cooled nanoparticle suspension was purified six times by magnetic decantation and washed with ultrapure water. In the case of the Au-doped nanoparticles, the reaction was carried out under similar conditions but on a different scale: namely, 488.6 mg FeCl3·6H2O and 298.2 mg FeCl2·4H2O were dissolved in 120 mL of water. In parallel, a mixture of NaOH and HAuCl4 was prepared by adding 3.3 mL of 100 mM aqueous HAuCl4 to 7.5 mL of 1 M NaOH. The molar ratio of Fe to Au precursors was 10:1. Then, when the iron salts were completely dissolved, the mixture of NaOH and HAuCl4 was quickly added to the flask and stirred continuously at 90 °C under a nitrogen atmosphere for 2 h. After the synthesis was completed, the nanoparticle suspension was cleaned three times and separated from the gold nanoparticles formed in parallel by magnetic decanting. All finished NPs were suspended in ultrapure water. The presence of Au in the doped nanoparticles was confirmed by optical emission spectrometry (AvaSpec AVS-Desktop-USB2, Avantes, Apeldoorn, The Netherlands) [34].
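As a quick arithmetic cross-check of the Au-doped protocol, the stated 10:1 Fe:Au molar ratio can be recomputed from the quoted masses and volumes (the molar masses below are standard values):

```python
# Cross-check of the stated Fe:Au molar ratio (10:1) in the Au-doped
# synthesis, from the masses and volumes given above.

MW_FECL3_6H2O = 270.30   # g/mol, FeCl3.6H2O
MW_FECL2_4H2O = 198.81   # g/mol, FeCl2.4H2O

n_fe3 = 0.4886 / MW_FECL3_6H2O   # mol Fe(III) from 488.6 mg
n_fe2 = 0.2982 / MW_FECL2_4H2O   # mol Fe(II) from 298.2 mg
n_au = 3.3e-3 * 0.100            # mol Au: 3.3 mL of 100 mM HAuCl4

ratio = (n_fe3 + n_fe2) / n_au
print(f"Fe:Au = {ratio:.2f}:1")  # -> ~10.0:1, matching the text
```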
Sample Preparation
All samples were prepared in an incubation buffer simulating the physiological environment (10 mM phosphate buffer containing 100 mM NaCl, pH 7.4). Commercially available SPIONs were diluted to 15 µg/mL Fe. Synthesized nanoparticles were subjected to brief vortexing (MX-S Vortex, Scilogex, Rocky Hill, CT, USA) before being diluted 1000 times (final Fe concentration in the range of 10 to 20 µg/mL). Both albumin and transferrin samples were diluted to 1 mg/mL. Human serum protein-SPION samples were prepared by adding the NP solution to an individual protein. Samples were incubated at a temperature of 37 °C and stirred at 400 rpm for the desired time using a MultiTherm incubator (Benchmark, Lodi, NJ, USA). The Fe3O4@Citr-albumin sample was subjected to ultrafiltration through a 100 kDa filter (Amicon Ultra 0.5 mL) at 6000 rcf for 40 min in normal and then reversed mode using an MPW-352RH centrifuge (MPW Med. Instruments, Warsaw, Poland).
CE-ICP-MS/MS Instrumentation
Analyses were conducted on the CE-ICP-MS/MS hyphenated system: an Agilent 7100 CE system and an Agilent 8900 ICP tandem mass spectrometer (Agilent Technologies, Waldbronn, Germany) working in MS/MS mode with a collision/reaction cell. A CEI-100 nebulizer interface (Teledyne CETAC Technologies, Omaha, NE, USA) equipped with a low-volume spray chamber and a cross-piece merging the sheath liquid flow (containing 10 ppb 51V in 10-times-diluted BGE) was used as the liquid introduction system. The CE electrical circuit was closed by a platinum wire and maintained by the constant flow of the sheath liquid.
Vanadium, used as an internal standard in the sheath liquid, allowed observation of the hyphenation stability and nebulization efficiency during the analyses. These were initiated when the 51V16O+ signal was sufficiently high (counts per second, cps, >4000) and stable (relative standard deviation, RSD, <2%). Agilent MassHunter 4.5 Workstation (version C.01.06) and Agilent OpenLab ChemStation (version C.01.09) software were used for instrument control and data analysis.
ζ-Potential Measurement
ζ-potential measurements of the synthesized SPIONs were carried out in disposable polystyrene cuvettes using a Zetasizer Nano ZS device (Malvern Panalytical, Malvern, UK) equipped with a dip cell with palladium electrodes (Malvern). Suspensions of the different SPIONs were diluted with water to obtain the appropriate optical density for ζ-potential measurements and then briefly (~10 s) sonicated (ultrasonic cleaner Sonic-5, Polsonic, Warsaw, Poland) before analysis. All measurements were conducted at 25 °C after 60 s of temperature stabilization and replicated three times for each type of NP.
Morphology Characterization of Synthesized SPIONs
Scanning-transmission electron microscopy (STEM) micrographs of the Fe3O4@Citr and Fe3O4@PEI nanoparticles were captured at 30.0 kV accelerating voltage using a Hitachi SU8230 ultra-high-resolution field-emission scanning-transmission electron microscope (Hitachi High-Technologies Corporation, Tokyo, Japan) with copper TEM grids coated with Lacey carbon film.
Conclusions
To summarize, many problems and red flags emerged during the application of CE-ICP-MS/MS to studying SPION-human serum protein transformations. Firstly, high resolution between the SPION and protein signals is required for the method to be applicable to these analytes. While analysis of individual reagents is usually possible, samples containing self-synthesized, strongly magnetic SPIONs incubated with serum proteins are highly challenging. Next, the operational parameters need to be chosen as a compromise between two very different classes of compounds, and, because of the variety of surface modifications, it is not easy to prepare one versatile method that could be used to study samples containing different SPIONs. On top of that, the protein corona's lack of durability under electrophoretic separation conditions is a significant limitation in the investigation of NP changes.
The study concludes that, at this stage of the research, we are standing at a crossroads, deliberating how to achieve robust speciation analysis of such a demanding analyte as conjugates of synthesized SPIONs with proteins, which is burdened with high spectral interferences and for which even a high-resolution separation technique such as capillary electrophoresis is not enough. Moreover, while the magnetic properties of SPIONs are sought after for their potential applications, they are also a source of difficulty in their reliable analysis. The mentioned issues may be addressed by selective modification of the inner capillary walls, which outlines our future tasks.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules27238442/s1. Figure S1: Evaluation of the oxygen flow rate in the collision/reaction cell based on the S/N and signal intensity for the 56Fe16O+ (a) and 32S16O+ (b) signals. The evaluation was based on a solution containing iron standard (50 ng/mL) and sulfur standard (250 ng/mL) in 2% HNO3; Figure S2: Effect of the injection volume on the resolution of albumin (1 mg/mL), transferrin (1 mg/mL), and carboxyl-SPIONs (15 µg Fe/mL) for the samples of proteins and SPIONs measured independently. BGE: Tris hydrochloride 5 mM, pH 7.4, voltage: +18 kV, MS/MS signals: 56
Association between sensation, perception, negative socio-psychological factors and cognitive impairment
Background: Evidence has suggested that sensation and socio-psychological factors may each be associated with cognitive impairment in older adults. However, the combined association between those risk factors and cognitive impairment is still unknown. Objective: To investigate the association between sensation, perception, negative socio-psychological factors, and cognitive impairment in institutionalized older adults. Methods: From two public aged care facilities, 215 participants were investigated. The Mini-Mental State Examination was applied to assess cognitive function. Sensory function was bifurcated into auditory and somatosensory realms, which were evaluated using pure tone audiometry and the Nottingham Sensory Assessment, respectively. Albert's test, left-right resolution, and visuospatial distribution were used to evaluate perception. Depression and social isolation were selected as negative socio-psychological factors and were evaluated by the Geriatric Depression Scale and the Lubben Social Network Scale. The multivariate analysis was performed using binary logistic regression. Results: Participants with moderately severe or severe hearing loss exhibited significant cognitive impairment compared to those with mild hearing loss. Perceptual dysfunction and depression were independently related to cognitive impairment. However, there was no significant association between somatosensory function, social isolation, and cognitive impairment in the institutionalized older adults. Conclusion: More profound hearing loss, abnormal perception, and depression are associated with cognitive impairment in older adults. Subsequent research should delve into the causal mechanisms underpinning these associations and explore whether combined interventions have the potential to postpone the onset of cognitive impairment.
Introduction
Older adults who experience cognitive impairment, which may or may not progress to dementia, face poor quality of life and an enormous social and economic burden [1]. Identifying modifiable risk factors for cognitive impairment is highly prioritized, as it could be an important basis for providing effective and targeted early prevention strategies in the absence of currently effective treatment [2]. Aging presents great challenges to older adults' sensation, perception, and psychosocial functioning, which may further pose a threat to cognitive function.
Sensory dysfunction is a common age-related condition that affects older adults. By 2050, it is estimated that approximately 2.5 billion people worldwide will experience hearing loss, with up to one-third of older adults in China already affected [3]. Hearing loss, the most prevalent sensory dysfunction in older adults, has been associated with cognitive impairment. It has been shown that older adults with hearing loss have a 24% higher risk of cognitive impairment than those without [4]. This association between hearing loss and cognitive function remains significant even after adjusting for visual impairment [5]. Recent evidence suggests that this link may be attributed to the reallocation of cognitive processing resources to degraded auditory signals and to declines in right temporal lobe and brain volume [6].
Somatosensory impairment is another common sensory dysfunction among older adults, with a previous study reporting a prevalence of 26% among those aged 65-74 years and 54% among those 85 and older [7]. Research indicates that somatosensory impairment may increase the risk of cognitive impairment and even dementia [8]. Notably, early involvement of the somatosensory cortex has been observed in the progression of Alzheimer's disease, potentially leading to behavioral and functional consequences [8]. However, limited research has focused on examining the association between somatosensory abnormalities and cognitive impairment.
Perception, which encompasses figure discrimination, spatial perception, and orientation, plays a crucial role in the cognitive process [9]. Co-existing perceptual and cognitive deficits can magnify cognitive impairment, potentially leading to misattribution of cognitive causes to perceptual deficits. A previous study has suggested that optimizing perceptual input could restore normal cognition [10]. Nevertheless, there is a lack of research investigating the relationship between perception and cognitive function. In addition, terminological confusion exists among existing studies regarding the distinction between sensation and perception.
Apart from sensory and perceptual challenges, older adults may also experience psychosocial difficulties. Among these, social isolation and depression stand out as significant issues within the older population. Recent data have shown that over 30.0% of older adults are socially isolated [11] and 25.2-40.7% exhibit depressive symptoms [12]. Data from the 2002-2018 waves of the Chinese Longitudinal Healthy Longevity Survey showed that social isolation independently predicted a higher risk of cognitive impairment [13]. However, social isolation was assessed there by questions about marital status, living arrangement, type of social network, and sometimes social engagement, without a defined cut-off point. These metrics may not provide a precise representation of social isolation: the fact that an older adult lives alone does not automatically indicate social isolation, and conversely, those who live with family can still experience significant isolation [14]. Therefore, elucidating the relationship between social isolation and cognitive impairment solely on the basis of a few simple items is challenging and requires careful consideration. Furthermore, depression has been associated with a 20-30% risk of Alzheimer's disease, with evidence suggesting that depression can precede cognitive impairment [15,16].
While previous studies have explored the relationship between sensation, perception, negative socio-psychological factors, and cognitive impairment, most have tended to concentrate on just one or two variables at a time. To provide targeted efforts for early prevention, it is essential to examine these factors collectively and identify which carries a greater risk for cognitive impairment. Therefore, the aim of this study was to investigate the association between sensation, perception, negative socio-psychological factors, and cognitive impairment among older adults.
Participants
The data used in this study were originally collected as a component of a project centered around path analysis. The sample size of the project was determined using an empirical formula based on previous studies and expert opinions, as well as the recommendations outlined by James Grace [17], which suggest a minimal sample size of 200 for conducting structural equation modeling and path analysis. In alignment with the research question at hand and the accessibility of the available data, we opted to analyze the data using binary logistic regression. As for the sampling approach, we utilized a convenience sampling strategy to enlist participants from the intended population.
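A minimal sketch of the binary logistic regression described above, using the statsmodels formula API; the DataFrame, variable names, and randomly generated values are hypothetical placeholders standing in for the study's actual dataset and coding scheme.

```python
# Sketch: binary logistic regression of cognitive impairment on the
# sensory, perceptual, and socio-psychological predictors. All variable
# names and data below are hypothetical placeholders, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "cognitive_impairment": rng.integers(0, 2, 215),  # 0/1 outcome
    "hearing_grade": rng.integers(0, 5, 215),         # WHO grade, categorical
    "perception_abnormal": rng.integers(0, 2, 215),   # 0/1
    "depression": rng.integers(0, 2, 215),            # 0/1
    "age": rng.integers(60, 95, 215),                 # covariate
})

model = smf.logit(
    "cognitive_impairment ~ C(hearing_grade) + perception_abnormal"
    " + depression + age",
    data=df,
).fit(disp=False)

print(np.exp(model.params))       # odds ratios
print(np.exp(model.conf_int()))   # 95% confidence intervals
```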
A total of 215 participants were enrolled in this study from two public aged-care facilities in Wuhan, China. The inclusion criteria encompassed individuals aged 60 years or older who had resided in these public aged-care facilities for a minimum duration of one month. Exclusion criteria involved participants with a history of cognitive impairment caused by other central nervous system disorders, such as Lewy body dementia, neurosyphilis, and intracranial tumors, as well as those diagnosed with severe disabilities that prevented them from participating in the study. All participants were invited to provide demographic information, which included age, sex, education, marital status, and comorbidities. Regarding the comorbidities, we classified the participants into three groups: individuals without any comorbidity, those with a single comorbidity, and those with multiple comorbidities. We conducted a thorough review of the health records of the older adults residing in these two public aged-care facilities to ensure data accuracy.
Cognitive function
The Chinese version of the Mini-Mental State Examination (MMSE) [18] was administered by a well-trained researcher to assess cognitive function. This instrument is a valid tool to measure global cognitive function and has been widely used in previous studies [19]. The MMSE consists of subtasks assessing orientation, attention, numeracy, immediate recall, delayed recall, language function, and visuospatial ability, with a total score of 30. Participants with a total MMSE score below 24 (for those with more than 6 years of education, i.e., secondary education and above), below 20 (for those with 6 or fewer years of education, equivalent to lower secondary education or less), or below 17 (for illiterate individuals) are classified as having cognitive impairment [20].
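As a minimal sketch, the education-adjusted cutoff rule above can be expressed as a small function; the function name and the use of years of education as a stand-in for the literacy categories are our own illustrative choices, not part of the original protocol.

```python
def has_cognitive_impairment(mmse_total: int, education_years: int) -> bool:
    """Education-adjusted MMSE cutoffs as described above.

    Illustrative sketch: years of education stand in for the
    illiterate / <=6 years / >6 years categories in the paper.
    """
    if education_years > 6:
        cutoff = 24   # secondary education and above
    elif education_years >= 1:
        cutoff = 20   # six or fewer years of schooling
    else:
        cutoff = 17   # illiterate
    return mmse_total < cutoff
```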
Sensation
The Nottingham Sensory Assessment scale (NSA) was used to assess somatosensory function. The NSA scale was originally developed by Lincoln and colleagues [21], and its cultural adaptation and psychometric validation for the Chinese population were conducted by Yang Yuqi and colleagues [22]. The NSA scale consists of three dimensions, tactile sensation, kinesthetic sensation, and stereognosis, with a total of eight subtasks: light touch sensation, temperature sensation, pinprick sensation, pressure sensation, tactile localization, two-point discrimination, proprioception, and solid (stereognostic) sensation. Participants who performed correctly across all three dimensions were categorized as having normal somatosensory function, while those with any incorrect performance were classified as having abnormal function.
All participants underwent Pure Tone Audiometry (PTA) for hearing loss in a quiet environment. Achieving a quiet environment involved measures such as conducting assessments in a sound-controlled consultation room, minimizing external noise sources (e.g., closing windows, turning off fans), and scheduling data collection during quieter times of the day. A portable audiometer (Tiger DRS Inc, Shanghai), provided by the Resource Center of Disabled Technology Adapter, was used. Before the test, instructions were provided to ensure participants' understanding and cooperation. The average pure tone threshold of each participant's better ear at 0.5, 1, 2, and 4 kHz was taken as the pure tone average. According to the World Health Organization criteria [23], hearing loss was graded as normal hearing (<20 dB), mild (20 to <35 dB), moderate (35 to <50 dB), moderately severe (50 to <65 dB), and severe (65 to <80 dB).
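A minimal sketch of the pure tone average and the WHO grading boundaries above; the names are illustrative, and the final "profound" branch (>=80 dB, beyond the range listed above) is our assumption based on the WHO scale.

```python
def pure_tone_average(thresholds_db: dict) -> float:
    """Average the better-ear thresholds at 0.5, 1, 2, and 4 kHz."""
    return sum(thresholds_db[f] for f in (0.5, 1, 2, 4)) / 4


def who_hearing_grade(pta_db: float) -> str:
    """Map a pure tone average (dB HL) to the WHO grades used above."""
    if pta_db < 20:
        return "normal"
    if pta_db < 35:
        return "mild"
    if pta_db < 50:
        return "moderate"
    if pta_db < 65:
        return "moderately severe"
    if pta_db < 80:
        return "severe"
    return "profound"  # assumption: WHO grade beyond the range listed above
```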
Perception
There is no universally accepted instrument for assessing perception. Given the definition of perception and drawing from relevant literature, we combined several tools, namely Albert's test [24], left-right discrimination, and visuospatial distribution. This combination provided an uncomplicated means of evaluating perception. The rating scale for Albert's test is categorized as follows: no neglect (1-2 lines left uncrossed), suspected neglect (3-23 lines left uncrossed), and unilateral neglect (>23 lines left uncrossed). The assessment of left-right discrimination comprises two components. First, participants were instructed to raise either their left or right hand. Second, participants were asked to identify specific body parts such as the left (or right) eye, left (or right) ear, and left (or right) knee. The researcher then evaluated the accuracy of the participants' responses, categorizing them as either completely correct or incorrect. The visuospatial distribution task involved figure-ground discrimination: participants were presented with a set of overlapping images and were required to identify a particular item within this arrangement, and the researcher assessed whether their answers were correct. In a further part of the visuospatial distribution task, participants were provided with a picture and instructed to replicate the same shape and spatial position on the opposite side, and the researcher evaluated the correctness of their reproductions. These tools are commonly used in rehabilitation clinics. Participants who performed correctly in all three subtasks were classified as normal; otherwise, they were classified as abnormal.
Negative socio-psychological factors
Depression was assessed using the Geriatric Depression Scale-30 (GDS-30), originally developed by Brink et al. [25]. It is the most extensively used assessment of depression among older adults, especially those who might have cognitive impairment, including in Chinese populations [26]. Each item is scored 0 or 1, and a higher total score corresponds to a greater level of depression, with a cutoff point set at 11: scores below 11 are considered normal, while scores equal to or greater than 11 indicate depression.
Social isolation was assessed with the Lubben Social Network Scale-6 (LSNS-6) developed by Lubben and colleagues [27]. A Chinese version of this instrument has also been developed [28]. The scale comprises six items, each scored from 0 to 5, so total scores range from 0 to 30, with scores below 12 indicating a risk of social isolation.
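The two cutoffs above reduce to a simple classification; this is an illustrative sketch, with names of our own choosing.

```python
def classify_psychosocial(gds30_total: int, lsns6_total: int) -> dict:
    """Apply the cutoffs described above: GDS-30 >= 11 indicates
    depression; LSNS-6 < 12 indicates a risk of social isolation."""
    return {
        "depression": gds30_total >= 11,
        "social_isolation": lsns6_total < 12,
    }
```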
Data collection
The hearing testing was conducted by the first author, who was trained by professionals at the Resource Center of Disabled Technology Adapter in Hubei Province. The testing frequencies were presented in the order 1, 2, 4, and 0.5 kHz, with a final retest of the threshold at 1 kHz. Audiometric thresholds were measured in 5 dB HL (hearing level decibels) increments, with 10 dB HL decrements. If the retest result differed by 10 dB from the initial result, a retest was performed at 2 and 4 kHz, with the second result being considered valid. The overall testing duration was approximately 20 min. In addition, the first author received training from a rehabilitation specialist and a medical doctor, both faculty members at our institution. The cognitive function assessment was conducted by the first author, who had received training from a neurologist on the MMSE during a previous study. In cases where participants reported difficulty hearing clearly, the researcher administered the MMSE by allowing participants to visually observe each item; concurrently, the researcher adjusted the volume, read each item aloud, and asked whether the participants could hear clearly. Importantly, the researcher refrained from engaging in or guiding the participants' responses during this process. To ensure consistency in the assessment process, the first author was primarily responsible for evaluating hearing, somatosensory function, perception, and cognitive function, while the second author was responsible for assessing depression, social isolation, and demographic information. The second author provided precise instructions on how to complete the questionnaires and demographic forms and adopted a neutral, non-judgmental attitude throughout data collection.
Ethics statement
The study was approved by the Ethics Permission Committee of Wuhan Polytechnic University (BME-2021-1-01). All participants signed informed consent.
Statistical analyses
Statistical analyses were performed with SPSS (version 26.0). Data were presented as means and standard deviations for numeric variables, while categorical variables were presented as frequencies. First, univariate analysis was applied to identify possible factors associated with cognitive impairment. The Wilcoxon non-parametric test was used for numeric variables that were not normally distributed. The chi-squared test was used for categorical variables, including sex, education, marital status, comorbidities, hearing loss, somatosensory function, perception, depression, and social isolation. Second, multivariate analysis was performed by binary logistic regression. Model 1 was constructed without adjustment for other variables, while Model 2 was adjusted for age, sex, education, marital status, and comorbidities. A p-value below 0.05 was considered statistically significant.
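A minimal sketch of the two regression models, here in Python with statsmodels rather than SPSS; the data file and column names are illustrative assumptions, not the study's actual variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("participants.csv")  # hypothetical data file

# Model 1: unadjusted association with cognitive impairment (0/1 outcome).
m1 = smf.logit(
    "impaired ~ C(hearing_grade) + perception_abnormal"
    " + somatosensory_abnormal + depression + social_isolation",
    data=df,
).fit()

# Model 2: additionally adjusted for age, sex, education,
# marital status, and comorbidities, as described above.
m2 = smf.logit(
    "impaired ~ C(hearing_grade) + perception_abnormal"
    " + somatosensory_abnormal + depression + social_isolation"
    " + age + C(sex) + C(education) + C(marital) + C(comorbidity)",
    data=df,
).fit()

# Odds ratios and 95% confidence intervals (exponentiated coefficients).
print(pd.concat([np.exp(m2.params), np.exp(m2.conf_int())], axis=1))
```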
Results
The mean age of participants was 73.1 ± 9.5 years, and women accounted for 56.7 % of the total. Among the participants, 59.5 % had less than a lower secondary education, and widowed participants constituted 45.6 % of the group. More than half of the older adults had comorbidities, and 60.5 % were socially isolated. Out of the total participants, 105 individuals (48.8 %) were identified as having depression. Moreover, over half of the participants had abnormal somatosensory function and perception.
Table 1 presents participant characteristics by cognitive status. Of the 215 participants, 81 (37.7 %) were categorized as having normal cognition and 134 (62.3 %) as having cognitive impairment. There was no significant difference in age between the participants with cognitive impairment and those with normal cognition (p > 0.05). The normal cognition group was more likely to have a higher educational level (p < 0.001). Compared with older adults with normal cognition, older adults with cognitive impairment suffered greater hearing loss: the proportions with moderate, moderately severe, and severe hearing loss were 29.9 %, 43.3 %, and 17.9 %, respectively. The prevalence of abnormal perception among older adults with cognitive impairment was 85.1 %, compared to 60.5 % among those with normal cognition. The percentages of somatosensory abnormality and social isolation were similar in both groups, exceeding 80 % and 50 %, respectively. Older adults with cognitive impairment were more likely to exhibit depression than those with normal cognition (p < 0.05). The associations of sensation, perception, and negative socio-psychological factors with cognitive impairment are shown in Table 2. Depression, perception, and hearing loss were significantly associated with cognitive impairment with (Model 2) or without (Model 1) adjustment for other covariates. In Model 2, in comparison to individuals with less than lower secondary education, those with secondary education or above exhibited a significantly lower risk of cognitive impairment, with an odds ratio of 0.309 (95 % CI = 0.148, 0.642). Furthermore, participants with depression and abnormal perception had elevated risks of cognitive impairment, with odds ratios of 2.480 (95 % CI = 1.249, 4.924) and 4.428 (95 % CI = 2.022, 9.700), respectively. Compared with those with mild hearing loss, those with moderately severe and severe hearing loss had 4.619 times (95 % CI = 1.642, 12.997) and 5.836 times (95 % CI = 1.525, 22.327) the odds of cognitive impairment.
Discussion
This study provides support for an independent association between cognitive impairment and sensation, perception, and negative socio-psychological factors. Specifically, older adults with moderately severe and severe hearing loss, as well as those with abnormal perception, exhibited a significant association with cognitive impairment. Furthermore, depression emerged as an independent risk factor for cognitive impairment, while a higher level of education was identified as a protective factor. However, no significant associations were found between cognitive impairment and social isolation, abnormal somatosensory function, or moderate hearing loss.
Sensory dysfunction, including hearing, vision, and olfaction impairments, has been identified as a common risk factor for cognitive impairment, with a growing body of evidence supporting these associations [29,30]. Our findings further substantiate previous research on the link between hearing loss and cognitive function. Longitudinal population-based studies have consistently demonstrated a robust connection between hearing loss and incident cognitive impairment [31,32], although some studies have reported weak or nonsignificant associations [33]. The discrepancy in findings may arise from variations in the assessment of hearing loss: prior studies employed different methods, such as clinical testing or epidemiologic measures, including self-reported assessments, whispered voice tests, and pure tone audiometry. Another possible explanation for the inconsistent results is the diversity of cognitive screening instruments employed across studies, alongside variations in adjustment for potential confounding factors. Our findings indicated that older adults with moderately severe and severe hearing loss were at higher risk of cognitive impairment compared with those with mild hearing loss. This observation aligns with the notion that the severity of hearing loss is associated with the degree of cognitive impairment [34]. The underlying mechanisms explaining the impact of hearing loss on cognitive function are still under investigation. Two commonly cited explanations are the sensory deprivation hypothesis and the information-degradation hypothesis [35]. The sensory deprivation hypothesis suggests that auditory deprivation leads to neural deafferentation, cortical reallocation to support other processes, and atrophy in brain regions associated with speech perception processing [36]; a previous study also observed reductions in temporal lobe volume among individuals with peripheral hearing loss [37]. The information-degradation hypothesis, on the other hand, proposes that degraded auditory ability increases demands on cognitive processing, leading to compensatory efforts [38]. A further hypothesis suggests that a degraded peripheral hearing system results in dysfunction of the central auditory system, which interacts with existing cognitive impairment [39]. Regardless of the specific hypothesis, existing evidence suggests that moderately severe or greater hearing loss might represent a promising and modifiable target for secondary prevention of cognitive impairment in older age. Apart from hearing loss, our research also examined somatosensory function, which encompasses tactile sensation, kinesthetic sensation, and stereognosis. With aging, there is a decline in the sensitivity and discrimination abilities of sensory receptors, leading to a deterioration in somatosensory function [40]. This co-occurrence of conditions in aging might suggest a potential link with cognitive impairment. However, our findings indicated that individuals with abnormal somatosensory function showed no significant risk of cognitive impairment when compared with those with normal somatosensory function. This finding differs from what was reported in a previous study [8]. One plausible explanation for this absence of a significant finding could be that the particular nature of the somatosensory impairment might not directly affect the cognitive processes involved in cognitive impairment. The brain engages distinct regions and circuits in somatosensory processing and cognitive function, and if these
areas are not closely interconnected, the impact on cognitive function could be constrained. The disparities in results might also stem from variations in measurement instruments.
Perception and cognitive impairment are considered to be potentially related, as both are susceptible to the effects of aging. In this study, abnormal perception demonstrated a significant association with cognitive impairment, consistent with findings reported by Robert and Allen [41]. As perceptual input becomes more difficult to discriminate, additional compensatory cognitive processes are required to decode the incoming signals [41]. A previous study showed that visual acuity, contrast sensitivity, and stereo acuity impairments were associated with lower scores on the Modified MMSE [42]. However, Komes' findings indicated that poor perception might not necessarily correlate with poor memory performance [43]. It is possible that pre-attentive or unconscious processing in spatial neglect could affect conscious perception and decision-making, which might not have a direct connection to memory performance [44]. Further investigation is needed to better understand the interplay between perceptual disorders and impairments in various cognitive domains.
Our findings indicated that depression constituted a significant risk factor for cognitive impairment, whereas social isolation did not exhibit a significant association. Nevertheless, contrasting results concerning the link between social isolation and cognitive impairment have been reported in prior studies. Several longitudinal studies have demonstrated an independent association between social isolation and cognitive decline, even after adjusting for depression and other covariates [45,46]. A meta-analysis encompassing fifty-one articles found a statistically significant, albeit small, association between social isolation and cognitive function [47]. The disparities between these studies and our findings could be attributed to variations in the instruments employed to assess social isolation. Social isolation is typically evaluated based on factors such as the type or size of social networks, the frequency of social contacts, and engagement in social activities [47]; however, few instruments provide definitive cutoff points to identify social isolation. Additionally, when examining the relationship between social isolation and cognitive impairment, it is crucial to consider not only the quantity but also the quality of social interactions. Even individuals with extensive social networks might experience loneliness and isolation if they lack intimate and supportive emotional connections [48]. This phenomenon, referred to as "emotional isolation", represents a form of social isolation, and a previous study indicated that heightened levels of emotional isolation were associated with an elevated risk of cognitive decline and dementia [49]. Future research should encompass the multidimensional nature of social isolation, attending to factors such as social network structure, interpersonal connections, and the diversity of social activities.
There are several limitations to this study. First, the MMSE was administered using verbal instructions. Despite our efforts to improve audibility by adjusting the volume and allowing participants to wear their own glasses, it remains possible that the impact of hearing loss on the assessment could not be entirely eliminated; as a result, participants' cognitive function may have been overestimated. Second, a portable audiological instrument was used to evaluate pure tone hearing thresholds in a relatively quiet room rather than in a standardized environment, which may affect the reliability and generalizability of the findings. We recommend that future studies be conducted under standardized conditions and then compared with results obtained in non-standard environments; exploring alternative methods or technologies that minimize such influences would also be beneficial. Third, it is important to acknowledge the constraints inherent in employing non-standardized instruments for the evaluation of perception in this study; future research should prioritize the validation and refinement of the assessment methodology. Additionally, causal inferences cannot be established by this cross-sectional study. Longitudinal studies are needed to provide more robust evidence validating our observations and to further elucidate the intricate relationship between sensation, perception, negative socio-psychological factors, and cognitive impairment. Moreover, this study had a relatively small sample size, and only two public aged-care facilities were selected as research sites, so the generalizability of the findings might be limited; larger studies with more diverse populations could offer further insights. Finally, despite our efforts to control for potential confounding factors, unaccounted variables such as lifestyle choices and genetic predisposition may have influenced the outcomes.
Conclusions
Cognitive impairment, abnormal sensation, abnormal perception, and negative socio-psychological factors are prevalent age-related conditions that contribute to significant negative health outcomes and a substantial disease burden. This study suggests that older adults with moderately severe or greater hearing loss, abnormal perception, or depression face a heightened risk of cognitive impairment. Fortunately, these identified independent risk factors are modifiable to some extent. Strategies such as hearing protection, rehabilitation, and non-pharmacological interventions targeting perception and depression could offer promising and cost-effective avenues for enhancing the prevention and management of cognitive impairment.
Table 1
Demographic characteristics of participants.
Table 2
Results of binary logistic regression analysis.
Note: Model 1 represents the results without controlling for variables such as age, sex, education, marital status, and comorbidities. Model 2, on the other hand, includes the results after incorporating these control variables.
The efficacy and safety of intravitreal injection of Ranibizumab as pre-treatment for vitrectomy in proliferative diabetic retinopathy with vitreous hemorrhage
Background: Intravitreal injection of anti-vascular endothelial growth factor (VEGF) agents has become first-line therapy for diabetic macular edema. This study evaluated the efficacy and safety of intravitreal injection of Ranibizumab (IVR) as pre-treatment for pars plana vitrectomy in proliferative diabetic retinopathy (PDR) patients with vitreous hemorrhage.
Methods: This pilot randomized controlled trial included 48 eyes with vitreous hemorrhage resulting from active PDR. Eyes were treated with IVR 1 or 3 days before vitrectomy or a sham subconjunctival injection 3 days before surgery. The occurrence of new tractional retinal detachment (TRD), total operation time, and intraoperative findings were compared. The concentrations of VEGF and connective tissue growth factor (CTGF) in aqueous humor and plasma collected at the time of IVR and vitrectomy were determined by ELISA.
Results: None of the patients who received IVR experienced new TRD. Ranibizumab injection improved intraoperative outcomes. The mean concentrations of VEGF in aqueous humor were significantly lower after than before IVR in patients who received IVR 1 and 3 days before surgery (P < 0.001 each). The CTGF/log10(VEGF) ratio was significantly higher after than before IVR in patients who received IVR 3 days before vitrectomy (P = 0.046).
Conclusion: Preoperative IVR is an effective and safe strategy for the surgical treatment of severe PDR combined with vitreous hemorrhage. IVR 1 and 3 days before surgery can significantly reduce VEGF content in aqueous humor and effectively improve intraoperative conditions without causing TRD.
Trial registration: This study was registered with the Chinese Clinical Trial Registry. Name of the registry: Exploratory analysis of effect of intravitreal ranibizumab as pre-treatment for pars plana vitrectomy in proliferative diabetic retinopathy. Trial registration number: ChiCTR-ONC-16009520. Date of registration: October 20, 2016. URL of trial registry record: http://www.chictr.org.cn/searchprojen.aspx
Dissection of the neovascularization membrane during surgery, however, especially in patients with advanced PDR, can induce bleeding or oozing of blood, seriously impairing the surgical field. Repeated bleeding prolongs the operation and increases the frequency with which surgical instruments are passed into and out of the vitreous cavity, greatly increasing the likelihood of complications.
Vascular endothelial growth factor (VEGF) has been found to play an important role in many retinal vascular diseases [3][4][5]. Ranibizumab is a recombinant, monoclonal antibody fragment that inhibits VEGF [6], and intravitreal injection of ranibizumab (IVR) before vitrectomy has been reported to be an effective adjunct treatment for PDR [7]. Ranibizumab causes transient vasoconstriction, which may clinically resemble vascular regression. This "regression" of retinal neovascularization can occur in PDR, although clinical observations suggest that responses to anti-VEGF therapy are spontaneously reversed [8,9].
Surgical trauma may be reduced by planning pars plana vitrectomy during this window period. Although anti-VEGF therapy can temporarily reduce leakage from diabetic neovascular lesions, it may be associated with tractional retinal detachment (TRD), a serious complication with poor prognosis in patients with PDR. Because anti-VEGF agents can induce fibrous proliferation, TRD can occur after their injection. Proliferative membrane formation without obvious TRD has been reported in one eye 9 days after intravitreal injection of conbercept [10]. Another study reported that the mean time from intravitreal injection of bevacizumab to TRD was 13 days (range, 3-31 days), implying that anti-VEGF treatment can cause TRD as early as 3 days after injection [11]. Intravitreal injection of conbercept 3-7 days before vitrectomy was found to increase the risk of TRD [12][13][14]. Although the effects of bevacizumab 24 h after injection have been reported [8], no study to date has evaluated the efficacy and safety of IVR 1-3 days before vitrectomy for the treatment of PDR with vitreous hemorrhage.
TRD is closely related to fibrous proliferation after anti-VEGF treatment. Retinal fibrosis in patients with PDR correlated significantly with the concentrations of connective tissue growth factor (CTGF) [15][16][17]. Increasing CTGF levels and/or decreasing VEGF levels can alter the balance between CTGF and VEGF, leading to a tilt in the angiofibrotic switch towards fibrosis [18][19][20].
The present study evaluated alterations in aqueous VEGF and CTGF concentrations before and after Ranibizumab injection and the correlations between these concentrations and clinical improvements. These findings may help determine a period during which IVR can be safely administered prior to surgery without worsening or inducing TRD.
Patients
The study was approved by the Ethical Review Board of the Second Xiangya Hospital of Central South University and conformed to the tenets of the Declaration of Helsinki. Participants were informed of the off-label use of IVR and provided a detailed description of the treatment. All participants provided written informed consent for this treatment.
In this prospective pilot study, participants were recruited prospectively in the Department of Ophthalmology of the Second Xiangya Hospital from November 2016 to July 2018. All subjects were Han Chinese, and one eye of each subject was included. All subjects underwent a comprehensive ophthalmological evaluation, including visual examination, slit-lamp biomicroscopy, measurement of intraocular pressure (IOP) by noncontact tonometry, and fundus examination after pupil dilation with tropicamide (1%) and phenylephrine hydrochloride (2.5%). Patients were also examined by indirect ophthalmoscopy, ultrasound biomicroscopy, B-ultrasound scan, and optical coherence tomography (OCT). Patients were included if they had vitreous hemorrhage resulting from active PDR.
Patients were excluded if they had (1) neovascular glaucoma resulting from active PDR plus cataract or PDR combined with rhegmatogenous retinal detachment (RRD). If preoperative RRD was obscured by dense vitreous hemorrhage in patients with PDR, these patients were excluded if RRD was detected during surgery. Patients were also excluded if they had (2) a history of previous laser treatment or vitrectomy in the study eye; (3) a history of thromboembolic events, including myocardial infarction or cerebrovascular accidents; (4) a systemic inflammatory, autoimmune, or immunosuppressive disease; (5) a pre-existing ocular disease (retinal vein occlusion, retinal artery occlusion, or age-related macular degeneration); (6) uncontrolled hypertension, as defined by the guidelines of the seventh report of the joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure; (7) a history of previous ocular surgery; or (8) a coagulation abnormality or current use of an anticoagulative medication other than aspirin.
The patients were randomly divided into three groups using a computer-generated list of random numbers. Patients in the control group (group A) were administered a sham subconjunctival injection 3 days before vitrectomy, whereas patients in the 1-day IVR group (group B) and the 3-day IVR group (group C) were administered IVR (0.5 mg/0.05 ml) 1 and 3 days before surgery, respectively.
Preliminary experimental results indicated that, to detect a reduction in VEGF from before IVR to 1 or 3 days after IVR, with a one-sided 5% significance level and a power of 90%, a sample size of 16 patients per group was necessary.
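A sketch of how the reported design values can be checked; treating the before/after comparison with a paired t-test power model is our assumption, and statsmodels is used here purely for illustration.

```python
from statsmodels.stats.power import TTestPower

# Back out the standardized paired effect size implied by the reported
# design: n = 16 per group, one-sided alpha = 0.05, power = 0.90.
# Modeling the comparison as a paired t-test is our assumption.
analysis = TTestPower()
detectable_d = analysis.solve_power(
    effect_size=None, nobs=16, alpha=0.05, power=0.90, alternative="larger"
)
print(f"Detectable paired effect size with n = 16: d = {detectable_d:.2f}")
```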
Patients underwent fundus examination and B-scan ultrasonography before IVR and before vitrectomy to assess vitreous hemorrhage density and TRD. Vitreous hemorrhage density was graded as described [21]: Grade 0, no blood in the vitreous, with the entire retina visible; Grade 1, hemorrhage obscuring 1 to 5 clock hours of the retina; Grade 2, hemorrhage obscuring 5 to 10 clock hours of the central and/or peripheral retina, or a large hemorrhage located posterior to the equator with varying clock hours of the anterior retina visible; Grade 3, presence of a red reflex with no retinal detail visible posterior to the equator; and Grade 4, dense vitreous hemorrhage with no red reflex present. The effect of Ranibizumab on the extent of TRD prior to surgery was assessed by examining the study eye prior to IVR with ultrasonography (Acuson Sequoia 512 scanner, 14 MHz linear probe; Siemens Medical Solutions USA, Mountain View, CA, USA), and on the day of surgery by slit-lamp biomicroscopy and ultrasonography.
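The grading rule above can be approximated by a small decision function; the inputs are simplified (the published scale also weighs hemorrhage location), so this is an illustrative sketch only.

```python
def vitreous_hemorrhage_grade(clock_hours_obscured: float,
                              red_reflex_present: bool,
                              posterior_detail_visible: bool) -> int:
    """Approximate mapping onto the Grade 0-4 scale described above.

    Simplified sketch: hemorrhage location (central/peripheral,
    relative to the equator) is not modeled here.
    """
    if not red_reflex_present:
        return 4  # dense hemorrhage, no red reflex
    if not posterior_detail_visible:
        return 3  # red reflex only, no detail posterior to the equator
    if clock_hours_obscured == 0:
        return 0
    if clock_hours_obscured <= 5:
        return 1
    return 2
```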
The total operation time, defined as the time from the first incision to final closure, was measured intraoperatively. Immediately after surgery, the surgeon (J.Z.), who was masked to treatment allocation, completed a standardized questionnaire on the intraoperative outcomes, including the presence or absence of intraoperative bleeding, iatrogenic tears, relaxing retinotomy, use of endodiathermy, and use of silicone oil at the end of surgery. All surgical procedures were performed by a single surgeon (J.Z.) to avoid differences in technique and in data collection.
Sample collection
Samples of aqueous humor and venous blood were collected during IVR (before IVR) and during vitrectomy (after IVR). Aqueous humor samples were collected through anterior chamber puncture using a 30-gauge needle and immediately frozen at -80℃ in the dark. Venous blood samples (5 ml) were collected in the fasting state and centrifuged at 1000 × g for 15 min, and the resulting plasma samples were divided into aliquots and stored at -80℃. To avoid damage to the blood-aqueous barrier caused by surgical trauma, all samples were obtained at the beginning of the operation, prior to any conjunctival or intraocular procedure.
Enzyme-linked immunosorbent assay (ELISA)
VEGF and CTGF concentrations were measured using VEGF (MultiSciences, Hangzhou, China) and CTGF (FibroGen, South San Francisco, CA) ELISA kits according to the manufacturers' protocols, as described [20][21][22]. Briefly, 50 µl/well aliquots of aqueous humor were added to 96-well plates pre-coated with monoclonal antibody, and the plates were incubated at room temperature for 3 h. After washing three times, a second antibody was added to each well, and the plates were incubated at 37 ℃ for 3 h. Substrate was added to each well; the plates were incubated in the dark for 30 min at room temperature; and 100 µl of stop solution was added to each well to terminate the reaction, with readings taken on a microplate reader (Multiskan Ascent; Thermo Fisher Scientific GmbH, Schwerte, Germany). All samples were prepared and measured on the same day using the same standard preparation methods. Each experiment was performed three times and the results were averaged.
Data analysis
All data are expressed as mean ± SD. The Shapiro-Wilk test was used to assess normality. Categorical covariates were assessed individually using χ2 tests. Intragroup differences in VEGF and CTGF concentrations and in CTGF/log10(VEGF) ratios were analyzed using Wilcoxon tests (rank-sum for two independent samples, signed-rank for paired samples) and paired-sample t tests. Correlations between VEGF and CTGF concentrations in aqueous humor of the PDR patients were analyzed by Spearman's rank-order correlation tests. Best corrected visual acuity (BCVA) was converted to logMAR for statistical evaluation, and logMAR BCVAs before and 1 and 6 months after IVR were compared by repeated-measures ANOVA. All statistical analyses were performed using SPSS 22.0 software, with p-values less than 0.05 considered statistically significant.
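An illustrative Python sketch of the core paired analyses (here with SciPy rather than SPSS); the arrays are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-ins for paired aqueous measurements (pg/ml) in 16 eyes.
vegf_before = rng.lognormal(mean=6.0, sigma=0.5, size=16)
vegf_after = vegf_before * rng.uniform(0.1, 0.5, size=16)  # post-IVR drop
ctgf_before = rng.lognormal(mean=5.0, sigma=0.4, size=16)
ctgf_after = rng.lognormal(mean=5.0, sigma=0.4, size=16)

# Paired Wilcoxon signed-rank test for the before/after VEGF comparison.
w_stat, p_vegf = stats.wilcoxon(vegf_before, vegf_after)

# CTGF/log10(VEGF) ratio, the angiofibrotic indicator compared above.
ratio_before = ctgf_before / np.log10(vegf_before)
ratio_after = ctgf_after / np.log10(vegf_after)
t_stat, p_ratio = stats.ttest_rel(ratio_before, ratio_after)

# Spearman correlation between aqueous VEGF and CTGF.
rho, p_rho = stats.spearmanr(vegf_after, ctgf_after)
print(p_vegf, p_ratio, rho, p_rho)
```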
Patients' demographic data
This study included 48 eyes with PDR; of these, 16 eyes received subconjunctival injections of 0.05 ml BSS 3 days before vitrectomy, and 16 eyes each received IVR 1 and 3 days before vitrectomy. The mean ± SD ages of patients in these three groups were 53.1 ± 6.3, 46.9 ± 11.7, and 49.8 ± 10.1 years, respectively. Age, gender, incidence of hypertension, vitreous hemorrhage density, and TRD did not differ significantly in these three groups (all p > 0.05) ( Table 1).
Functional and anatomical results
None of the patients receiving IVR 1 or 3 days before vitrectomy experienced a new occurrence of TRD after IVR treatment. The mean height of TRD after IVR increased slightly in patients who received IVR 3 days before vitrectomy (Table 2). At baseline, the mean BCVAs in patients who received BSS and IVR 1 and 3 days before vitrectomy were 2.006 ± 0.427 logMAR, 1.988 ± 0.463 logMAR, and 2.05 ± 0.412 logMAR, respectively, with no statistically significant difference. Three months after vitrectomy, the mean postoperative BCVAs in the three groups improved to 0.706 ± 0.277 logMAR, 0.488 ± 0.189 logMAR, and 0.463 ± 0.159 logMAR, respectively, all differing significantly from baseline (all p < 0.05), and the visual acuity improvement was significantly greater in the treated groups than in the sham group (p = 0.004) (Table 2). At the end of the operation, the retina was completely attached and vitreous hemorrhage was cleared in all eyes. Table 3 shows the incidences of TRD before surgery, intraoperative bleeding, iatrogenic retinal breaks, use of endodiathermy, and use of silicone oil, and the mean total surgical time in the three groups. Patients who received IVR 1 and 3 days before surgery differed significantly from sham-injected patients in the incidences of intraoperative bleeding, use of endodiathermy, and use of silicone oil tamponade, and in mean total surgical time. However, there were no statistically significant differences in patient or surgical characteristics between the two IVR-treated groups (p > 0.05) (Table 4). Plasma VEGF concentrations in both groups did not differ significantly before and after IVR (p > 0.05 each) (Fig. 1). Similarly, plasma CTGF concentrations did not differ significantly before and after IVR in the two groups (all p > 0.05) (Fig. 2).
CTGF/log10(VEGF) ratio
Paired-sample t tests showed that the CTGF/log10(VEGF) ratio before and after IVR did not differ significantly in patients who received IVR 1 day before vitrectomy (P = 0.051). This ratio, however, was significantly higher after than before IVR in patients who received IVR 3 days before vitrectomy (P = 0.046) (Table 5).
Correlation between the aqueous humor VEGF and CTGF in patients with PDR
Spearman's rank-order correlation analysis found no correlations between the concentrations of VEGF and CTGF in aqueous humor of patients who received IVR 1 day (r = -0.119; P = 0.66) and 3 days (r = -0.179; P = 0.506) before vitrectomy (Fig. 3).
Correlations between the aqueous humor and plasma VEGF and CTGF levels in patients with PDR
No significant correlations were observed between aqueous humor and plasma VEGF concentrations in patients who received IVR 1 day (r = 0.174, P = 0.52) and 3 days (r = -0.218, P = 0.418) before surgery. Similarly, the concentrations of CTGF in aqueous humor and plasma were not significantly correlated in patients who received IVR 1 day (r = 0.049, P = 0.858) and 3 days (r = -0.156, P = 0.564) before surgery (Table 6).
Discussion
To our knowledge, this study is the first to measure the effects of IVR on VEGF and CTGF concentrations in aqueous humor and plasma and the correlations between these factors and clinical improvements in patients with PDR. Because the concentrations of VEGF are strongly correlated in the aqueous and vitreous humor of patients with PDR [23], aqueous samples were collected to measure the concentrations of VEGF and CTGF. We found that the concentration of VEGF in aqueous humor decreased significantly 1 day after IVR in patients with PDR, consistent with results showing that bevacizumab had a similar effect at 24 h [8]. This finding provides a theoretical basis for early dissection of tractional membranes after injection. This study also found that the concentrations of VEGF in aqueous humor were significantly lower 3 days after than before IVR. Up-regulation of VEGF can promote angiogenesis and increase vascular permeability [22,24,25]. VEGF plays a key role in the pathogenesis of diabetic retinopathy as a mediator between neovascularization and permeability. VEGF levels are significantly higher in patients with than without PDR, with the VEGF level being directly proportional to the growth of new blood vessels and leakage [26,27] and to the severity of diabetic retinopathy [28]. The present study found that, compared with eyes that underwent vitrectomy in the absence of IVR, those that received IVR showed intraoperative improvements. Specifically, the rates of intraoperative bleeding, use of intraocular electrocoagulation, and need for silicone oil tamponade, as well as operation time, were significantly lower in patients who received IVR 1 and 3 days before vitrectomy than in the sham-treated group. Injection of Ranibizumab reduced bleeding during the cutting phase and during the removal of fibrovascular tissue, facilitated membrane dissection, and made surgery easier and safer. However, the rates of intraoperative bleeding, use of intraocular electrocoagulation, and need for silicone oil tamponade, as well as operation time, did not differ significantly between patients who received IVR 1 and 3 days before vitrectomy.
In agreement with results showing that bevacizumab was effective 24 h after administration [8], the present study confirmed that IVR 1 day before vitrectomy was effective for patients with PDR, as it significantly improved intraoperative conditions.
The concentrations of CTGF in aqueous humor 1 and 3 days after IVR did not differ significantly from the concentration before IVR. These time periods may have been too soon after IVR to detect significant changes in CTGF in aqueous humor. In addition, because VEGF can upregulate CTGF, a decrease in VEGF may down-regulate CTGF, especially during early stages of VEGF inhibition [29,30]. Moreover, because CTGF binds to the extracellular matrix in its natural state through the heparin binding domain, soluble CTGF that circulates freely in eye fluid may become degraded [31]. Finally, the number of eyes included in this study may have been too small to detect a significant difference in CTGF concentration.
CTGF is an indicator of intraocular fibrosis. Evidence has shown that CTGF plays an important role in promoting fibrosis and inducing wound healing in the body and eyes. Elevated levels of CTGF have been associated with proliferative vitreoretinopathy, choroidal neovascularization, and PDR. In addition, the ratio of CTGF to VEGF concentration is an important predictor of vascular-to-fibrotic transformation in PDR. Increasing CTGF and/or reducing VEGF concentrations can alter the balance between CTGF and VEGF, leading to an angiofibrotic switch that causes fibrosis. Anti-VEGF therapy can lead to the development of TRD [18]. We did not observe significant TRD progression 1 and 3 days after IVR, suggesting that an interval of 1 to 3 days between IVR and vitrectomy is a safe window to reduce intraoperative hemorrhage during membrane dissection, facilitating surgery without worsening pre-existing TRD or inducing new TRD. IVR 1-3 days before vitrectomy can also reduce hospital length of stay and patient economic burden when compared with IVR 3-7 days before PPV.
Because the ratio of CTGF to VEGF concentrations was found to be the strongest predictor of the degree of fibrosis [17,32], CTGF/log10(VEGF) ratios were compared before and after IVR. We found that the CTGF/log10(VEGF) ratio was significantly increased 3 days after IVR, mainly owing to a reduction in VEGF levels. This change may be related to intraocular fibrotic activity, suggesting that the risk of fibrosis occurrence or progression increases with time after IVR, and hence that vitrectomy soon after IVR would benefit patients. The present study found no correlation between CTGF and VEGF concentrations in aqueous humor. In contrast, a previous study found significant correlations between CTGF and log10(VEGF) concentrations in the vitreous humor of patients with PDR [16]. Additional studies are required to clarify this discrepancy.
This study took several steps to reduce potential inaccuracies. Plasma concentrations of VEGF and CTGF were measured before and after IVR and did not differ significantly. In addition, correlation analysis showed no association between plasma and aqueous levels of VEGF and CTGF in either IVR group. These results suggest that the elevated levels of VEGF were not associated with systemic disease but were produced locally by increased retinal secretion, followed by leakage into the anterior chamber.
This study had several limitations, including the small number of subjects. In addition, despite measuring the levels of CTGF and VEGF in aqueous humor, we could not establish a cause-and-effect relationship between them. Moreover, although we found that IVR improved the intraoperative conditions of these patients, we did not conduct a postoperative analysis. Additional studies are needed to determine the long-term outcomes of IVR combined with vitrectomy in patients with PDR.
Conclusions
This pilot study showed that preoperative IVR is an effective and safe strategy for the surgical treatment of severe PDR combined with vitreous hemorrhage. IVR administered 1 and 3 days before surgery can significantly reduce VEGF concentrations in aqueous humor and effectively improve intraoperative conditions without causing TRD. Prospective randomized studies with larger sample sizes are needed to further investigate the effect of IVR 1 or 3 days before vitrectomy for PDR.
Pure L-functions from algebraic geometry over finite fields
This is an expository paper which gives a simple arithmetic introduction to the conjectures of Weil and Dwork concerning zeta functions of algebraic varieties over finite fields. A number of further open questions are raised.
Introduction
The most basic question in number theory is to understand the integers. In particular, for a given integer $N$, we need to understand the absolute value $|N|$ for every absolute value $|\cdot|$ on the rational number field $\mathbb{Q}$. For the complex absolute value, this is to determine
$$|N| = \,?$$
For the p-adic absolute value with $p$ being a prime, this is to determine
$$|N|_p = p^{-a_p}, \qquad a_p = \mathrm{ord}_p(N) = \,?$$
This last theoretical question is practically the problem of factoring integers which has important applications.
More generally, suppose that we are given a sequence of interesting integers $\{N_1, N_2, \cdots\}$. In order to understand this sequence of integers, one naturally forms a suitable generating function $Z(\{N_i\}, T)$ which contains all information about the given sequence. The basic question is then to understand the analytic properties of the generating function $Z(\{N_i\}, T)$ with respect to each absolute value $|\cdot|$ of $\mathbb{Q}$. This includes the possible meromorphic continuation of $Z(\{N_i\}, T)$ and a suitable RH about its zeros and poles, for both the complex absolute value and the p-adic absolute value. If we have a family of such generating functions, then we would like to understand its analytic variation when the parameter varies.
The most interesting type of sequences arises from counting prime ideals in a finitely generated commutative ring or, equivalently, from counting rational points on an algebraic variety. In the case of counting prime numbers in the ring Z of integers, the natural generating function is the Riemann zeta function. This first example was studied by Riemann from the complex point of view and by Kummer-Kubota-Leopoldt from the p-adic point of view. It is the motivating example for much of the modern development of general Hasse-Weil zeta functions of algebraic varieties as well as their conjectural p-adic analogues.
Our interesting sequence of integers in this paper arises from counting the rational points, over the various finite extension fields, of an algebraic variety X defined over a finite field of characteristic p. The resulting generating function is the zeta function of X, which is the object of study in the celebrated Weil conjectures. The zeta function is a rational function, as proved by Dwork using p-adic methods. It satisfies a suitable complex and ℓ-adic RH, as proved by Deligne using ℓ-adic methods, where ℓ is a prime number different from p. The p-adic RH for the zeta function is more complicated and remains mysterious in general. The variation of the whole zeta function, when the variety moves through an algebraic family, leads to new interesting questions which are understood to a certain extent.
The zeta function is, however, not pure. That is, its zeros and poles have different absolute values. This is especially so from the p-adic point of view. Thus, the zeta function decomposes as a product of pure pieces defined in terms of the absolute values of the zeros and poles. A finer form of the RH is to understand this purity decomposition. A further question is to understand the variation of each pure piece of the zeta function when the variety moves through an algebraic family. This naturally leads to the construction of pure L-functions arising from algebraic geometry. Our fundamental question here is then to understand the analytic properties of such a pure L-function, notably its meromorphic continuation and RH. Since the zeta function has integer coefficients, there are three different types of absolute values (complex, ℓ-adic and p-adic) that we can choose to work with. These lead to different results and different theories.
In the case that the absolute value is the complex or the ℓ-adic absolute value, Deligne's main theorem shows that the pure L-function from algebraic geometry can be identified with a geometric L-function, that is, the L-function of a certain geometric constructible ℓ-adic étale sheaf. One can then apply the full machinery of ℓ-adic étale cohomology. In particular, the pure L-function from algebraic geometry is always rational by Grothendieck's rationality theorem. It satisfies a suitable complex and ℓ-adic RH by Deligne's theorem. The situation is quite algebraic in nature. All the expected finiteness properties hold. The main point is that one does not need to distinguish the subcategory of geometric ℓ-adic sheaves from the full category of ℓ-adic sheaves. Much of the relevant theory works for every ℓ-adic sheaf, whether it is geometric or not.
In the case that the absolute value is the p-adic absolute value, the situation is quite different and much more complicated. A pure L-function from algebraic geometry is no longer rational. However, Dwork conjectured that a pure L-function from algebraic geometry is p-adic meromorphic. The situation is quite transcendental in nature. Grothendieck's specialization theorem, Katz's isogeny theorem and Berthelot's finiteness theorem on relative crystalline cohomology show that, at least in nice cases, a pure L-function from algebraic geometry can be identified with the L-function of a certain geometric p-adic étale sheaf. The trouble is that the L-function of a general p-adic étale sheaf does not behave well. The usual trace formula does not hold. Even worse, the L-function is not meromorphic in general, contrary to what Katz conjectured. Thus, in order to prove Dwork's conjecture, one must distinguish the subcategory of geometric p-adic sheaves from the full category of p-adic sheaves. Our recent work shows that the geometric p-adic sheaves can be understood by introducing a new category with a growth condition. Roughly speaking, this new category consists of infinite nuclear complexes of infinite rank nuclear overconvergent F-isocrystals. The nuclear overconvergent condition ensures that the L-function is p-adic meromorphic. This establishes the meromorphic continuation of a pure L-function from algebraic geometry and thus proves Dwork's conjecture. The p-adic RH for such a pure L-function is extremely mysterious; a good understanding seems to require entirely new ideas.
We would like to point out that there is a more general and more difficult type of zeta function arising from counting algebraic cycles on an algebraic variety X defined over a finite field. These are called the zeta functions of algebraic cycles. They seem to be out of reach at this time. For zero cycles, they reduce to the zeta functions studied in the Weil conjectures, which are already quite interesting and fruitful. In general, they contain important arithmetic information about algebraic cycles and are related to Tate's conjecture. Under a mild finiteness condition on the effective cone of the Chow group, these zeta functions of algebraic cycles are conjectured to be p-adic meromorphic. If this conjecture is true, one could go on to understand their p-adic RH, their variation when the variety moves through an algebraic family, the purity decomposition, and the resulting pure L-functions of algebraic cycles. Any proof of this meromorphy conjecture would likely have a profound impact on the arithmetic and geometry of algebraic cycles.
Rationality of zeta functions
Let $\mathbb{F}_q$ be the finite field of q elements of characteristic p. Let X be an algebraic variety defined over $\mathbb{F}_q$, namely, a separated scheme of finite type over $\mathbb{F}_q$. For example, if X is affine, then X is defined by a system of polynomial equations
$$f_1(x_1, \ldots, x_n) = \cdots = f_m(x_1, \ldots, x_n) = 0, \qquad f_i \in \mathbb{F}_q[x_1, \ldots, x_n].$$
For an extension field $\mathbb{F}_{q^k}$ of degree k over $\mathbb{F}_q$, let $X(\mathbb{F}_{q^k})$ denote the set of $\mathbb{F}_{q^k}$-rational points on X. The zeta function of $X/\mathbb{F}_q$ is then defined to be the following formal power series
$$Z(X/\mathbb{F}_q, T) = \exp\Big(\sum_{k=1}^{\infty} \frac{\#X(\mathbb{F}_{q^k})}{k} T^k\Big) = \prod_{x \in X_0} \frac{1}{1 - T^{\deg(x)}},$$
where $X_0$ is the set of closed points on $X/\mathbb{F}_q$ and $\bar{\mathbb{F}}_q$ denotes a fixed algebraic closure of $\mathbb{F}_q$. Recall that a closed point on $X/\mathbb{F}_q$ is simply the orbit of an actual geometric point $x \in X(\bar{\mathbb{F}}_q)$ under the q-th power Frobenius map $\sigma: x \to x^q$, and $\deg(x)$ is the smallest positive integer k such that $\sigma^k(x) = x$. The integer $\#X(\mathbb{F}_{q^k})$ is simply the number of fixed points of the k-th power $\sigma^k$ acting on $X(\bar{\mathbb{F}}_q)$. We shall write $Z(X, T)$ for $Z(X/\mathbb{F}_q, T)$ when the ground field $\mathbb{F}_q$ is clear.
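As a quick illustration (a standard computation, added here for concreteness), the affine and projective lines give
$$Z(\mathbb{A}^1, T) = \exp\Big(\sum_{k \ge 1} \frac{q^k}{k} T^k\Big) = \frac{1}{1 - qT}, \qquad Z(\mathbb{P}^1, T) = \frac{1}{(1 - T)(1 - qT)},$$
since $\#\mathbb{A}^1(\mathbb{F}_{q^k}) = q^k$ and $\#\mathbb{P}^1(\mathbb{F}_{q^k}) = q^k + 1$.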
The zeta function is a generating function for counting rational points on the variety X over the various finite extension fields of $\mathbb{F}_q$. Although at each stage $\mathbb{F}_{q^k}$ only finitely many points are counted, the generating function counts all points of the variety X over the algebraic closure $\bar{\mathbb{F}}_q$. This explains why the zeta function should contain a great deal of geometric and arithmetic information about the variety X. Thus, our first fundamental question is

Question 2.1. Understand the zeta function Z(X, T).
A general principle in analytic arithmetic algebraic geometry is that all zeta functions and L-functions arising naturally from arithmetic algebraic geometry are analytically good functions. Here we are considering algebraic geometry over F q . Based on earlier results in various special cases such as diagonal hypersurfaces, curves and abelian varieties, Weil conjectured the following rationality result.
Theorem 2.2. The zeta function Z(X, T) is a rational function in T.
This theorem was first proved by Dwork [Dw1] using p-adic analysis. His basic idea is to establish a trace formula which expresses the zeta function as a finite alternating product of the Fredholm determinants of several nuclear operators acting on certain p-adic Banach spaces. Since the Fredholm determinant of a nuclear operator is entire [Se], it follows that the zeta function is p-adic meromorphic. One then concludes the proof of Theorem 2.2 with the following p-adic analogue of Borel's rationality criterion: a power series with integer coefficients which is p-adic meromorphic in a disk of radius $r_p$ and complex analytic in a disk of radius $r_\infty$ with $r_p r_\infty > 1$ must be a rational function.

Dwork's rationality proof pioneered his p-adic theory of zeta functions. Although his proof is not cohomological in nature, it can be viewed as a proof on the chain level in a suitable sense. Its refinement by introducing commuting differential operators is cohomological in nature. The relevant cohomology is of De Rham type, as shown by Katz [K1]. This motivated the later, more systematic development of various p-adic cohomology theories, such as the formal cohomology [MW] for smooth affine varieties, the liftable cohomology [Lu] for smooth projective liftable varieties, crystalline cohomology for smooth projective varieties, and rigid cohomology [Be] for arbitrary varieties. These p-adic cohomology theories in their full generality have not been well understood. The relevance of differential operators suggests the existence of a close relationship between zeta functions and differential equations. This is the subject of F-crystals [K3], which we will not discuss here.
Weil's conjectural approach to Theorem 2.2, generalizing his method for curves and abelian varieties, is to construct a suitable cohomology theory for varieties in characteristic p so that the Lefschetz fixed point theorem holds, which immediately implies the rationality of the zeta function. This led to the introduction and full development of the ℓ-adic étale cohomology theory by Grothendieck and others, where ℓ is a prime number different from p. This theory provides some of the most powerful tools in the study of zeta functions, especially from the complex point of view. For the exceptional prime p, the theory of p-adic étale cohomology has also been studied by Grothendieck's school, but it does not behave well for L-function purposes. Nevertheless, the p-adic étale cohomology contains important p-adic information about the zeta function. This turns out to be closely related to Dwork's conjecture. We shall not, however, discuss this cohomological point of view here, owing to the restricted and simple nature of this survey.
The zeta function is a power series with integer coefficients. However, all proofs of Theorem 2.2 use non-archimedean methods, either ℓ-adic or p-adic. No direct "motivic" proof over Q or C is known. Although the classical method of Gauss sums and Jacobi sums for the elementary diagonal case can be viewed as a method over C, it seems hopeless to extend this method to the general case. In the case of curves, there is a direct approach over Z using the Riemann-Roch theorem. It would be of great interest to extend this Riemann-Roch approach to the general case. Such a direct approach would make it possible to attack Tate's conjecture [Ta] relating orders of zeros to ranks of algebraic cycles, which so far has been inaccessible. Such a Riemann-Roch approach should also be useful in understanding the harder zeta functions of algebraic cycles introduced in [W1].
The first application of Theorem 2.2 is the existence of a formula for the number of rational points on X. Since the zeta function is rational, there are finitely many algebraic integers α_i and β_j such that

Z(X, T) = ∏_j (1 − β_j T) / ∏_i (1 − α_i T).
Taking the logarithmic derivative, one deduces the following well-structured formula for every positive integer k:

#X(F_{q^k}) = ∑_i α_i^k − ∑_j β_j^k.

In particular, this provides a fast algorithm to compute the number of rational points on the variety X over a large finite field F_{q^k}, provided X is defined over a small finite field F_q. The theory can be improved to get a fast algorithm even for X defined over a large finite field F_q as long as the characteristic p is small (q can be large); see [W5] for a perspective on this algorithmic subject, which has important practical applications.
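As a concrete illustration of this formula, here is a minimal Python sketch; the curve, the prime and the trace value are illustrative choices, not taken from the text. For an elliptic curve E over F_q with trace of Frobenius a, the zeta function is (1 − aT + qT²)/((1 − T)(1 − qT)), and point counts over all extensions follow from the two reciprocal roots of the numerator.

```python
# A minimal sketch, assuming the elliptic curve y^2 = x^3 + x over F_5,
# which has 4 rational points, hence trace a = q + 1 - #E(F_q) = 2.
# Z(E, T) = (1 - aT + qT^2) / ((1 - T)(1 - qT)), so the formula above reads
# #E(F_{q^k}) = 1 + q^k - (pi^k + pibar^k) with pi, pibar the numerator roots.
import cmath

q, a = 5, 2
pi = (a + cmath.sqrt(a * a - 4 * q)) / 2      # reciprocal zero of weight 1
pi_bar = (a - cmath.sqrt(a * a - 4 * q)) / 2  # its complex conjugate

def point_count(k: int) -> int:
    """#E(F_{q^k}) via the alternating power-sum formula."""
    return round((1 + q ** k - (pi ** k + pi_bar ** k)).real)

for k in range(1, 5):
    print(k, point_count(k))   # 4, 32, 148, 640
```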
Once we know that the zeta function is a rational function, we can move on to the next fundamental question. What can we say about its zeros and poles? Ideally, we would like to know how many zeros and poles there are with a given absolute value. This is the RH for the zeta function. Since the reciprocal zeros α_i and β_j are algebraic integers, there are several different types of absolute values (complex, ℓ-adic and p-adic) that we can consider. Accordingly, we can talk about the complex RH, the ℓ-adic RH and the p-adic RH for the zeta function Z(X, T). These questions and their family versions are discussed in the following sections.
Purity decomposition and RH
The zeta function has rational coefficients. In order to understand its analytic properties, we have to choose an absolute value of the rational number field Q. For this purpose, we let | · | be a fixed absolute value on Q. Let Ω be the smallest extension field of Q such that Ω is both algebraically closed and topologically complete with respect to | · |. If | · | is the complex absolute value, Ω is the field C of complex numbers. If | · | is the p-adic absolute value | · |_p for some prime number p, Ω is the field C_p of p-adic numbers, which is the completion of an algebraic closure of the p-adic rational numbers Q_p, where the p-adic absolute value is normalized by |p|_p = 1/p. For a fixed prime power q of p, we define the slope function on Ω by

|α| = |q|^{s(α)}.

This definition depends on q, which is our base of the slope function. For α ∈ Ω, we can write

α = u(α) q^{s(α)},

where u(α) is a number with absolute value 1. Note that in the complex absolute value case, the weight function w(α) as defined by Deligne is twice our slope function s(α). In the p-adic case, the slope function is simply the order function s(α) = ord_q(α).
For a given polynomial

P(T) = ∏_i (1 − α_i T) ∈ 1 + T Ω[T]

and a real number s, we define the slope s part of P(T) by

P_s(T) = ∏_{s(α_i) = s} (1 − α_i T).

This immediately yields the purity (or slope) decomposition of P(T):

P(T) = ∏_s P_s(T).

This is a finite product since P(T) is a polynomial. This definition easily extends to rational functions in Ω(T) as well as meromorphic functions in Ω((T)) by the Weierstrass factorization theorem. In the latter case, the purity decomposition is an infinite product in general. Applying the purity decomposition to the zeta function Z(X, T), our notation becomes

Z(X, T) = ∏_s Z_s(X, T),

where Z_s(X, T) is the slope s part of Z(X, T). This is called the purity (or slope) decomposition of the zeta function Z(X, T). Our next fundamental question is then to understand this purity decomposition. That is,

Question 3.1. Understand the pure slope s part Z_s(X, T) of the zeta function for every slope s.

The first step is to understand the degree of the rational function Z_s(X, T) for each s. To be precise, we recall that the degree of a rational function is the degree of the numerator minus the degree of the denominator. Similarly, the total degree of a rational function is the degree of the numerator plus the degree of the denominator.
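To make the slope decomposition concrete, here is a minimal Python sketch for the p-adic absolute value; the polynomial's reciprocal roots are hypothetical integers chosen for illustration, so ord_q reduces to counting powers of q.

```python
# Group the reciprocal roots of P(T) = prod_i (1 - r_i T) by their q-adic
# slope s(r) = ord_q(r); the slope s part P_s(T) collects the roots of slope s.
from collections import defaultdict

q = 5

def ord_q(n: int) -> int:
    """q-adic valuation of a nonzero integer."""
    v = 0
    while n % q == 0:
        n //= q
        v += 1
    return v

roots = [1, 2, 5, 10, 25]          # hypothetical reciprocal roots
by_slope = defaultdict(list)
for r in roots:
    by_slope[ord_q(r)].append(r)

# P(T) = prod_s P_s(T), a finite product over the occurring slopes.
for s in sorted(by_slope):
    print(f"slope {s}: P_s has reciprocal roots {by_slope[s]}")
```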
Definition 3.2. Let d(X) (resp. D(X)) denote the degree (resp. the total degree) of the zeta function Z(X, T). Similarly, for each real number s, let d_s(X) (resp. D_s(X)) denote the degree (resp. the total degree) of the slope s part Z_s(X, T) of the zeta function Z(X, T).
It is clear that we have the purity decomposition for the degrees d(X) and D(X):

d(X) = ∑_s d_s(X),  D(X) = ∑_s D_s(X).

The RH for the zeta function is to determine the exact slopes of the zeros and poles. It is easy to see that the number of reciprocal zeros of slope s is given by (D_s(X) + d_s(X))/2. Similarly, the number of reciprocal poles of slope s is given by (D_s(X) − d_s(X))/2. Thus, the following weaker but more precise form of Question 3.1 is already the RH for Z(X, T).
Question 3.3. Understand the pure degree d_s(X) and the pure total degree D_s(X) for all s.
Both the degree d(X) and the total degree D(X) of the whole zeta function Z(X, T) can be effectively bounded using p-adic methods, as shown by Bombieri [Bo]. This is because Dwork's p-adic theory is constructive. Thus, the pure degree d_s(X) and the pure total degree D_s(X) are also effectively bounded for all s. The integers d_s(X) and D_s(X) depend both on the slope s and on the variety X. Of course, they also depend on the choice of the absolute value | · |, which was built into the definition of the slope decomposition. In this section, we consider the case where X is fixed. In the next section, we consider how d_s(X) and D_s(X) vary when X varies.
Deligne's main theorem [De2] on the complex and the ℓ-adic RH can be stated in our notation as follows.
Theorem 3.4 (complex case). Let | · | be the complex absolute value. Let n be the dimension of the variety X. If s ∉ {0, 1/2, 1, 3/2, …, n}, then Z_s(X, T) = 1.

This shows that for the complex absolute value, the non-trivial slopes are rational numbers in the interval [0, n] with denominators at most 2. Thus, for the complex RH, it remains to determine the 2n + 1 values d_s(X) and D_s(X), where s varies in the above exceptional set of 2n + 1 numbers. These remaining values are in general difficult to determine, although some extremal cases such as s = n, n − 1/2 can be done. They depend on the detailed geometry of the variety X. However, in nice situations, they can be determined by the Betti numbers. This includes the smooth projective (more generally, smooth proper) case as conjectured by Weil and first proved by Deligne [De1] [De2] using ℓ-adic cohomology. A similar proof was later given by Faltings [Fa] using crystalline cohomology.
For the ℓ-adic RH, the answer is much simpler and very clean.
Theorem 3.5 (ℓ-adic case). Let | · | be the ℓ-adic absolute value for some prime ℓ ≠ p. If s ≠ 0, then Z_s(X, T) = 1. In particular,

Z(X, T) = Z_0(X, T).

That is, all zeros and poles of the zeta function are ℓ-adic units.
Since the ℓ-adic case is always pure and thus gives no interesting decomposition, from now on, we shall mostly restrict our attention to the complex case and the p-adic case.
For the p-adic RH, unfortunately, no clean general answer is possible, even in the smooth projective case, even in the case of smooth projective curves. However, one has the following weak but simple general result, which is a consequence of the rationality of the zeta function.
Theorem 3.6 (p-adic case). Let | · | be the p-adic absolute value. There is an effectively computable positive integer N such that if s ∉ {0, 1/N, 2/N, …, n}, then Z_s(X, T) = 1.

This result is far weaker than Theorem 3.4, because the denominator N here is not bounded by 2. In fact, the denominator N cannot be bounded by any finite absolute constant. It depends very much on the variety X and the prime number p, not just on the geometry of X. This explains why the p-adic RH for the zeta function is very complicated. It can be determined in a few special cases, such as the elementary diagonal hypersurface case, where one can use the Stickelberger theorem for Gauss sums. In nice situations such as the smooth projective case, a good lower bound for the Newton polygon (which determines the p-adic RH) is given by the Hodge polygon (constructed using the Hodge numbers of a lifting of X), as conjectured by Katz and proved by Mazur [M2]. Strictly speaking, both polygons are defined for each cohomological dimension. For our restricted purpose, one could either fix a cohomological dimension or take the collection over all cohomological dimensions. If the two polygons coincide, the variety X is called ordinary.
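For polynomials, the slopes in question can be read off from the q-adic Newton polygon. The following Python sketch, with an illustrative polynomial over Q_5, computes the lower convex hull of the points (i, ord_q(c_i)); the slopes of its segments are the slopes of the reciprocal roots, each occurring with multiplicity equal to the horizontal length of its segment.

```python
from fractions import Fraction

def ord_q(n: int, q: int) -> int:
    v = 0
    while n % q == 0:
        n //= q
        v += 1
    return v

def newton_polygon_slopes(coeffs, q):
    """coeffs[i] = c_i of P(T) = sum c_i T^i, c_0 != 0; returns (slope, length) pairs."""
    pts = [(i, ord_q(c, q)) for i, c in enumerate(coeffs) if c != 0]
    hull = [pts[0]]
    for x, y in pts[1:]:
        # lower convexity: drop the middle point unless the slopes strictly increase
        while len(hull) >= 2 and (
            (hull[-1][1] - hull[-2][1]) * (x - hull[-1][0])
            >= (y - hull[-1][1]) * (hull[-1][0] - hull[-2][0])
        ):
            hull.pop()
        hull.append((x, y))
    return [(Fraction(b[1] - a[1], b[0] - a[0]), b[0] - a[0])
            for a, b in zip(hull, hull[1:])]

# P(T) = (1 - T)(1 - 5T)(1 - 25T) = 1 - 31T + 155T^2 - 125T^3 over Q_5:
print(newton_polygon_slopes([1, -31, 155, -125], 5))  # slopes 0, 1, 2
```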
For a given smooth projective variety X, Mazur's theorem provides a geometric lower bound for the arithmetic Newton polygon, but it does not tell whether X is ordinary or how far X is from being ordinary. There is no known clean recipe (conjectural or not) to determine the Newton polygon of X. A preliminary step might be to look at the size of the endomorphism group of X. The larger the endomorphism group of X is, the more relations there would be among the zeros and poles, and thus the less likely X would be ordinary, as one observes in the diagonal case and the supersingular elliptic curve case. It would be interesting to make such heuristic arguments more precise.
Another interesting question is to consider the limiting behavior of the Newton polygon as p varies. For instance, let X be a smooth projective variety defined over Q and let X_p be the reduction of X mod p for a large prime p. Let NP(X_p) denote the Newton polygon of X_p and let HP(X) denote the Hodge polygon of X. As p goes to infinity, the Newton polygon NP(X_p) would not have a limit in general. But it always has its lower limit. As this lower limit has nothing to do with any particular prime p, it should be a geometric invariant of X. Thus, it is tempting to make

Conjecture 3.7. Let X be a smooth projective variety over Q. Then,

lim inf_{p→∞} NP(X_p) = HP(X).
Since there are only finitely many possibilities for NP(X_p) for a fixed smooth projective X/Q, the above conjecture is equivalent to saying that there are infinitely many ordinary primes p (probably of positive density) for a given smooth projective X/Q. If the endomorphism group of X is sufficiently small, one may further hope that the set of ordinary primes p for X often has density 1. A somewhat related conjecture is given by Serre [Oo] in the case of abelian varieties over number fields. Similarly, one could ask

lim sup_{p→∞} NP(X_p) = ?
This sup limit should again be a geometric invariant of X. So far, we have been concerned with the degree of the pure slope s part Z_s(X, T). A finer question is to understand the rational function Z_s(X, T) itself for non-trivial slope s. For instance, one could ask about the possible rationality of the coefficients of Z_s(X, T) when Z_s(X, T) is written as the quotient of two relatively prime polynomials with constant term 1. The answer depends on the absolute value | · | we choose. If | · | is the ℓ-adic absolute value, then Theorem 3.5 shows that Z_s(X, T) trivially has integer coefficients. The same result holds in the complex absolute value case. This follows from Deligne's main theorem and Galois theory.
Theorem 3.8. Let | · | be the complex absolute value. Then, the pure slope s part Z_s(X, T) has integer coefficients.
In the p-adic case, it is no longer true that the coefficients of Z_s(X, T) are integers, because the purity decomposition becomes more substantial, even in nice situations. One has the following p-adic rationality result, which is a consequence of the rationality of Z(X, T).
Theorem 3.9. Let | · | be the p-adic absolute value. Then, the coefficients of the pure slope s part Z_s(X, T) are p-adic integers in Z_p, which are also algebraic integers.
Of course, the reciprocal roots of Z_s(X, T) will not be in Z_p in general.
Variation of the pure degrees
In the previous section, we discussed the pure degrees d_s(X) and D_s(X) for the zeta function of a single variety X. In this section, we turn to discussing how the pure degrees d_s(X) and D_s(X) vary when X moves through an algebraic family.
Let f : Y → X be a family of algebraic varieties over F_q parametrized by X. For each geometric point x ∈ X(F_{q^{deg(x)}}), the fibre Y_x = f^{−1}(x) is an algebraic variety defined over F_{q^{deg(x)}} and thus we have the purity decomposition

Z(Y_x, T) = ∏_s Z_s(Y_x, T),

where the slope function is defined with respect to q^{deg(x)}, since F_{q^{deg(x)}} is the base field of Y_x. For each rational number s, let d_s(Y_x) (resp. D_s(Y_x)) denote the degree (resp. the total degree) of the slope s part Z_s(Y_x, T) of the zeta function Z(Y_x, T), where we stress again that the slope function is defined with respect to q^{deg(x)}. The general results in Section 3 give information about the nature of the numbers d_s(Y_x) and D_s(Y_x) for a fixed x. We would like to understand how these numbers d_s(Y_x) and D_s(Y_x) vary when the geometric point x varies. For a given integer m, we define

X(d_s, m) = {x ∈ X : d_s(Y_x) = m},  X(D_s, m) = {x ∈ X : D_s(Y_x) = m}.

These are subsets of X. We would like to understand the possible algebraic and geometric structure of these sets.
The theory of ℓ-adic cohomology and Deligne's main theorem [De2] imply

Theorem 4.1. Let f : Y → X be a family of algebraic varieties over F_q. Let | · | be the complex or the ℓ-adic absolute value. Then for every integer m and every slope s, the set X(d_s, m) is a constructible subset of X.
The same general statement for the harder X(D_s, m) does not seem to follow from the existing results, because the nature of cancellation is not well understood in general. Thus, we have

Question 4.2. Let f : Y → X be a family of algebraic varieties over F_q. Let | · | be the complex or the ℓ-adic absolute value. Is it true that for every integer m and every slope s, the set X(D_s, m) is a constructible subset of X?
This is known to be true if f is smooth and proper because there is then no cancellation of zeros and poles by Deligne's theorem. We hope and are slightly inclined to believe that Question 4.2 has a positive answer, at least after replacing the constructible notion by the more general definable notion of logic.
In the p-adic case, one can ask a similar question.
Conjecture 4.3. Let f : Y → X be a family of algebraic varieties over F_q. Let | · | be the p-adic absolute value. Then for every integer m and every slope s, the set X(d_s, m) is a constructible subset of X.
Question 4.4. Let f : Y → X be a family of algebraic varieties over F_q. Let | · | be the p-adic absolute value. Is it true that for every integer m and every slope s, the set X(D_s, m) is a constructible (or more generally definable) subset of X?

Conjecture 4.3 and Question 4.4 are known to have a positive answer in nice situations, such as when f : Y → X is proper and smooth over F_q with a proper smooth lifting to characteristic zero. In such a case, Conjecture 4.3 is a consequence of Grothendieck's specialization theorem [K4] and Berthelot's finiteness theorem [Be] on the relative crystalline cohomology or the relative rigid cohomology in the proper smooth liftable case. The stronger Question 4.4 in such a nice situation also needs Deligne's theorem to avoid the cancellation problem.
The general case of Conjecture 4.3 follows from the conjectural finiteness of the relative rigid cohomology. However, Conjecture 4.3 is already known to be true for s = 0 by the congruence formula of Deligne-Katz [De3] [K2] for zeta functions. The general case of Question 4.4 is apparently more difficult. It is not known to be true even for s = 0.
In some nice cases, such as the universal family of hypersurfaces (or more generally complete intersections), the generic Newton polygon coincides with the Hodge polygon, as conjectured by Mazur [M1]. This can be proved in two ways. One approach is to use hyperplane sections to reduce the question to the case of a generic plane curve, as worked out by Illusie [Il] using crystalline cohomology and some ideas of Deligne. Another more flexible approach, introduced by the author [W2], is to establish suitable local-to-global decomposition theorems to reduce the question to the diagonal case where the Stickelberger theorem applies; see [W7] for an exposition of this method. This method has other applications, such as the more general Adolphson-Sperber conjecture [AS] for the generic Newton polygon of exponential sums. Hence, in such a generic ordinary case, the generic values of d_s(X) and D_s(X) are determined by the Hodge numbers of X. Of course, for a given smooth projective hypersurface, there is still no simple recipe to determine if X is ordinary.
It seems very difficult and complicated to have a complete understanding of the stratification of the universal family of hypersurfaces by Newton polygons. This is so even in the case of curves. However, for the more manageable family of abelian varieties, the stratification question by Newton polygons is reasonably well understood by the work of de Jong and Oort [DO].
Variation of the zeta function
Let f : Y → X be a family of algebraic varieties over F_q parametrized by X. In this section, we consider how the zeta function Z(Y_x, T) varies when the parameter x varies. A standard procedure is to understand all the higher moments of the zeros and poles of the family of rational functions Z(Y_x, T). Write

Z(Y_x, T) = ∏_j (1 − β_j(x) T) / ∏_i (1 − α_i(x) T),

where the α_i(x) and the β_j(x) are algebraic integers parametrized by x. For a positive integer k, the k-th moment of the zeros and poles of Z(Y_x, T) for x ∈ X(F_q) is given by the sum

S_k(f) = ∑_{x ∈ X(F_q)} ( ∑_i α_i(x)^k − ∑_j β_j(x)^k ).

More generally, the k-th moment over the d-th extension field F_{q^d} of F_q is defined by

S_{k,d}(f) = ∑_{x ∈ X(F_{q^d})} ( ∑_i α_i(x)^k − ∑_j β_j(x)^k ).

This is an integer by Galois theory. The variation question is then to understand the k-th moment sequence S_{k,d}(f) (d = 1, 2, ⋯) for every k. In terms of generating functions, we need to understand the k-th power L-function of the family f defined by

L^{[k]}(f, T) = ∏_{x ∈ |X|} ∏_i (1 − α_i(x)^k T^{deg(x)})^{−1} ∏_j (1 − β_j(x)^k T^{deg(x)}).

In the last two equations, the degree of x is defined over F_q, but the ground field for the fibre Y_x has been extended from F_{q^{deg(x)}} to its k-th extension field F_{q^{k deg(x)}}. Explicitly,

L^{[k]}(f, T) = exp( ∑_{d=1}^∞ S_{k,d}(f) T^d / d ).

Theorem 5.1. For every positive integer k, the k-th power L-function L^{[k]}(f, T) is a rational function in T.

This result is a special case of a more general rationality result for certain partial zeta functions studied in [W11]. It can be proved using either Dwork's p-adic method or Grothendieck's ℓ-adic method. The only new ingredient is to use tensor operations and Newton's formula expressing the k-th power symmetric functions in terms of elementary symmetric functions. A theorem of Faltings [W11] shows that the partial zeta function is always nearly rational.
By Theorem 5.1, for each k, there are finitely many algebraic integers γ_i(k) and δ_j(k) such that

L^{[k]}(f, T) = ∏_j (1 − δ_j(k) T) / ∏_i (1 − γ_i(k) T).
Thus, for every positive integer k, we have the formula for the k-th moment sequence:

S_{k,d}(f) = ∑_i γ_i(k)^d − ∑_j δ_j(k)^d.

Such formulas imply the existence of a suitable equi-distribution theorem for the zeros and poles of the family of rational functions Z(Y_x, T) parametrized by x ∈ X. The simple approach we take here is related to but different from the approach of Deligne-Katz [K5] via monodromy representations.
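As a concrete illustration of these moment sums, here is a minimal Python sketch for a toy family; the family y² = x³ + x + t over F_7, the restriction to d = 1 and the exclusion of singular fibres are all illustrative choices, not taken from the text. The fibre traces a_t are found by brute force, and S_{k,1}(f) is assembled from the power sums π^k + π̄^k via the recurrence they satisfy.

```python
# For each smooth fibre E_t : y^2 = x^3 + x + t over F_p, the zeta function is
# (1 - a_t T + p T^2) / ((1 - T)(1 - p T)); the k-th moment contribution is
# sum(alpha^k) - sum(beta^k) = 1 + p^k - (pi_t^k + pibar_t^k).
p = 7

def legendre(a: int) -> int:
    a %= p
    if a == 0:
        return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def fibre_trace(t: int):
    """Trace a_t with #E_t(F_p) = p + 1 - a_t, or None for a singular fibre."""
    if (4 + 27 * t * t) % p == 0:            # discriminant of x^3 + x + t
        return None
    return -sum(legendre(x ** 3 + x + t) for x in range(p))

def power_sum(a: int, k: int) -> int:
    """pi^k + pibar^k for the roots pi, pibar of X^2 - aX + p (k >= 1)."""
    s_prev, s_cur = 2, a
    for _ in range(k - 1):
        s_prev, s_cur = s_cur, a * s_cur - p * s_prev
    return s_cur

traces = [a for a in (fibre_trace(t) for t in range(p)) if a is not None]
for k in range(1, 4):
    S_k1 = sum(1 + p ** k - power_sum(a, k) for a in traces)
    print(f"S_{{{k},1}} =", S_k1)
```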
To be more precise in the application of the equi-distribution problem, one needs to know information about the RH for the k-th power L-function L^{[k]}(f, T). Applying the purity decomposition to L^{[k]}(f, T), we can write

L^{[k]}(f, T) = ∏_s L_s^{[k]}(f, T),

where L_s^{[k]}(f, T) is the slope s part of L^{[k]}(f, T) and the slope is defined with respect to the base q^k. Recall that the purity decomposition always depends on our choice of the absolute value | · | on Q.
Although the present situation is more complicated than the zeta function case treated in Section 3, it is essentially of the same nature. That is, a similar RH holds for L^{[k]}(f, T) in the complex and ℓ-adic cases. One simply keeps track of the simple proof of Theorem 5.1 and applies Deligne's theorem. In particular, the zeros and poles of L^{[k]}(f, T) are always ℓ-adic units. For the complex absolute value, we make it explicit as follows; see [W11] for a more general result on partial zeta functions.

Theorem 5.3 (complex case). Let | · | be the complex absolute value. If s ∉ {0, 1/2, 1, 3/2, …, km + n}, then L_s^{[k]}(f, T) = 1, where n is the dimension of X and m is the relative dimension of f.
This result shows that it suffices to understand d_s(f, k) and D_s(f, k) for the above 2(km + n) + 1 exceptional values of the slope s. These are in general quite complicated to determine except in some extremal cases.
As for the p-adic RH, one only has the following very weak general result.
Theorem 5.4 (p-adic case). Let | · | be the p-adic absolute value. There is an effectively computable positive integer N(f, k) such that if s ∉ {0, 1/N(f, k), 2/N(f, k), …, km + n}, then L_s^{[k]}(f, T) = 1, where n is the dimension of X and m is the relative dimension of f.
Again, this result for the p-adic case is far weaker than Theorem 5.3 for the complex case. The problem is that we know almost nothing about the denominator N(f, k). An important problem is to estimate the size of N(f, k). This is related to the size of the degree d_s(f, k). The extra integer k provides a new dimension of variation. That is, how the degrees d(f, k), D(f, k), d_s(f, k) and D_s(f, k) vary with k. Although the size of the degrees d(f, k) and D_s(f, k) can be effectively bounded, no explicit general bounds are available in the literature. This should be a realistic, interesting problem to study.
The first example of great arithmetic interest is the universal family f of elliptic curves parametrized by a modular curve or an Igusa curve. In this case, the sequence of integers d(f, k) and the sequence of dimensions of spaces of modular forms of weight k + 2 mutually determine each other. The harder sequence D(f, k) is unknown because of possible cancellation of zeros and poles. For the complex absolute value, the easier degree sequence d_s(f, k) is understood by Deligne's theorem on the Ramanujan-Petersson conjecture [De4], but again the harder total degree sequence D_s(f, k) is unknown. For the p-adic absolute value, the easier degree sequence d_s(f, k) is already quite mysterious, and one conjectures [W6] that N(f, k) is bounded independent of k, or even stronger, that d_s(f, k) is bounded independent of k and s. This is the p-adic analogue of the Ramanujan-Petersson conjecture. Very little is known about it. The variation of d_s(f, k) with k is closely related to the Gouvêa-Mazur conjecture [GM]; see Coleman [Co] and [W4] for positive results in this direction. The variation of d_s(f, k) with k is also closely related to the geometry of the eigencurve as studied by Coleman-Mazur [CM].
Variation of the pure part of the zeta function
Let f : Y → X be a family of algebraic varieties over F_q parametrized by X. In this section, for each fixed slope s, we consider how the pure part Z_s(Y_x, T) varies when the parameter x varies. This is the full aspect of Question 3.1. This question is deeper than just how the degree d_s(Y_x) of Z_s(Y_x, T) varies with x. It is also deeper than how the total zeta function Z(Y_x, T) varies with x. As in Section 5, the standard procedure is to understand the k-th moment sequence associated to the reciprocal zeros and reciprocal poles of Z_s(Y_x, T). Equivalently, we need to understand the k-th power L-function associated to the k-th moment sequence. Write

Z_s(Y_x, T) = ∏_j (1 − β_j(s, x) T) / ∏_i (1 − α_i(s, x) T).
The associated k-th moment over the d-th extension field F_{q^d} of F_q is defined by

S_{k,d}(s, f) = ∑_{x ∈ X(F_{q^d})} ( ∑_i α_i(s, x)^k − ∑_j β_j(s, x)^k ).

This is an integer in the complex absolute value case by a generalization of Theorem 3.8. It is a p-adic integer in Z_p in the p-adic case by a generalization of Theorem 3.9. The variation question is then to understand the k-th moment slope s sequence S_{k,d}(s, f) (d = 1, 2, ⋯) for every k. In terms of generating functions, we need to understand the following pure L-function from algebraic geometry.
Definition 6.1. Let k be a positive integer. For a given slope s ∈ Q, the k-th power slope s L-function L^{[k]}(s, f, T) attached to the family f is defined to be

L^{[k]}(s, f, T) = ∏_{x ∈ |X|} ∏_{s(α_i(x)) = s} (1 − α_i(x)^k T^{deg(x)})^{−1} ∏_{s(β_j(x)) = s} (1 − β_j(x)^k T^{deg(x)}),

where the slope function is defined with respect to the base q^k (note that we have already replaced T by T^{deg(x)} in the Euler factors).
In the last two equations, the degree of x is defined over F_q, but the ground field for the fibre Y_x has been extended from F_{q^{deg(x)}} to its k-th extension field F_{q^{k deg(x)}}. Explicitly,

L^{[k]}(s, f, T) = exp( ∑_{d=1}^∞ S_{k,d}(s, f) T^d / d ).

The k-th power slope s L-function arises in a natural way from arithmetic and geometry. Thus, by the general principle, we expect it to be an analytically good function. For the ℓ-adic absolute value, the situation reduces to the k-th power L-function of Section 5, since the purity decomposition is trivial. For the complex absolute value, the situation is deeper than Theorem 5.1. One needs the full strength of Deligne's main theorem [De2], which says that the higher direct image sheaf is mixed. This together with Grothendieck's rationality theorem and Newton's formula implies the following result, which can be viewed as the archimedean analogue of Dwork's conjecture.
Theorem 6.2 (complex case). Let | · | be the complex absolute value. For every positive integer k and every rational number s, the k-th power slope s L-function L^{[k]}(s, f, T) is a rational function.
In the case that the absolute value is the p-adic absolute value, the situation is much more complicated. For slope s = 0, the pure L-function L^{[k]}(0, f, T) can be obtained as a suitable limit of the easier mixed L-functions treated in the previous section. To see this, one observes that the total degree of the zeta function of each fibre Y_x is uniformly bounded by the results in [Bo]. This implies that for certain choices of a positive integer M depending on f, we have the following congruence for all positive integers m:

L^{[k + p^{m+1} M]}(f, T) ≡ L^{[k + p^m M]}(f, T) (mod p^{m+1}).

This p-adic continuity relation shows that the p-adic limit

lim_{m→∞} L^{[k + p^m M]}(f, T)

exists as a formal p-adic power series. In fact, one checks easily from the definition of the pure L-function that the above p-adic limit is precisely the pure slope 0 L-function:

L^{[k]}(0, f, T) = lim_{m→∞} L^{[k + p^m M]}(f, T).

For slope s > 0, we do not know any p-adic limiting formula for the pure slope s L-function L^{[k]}(s, f, T) in terms of the mixed L-functions L^{[k]}(f, T). For slope s = 0, although each mixed L-function L^{[k]}(f, T) is rational by Theorem 5.1, we cannot conclude that its limit L^{[k]}(0, f, T) would be rational or even meromorphic. There are geometric examples (the universal family of elliptic curves, for instance) which show that the k-th power slope 0 L-function L^{[k]}(0, f, T) is not rational in the p-adic case. Dwork, however, conjectured [Dw5] that the pure slope L-function is always p-adic meromorphic.
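The mechanism behind this limit can be seen numerically for a single Frobenius eigenvalue. Here is a minimal Python sketch, with all numbers illustrative: p = 5, a unit u = 2, a non-unit v = 10, and M = p − 1 so that u^M is a 1-unit.

```python
# Raising to the exponents k + p^m * M: the unit's powers stabilize p-adically
# (here they converge to u^k = 8, since 2^M = 16 is a 1-unit whose p^m-th
# powers tend to 1), while the non-unit's powers tend to 0.
p, k, M = 5, 3, 4          # M = p - 1, so u^M ≡ 1 (mod p)
modulus = p ** 8           # work to 8 digits of 5-adic precision
u, v = 2, 10               # 2 is a 5-adic unit; 10 = 2 * 5 is not

for m in range(1, 6):
    e = k + p ** m * M
    print(m, pow(u, e, modulus), pow(v, e, modulus))
```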
Conjecture 6.3 (Dwork). Let | · | be the p-adic absolute value. For every positive integer k and every rational number s, the k-th power slope s L-function L^{[k]}(s, f, T) is a p-adic meromorphic function.
Dwork showed that Conjecture 6.3 is true for several examples. This includes the universal family of elliptic curves and a certain family of K3-surfaces. His idea is to reduce the problem to the classical overconvergent setting by establishing the existence of the so-called excellent lifting. The excellent lifting, however, rarely exists, and thus one cannot hope that it would work in general. Another approach, studied in [DS] and [W3], is to try to relax the overconvergent condition by using the weaker c log-convergent condition. This approach pushes Dwork's trace formula to its full potential, but again it cannot work in general by the counter-example in [W3]. The general case of Dwork's conjecture is being proved in our recent series of papers [W8-10] by introducing an entirely new method building on the fundamental Dwork-Monsky trace formula [Mo]. At this point, the abstract version of Dwork's conjecture for F-crystals and σ-modules, as proved in [W8-10], implies Conjecture 6.3 whenever the finiteness of the relative rigid cohomology with compact support is known, such as in the smooth proper liftable case. Our strategy to handle singular and open families is to directly work with the infinite rank version. This requires additional work which is in progress.
Thus, the k-th power slope s L-function L^{[k]}(s, f, T) is always a good function. An immediate application is the existence of a formula for the k-th moment slope s sequence S_{k,d}(s, f). For each k and each s, we can write

L^{[k]}(s, f, T) = ∏_j (1 − δ_j(s, k) T) / ∏_i (1 − γ_i(s, k) T),

where the product is finite in the complex absolute value case and infinite in the p-adic case. In the p-adic case, the γ_i(s, k) and δ_j(s, k) approach zero. The formula for the k-th moment slope s sequence is

S_{k,d}(s, f) = ∑_i γ_i(s, k)^d − ∑_j δ_j(s, k)^d.

Such formulas imply the existence of a suitable equi-distribution theorem for the zeros and poles of the family of pure rational functions Z_s(Y_x, T) parametrized by x ∈ X. In the complex absolute value case, this is related to but different from the approach of Deligne-Katz via monodromy representations. In the p-adic absolute value case, such a result is completely new.
Once we know that L^{[k]}(s, f, T) is a good function, we can then ask for the various RHs for L^{[k]}(s, f, T). In the case of Theorem 6.2, there are three types of RHs (the complex, the ℓ-adic and the p-adic). The situation is similar to Section 5. The L-function in Theorem 6.2 has integer coefficients in view of Theorem 3.8.
In the case of Conjecture 6.3, one can only ask for the p-adic RH, since the zeros and poles are p-adic numbers. This p-adic RH for the L-function in Conjecture 6.3 is apparently more difficult than the already mysterious p-adic RH for the L-function in Theorem 6.2, because there are now infinitely many zeros and poles in the case of Conjecture 6.3. In fact, we have no idea what the general conjectural formulation for the p-adic RH should be. It seems unreasonable to expect a finite upper bound for the denominators of the slopes of the zeros and poles of L^{[k]}(s, f, T), although such a finite bound is conjectured to be true [W6] in the simplest elliptic family case.
In the case n = 1 (elliptic curves), the k-th power slope s (s = 0) L-function L^{[k]}(s, f, T) in the p-adic case already contains a great deal of arithmetic information about classical and p-adic modular forms. In the case n = 2 (K3-surfaces), Conjecture 6.3 was proved by Dwork, but its arithmetic consequences have not been explored. In the higher dimensional case n > 2, Conjecture 6.3 follows from our recent work. It would be of great interest to explore its arithmetic consequences. For instance, the k-th power slope s (s = 0) L-function L^{[k]}(s, f, T) in the p-adic case should be closely related to the arithmetic of the mysterious mirror map in mirror symmetry; see Lian and Yau [LY] for another relation between Dwork's work and the mirror map. In some cases, the special value of the L-function L^{[k]}(s, f, T) at T = 1 seems to be related to the conjectural p-adic L-functions of algebraic varieties defined over a number field. Very little is known in this direction.
Zeta functions of algebraic cycles

In this final section, following [W1], we define the zeta functions of algebraic cycles of a projective variety embedded in a given projective space. Several standard conjectures associated with such zeta functions are described.
We first recall the definition of the degree of a projective variety. Let n and m be positive integers with n ≤ m. Let X be a closed n-dimensional subscheme of the m-dimensional projective space P^m over F_q (the embedding will be fixed). We shall always work over the ground field F_q. Let

S(X) = ⊕_{d≥0} S_d(X)

be the homogeneous coordinate ring of X, where S_d(X) consists of the homogeneous elements of degree d and S(X) is finitely generated by S_1(X) as an F_q-algebra (S_0(X) = F_q). Let

l(d) = dim_{F_q} S_d(X).

For large d, l(d) is a polynomial in d of degree n, called the Hilbert polynomial of X. The degree of X, denoted by deg X, is defined to be the coefficient of d^n/n! in this polynomial. If X is irreducible, an alternative definition of deg X is the intersection number X · H^n, where H is a hyperplane in P^m. This means that the degree of X is the number of intersection points cut out by the intersection of n sufficiently general hyperplanes. Fix a closed n-dimensional subscheme X of P^m over F_q. A prime cycle on X/F_q is a closed integral subscheme of X (i.e., a reduced and irreducible closed subscheme). For each integer 0 ≤ r ≤ n, we define the zeta function of r-cycles on X to be the following formal power series

Z_{r-cycles}(X, T) = ∏_P (1 − T^{deg P})^{−1},

where P runs over all prime cycles of dimension r on X and deg P is the degree of P viewed as a closed subscheme of P^m. Since the definition of the degree of a variety depends on the embedding, the zeta function of r-cycles for 0 < r < n depends on the embedding of X in the projective space P^m.
Let N_r(d) (resp. M_r(d)) be the number of prime r-cycles (resp. effective r-cycles) of degree d on X. By a theorem of Chow and van der Waerden, the set of effective r-cycles of degree d is parametrized (one-to-one) by an algebraic set (the Chow variety) in a projective space. Thus, M_r(d) and N_r(d) are all finite. The zeta function of r-cycles on X/F_q is well defined. Since Z_{r-cycles}(X, T) is a power series with integer coefficients, it is trivially p-adic analytic in the open unit disk.
Alternative expressions for Z_{r-cycles}(X, T) are given by

Z_{r-cycles}(X, T) = ∏_{d≥1} (1 − T^d)^{−N_r(d)} = exp( ∑_{d≥1} W_r(d) T^d / d ),

where W_r(d) is the following weighted sum (each prime r-cycle of degree k is counted k times):

W_r(d) = ∑_{k | d} k N_r(k).

Geometrically, a point counted in N_r(d) consists of d conjugate "non-closed points" of dimension r. Thus, W_r(d) can be interpreted as the number of "non-closed points" of degree d and dimension r. In particular, W_0(d) is just the number of F_{q^d}-rational points on X and Z_{0-cycles}(X, T) is the classical zeta function of a projective scheme over a finite field. The zeta function Z_{0-cycles}(X, T) of zero cycles is a rational function by Dwork's theorem. It is easy to show that for 0 < r < n, the zeta function Z_{r-cycles}(X, T) of r-cycles is never rational and in fact never complex meromorphic, as the radius of convergence for Z_{r-cycles}(X, T) is zero as a complex function. However, some simple examples [W1] suggest that Z_{r-cycles}(X, T) might be p-adic meromorphic in some cases. If Z_{r-cycles}(X, T) is indeed p-adic meromorphic, it would imply that there are two sequences of p-adic integers α_i and β_j approaching zero such that for all positive integers d, we have

W_r(d) = ∑_i α_i^d − ∑_j β_j^d.

This would generalize the classical formula for the number W_0(d), in which there are only finitely many terms, because the function Z_{0-cycles}(X, T) of zero cycles is rational. To formulate our conjectures, we let A_r(X) be the Chow group of r-cycles on X/F_q modulo rational equivalence. It is conjectured that A_r(X) is a finitely generated abelian group. Let A_r^+(X) be the monoid in A_r(X) generated by the effective r-cycles on X/F_q. This is the effective cone in A_r(X). The monoid A_r^+(X) is in general not a finitely generated monoid for 0 < r < n. Although we do not have a proven counter-example, we feel that the p-adic meromorphic continuation of Z_{r-cycles}(X, T) might be false if the monoid A_r^+(X) is not finitely generated. This is a little inconsistent with the general philosophy that L-functions arising in a natural way from algebraic geometry are analytically good functions. Perhaps one should not push the general principle too far. Thus, all our conjectures assume that A_r^+(X) is a finitely generated monoid. In fact, we are confident about the truth of our conjectures only under the stronger assumption that A_r(X) is of rank one. However, we will state our conjectures under the weaker assumption that the monoid A_r^+(X) is finitely generated. The meromorphic continuation conjecture is

Conjecture 7.1 (meromorphic continuation). Let X be an n-dimensional projective variety in P^m over F_q. Assume that the monoid A_r^+(X) is a finitely generated monoid. Then Z_{r-cycles}(X, T) is a p-adic meromorphic function.
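The weighted-sum identity above can be sanity-checked in the classical case r = 0. Here is a minimal Python sketch for X = P^1 over F_3 (the value of q is an illustrative choice): the closed points of degree e are the monic irreducible polynomials of degree e, counted by the necklace formula, plus the point at infinity when e = 1, and the weighted sums recover W_0(d) = q^d + 1.

```python
q = 3

def mobius(n: int) -> int:
    if n == 1:
        return 1
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # square factor
            result = -result
        else:
            d += 1
    return -result if n > 1 else result

def divisors(n: int):
    return [e for e in range(1, n + 1) if n % e == 0]

def N0(e: int) -> int:
    """Closed points of degree e on P^1: monic irreducibles + infinity (e = 1)."""
    irred = sum(mobius(e // f) * q ** f for f in divisors(e)) // e
    return irred + (1 if e == 1 else 0)

for d in range(1, 7):
    W = sum(e * N0(e) for e in divisors(d))
    assert W == q ** d + 1        # W_0(d) = #P^1(F_{q^d})
    print(d, W)
```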
Since the zeta function of r-cycles is supposed to be a p-adic meromorphic function, we can only ask for its p-adic RH. Any RH is really a finiteness property about zeros and poles. Thus, the p-adic RH in our current situation should be a finiteness property on the slopes of the zeros and poles. Since there are infinitely many zeros and poles in general for the zeta function of r-cycles, the set of slopes for the zeros and poles is unbounded. Thus, the best we can hope for is to bound the denominators of the slopes. In this direction, we propose

Conjecture 7.2 (p-adic RH). Let X be an n-dimensional projective variety in P^m over F_q. Assume that the monoid A_r^+(X) is a finitely generated monoid. Then the denominators of the slopes (which are rational numbers) of the reciprocal zeros and reciprocal poles of Z_{r-cycles}(X, T) are bounded by a constant which may depend on X.
In connection with Tate's conjecture about the order of poles, we propose

Conjecture 7.3 (order of pole). Let X be an n-dimensional smooth projective variety in P^m over F_q. Assume that the monoid A_r^+(X) is a finitely generated monoid. Then, the rank of A_r(X) is equal to the order of the pole of Z_{r-cycles}(X, T) at T = 1.
All three conjectures are true in the two extremal cases r = 0, n. The first accessible new case is the case of divisors, i.e., when r = n − 1. If r = n − 1, the above three conjectures are known to be true in the case when A_{n−1}(X) has rank one [W1]. In this case, Conjecture 7.2 was not stated and hence not proved in [W1], but its truth follows from the meromorphy proof given there. Furthermore, Conjecture 7.3 is always true in the divisor case r = n − 1 without the rank one assumption.
Thus, even in the case of divisors, Conjectures 7.1-7.2 are not known to be true in general if A_{n−1}(X) has rank greater than 1. At this point, the most fundamental conjecture seems to be Conjecture 7.1. Once Conjecture 7.1 is proved (if it is true), its proof should give a great deal of information about the other two conjectures. But no single example is known for 1 ≤ r ≤ n − 2. The simplest substantial example is to consider Z_{1-cycles}(P^3, T) (counting space curves of degree d in P^3 as d varies), which seems already sufficiently difficult and requires new mathematics.
Tate's conjecture says that the order of the pole at T = q^{−r} of the zeta function of zero cycles is equal to the rank of the group of algebraic r-cycles modulo ℓ-adic homological equivalence. For a variety over a finite field, this latter rank should be equal to the rank of the Chow group A_r(X). With this modification, Tate's conjecture can be reformulated as

Conjecture 7.4 (Tate). Let X be an n-dimensional smooth projective variety over F_q. Then, the rank of A_r(X) is equal to the order of the pole of the zeta function Z_{0-cycles}(X, T) at T = q^{−r}.
As indicated above, Conjecture 7.3 is known to be true in the divisor case r = n − 1. This suggests that Tate's conjecture might also be provable in the divisor case r = n − 1 if the monoid A_{n−1}^+(X) is finitely generated. Combining the previous two conjectures together, we obtain the following conjecture relating the zeta function of r-cycles to the zeta function of zero cycles.
Conjecture 7.5. Let X be an n-dimensional smooth projective variety in P^m over F_q. Assume that the monoid A_r^+(X) is a finitely generated monoid. Then, the order of the pole of Z_{r-cycles}(X, T) at T = 1 is equal to the order of the pole of Z_{0-cycles}(X, T) at T = q^{−r}.
Based on the Griffith-Katz counter-example about 1-cycles on a certain 3-fold, which applies only to varieties over fields with transcendental elements, we remarked in [W1] that one should not expect the equality between the order of the pole at T = 1 of Z_{r-cycles}(X, T) and the order of the pole at T = 1 of Z_{(n−r)-cycles}(X, T). This remark is misleading. As Soulé pointed out to me some time ago, one should expect the equality of those two numbers by Beilinson's conjectures.
The special value at T = 1 of the zeta function of r-cycles is undoubtedly related to the torsion order of A_r(X) and a certain regulator of A_r(X). This is proved in [W1] in the divisor case r = n − 1. We leave it to interested readers to find a conjectural formula for the special value for other r.
Cas13d-mediated gene knockdown in CAR T cells: towards off-the-shelf cancer treatment
In a recent study published in Cell, Tieu et al. used RfxCas13d to dynamically enhance performance and longevity of chimeric antigen receptor (CAR) T cells by massively multiplexed gene knockdown and thereby move the treatment one step closer towards "off-the-shelf" next-generation medicine. 1 Surgery, chemotherapy, radiation, and targeted drug therapies are widely understood as the foundations of cancer treatment. Immunotherapy, sometimes referred to as the "fifth pillar," usually describes either administration of drugs that boost the patient's endogenous immune system to shrink a tumor or immune checkpoint inhibitors.
In CAR T cell therapy, however, a patient's own immune system is modified in order to disrupt the tumor cells' immune evasion. Herein, a patient's T cells are isolated, engineered and cultured to express a chimeric transmembrane receptor that targets a surface antigen specific to the tumor cells. The goal of this treatment is targeting and destroying cancer cells after recognition by the CAR T cells upon their re-infusion into the patient's bloodstream. The approach is predominantly used to combat hematologic forms of cancer like certain types of leukemia, lymphoma and multiple myeloma. But while there have been remarkable results and, in some cases, even complete remission, the (side) effects, including mass die-off of antibody-producing B cells, neurologic toxicity, infections, T cell exhaustion, and cytokine release syndrome (CRS), should not be underestimated. And even though the treatment has entered the mainstream of next-generation medicine, it is still inaccessible to the broader public due to its price point at around $450,000. 2 The development of methods that could move this extremely personalized approach towards off-the-shelf medication has the potential to drastically lower the costs and accelerate availability to the patient. One possibility is the collection and modification of T cells from healthy donors instead of the patient. Another is the combination of CAR T cell therapy with transcription activator-like effector nucleases or clustered regularly interspaced palindromic repeats (CRISPR) technologies to induce the T cell's production of CARs via knock-in of the desired genetic sequences. The problem with these, however, is the permanency of the genetic alterations made, as well as, in the case of CRISPR-Cas9, potential unintended genotoxic side effects. 3 Reporting in Cell, Tieu and colleagues recently introduced Multiplexed Effector Guide Arrays (MEGA) for dynamically regulated knockdown of multiple target genes on the transcriptome level in primary human T cells without affecting genomic DNA by employing RfxCas13d. 1 This CRISPR effector is approximately two thirds the size of Cas9 and, unlike Cas13a proteins, does not, or only to a minute extent, exhibit collateral trans-cleavage activity.
In the context of this study, it was employed to revert T cell exhaustion in HA-28ζ CAR T cells, a well-characterized model for tonic signaling, by knockdown of three inhibitory receptors, LAG3, PD-1 and TIM-3 (Fig. 1a). To this end, primary human T cells were first transduced with RfxCas13d and HA-28ζ CAR, and then with a crRNA array consisting of multiple sequentially positioned crRNA guides. Single, double and triple knockdowns of the respective target genes could be successfully performed while only minimal off-target effects were reported. Interestingly, the knockdown efficiency for some genes was influenced by the position of the guide sequence within the array. Therefore, when designing new arrays, different positional permutations should be tested to achieve an optimal efficiency of the system; a sketch of such an enumeration is given below. In order to gain temporal control of gene knockdown, RfxCas13d was fused to a destabilization domain that leads to proteasomal degradation in the steady state and can be stabilized with the antibiotic trimethoprim. Following this approach, the authors showed a reversible, drug-dependent expression of CD46 (Fig. 1b). MEGA was subsequently employed to study genes involved in purinergic signaling and glycolysis. Through knockdown of the identified genes, enhanced anti-tumor activity and improved cell fitness in dysfunctional CAR T cells could be induced. In addition, safety and efficacy of the treatment were improved by targeting proximal T cell activation signaling elements, enabling receptor-independent regulation of CAR T cells.
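As a concrete illustration of the positional-permutation point, here is a minimal Python sketch. The direct repeat and spacer sequences are placeholders, not the sequences used by Tieu et al.; the idea is only to enumerate candidate guide orders for an array.

```python
from itertools import permutations

DIRECT_REPEAT = "AACCCCTACCAACTGGTCGGGGTTTGAAAC"   # placeholder DR sequence
GUIDES = {                                         # placeholder spacer sequences
    "LAG3": "ACGTACGTACGTACGTACGTACG",
    "PD-1": "TGCATGCATGCATGCATGCATGC",
    "TIM-3": "GATCGATCGATCGATCGATCGAT",
}

def build_array(order) -> str:
    """Concatenate DR-spacer units in the given order, ending with a final DR."""
    return "".join(DIRECT_REPEAT + GUIDES[name] for name in order) + DIRECT_REPEAT

# Enumerate all positional permutations of the three guides for testing.
for order in permutations(GUIDES):
    print("-".join(order), len(build_array(order)), "nt")
```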
Targeting T cell exhaustion, a major problem of CAR T cell therapy, MEGA's potential was demonstrated by simultaneously suppressing the upregulation of three exhaustion markers and thereby improving longevity and tumor targeting.
In contrast to CRISPR/Cas9 systems, MEGA, employing the RNA-recognizing and RNA-cleaving Cas13d, acts on the transcriptome instead of the genome level. It would be interesting to test a control element that also functions on the transcription level, i.e., inducible promoters: regulation on the transcription level tends to be faster than at the protein level and is more energy- and resource-efficient for the cell. It is also important to note that neither cell culture nor animal models provide a completely realistic prediction of the system's actual performance, and more precise control over CAR T behavior does not necessarily translate to improved safety and anti-tumor efficiency in cancer patients. Clinical trials are required in order to verify this effect.
Combining conventional or next-generation medicine with CRISPR-Cas13d could involve multiplexed genetic knockdown to profile key genes involved in pathogenesis, disease progression, the molecular signature of a specific disease, or, in the case of CAR T cell therapy, treat CRS (a potentially fatal side effect of the treatment). Therein lies great potential for diagnostic applications as well as tailoring personalized treatment options: 4 potential drug targets, as well as drug sensitivity or resistance, could be studied to predict or optimize the patient's response to particular therapies (Fig. 1c). Possible fields of application would be autoimmune diseases or allergies, where MEGA could offer a more precise and controlled alternative to traditional treatment. 5 The proteins involved in the immune system's overreaction could be temporarily downregulated at the transcription level to attenuate disproportionate reactions to non-pathogenic stimuli.
Overall, the cross-over of the two technologies creates a straightforward and fast method of transcriptional engineering for optimization of functional morphology.Its compact implementation and drug-inducible regulation hold massive potential for further improvements in safety, affordability and accessibility of the treatment.Having demonstrated its versatility in human T cells, MEGA could also be implemented for other cells or organisms.This study could pave the way for gaining further insights into the physiology of disease models, offering personalized and rapidly adjustable therapy options.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
© The Author(s) 2024
Fig. 1 CRISPR/Cas13d-mediated gene knockdown in T cells. a Reversion of the tonic signaling phenotype in CAR T cells. The tonic signaling CAR T cell model overexpresses three inhibitory receptors (LAG3, PD-1, TIM-3) which lead to a T cell exhaustion phenotype. By transducing these cells with an array of guide crRNAs, the respective genes are knocked down by target-specific Cas13d-mediated RNA degradation. As a result, the exhaustion markers are no longer expressed, leading to improved T cell function such as cell expansion, memory, fitness and even tumor clearance. b Drug-inducible gene knockdown. A destabilization domain fused to Cas13d leads to proteasomal degradation of the effector, disabling gene knockdown in the ground state. By addition of trimethoprim, the domain is stabilized and Cas13d cleavage is enabled. The system is reversible and can be utilized to dynamically regulate gene knockdown of T cell receptors by administration or withdrawal of the drug. c Potential MEGA-based T cell diagnostics. Dysregulated T cells could be extracted, transduced with the multiplexed guide array and screened for abnormally expressed genes. Afterwards, the cells could be transduced with guides targeting the identified genes, restoring physiological T cell function. Together with either inducible or permanent gene knockdown or in combination with other drugs, an individualized treatment plan could be implemented.
A comparison of child development, growth and illness in home-care and day-care center settings
Purpose – Childcare is an essential part of the early life environment that has a significant influence on lifelong physical and mental health. This study aimed to examine the relationship between development, growth and frequency of illness in different types of care.
Design/methodology/approach – This cross-sectional study recruited 177 children aged 30–36 months and their caregivers. Of these, 66 were being cared for at home and 111 were attending out-of-home day-care facilities. An interview form, growth measurements and the Denver Developmental Screening Test II were collected. The association between child developmental, growth and illness variables was analyzed with Chi-square, Fisher's exact and Mann–Whitney U tests.
Findings – This study found that the development and growth results did not show statistically significant differences between the home-care and day-care groups. The number of minor illnesses was significantly lower in home-care children than in day-care children (OR = 0.33, 95% CI = 0.15–0.72). This study indicated that the risk of infection is increased in children attending day care. Provision of a healthy and safe childcare environment needs to be an essential health promotion strategy to improve family and child well-being.
Originality/value – As the number of women participating in the labor market has increased rapidly over the past decades, so did the number of children in nonparental care. The study findings reflect that the developmental benefit of day-care centers for children was unclear, whereas the risk of infection was increased. Therefore, provision of a healthy and safe childcare environment needs to be an essential health promotion strategy to improve family and child well-being.
Introduction
As stated in many cross-culture reports, the number of women participating in the labor market has increased rapidly over the past decades and this coincides with the increased number of children in nonparental care [1][2][3][4]. In Thailand, a study by the Office of the Permanent Secretary for the Ministry of Education in 2017 showed that almost 90% of children under three years old were in some form of nonparental care. There are many different types of care. Some consist of in-home care, where a relative or other adult comes to the child's home; childcare homes, where an adult or adults provide care in their own homes; and childcare centers, where children receive care from adults at a nonhome location, such as a traditional day-care center [3].
As research has shown, the brain is more susceptible to the experiences of the first years of life. Early life exposures influence lifelong physical and mental health that can be either beneficial or deleterious in their effects [5][6][7]. On the one hand, it is assumed that childcare centers provide stimulating environments, which offers the opportunity to meet other children, experience a variety of daily activities and be cared for by certified staff; on the other hand, childcare staff might not be able to devote adequate attention to each child. Bearing these issues in mind, many parents struggle to find the right option when arranging childcare. Finding the right environment has a significant influence on childhood experience and determines whether the childcare facility benefits the children or disrupts their health and development.
Previous studies found that children gain developmental benefits from childcare over the short and long term, particularly in the areas of language and social development [3,[8][9][10][11]. A longitudinal study carried out by the National Institute of Child Health and Human Development (NICHD) Study of Early Child Care, begun in 1991, has shown that children who attended childcare centers had better cognitive and language development skills [3]. As regards social competence, children who spent time in childcare centers manifested more self-confidence, were more likely to use self-directed emotion regulation behaviors and exhibited less distress in new situations [12,13]. However, stress is an important concern, as the child needs to deal with novel situations, relate to strange adults and an unfamiliar peer group, and experience the fear of being away from parents. Many articles have reported that cortisol levels are higher in day-care children than in home-care children, which may be associated with emotional development and behavior. However, the relationship with long-term effects on the health and development of children is not conclusive [2,[14][15][16]. Some evidence has indicated that attending childcare centers can have negative health consequences. Children attending childcare centers experience a higher number of common communicable diseases, especially respiratory and gastrointestinal infections, when compared with children who are cared for at home. For children younger than two years of age attending childcare centers, the longer the time spent in childcare and the greater the child-teacher ratio, the higher the rates of illness [17][18][19][20][21].
Although there are many studies about childcare type, it is still challenging for parents and healthcare providers to find reliable research-based information due to much of the previous research being focused on a specific problem. To resolve this issue, this research examined the overall issues that included how differences in childcare experiences are related to the development, growth and health of children in the same context. We also explored the main reasons for enrolling the child in a day-care program and the characteristics of their ideal arrangements.
Methodology
Study design, procedure and participants
This investigation was a cross-sectional study. Data were gathered from children between 30 and 36 months of age and their parents who lived in Chiang Mai Province between November 2017 and July 2018. The children were grouped into two by type of childcare arrangement: care by a relative or a nonrelative in the child's own home (home-care children, n = 66) and supervision by someone at day-care centers that provide all-day programs (day-care center children, n = 111). The day-care center children were required to have attended the center continuously for at least six months.
Measures
Data on the following parameters were analyzed: caregiver and child demographics, development, growth, experience and frequency of illness in the past two months, the main reasons for enrolling the child in a day-care program and the characteristics of parents' ideal arrangements. The study tools were semistructured and open-ended questionnaires. The Denver Developmental Screening Test II (DDST) was used to screen children's development in four areas of functioning: fine-motor-adaptive, gross motor, personal-social and language skills [5,6]. A subinvestigator, a nurse who was trained and certified in the performance of the DDST, conducted all interviews, growth measurements and child developmental examinations. The growth chart used in our study was derived from the maternal and child health handbooks provided by the Ministry of Public Health of Thailand [22]. For height and weight, we divided the height-for-age and weight-for-age into three groups: above the 97th percentile (+2SD), under the 3rd percentile (−2SD) and the 3rd–97th percentile, representing the abnormal and normal groups.
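For illustration, the sketch below classifies a single measurement into these bands from a reference mean and SD; the reference values and the example weight are hypothetical placeholders, not values from the Thai growth charts.

```python
# Minimal sketch: assign a measurement to the growth bands described above.
# The 3rd/97th percentiles are treated as -2SD/+2SD per the study's own
# convention (strictly, they correspond to z = -1.88/+1.88).

def growth_band(value: float, ref_mean: float, ref_sd: float) -> str:
    z = (value - ref_mean) / ref_sd  # z-score against the age/sex reference
    if z < -2:
        return "below 3rd percentile"
    if z > 2:
        return "above 97th percentile"
    return "normal (3rd-97th percentile)"

# Hypothetical example: 11.2 kg against a reference mean of 13.5 kg (SD 1.4).
print(growth_band(11.2, ref_mean=13.5, ref_sd=1.4))  # normal (3rd-97th percentile)
```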
Data analyses
All answers were coded and recorded in an electronic database by two investigators. The Statistical Package for the Social Sciences (SPSS) for Windows, version 22.0, was used for data analysis. Descriptive statistics (frequencies, means and standard deviations) were used to describe sample characteristics, developmental examination results and illness experiences. The Kolmogorov-Smirnov test was used to assess normality of distribution. The association between child developmental variables was analyzed with a Chi-square test. If the data were not normally distributed, we used Fisher's exact and Mann-Whitney U tests.
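As a rough illustration of these tests, the sketch below runs a Chi-square test, Fisher's exact test and a Mann-Whitney U test in SciPy rather than SPSS; the contingency counts are reconstructed from the group sizes and the DDST percentages reported below, while the illness counts are invented placeholders.

```python
import numpy as np
from scipy import stats

# Childcare type vs DDST result (rows: home-care, day-care;
# columns: normal, suspect) -- counts back-calculated from 77.3%/22.7%
# of 66 and 80.2%/19.8% of 111.
table = np.array([[51, 15],
                  [89, 22]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi-square p = {p:.3f}")

# Fisher's exact test, preferred when expected cell counts are small.
odds_ratio, p_exact = stats.fisher_exact(table)
print(f"Fisher's exact p = {p_exact:.3f}")

# Mann-Whitney U test for a non-normal outcome, e.g. illness episodes
# per child (placeholder values).
home_care = [0, 1, 1, 2, 0, 3, 1]
day_care = [2, 3, 1, 4, 2, 5, 3]
u, p_mwu = stats.mannwhitneyu(home_care, day_care, alternative="two-sided")
print(f"Mann-Whitney U p = {p_mwu:.3f}")
```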
Ethical consideration
The study was approved by the Research Ethics Committee of the Faculty of Medicine, Chiang Mai University, Thailand (No. 035/2017, January 25, 2017). The parents or guardians of all participants gave informed consent.

Demographic data of children and caregivers
Table 1 presents the demographic data of home-care children (n = 66) and day-care center children (n = 111). The mean age of the children was 30.6 ± 1.35 months for home-care and 32 ± 2.21 months for day-care center children; the mean age of day-care center children was significantly higher than that of home-care children (p < 0.001). As regards gender, 45.5% were male in the home-care group and 51.4% in the day-care center group. The primary caregivers were the mother (65.2% for home-care children and 53.2% for day-care center children), followed by grandparents (19.7% for home-care children and 23.4% for day-care center children), father (9.1% for home-care children and 15.3% for day-care center children) and others (6.1% for home-care children and 8.1% for day-care center children). The mean age of caregivers was 39.1 years for home-care children and 37.7 years for day-care center children. Among caregivers of home-care children, 33 (51.6%) had an education level below high school graduation, compared with 50 (46.3%) of the caregivers of day-care center children. Most children came from a two-parent household (93.9% for home-care children and 82.9% for day-care center children); the proportion of two-parent households was significantly higher for home-care children than for day-care center children (p = 0.03). The minimum enrollment age at the day-care centers was two months, while the average age was 30 months.
Development and growth between home-care and day-care center children
A comparison of development and growth between home-care and day-care center children is shown in Table 2. Most children had a normal DDST result (no delay and a maximum of one caution item; 77.3% of home-care children and 80.2% of day-care center children), followed by a suspect DDST result (two or more caution and/or one or more delay items; 22.7% of home-care children and 19.8% of day-care center children). Most children scored in the normal range for personal-social development (80.3% of home-care children and 84.7% of day-care center children), followed by caution items (19.7% of home-care children and 12.6% of day-care center children) and advanced items (0% of home-care children and 2.7% of day-care center children). Most children scored in the normal range for fine-motor-adaptive development (69.7% of home-care children and 76.6% of day-care center children), followed by advanced items (19.7% of home-care children and 17.1% of day-care center children) and caution items (10.6% of home-care children and 6.3% of day-care center children). Most children scored in the normal range for language development (54.5% of home-care children and 55% of day-care center children), followed by advanced items (24.2% of home-care children and 25.2% of day-care center children) and caution items (21.2% of home-care children and 19.8% of day-care center children). Most children scored in the normal range for gross motor development (84.8% of home-care children and 78.4% of day-care center children), followed by advanced items (10.6% of home-care children and 12.6% of day-care center children) and caution items (4.5% of home-care children and 9% of day-care center children).
Regarding growth, most children had a weight between the 3rd and 97th percentile (93.9% of home-care children and 91% of day-care center children), followed by above the 97th percentile (4.5% of home-care children and 5.4% of day-care center children) and below the 3rd percentile (1.5% of home-care children and 3.6% of day-care center children). Most children had a height between the 3rd and 97th percentile (97% of home-care children and 95.5% of day-care center children), followed by below the 3rd percentile (3% of home-care children and 3.7% of day-care center children) and above the 97th percentile (0% of home-care children and 1.8% of day-care center children).
Illness experience between home-care and day-care center children
The comparison of experience and frequency of illness in the past two months between home-care and day-care center children is shown in Table 3. The total number of minor illnesses in home-care children was lower than in day-care center children (OR = 0.33, 95% CI = 0.15–0.72). Minor illnesses caused by infection were very common in both groups, with 71.2% and 88.3% of home-care and day-care center children, respectively, reporting illness episodes. The common cold was the most frequent minor illness in both groups. Other minor illnesses among home-care children included fever and tonsillitis, whereas diarrhea, fever and hand, foot and mouth disease were the second, third and fourth most common diseases among day-care center children. The total number of serious illnesses and the causes of illness did not differ between home-care and day-care center children (OR = 0.42, 95% CI = 0.149–1.20). The five most frequent serious illnesses causing hospital admission were the common cold, acute bronchitis, pneumonia, influenza and diarrhea. Factors associated with child development, growth and illness are shown in Table 4. The mean age of the primary caregiver was associated with the normal item in fine motor development (p = 0.046) and the frequency of serious illnesses (p = 0.026). The relationship of the primary caregiver to the child was associated with the frequency of serious illness in the past two months, and the total number of minor illnesses was lower when mothers were the primary caregivers than when grandparents were (OR = 0.28, 95% CI = 0.14–0.76). Other factors, including the education level of the main caregiver and household type, were not associated with development or number of illnesses. We also collected data on the reasons participants enrolled their child in childcare and the factors behind choosing their childcare facility. The most common reason for enrolling a child in childcare was that the parent needed to work and no other family support was available to take care of the child (64.5%), followed by the wish to enhance the child's development (14.5%) and improve the child's social development (12.7%). The general factors for choosing the childcare facility were proximity to home (45.9%), hygiene and cleanliness (17.1%) and safety and qualified day care (10.8%) (data not shown).
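To show how odds ratios of this kind are derived, the sketch below computes an OR and Wald 95% CI from a 2×2 table; the cell counts are reconstructed from the reported group sizes and the 71.2%/88.3% illness rates, and they reproduce the reported OR = 0.33 (0.15–0.72) for minor illness.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a,b = group-1 ill / not ill; c,d = group-2 ill / not ill."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Home-care: 47 ill / 19 not ill; day-care: 98 ill / 13 not ill
# (back-calculated from 71.2% of 66 and 88.3% of 111).
or_, lo, hi = odds_ratio_ci(47, 19, 98, 13)
print(f"OR = {or_:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")  # OR = 0.33, 0.15-0.72
```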
Discussion
Over the past decades, changes in family formation patterns have been observed. The proportion of females in the labor force is increasing worldwide: the numbers have nearly doubled in just over 30 years, with 34.3% of mothers with children under the age of 3 working in 1975 compared to 61.8% in 2008 [23]. This global phenomenon is reflected in an increasing number of child day-care center attendances [4,[23][24][25][26]. Moreover, single-parent families are also increasing. Approximately 5–10% of children worldwide live in single-parent families, and in Thailand the number of single-parent families rose from 6.5% in 2001 to 8.3% in 2016 [27]. Our results indicated that day-care center children were more likely to be from single-parent households than home-care children. There is also a substantial increase in the number of dual-earning couples, and this increase, along with the single-parent increase, has meant a higher demand for childcare, with day-care centers now providing an essential service for many families. Questions about the possible impact of childcare on the development and health of young children are of enormous interest both to parents and to health professionals. Our results showed no evidence of a relationship between childcare type and the developmental skills of the children in the four main areas of development: personal-social, fine motor, language and gross motor. Previous studies of the links between day-care experience and child outcomes show mixed results. Typically, studies have demonstrated significant positive effects on the development of children from disadvantaged families attending high-quality early childhood programs [3,8,26,28]. Family background and quality of day care were not controlled in our study, and that may have had an impact on our results. Many studies found a positive impact of day care on language development and cognitive development [3,8,11,[28][29][30].
The NICHD longitudinal study indicated that children who attended day-care centers had somewhat better cognitive and language development than children who experienced other nonmaternal childcare arrangements [3]. This long-term study showed that children who experienced higher-quality early childcare displayed better vocabulary scores in the fifth grade than children who experienced lower-quality care [3]. Conversely, a study by Stolarova et al. [30] showed that girls not attending day care before the age of two years exhibited a larger vocabulary in comparison to all other children [30]. As regards social development, some researchers found that attending childcare facilities in early childhood has positive effects [1,26,31]. Children in day-care centers were also more skilled with strangers and more autonomous from their mothers in a laboratory playroom [1]. One observational study found that the closeness of the teacher-child relationship in preschool childcare was related to social skills through the elementary school years. A possible explanation is that children with a positive early experience with someone other than their parents learn a pattern of interacting that facilitates their relationships with future caregivers [26]. Much evidence consistently concludes that motor skills significantly improve with extended physical activity, yet preschoolers' physical activity levels are typically low during center attendance [32][33][34][35][36]. However, there is little documentation regarding how childcare types interact with children's motor skills, so the relationship between childcare exposure and the development of motor skills has not yet been established. In our study, no relationship between childcare exposure and motor skills was observed; nevertheless, we found an association between higher primary caregiver age and normal fine motor skill results. Similarly, Comuk-Balci et al. found that higher maternal age, especially for girls older than 24 months, and higher maternal education correlated with earlier accomplishment of fine motor skills [37]. Higher parental age may bring greater intellectual maturity and psychological resources, enhancing opportunities for stimulation of the child.
The outcomes of the investigation into illnesses in this study are consistent with earlier studies in that day-care center children are at a higher risk of common infection [19,21]. Moreover, our study examined the effect of the type of childcare on the severity of illnesses, which has not been assessed in most previous articles. The results indicated that the total number of minor illnesses not requiring hospital admission was significantly higher in day-care center children than in home-care children, whereas the incidence of serious illness causing hospital admission did not differ significantly between the groups. It is notable that the most common cause of serious illness in our study was the common cold, which is generally not considered a severe illness. Like many similar studies, our results demonstrated that day-care attendance significantly increases the risk of respiratory and gastrointestinal tract infections, which are typically transmitted via airborne droplets, direct contact with secretions or fecal-oral transmission [21,[38][39][40]. Good personal hygiene and a clean environment are essential to reduce the spread of infections in childcare settings. Children need to learn about and practice personal hygiene, such as consistently washing their hands and covering their mouths with a tissue or upper sleeve when coughing or sneezing. School policies should cover practical procedures for preventing the spread of infectious diseases, including cleaning of toys, diaper changing and food preparation. For example, toys contaminated by body fluids should be set aside until they are cleaned by hand with detergent, rinsed with water and air-dried. Soiled clothes should be stored in a sealed plastic bag and sent home with the child at the end of the day. A clear school protocol for children with common symptoms of childhood infection, including fever, cough, runny nose or diarrhea, is essential. When children present with any of these symptoms, parents should keep their children home from school, or the teachers should call the parents to come and collect their children and take them home. Children need to be symptom-free for at least 24 h before returning to school. Immunization is a safe and effective intervention to reduce the transmission of infectious diseases. Therefore, policymakers should support the most up-to-date vaccination protocol for children and childcare staff. Furthermore, healthcare personnel should continuously support childcare services by providing health assessments and educating childcare staff to improve health and development outcomes [21,38,40,41].
On the positive side, children who attended large day-care centers during the first three years of life had more frequent colds during the preschool years but less frequent colds during their school years up to 13 years of age; immunity may be acquired earlier among children who attend large day-care centers [18].
Understanding the relationship between child health and the type of day care, and knowing how parents make choices about childcare, is fundamental to developing effective services that promote the provision of high-quality care. Therefore, we also explored the main reasons for enrolling the child in a day-care program and the characteristics of parents' ideal arrangements. More than half of the participants sent their child to a day-care center because they needed to work and no other family support was available to take care of their child. Only about one-quarter of the parents cited the positive developmental influence of day-care centers. Not surprisingly, the ideal characteristics of the day-care centers related to the convenience of a good location; fewer than half of these parents considered hygiene, safety or the quality of day care as a primary criterion.
Previous studies examining childcare choices have cited many important day-care characteristics, such as the quality of day care, the presence of caring caregivers and an environment where children can learn. Factors affecting parents' day-care choices depend on educational level, ethnicity and family role beliefs. However, there are many constraints on childcare opportunities, including family finances and inflexible work schedules and locations, especially for low-income families [42][43][44][45][46].
Therefore, the quality of a day-care center may not always be a priority if the facility does not fit a family's finances and availability. Similar to our investigation, other studies found that the quality of the day-care center is not the first concern. There is a need to ensure all children have the opportunity to attend high-quality day care rather than merely the most convenient. Beyond improving the quality of childcare by identifying family needs, the application of quality improvement strategies is also necessary.
A limitation of this study is that the period the children had attended day care was too short to show any association between childcare exposure and growth; future studies could extend the observation period and investigate any impact on growth.
Conclusions
A day-care center provides an essential service to many families, and the demand continues to increase. From our results, the developmental benefits of a day-care center for children are unclear, whereas the risk of infection is increased. Therefore, the provision of a healthy and safe childcare environment needs to be an essential health promotion strategy to improve family and child well-being. Pediatricians and healthcare providers should help provide perspective on these issues, including the risks and benefits of childcare, to assist parents in making arrangements that are best for their children and families. These findings also provide valuable information for policy.
Human T-lymphotropic virus type-1 p30 alters cell cycle G2 regulation of T lymphocytes to enhance cell survival
Background: Human T-lymphotropic virus type-1 (HTLV-1) causes adult T-cell leukemia/lymphoma and is linked to a number of lymphocyte-mediated disorders. HTLV-1 contains both regulatory and accessory genes in four pX open reading frames. pX ORF-II encodes two proteins, p13 and p30, whose roles are still being defined in the virus life cycle and in HTLV-1 virus-host cell interactions. Proviral clones of HTLV-1 with pX ORF-II mutations diminish the ability of the virus to maintain viral loads in vivo. p30 expressed exogenously differentially modulates CREB and Tax-responsive element-mediated transcription through its interaction with CREB-binding protein/p300 and, while acting as a repressor of many genes including Tax, in part by blocking tax/rex RNA nuclear export, selectively enhances key gene pathways involved in T-cell signaling/activation.
Results: Herein, we analyzed the role of p30 in cell cycle regulation. Jurkat T-cells transduced with a p30-expressing lentivirus vector accumulated in the G2-M phase of the cell cycle. We then analyzed key proteins involved in G2-M checkpoint activation. p30 expression in Jurkat T-cells resulted in increased phosphorylation at serine 216 of nuclear cell division cycle 25C (Cdc25C), enhanced checkpoint kinase 1 (Chk1) serine 345 phosphorylation, reduced expression of polo-like kinase 1 (PLK1), diminished phosphorylation of PLK1 at threonine 210 and reduced phosphorylation of Cdc25C at serine 198. Finally, primary human lymphocyte-derived cell lines immortalized by an HTLV-1 proviral clone defective in p30 expression were more susceptible to camptothecin-induced apoptosis. Collectively, these data are consistent with a cell survival role of p30 against genotoxic insults to HTLV-1 infected lymphocytes.
Conclusion: Our data are the first to indicate that HTLV-1 p30 expression results in activation of the G2-M cell cycle checkpoint, events that would promote early viral spread and T-cell survival.
Background
Human T-lymphotropic virus type 1 (HTLV-1) is the etiological agent of adult T-cell leukemia/lymphoma (ATL), which in its acute form is a highly aggressive CD4+ T-cell cancer that is refractory to standard therapies (reviewed in [1][2][3]). As a complex retrovirus, the HTLV-1 genome encodes structural, enzymatic, regulatory and accessory proteins [2,4]. The pX region of the virus contains four open reading frames (ORFs). ORFs III and IV encode the well-characterized Rex and Tax proteins, respectively. Tax is a 40 kDa nuclear phosphoprotein that increases viral transcription from the HTLV-1 LTR (reviewed in [5][6][7]). The ability of HTLV-1 to cause T-cell transformation is linked to deregulation of cellular gene expression and cell cycle checkpoints by Tax [5]. Rex is a 27 kDa nucleolar phosphoprotein that increases the cytoplasmic accumulation of non-spliced and singly spliced viral RNA (reviewed in [8]). In contrast to the extensive knowledge about the structure and function of Tax and Rex, less is known about the role of pX ORF I- and II-encoded proteins in the replication cycle and pathogenesis of HTLV-1.
HTLV-1 p30 is a 241 amino acid nuclear-localizing protein encoded by pX ORF II [9] that contains serine- and threonine-rich regions with partial homology to the POU family of transcription factors [10]. pX ORF II mRNA is present in infected cell lines and freshly isolated cells from HTLV-1-infected subjects [11] and in ATL and HAM/TSP patients [12]. Infected human subjects form antibodies [13] and cytotoxic T cells [14] against recombinant proteins or peptides of pX ORF II proteins, confirming expression of the proteins in both disease patients and asymptomatic HTLV-1-infected subjects. Freshly cultured transformed lymphocytes from HTLV-1 patients express both Tax and p30 [15]. Our studies were the first to demonstrate that pX ORF II-encoded p30 is necessary for establishment and maintenance of HTLV-1 infection in a rabbit model [16,17]. Emerging evidence indicates that p30 has important roles in viral and cellular gene expression at both the transcriptional and the posttranslational level [18][19][20][21][22][23][24][25][26][27]. Two recent studies indicate that p30 interacts with Rex and that the two proteins co-localize in nucleolar compartments [27,28]. We have demonstrated that p30 also differentially regulates CREB-responsive element- and Tax-responsive element-mediated transcription by interacting with the CREB-binding protein p300 [24,26]. Our microarray studies indicated that p30 is actually a selective repressor of genes, including some encoding cell cycle control proteins, while sparing T-cell signaling pathways [25]. Consistent with these findings, a recent study indicated that p30 has the ability to enhance Myc-associated transforming activities and increase S-phase cell cycle progression through its interactions with both Myc and the 60 kDa Tat-interacting protein (TIP-60) [15]. Collectively, these studies support the role of p30 as a multi-functional protein with transcriptional and post-transcriptional activities that balances the influence of Tax to regulate viral gene expression and modulates the transcriptional control of the cell cycle. Herein, we report that expression of p30 in Jurkat T-cells results in an accumulation of cells in the G2 phase of the cell cycle. Our data indicate that expression of HTLV-1 p30 resulted in increased phosphorylation of Cdc25C at serine 216 and enhanced nuclear localization of Cdc25C phosphorylated at serine 216. Furthermore, the activated form of Chk1 phosphorylated at serine 345 was increased in p30-expressing Jurkat T-cells. p30 expression was also associated with decreased expression of PLK1 and diminished phosphorylation of PLK1 at threonine 210. Consistent with less PLK1, p30 expression resulted in reduced phosphorylation of Cdc25C at serine 198. Finally, primary human lymphocyte-derived cell lines immortalized by an HTLV-1 proviral clone defective in p30 expression were more susceptible to camptothecin-induced apoptosis. Collectively, our data indicate that HTLV-1 p30 expression modulates cell cycle regulation in T-cells to enhance early viral spread and prolong cell survival.
Results
Our microarray data indicated that p30 modulates a number of genes in T-cells, including genes involved in cell cycle and apoptosis control [25]. To examine whether p30 expression alters the cell cycle, we infected Jurkat T-cells with a p30-expressing lentivirus and tested expression of the viral protein by western blot assay (Fig. 1A). p30 mRNA levels were similar between Jurkat T-cells expressing p30 and a primary human lymphocyte-derived cell line immortalized by an HTLV-1 full-length proviral clone (ACH.2) [42,43] by reverse transcriptase PCR (Fig. 1B). Typically, at least 88–92% of Jurkat T-cells were GFP positive in both p30 and mock Jurkat T-cells by FACS in four trials (data not shown). We then synchronized p30 and mock transduced Jurkat T-cells at the G1/S boundary by hydroxyurea treatment to test their ability to progress through the cell cycle. After release from arrest, cells were collected at the indicated time points, stained with propidium iodide and monitored for their progression through the cell cycle by flow cytometry.
At 4 h after release, cells started to enter the G2/M phase of the cell cycle in both p30-expressing and mock Jurkat T-cells. However, compared to mock transduced cells, p30 transduced Jurkat T-cells had a higher proportion of cells in the G2/M phase of the cell cycle, particularly between 6 and 10 h, in four independent trials (Fig. 2A and 2B). The observed increase in the G2/M population in p30-expressing Jurkat T-cells might be attributed to a faster S phase exit; however, we did not see any significant difference in the S phase population between mock and p30-expressing Jurkat T-cells (Fig. 2C). p30 expression resulted in a doubling of the number of Jurkat T-cells in the G2 phase of the cell cycle by 6 h after release from synchronization (Fig. 2D). Thus, p30 expression resulted in increased accumulation of cells in the G2/M phase of the cell cycle. We hypothesized that if p30 mediated a delay in G2 exit, then the rate at which p30 Jurkat T-cells divide should differ from mock (lentivirus vector lacking p30) transduced Jurkat T-cells.
To examine the effect of p30 expression on cell proliferation over an extended time period (1–5 days), we compared viable cell numbers of p30-expressing versus mock infected Jurkat T-cell lines using a trypan blue exclusion assay. The number of p30-expressing Jurkat T-cells was significantly reduced compared to mock infected Jurkat T-cells (Fig. 2E). The slower proliferation rate of p30 transduced Jurkat T-cells in these longer-term proliferation assays was consistent with the G2 cell cycle delay exhibited by p30-expressing cells.
Adult T-cell leukemia/lymphoma is a highly aggressive CD4+ T-cell malignancy that is refractory to conventional chemotherapeutic intervention [1]. To test the influence of p30 on the ability of T-cells immortalized by HTLV-1 to resist drugs that induce apoptosis, we used cell lines derived from primary human T-cells that were immortalized by wild type HTLV-1 (ACH.1) and by a clone of HTLV-1 mutated to prevent expression of p30 (ACH.30.1), as previously described [17,42,44]. To determine whether the ACH.1 and ACH.30.1 cell lines would display differential sensitivity to apoptotic stimuli, we tested the cell lines following treatment with various apoptosis-inducing agents: camptothecin, etoposide, and TRAIL. Camptothecin is a topoisomerase I inhibitor, which induces apoptosis in cells in the S phase of the cell cycle (reviewed in [45]). Etoposide is a topoisomerase II inhibitor, which induces apoptosis via the intrinsic pathway [46,47]. TRAIL is a member of the TNF ligand family, which induces apoptosis through activating the death receptors (reviewed in [48]). In independent trials, camptothecin induced apoptosis in the ACH.30.1 cell line to a greater degree than in the ACH.1 cell line (nonparametric Wilcoxon rank sum test, p-value 0.03) (Fig. 3A). Camptothecin effectively induces apoptosis in cells in the S phase of the cell cycle; the increased susceptibility of the ACH.30.1 cell line to camptothecin-induced apoptosis is likely due to the unabated influence of Tax expression driving cells into the S phase, which would typically be counteracted by p30 [26]. These results are consistent with a recent report [15]. Following treatment with etoposide, there was no significant difference in the degree of apoptosis induction between the ACH.1 and ACH.30.1 cell lines (nonparametric Wilcoxon rank sum test, p-value 0.25) (Fig. 3B). Both ACH.1 and ACH.30.1 cell lines lack TRAIL receptor expression and were not susceptible to TRAIL-mediated apoptosis (nonparametric Wilcoxon rank sum test, p-values 0.59 and 0.41, respectively) (Fig. 3B). Jurkat T-cells served as a positive control for the apoptotic induction protocols and were susceptible to all treatments (Fig. 3C).
We then tested the influence of exogenously expressed p30 on susceptibility of cells to apoptosis independent of other viral proteins. p30 was transiently expressed in Jurkat T-cells and 293T cells, which were then tested for susceptibility to apoptotic stimuli. Expression of p30 in Jurkat T-cells did not result in increased apoptosis when left untreated, compared to mock infected cells, consistent with recent findings that p30 does not induce apoptosis in transiently transfected Molt-4 lymphocytes [15]. p30-expressing Jurkat T-cells and mock infected Jurkat T-cells were treated with camptothecin, etoposide, or TRAIL and assayed for apoptosis (Fig. 4A). Although the transduced cells were induced into apoptosis following treatment with camptothecin, etoposide, and TRAIL, there was no significant difference in the percentage of apoptotic cells between p30-expressing T-cells and mock Jurkat T-cells for any of the treatment groups (nonparametric Wilcoxon rank sum test, p-values: camptothecin 0.82, etoposide 0.51, TRAIL 0.13). To examine the role of p30 in modulating cellular apoptosis in other cell types, we transiently transfected 293T cells with either pME-p30 HA or empty vector control (pME-18S). Following treatment with camptothecin or etoposide, cells were tested for apoptosis using an immunoblot assay for the 89 kDa fragment of cleaved PARP. Consistent with our data using Jurkat T-cells, we did not observe an increase in susceptibility to apoptosis in p30-expressing cells compared with negative control cells (Fig. 4B), leading us to further test the influence of the viral protein on cell cycle regulation.
To further examine the p30-mediated G2 delay, we next tested the expression of cyclin B1 and Cdc2 in p30-expressing Jurkat T-cells. During cell cycle progression, the G2-M transition is mediated by the active Cdc2-cyclin B1 complex [49]. Our data indicated that asynchronous Jurkat T-cells expressing p30 had no change in cyclin B1, Cdc2, or Cdc2 phosphorylated at tyrosine 15, but a 1.5-fold decrease in phosphorylation of Cdc2 at threonine 161 compared to mock infected Jurkat T-cells (Fig. 5B and Fig. 6B). These results led us to further examine proximal signals of cell cycle regulation that could explain a delay in the G2/M transition in p30-expressing T-cells.
The activity of Cdc2 is regulated by the phosphatase Cdc25C. Dephosphorylation of Cdc2 at threonine 14 and tyrosine 15 by Cdc25C results in activation of Cdc2 and initiation of an autoactivation loop between Cdc25C and Cdc2 that efficiently drives cells into mitosis. We reasoned that, since p30 expression is associated with decreased phosphorylation of Cdc2 at threonine 161, a less active form of Cdc25C should be present. To test this hypothesis we examined the expression and phosphorylation status of Cdc25C in p30 and mock Jurkat T-cells. No change was observed in the amounts of nuclear Cdc25C in p30-expressing Jurkat T-cells (Fig. 5C) or in transcript levels of Cdc25C by reverse transcriptase PCR when compared to mock transduced Jurkat T-cells (data not shown).
We next tested the phosphorylation status of Cdc25C at serine 216 using phosphospecific antibodies by western blot assay. Interestingly, p30 expression resulted in enhanced phosphorylation of Cdc25C at serine 216 and an increased accumulation of the phosphorylated form in the nucleus, both in p30 transduced Jurkat T-cells (Fig. 5C) and in 293T cells transfected with pME-p30 (data not shown). These data indicate that p30 expression was associated with increased nuclear accumulation of Cdc25C phosphorylated at serine 216, consistent with a delay in G2 exit from the cell cycle.
Phosphorylation of Cdc25C at serine 216 is mediated primarily by Chk1 and other kinases, including Chk2 and the Cdc25C-associated kinase (cTAK1). Chk1 is activated by phosphorylation mediated by ataxia telangiectasia mutated and Rad3-related kinase (ATR) in response to single-stranded DNA breaks [50]. We therefore examined the phosphorylation status of Chk1 at serine 345 in p30-expressing and mock infected Jurkat T-cells. Consistent with enhanced phosphorylation of Cdc25C at serine 216 and a delay in G2 exit from the cell cycle, we observed an increase in Chk1 phosphorylated at serine 345 in p30-expressing cells.

Phosphorylation of Cdc25C at serine 198 by PLK1 results in nuclear localization of Cdc25C by eliciting a conformational change that conceals its nuclear export signal [40,41], and PLK1 has therefore been described as a positive regulator of the G2/M transition [51]. Polo-like kinase 1 also phosphorylates cyclin B1 and promotes nuclear accumulation of the cyclin B1-Cdc2 heterodimer [52]. Polo-like kinase 1 is activated upon phosphorylation at threonine 210 and serine 137, and phosphorylation at these sites is inhibited upon DNA damage to prevent cells from entering mitosis [53]. We therefore examined whether PLK1 protein levels were altered in p30-expressing versus mock Jurkat T-cells. Interestingly, p30 expression resulted in reduced amounts of detectable PLK1 and of the threonine 210 phosphorylated form of PLK1 (Fig. 6A). Finally, we examined the phosphorylation status of Cdc25C at serine 198. Consistent with less PLK1, p30 expression resulted in reduced phosphorylation of Cdc25C at serine 198 (Fig. 6A). These data further supported our observed G2/M delay, as PLK1 promotes the G2/M transition. Using PLK1-specific primers, we examined the transcript levels of PLK1 in p30 and mock transduced Jurkat T-cells by reverse transcriptase PCR and found that p30 did not decrease PLK1 transcript levels (data not shown). The fold changes in expression of key G2/M cell cycle regulatory proteins in p30-expressing Jurkat T-cells are summarized in Fig. 6B.
Discussion
The ability of HTLV-1 to promote T-cell survival is critical to allow the virus to spread cell-to-cell following infection, prior to an active immune response. This permits the virus to establish an infection that is maintained lifelong through regulated virus expression and clonal expansion of infected lymphocytes [54]. Multiple studies indicate the importance of HTLV-1 ORF II expression during the course of the natural infection. Infected human subjects exhibit antibody and cytotoxic T cell responses against recombinant proteins or peptides of pX ORF II proteins [13,14], and freshly cultured transformed lymphocytes from HTLV-1 patients express both Tax and p30 [15]. We were the first to demonstrate that pX ORF II-encoded p30 is necessary for establishment and maintenance of HTLV-1 infection in a rabbit model [16,17]. In this study, we sought to determine whether p30 has a functional role in modulating T-cell survival. Herein, we report that expression of p30 in Jurkat T-cells results in an accumulation of cells in the G2 phase of the cell cycle. Expression of the viral protein resulted in increased phosphorylation of Cdc25C at serine 216, which was present in greater amounts in the nucleus of p30-expressing cells. The activated form of Chk1 phosphorylated at serine 345 was increased in p30-expressing Jurkat T-cells, concurrent with decreased expression of PLK1 and its phospho-threonine 210 form. Consistent with less PLK1, p30 expression resulted in reduced phosphorylation of Cdc25C at serine 198. Interestingly, primary human lymphocyte-derived cell lines immortalized by an HTLV-1 proviral clone defective in p30 expression were more susceptible to camptothecin-induced apoptosis. Collectively, our data indicate that HTLV-1 p30 expression modulates cell cycle control in T-cells, resulting in accumulation of cells in the G2-M phase of the cell cycle, which would enhance early viral spread and prolong lymphocyte survival.
The effects of p30 on cell cycle modulation contrast with the influence of HTLV-1 Tax on cell cycle regulation. We have recently demonstrated that p30 balances and counteracts the influence of Tax [26]. Tax has been reported to interact directly with Chk2, resulting in attenuation of DNA damage-induced signaling in an ATM/Chk2-mediated pathway-dependent manner [55]. Our data indicate that p30 results in a G2-M delay by enhancing Chk1 phosphorylation. In response to DNA damage, ATR kinase phosphorylates and activates Chk1, resulting in G2 arrest [50]. Thus, p30 may be involved in a DNA damage/repair signaling pattern similar to HIV-1 Vpr [56][57][58]. Our current studies indicate that p30 enhances DNA damage/repair signaling in an ATM-dependent manner (manuscript in preparation) and suggest a role in integration by allowing DNA repair to take place. Thus, p30 counteracts some of the cellular effects of Tax, which if not regulated could cause premature cell death by apoptosis or a more rapid oncogenic transformation event, either of which would be detrimental to long-term viral persistence.
HTLV-1 is the etiologic agent of adult T-cell leukemia/lymphoma, a highly aggressive CD4+ T-cell malignancy affecting approximately 1–5% of HTLV-1-infected individuals after a latent period as long as three decades [1]. Our data have implications for our understanding of how the virus establishes infection and immortalizes T-cells in a manner that results in a relative resistance to drug-induced apoptosis. T-cells immortalized with HTLV-1 proviruses lacking p30 expression (ACH.30.1) were more susceptible to camptothecin-induced apoptosis. Camptothecin is a topoisomerase I inhibitor, which induces apoptosis in cells in the S phase of the cell cycle (reviewed in [45]). We have recently demonstrated that p30 balances and counteracts the influence of Tax [26]. Without the dampening influence of p30 on Tax, the ACH.30.1 cells would be predicted to reside more in the S phase of the cell cycle and be susceptible to drugs such as camptothecin. Thus, the effects of p30 upon the cell cycle, in particular during the early phase of viral spread in vivo, may enhance cell survival and promote cell-to-cell spread of the infection.
Gene array studies have implicated p30 in modulating the expression of a variety of cellular genes, including many cell cycle and apoptosis regulatory genes [15,25]. To further test potential mechanisms for our observed p30-mediated G2 delay, we tested the expression of cyclin B1 and Cdc2, key mediators of the G2-M transition. We did not see a change in cyclin B1 or Cdc2 in either p30-expressing or mock Jurkat T-cells. However, p30 expression was associated with decreased phosphorylation at threonine 161, supporting the G2 delay observed in p30-expressing Jurkat T-cells. Cdc2 is activated by Cdc25C, which removes phosphate groups from tyrosine 15 and threonine 14 [31]. Activated Cdc2 can further activate Cdc25C [39]. We therefore examined the expression and phosphorylation status of Cdc25C by western blot analysis. When phosphorylated at serine 216, Cdc25C is typically shuttled out of the nucleus by the cytoplasmic anchor protein complex 14-3-3 and is therefore excluded from its substrate [32]. We found that p30 expression resulted in reduced protein levels of Cdc25C; it is possible that p30 represses Cdc25C expression at the transcriptional or posttranscriptional level. We also observed that p30 expression was associated with enhanced phosphorylation of Cdc25C at serine 216 and, interestingly, with increased nuclear accumulation of Cdc25C phosphorylated at serine 216, a form that is primarily localized in the cytoplasm. It is possible that p30 interferes with nuclear export of the protein and therefore causes an accumulation of pCdc25C serine 216 in the nucleus. It is also possible that p30 decreases 14-3-3 expression or binds directly to 14-3-3, leaving less 14-3-3 available for binding to and shuttling of pCdc25C serine 216. Our data indicate that p30 expression allows increased amounts of the cellular Chk1 serine 345 phosphorylated form to accumulate, consistent with the increased phosphorylation of Cdc25C.
Polo-like kinase 1 phosphorylates Cdc25C at serine 198 and allows nuclear retention of Cdc25C by concealing the nuclear export signal [41]. ATR kinase inactivates PLK1 in response to DNA damage. Our data indicate that p30 expression was associated with decreased PLK1 and its threonine 210 phosphorylated form. The reduced total levels of PLK1 may reflect p30 modulating PLK1 expression at the transcriptional or posttranscriptional level or affecting the stability of the PLK1 protein. Future studies are directed toward understanding the role of p30 in PLK1 expression.
Consistent with the less active form of PLK1, reduced phosphorylation of Cdc25C at serine 198 was associated with p30 expression. We expected that if PLK1 expression were low, we would see more of the cytosolic form of Cdc25C; however, in our studies Cdc25C phosphorylated at serine 216 was increased in the nucleus. These data suggest that p30 may inhibit the nuclear export of Cdc25C, similar to its effects on tax/rex mRNA [8,21].
Our data suggest parallels between the function of HTLV-1 p30 and HIV-1 Vpr, which is associated with G2 arrest [56]. The biological significance of this arrest during natural infection is not well understood, but the HIV-1 LTR seems to be more active in the G2 phase, suggesting that G2 arrest confers a favorable cellular environment for efficient transcription of HIV-1 [59]. Vpr-induced cell cycle arrest requires ATR kinase for the activation of Chk1, which results in phosphorylation and inactivation of Cdc25C [60]. In this regard, it will be important to test HTLV-1 LTR activity during the G2 phase of the cell cycle and the viral protein's effect upon proviral integration. HIV-1 Vpr expression may increase Survivin expression during G2/M to regulate cell viability during HIV-1 infection [61]. Similarly, p30 may serve to prolong the survival of HTLV-1 infected cells by upregulating key cellular gene products such as Survivin to prevent apoptosis and elimination of HTLV-1 infected cells.
Conclusion
Overall, our data indicate a role for p30 in modulating cell cycle parameters of T-cells, providing new insights into how HTLV-1 regulates its cellular environment and balances the effects of Tax, which if unchecked would result in rapid immune elimination of virus-producing host cells or cause cell death by apoptosis, both detrimental to viral persistence, a hallmark of the natural infection.
Cell cycle analysis
p30 or GFP transduced Jurkat T-cells were synchronized with 2 mM hydroxyurea (Sigma) for 14 h, released for 6 h, and subjected to a second hydroxyurea block for 16 h. Synchronized cells were released from the block and collected at 0, 2, 4, 6, 8, 10, 12 and 24 h. Cells were fixed in 70% ethanol and kept at 4°C for 14 h. Cells were then washed with PBS, treated with DNase-free RNase (Roche, Indianapolis, IN) for 30 min in PBS containing 0.1% Triton X-100 at 37°C, stained with propidium iodide (Sigma) and analyzed on a BD FACSCalibur® system (BD Biosciences, San Jose, CA).
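For orientation, the sketch below shows one simple way G1/S/G2-M fractions could be estimated from propidium iodide (DNA content) intensities with rectangular gates; the simulated peak positions and gate boundaries are illustrative only, and production analyses typically fit models (e.g., Dean-Jett-Fox) in dedicated flow software.

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulated PI fluorescence: a 2N (G1) peak near 200, a 4N (G2/M) peak
# near 400, and S-phase cells spread between the two.
pi = np.concatenate([
    rng.normal(200, 15, 5500),    # G1
    rng.uniform(240, 360, 2000),  # S
    rng.normal(400, 25, 2500),    # G2/M
])

g1 = np.mean((pi > 160) & (pi < 240))
s = np.mean((pi >= 240) & (pi <= 360))
g2m = np.mean((pi > 360) & (pi < 480))
print(f"G1 {g1:.1%}  S {s:.1%}  G2/M {g2m:.1%}")
```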
Flow cytometry
ACH.1, ACH.30.1, and Jurkat T-cells were prepared for flow cytometry by labeling with Annexin V Alexa Fluor® 488 conjugate (Molecular Probes, Eugene, OR) and propidium iodide (PI) (Molecular Probes) or Annexin V Alexa Fluor® 647 conjugate (Molecular Probes) according to the manufacturer's protocol. In brief, the cells were collected, washed once with PBS, and resuspended at 1 × 10⁶ cells/mL in 100 μL of Annexin-binding buffer (Molecular Probes), followed by incubation with 5 μL Annexin V conjugate solution and 1 μL of 100 μg/mL PI for 15 min at room temperature. After the incubation period, 400 μL of Annexin-binding buffer was added, and samples were kept on ice. The samples were analyzed by flow cytometry (Coulter Epics Elite, Beckman Coulter Inc., Fullerton, CA) and data were analyzed using Coulter Flow Center software (Beckman Coulter Inc.). For each sample, 10,000 gated cells were examined for Annexin V and PI staining, and the percentage of cells in early apoptosis was defined by the high Annexin V- and low PI-staining cell population. All Annexin V assays were performed in a minimum of three independent experiments. The nonparametric Wilcoxon rank sum test was used for statistical analysis of significant apoptosis induction and for comparison of apoptosis induction between cell lines.
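A minimal sketch of this readout and comparison is given below: the early-apoptotic fraction is the share of events that are Annexin V-high and PI-low, and per-trial fractions from two lines are compared with a Wilcoxon rank sum test. All thresholds and numbers are invented placeholders, not data from the study.

```python
import numpy as np
from scipy.stats import ranksums

def early_apoptotic_fraction(annexin, pi, annexin_thr=1e3, pi_thr=1e2):
    """Percent of gated events that are Annexin V-high and PI-low."""
    gate = (np.asarray(annexin) > annexin_thr) & (np.asarray(pi) < pi_thr)
    return 100 * gate.mean()

rng = np.random.default_rng(1)
# One simulated sample of 10,000 gated events (arbitrary fluorescence units).
annexin = rng.lognormal(6.5, 1.0, 10_000)
pi = rng.lognormal(4.0, 1.0, 10_000)
print(f"early apoptotic: {early_apoptotic_fraction(annexin, pi):.1f}%")

# Hypothetical percent early-apoptotic cells in four independent
# camptothecin trials per line, compared with a Wilcoxon rank sum test.
ach_1 = [12.1, 15.4, 10.8, 14.0]     # wild-type HTLV-1 immortalized line
ach_30_1 = [24.6, 28.9, 22.3, 30.1]  # p30-defective line
stat, p = ranksums(ach_1, ach_30_1)
print(f"Wilcoxon rank sum p = {p:.3f}")
```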
Western blot assays
Expression of cell cycle regulators was analyzed using nuclear and cytosolic extracts. Briefly, cells were swollen in hypotonic buffer (10 mM HEPES pH 7.9, 1.5 mM MgCl2, 10 mM KCl), sheared with a 27 gauge needle and centrifuged at 14,000 rpm for 15 sec. Supernatants were saved as cytosolic fractions, and nuclear pellets were incubated with high salt buffer (20 mM HEPES pH 7.9, 25% glycerol, 1.5 mM MgCl2, 1.2 M KCl, 0.2 mM EDTA) for 1 h, followed by low salt buffer (20 mM HEPES pH 7.9, 25% glycerol, 1.5 mM MgCl2, 0.02 M KCl, 0.2 mM EDTA), and centrifuged at 14,000 rpm for 30 min to obtain nuclear extracts. Membranes were blocked with 5% non-fat dry milk and 10% fetal bovine serum in Tris-buffered saline with 0.1% Tween (TBST) for 2 h at room temperature, then incubated with primary antibody overnight at 4°C. Immunodetection was performed using antibodies including a mouse anti-HA monoclonal antibody and anti-Histone H1, clone AE-4 (1:1000, Upstate). Western blots were developed with horseradish peroxidase-labeled secondary antibody (1:1000) and enhanced chemiluminescence reagent (Cell Signaling Technology). All western blots were repeated at least four times.
Reverse transcriptase PCR
RNA was isolated from p30 Jurkat T-cells and ACH.2 cells using RNAqueous™ (Ambion, Austin, TX). Two-step RT-PCR was performed using random primers to prepare cDNA, followed by PCR using specific primers for the indicated genes. p30 primers were used as described [25]. For β-2 microglobulin, the following primer set (Invitrogen) was used: F 5'-ACCCCCACTGAAAAAGATAC-3' and R 5'-ATCTTCAAACCTCCATGATG-3'. Cycle numbers were varied from 15 to 30 in order to compare transcript levels between p30 Jurkat T-cells and ACH.2 cells. β-2 microglobulin was used as a control to compare expression levels. PCR products were run on an agarose gel and stained with ethidium bromide, and the p30 band was quantified and normalized to the β-2 microglobulin band using AlphaImager® 3.24.
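The normalization step amounts to a simple ratio; the sketch below illustrates it with invented densitometry values in arbitrary units, standing in for the AlphaImager readout.

```python
# Semi-quantitative RT-PCR normalization: divide each sample's p30 band
# intensity by its beta-2 microglobulin (b2m) band intensity so transcript
# levels can be compared across cell lines. Values are placeholders.
bands = {
    "p30 Jurkat": {"p30": 4200.0, "b2m": 5100.0},
    "ACH.2":      {"p30": 3900.0, "b2m": 4800.0},
}

for sample, b in bands.items():
    print(f"{sample}: normalized p30 = {b['p30'] / b['b2m']:.2f}")
```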
Efficacy and safety of acupuncture for pregnancy-related low back pain: A systematic review and meta-analysis
Background: Pregnancy-related low back pain (PLBP) is a common musculoskeletal disorder, affecting people's physical and psychological health. Acupuncture is widely used in clinical practice as a treatment for PLBP. This study aimed to evaluate the efficacy and safety of acupuncture, or acupuncture combined with other treatments, for PLBP patients.
Methods: The Cochrane Library, PubMed, EMBASE, Web of Science, the Chinese Biological Medicine Database, China National Knowledge Infrastructure, WanFang Database, and VIP information database were searched from inception to January 31, 2022. Randomized controlled trials (RCTs) were eligible, without restriction on blinding or language. Cochrane's risk of bias tool was used to assess methodological quality. Meta-analysis was performed using RevMan 5.3.
Results: Twelve randomized controlled trials involving 1302 patients were included. The results showed that, compared to the control group, the VAS score was significantly decreased after acupuncture treatment. In addition, no significant difference was found in the preterm delivery rate (RR = 0.38, 95%CI: 0.24 to 0.61, P = 0.97) after acupuncture treatment. Compared with other therapies, acupuncture or acupuncture plus other therapies revealed a significant increase in the effective rate (OR: 6.92, 95%CI: 2.44 to 19.67, I² = 0%). No serious adverse events owing to acupuncture were reported.
Conclusion: Acupuncture, or acupuncture combined with other interventions, was a safe and effective therapy for treating PLBP. However, the methodological quality of the RCTs was low. More rigorous and well-designed trials should be conducted.
Introduction
Pregnancy-related low back pain (PLBP) is recurrent or constant pain arising from the lumbar spine or pelvis and lasting for more than one week [1]. PLBP occurs frequently in the second and third trimesters, and symptoms may persist throughout the postpartum period.
Types of participants
According to existing diagnostic criteria, women diagnosed with low back pain and pelvic pain during pregnancy or postpartum were included. Age, race, nationality, duration of the disease, etc., were not restricted.
Types of interventions
Trials using acupuncture, or acupuncture plus conventional therapy, as the intervention for PLBP were included. No restriction was imposed regarding the conventional regimen. Apart from the intervention, background treatments were identical in both groups.
Types of comparators
The following interventions were considered in the control group: conventional treatments (the same conventional regimen as the intervention group in the same original trial), medication, physiotherapy, herbal formulations, placebo, or no treatment (e.g., waiting list).
Types of outcome measures
The primary outcome was the change in the Visual Analog Scale (VAS). Secondary outcomes included effective rate, preterm delivery rate, and adverse events.
Exclusion criteria
The following were excluded: animal experiments, literature reviews, case reports, case series, and observational studies; opinion pieces and conference proceedings; studies with incomplete original data or no full text; and duplicate publications.
Selection of studies
Two investigators (RL and LPC) independently checked the titles and abstracts of the identified records using EndNote software (X9.3.3) and excluded studies that did not refer to acupuncture and PLBP. Identified studies were retrieved for full-text assessment. Any discrepancy was resolved by a third party (Prof. R) or by contacting the authors of the original article. Study selection was summarized in a PRISMA flow diagram [38].
Data extraction
Data were extracted using a predefined data-extraction form (Excel software) covering trial details (publication year, nationality, journal, study design), patient demographics (sample size per arm, median age of patients, gender, height, weight, gestational weeks), treatment information (duration, frequency, types of acupuncture, acupuncture points, types of comparators), primary and secondary outcomes, and adverse reactions. Two investigators (RL and LPC) independently extracted the data in duplicate. Any disagreements were arbitrated by a third party (Prof. R). If any study information was unclear or missing, the corresponding author was contacted by email.
Risk of bias assessment
The Cochrane Handbook for Systematic Reviews of Interventions was utilized to evaluate the methodological quality of the included studies [39]. The following items were assessed: random sequence generation, allocation concealment, blinding of participants and personnel, blinding of outcome assessors, incomplete outcome data, selective reporting, and other sources of bias. Each domain was assessed and graded as "low risk", "unclear", or "high risk". The evaluation was performed independently by two investigators (RL and LPC). Any difference encountered was arbitrated by a third investigator (Prof. R).
Statistical analysis
Statistical analyses were performed using Review Manager (V.5.3.0) and Stata (17.0). A risk ratio or odds ratio with 95% confidence intervals was used for dichotomous data, whereas a mean difference or standardized mean difference with 95% confidence intervals was used for continuous data. The Chi-square and I² statistics were applied to investigate statistical heterogeneity. The fixed-effects model was used for low heterogeneity (I² < 50%), and the random-effects model was applied if heterogeneity was moderate (50% < I² < 75%). With α = 0.05, P < 0.05 was considered statistically significant. A meta-analysis was not performed when heterogeneity was considerably high (I² > 75%). Sensitivity analysis was conducted based on different levels of bias in the included studies to validate the robustness of our findings. Funnel plots were used to evaluate the publication bias of the primary outcome indicators when more than ten eligible studies were included.
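As a rough illustration of the fixed-effect pooling behind such analyses, the sketch below combines per-study log odds ratios by inverse-variance weighting and computes Cochran's Q and the I² statistic used to choose between models; all effect sizes and standard errors are invented placeholders, not values from the included trials.

```python
import math

log_or = [0.9, 1.4, 1.1, 0.7]   # per-study ln(OR), placeholder values
se = [0.45, 0.50, 0.40, 0.55]   # per-study standard errors

w = [1 / s**2 for s in se]                       # inverse-variance weights
pooled = sum(wi * y for wi, y in zip(w, log_or)) / sum(w)
se_pooled = math.sqrt(1 / sum(w))
lo = math.exp(pooled - 1.96 * se_pooled)
hi = math.exp(pooled + 1.96 * se_pooled)

q = sum(wi * (y - pooled)**2 for wi, y in zip(w, log_or))  # Cochran's Q
df = len(log_or) - 1
i2 = max(0.0, (q - df) / q) * 100                          # I^2 (%)

print(f"pooled OR = {math.exp(pooled):.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
print(f"Q = {q:.2f}, I^2 = {i2:.0f}%")
```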
Subgroup analyses
To further explore the potential sources of heterogeneity, subgroup analyses were performed based on treatment period, gestational week, and patient age.
Assessment of evidence strength and certainty
The Grading of Recommendations Assessment, Development and Evaluation (GRADE) system was used to assess the quality of evidence for the primary outcomes [40]. In detail, an overall "confidence of evidence" rating split into four categories (i.e., high, moderate, low, and very low) was used to characterize the strength and certainty of the evidence.
Eligible studies and characteristics
A total of 374 records were initially identified, and 174 duplicates were removed with the NoteExpress software. The remaining records were screened against the inclusion and exclusion criteria by browsing abstracts and reading full texts; finally, 12 RCTs were included, with a total of 1302 patients (634 in experimental groups and 646 in control groups) [41][42][43][44][45][46][47][48][49][50][51][52]. The characteristics of the included RCTs are outlined in Table 2. The detailed research flow chart is shown in Fig. 1 (see also Table 3).
Risk of bias of included studies
Twelve RCTs were included, all with comparable baselines. The Cochrane risk of bias assessment was performed on the included studies. Adequate methods of random sequence generation were described in six trials; specifically, these were random number tables, computer random number generators, coin-toss random sampling, or shuffled sealed envelopes. The remaining six trials were rated as unclear because the specific randomization method was not reported. Single blinding was used in two trials [43,44], double blinding in one trial [50], and the remaining trials did not describe blinding. No reporting bias was found. In terms of other risks of bias, two RCTs were rated as unclear due to an unclear baseline comparison between groups. Figs. 2 and 3 show the risk of bias in the included trials.
Adverse effects
Local hematoma, tiredness, weakness, drowsiness, nausea, and headache were considered adverse events (AEs) of acupuncture. No significant difference was found in the preterm delivery rate with or without acupuncture (P = 0.97). No serious adverse events occurred.
The quality of evidence
The GRADE tool was used to assess the certainty of the evidence for each outcome. The evidence quality for the effective rate was moderate. Serious limitations: most trials were rated as having an unclear risk of bias, so the evidence was downgraded. No serious inconsistency: no statistically significant heterogeneity was found (P > 0.05). The effective rate was directly associated with clinical outcomes, so there was no serious indirectness. No serious imprecision: the effect size (OR) was statistically significant (P < 0.05). No other serious considerations were found. For the VAS, the evidence quality was also assessed as moderate. Most trials were rated as having an unclear risk of bias, so the evidence was downgraded. The VAS measures PLBP pain intensity directly. No serious inconsistency or serious imprecision was found in those trials.
Discussion
In this systematic review with meta-analyses, we present evidence on the efficacy and safety of acupuncture for PLBP, based on 12 RCTs including 1302 patients. The pooled results revealed that the therapeutic effect of acupuncture was superior to physiotherapy, conventional treatment, stabilizing exercise, or other drug treatment. In addition, acupuncture alone or combined with other therapies had better efficacy in relieving the pain of PLBP. Besides, no significant adverse events were reported among patients treated with acupuncture during pregnancy. In recent years, acupuncture has been confirmed as a safe therapy.
The American Academy of Family Physicians (AAFP) endorsed the American College of Physicians (ACP) guidelines recommending acupuncture as a first-line option for acute, subacute, and chronic low back pain [53]. Acupuncture provides analgesia for several types of chronic pain with lower cost, lower risk, and higher patient satisfaction than drug treatment [54,55]. Acupuncture analgesia is a manifestation of integrative processes at different levels of the CNS between afferent impulses from pain regions and impulses from acupoints. Extensive experimental evidence indicates that acupuncture stimulates endogenous pain-control mechanisms. Diverse signal molecules promote acupuncture analgesia, including opioid peptides, glutamate, 5-hydroxytryptamine, and cholecystokinin octapeptide [3]. Among these, opioid peptides and their receptors modulate acupuncture analgesia: opioids desensitize peripheral nociceptors, decrease proinflammatory cytokines at peripheral sites and in the spinal cord, reduce substance P (SP), and promote pain inhibition. Acupuncture has also been shown to reduce inflammation locally, which in turn affects pain processing by the central nervous system [56][57][58]. Besides, acupuncture downregulates GluN1 phosphorylation to inhibit pain by inducing serotonin and norepinephrine release.
Low back pain (LBP) refers to muscle tension or stiffness localized below the costal margin and above the inferior gluteal folds [59]. Pelvic girdle pain (PGP), on the other hand, is pain between the posterior iliac crest and the gluteal fold, particularly in the vicinity of the sacroiliac joints (SIJ) [60]. The painful nature of LBP and PGP is usually similar and overlapping, and both are associated with lumbopelvic stabilization. In our study, given the contention and uncertainty surrounding etiology and treatment, we selected people with both low back pain and pelvic girdle pain as participants [14,61], as many investigators do.
Pain is a subjective experience, and clinicians often rely on patients' verbal reports [62,63]. The change in pain intensity is the primary outcome in trials of pain-specific therapies and is central to managing and monitoring patients. In a recent survey, clinicians and patients preferred the VAS to other scales for measuring LBP pain intensity [64]. The VAS is a continuous scale that quantifies pain intensity. It comprises a 10 cm horizontal or vertical line with anchor points of 0 (no pain) and 10 (extreme pain) [65]. Pain intensity and pain affect are key dimensions of the pain experience. So far, the VAS is the scale most commonly used in LBP clinical trials to measure pain intensity and assess "unpleasant" feelings, and its reliability and validity in pain assessment have been confirmed [66], including for cancer pain, degenerative joint pain, and other chronic pain. Thus, the VAS was the primary outcome of this work.
In line with our current report, the efficacy of acupuncture has been demonstrated in previous systematic reviews. Complementary and Alternative Medicine (CAM) is a mainstream therapy for PLBP [18], and its efficacy has been verified [32,36,37,67]. Nevertheless, in those reviews acupuncture was found effective mainly as a supplementary therapy. In our analysis, we found that acupuncture, alone or integrated with other treatments, is effective and safe for PLBP.
In conclusion, acupuncture effectively ameliorates pain in PLBP patients compared to control interventions. Regarding AEs, no serious adverse effects occurred in the trials, and the preterm delivery rate did not increase after acupuncture treatment.
Study limitations
Several limitations should be considered in this study. Firstly, differences in diagnostic criteria and gestational weeks might have resulted in high heterogeneity. Besides, due to the small sample sizes and low quality of the included RCTs, the results might be inconclusive. Acupuncture was usually an adjunct therapy and rarely used in isolation. Further research needs to improve on these methodological issues: more large-scale, high-quality RCTs reported to a high standard are needed, and future study designs should use acupuncture in isolation to explore its efficacy for PLBP.
Conclusions
In summary, this meta-analysis found that acupuncture may be a potential adjunctive option for PLBP with minimal side effects. Acupuncture can relieve pain and improve the effective rate. However, more well-designed research is needed to support our findings.
Author contribution statement
Rong Li: conceived and designed the experiments; wrote the paper. Yulan Ren: conceived and designed the experiments. Liping Chen: performed the experiments; wrote the paper. Xiaoding Lin and Yuqi Xu: analyzed and interpreted the data. Runchen Zhen and Jinzhu Huang: contributed reagents, materials, analysis tools or data.
Data availability statement
Data associated with this study has been deposited at PROSPERO under the accession number CRD42022307865.
Declaration of competing interest
The authors have no competing interests to declare for this review.
Direct collapse black hole formation via high-velocity collisions of protogalaxies
We propose high-velocity collisions of protogalaxies as a new pathway to form supermassive stars (SMSs) with masses of ~ 10^5 Msun at high redshift (z>10). When protogalaxies hosted by dark matter halos with a virial temperature of ~ 10^4 K collide with a relative velocity>200 km/s, the gas is shock-heated to ~ 10^6 K and subsequently cools isobarically via free-free emission and He^+, He, and H line emission. Since the gas density (>10^4 cm^{-3}) is high enough to destroy H_2 molecules by collisional dissociation, the shocked gas never cools below ~ 10^4 K. Once a gas cloud of ~ 10^5 Msun reaches this temperature, it becomes gravitationally unstable and forms a SMS which will rapidly collapse into a super massive black hole (SMBH) via general relativistic instability. We perform a simple analytic estimate of the number density of direct-collapse black holes (DCBHs) formed through this scenario (calibrated with cosmological N-body simulations) and find n_{DCBH} ~ 10^{-9} Mpc^{-3} (comoving) by z = 10. This could potentially explain the abundance of bright high-z quasars.
INTRODUCTION
The existence of bright quasars (QSOs) at z ≃ 6-7 (Fan 2006; Willott et al. 2010; Mortlock et al. 2011; Wu et al. 2015) presents an intriguing question: how do supermassive black holes (SMBHs) with masses of a few × 10^9 M⊙ form within the first billion years after the Big Bang? Perhaps the simplest possible explanation is that the 10-100 M⊙ black hole remnants of the first generation of stars grow into these supermassive black holes via gas accretion. However, this requires essentially uninterrupted Eddington-limited accretion for the entire history of the Universe, which seems unlikely due to radiative feedback from the accreting BH (e.g. Johnson & Bromm 2007; Milosavljević, Couch & Bromm 2009; Park & Ricotti 2011; Tanaka, Perna & Haiman 2012). Major mergers of BHs are not expected to accelerate growth significantly. This is because the kick velocity of the merged BH is typically larger than the escape velocity of its host galaxy (e.g. Herrmann et al. 2007; Koppitz et al. 2007), leading to ejection and halting gas accretion (but see also Tanaka & Haiman 2009).
A different pathway to H2 suppression is collisional dissociation (H2 + H → 3H), which can occur if the metal-poor gas reaches high density and temperature, satisfying (n/10^2 cm^-3) × (T/10^6 K) ≳ 1 (the so-called "zone of no return"). Inayoshi & Omukai (2012) proposed that galactic-scale shocks can satisfy this condition. If the shock happens at the central region of a massive halo (≲ 0.1 Rvir), the density and temperature of the shocked gas become n ≳ 10^4 cm^-3 and T ≳ 10^4 K, and efficient collisional dissociation of H2 can occur. However, the simulations of Fernandez et al. (2014) showed that for several examples of less massive halos with Tvir ≃ 10^4 K, shocks do not reach the center, preventing SMS formation. It was also pointed out that in typical halos (not the high-velocity collisions discussed in this paper), the zone of no return cannot be reached without radiative cooling, which may lead to star formation and prevent SMS formation (Visbal, Haiman & Bryan 2014a). SMS formation may still be possible in larger halos (Tvir ≳ 10^4 K) if shocks reach the centers of the halos before significant amounts of stars have formed. This requires further study with numerical simulations.
Another proposed SMS formation scenario is based on massive-galaxy mergers (Mayer et al. 2010, 2014). A merger can drive strong gas inflow and supersonic turbulence in the inner galactic core, which prevents significant fragmentation of the gas even with some metals (but see Ferrara, Haardt & Salvaterra 2013). If efficient angular momentum transport can be sustained in the inner ∼ 0.1 pc for a sufficiently long time (which requires confirmation from further numerical simulations), a SMS of ∼ 10^8 M⊙ could form. DCBHs from such SMSs might explain the observed abundance of high-z QSOs.
In this paper, we propose high-velocity collisions of protogalaxies as a new pathway to form SMSs and DCBHs at high redshift. As observed in the local Universe, a fraction of galaxies, and also clusters of galaxies, collide with a much larger velocity than the typical peculiar velocity (e.g., the Taffy galaxies and the Bullet cluster; Condon et al. 1993; Condon, Helou & Jarrett 2002; Tucker et al. 1998; Markevitch et al. 2002). At the interface of such colliding galaxies, shock-induced starbursts have been confirmed (e.g., Larson & Tinsley 1978; Saitoh et al. 2009). We show that, when a similar collision with a high velocity of ≳ 200 km s^-1 happens between metal-poor galaxies, hot gas (∼ 10^6 K) forms in the post-shock region and the subsequent radiative cooling makes the gas dense enough that any H2 molecules are destroyed by collisional dissociation. Once a gas clump of ∼ 10^5 M⊙ with a low concentration of H2 due to collisional dissociation forms, its gravitational collapse can be triggered and a SMS forms. Note that our scenario does not require supersonic turbulence or extremely efficient angular momentum transfer as in the galaxy merger scenario. We estimate the abundance of SMSs and DCBHs produced by high-velocity galaxy collisions, and show that it can be comparable to that of high-z QSOs.
This paper is organized as follows. In §2, we derive the necessary conditions to form SMSs in protogalaxy collisions. In §3, we estimate the number density of protogalaxy collisions resulting in SMS formation, and show that the DCBHs from such SMSs could be the seeds of high-redshift QSOs. Finally, we summarize and discuss our results in §4. Throughout we assume a ΛCDM cosmology consistent with the latest constraints from Planck.
SMS FORMATION VIA PROTOGALAXY COLLISIONS
Generally speaking, a SMS can form from a ∼ 10^5 M⊙ metal-poor gas clump without H2. Since H2 can form via the electron-catalyzed reactions (H + e^- → H^- + γ; H^- + H → H2 + e^-), it must be efficiently dissociated. We propose a SMS formation scenario where H2 is dissociated by the shocks produced in a high-velocity collision of two dark matter halos. In this section, we show that SMS formation requires the relative velocity of the colliding protogalaxies to be in a specific range. If the collision velocity is too low, the gas will not be shocked to sufficient temperature and density to dissociate H2. On the other hand, if the velocity is too high and the shock too violent, the gas will be disrupted before it can cool via atomic hydrogen and form a SMS. This velocity window depends on redshift due to the evolution of the typical properties of the pre-shocked gas within dark matter halos.
Protogalaxy properties
Next, we describe the properties of dark matter halos and the gas within them as a function of redshift. This sets the collision velocity bounds for SMS formation which are derived below. We consider protogalaxies hosted by dark-matter halos with virial temperatures of Tvir ∼ 10^4 K, corresponding to the atomic cooling threshold. Larger halos undergo radiative cooling which triggers star formation and metal enrichment, inhibiting SMS formation (see Sec. 4.2.1 for further discussion). On the other hand, for halos with Tvir ≪ 10^4 K, the small gas mass is not sufficient to form a SMS (see equation 5).
The virial mass of a dark matter halo scales with its virial temperature and redshift roughly as M_vir ∝ T_vir^{3/2} (1+z)^{-3/2} (equation 2; Barkana & Loeb 2001). Simulations show that before cooling becomes efficient the central regions of dark matter halos contain a gas core with approximately constant density and radius (see e.g. Visbal, Haiman & Bryan 2014a),

R_core ≃ 0.1 R_vir. (3)

The gas core is surrounded by an envelope with a density profile roughly given by ∝ r^{-2}. The entropy profile, defined as K = k_B T n_0^{-2/3}, also has a core with K/K_vir ∼ 0.1, where K_vir is evaluated at the virial overdensity times the mean number density of baryons (e.g., Visbal, Haiman & Bryan 2014a). Since T ≃ T_vir in the (pre-shock) gas core, the core gas density follows from this entropy relation (equation 4). The total core gas mass is M_gas,core ∼ 3.0 × 10^5 M⊙ for T_vir ≃ 10^4 K (equation 5), which is ∼ 10 per cent of the total gas mass inside the dark-matter halo.
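For orientation, the sketch below evaluates this mass scale numerically using the approximate Barkana & Loeb (2001) scaling; the normalization and the simplification of the overdensity factors are our assumptions rather than the paper's exact equation (2).

```python
def m_vir(t_vir, z, mu=0.6, h=0.67):
    """Virial mass (in Msun) of a halo with virial temperature t_vir (K)
    at redshift z, using the approximate Barkana & Loeb (2001) scaling
    M ~ 1e8/h Msun (T/1.98e4 K)^{3/2} (mu/0.6)^{-3/2} [(1+z)/10]^{-3/2}."""
    return (1.0e8 / h) * (t_vir / 1.98e4) ** 1.5 \
        * (mu / 0.6) ** -1.5 * ((1.0 + z) / 10.0) ** -1.5

# An atomic-cooling halo (T_vir ~ 1e4 K) at z = 15:
print(f"M_cool(z=15) ~ {m_vir(1.0e4, 15):.2e} Msun")
```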
Lower velocity limit
The lower collision velocity limit is set by the requirement that H2 is collisionally dissociated. This will happen if the shock is strong enough for the gas to enter the so-called "zone of no return" (see Inayoshi & Omukai 2012, and Appendix A). This region in temperature-density space is defined by

T ≳ 5.2 × 10^5 K (n/10^2 cm^-3)^{-1}, (6)

for n ≲ 10^4 cm^-3, where T and n are the post-shock temperature and density, respectively. For a collision velocity, v0, much larger than the sound speed of the pre-shocked gas (∼ 10 km s^-1), the strong-shock jump conditions give

T = 3 μ m_p v0^2 / (16 k_B) ≃ 1.0 × 10^6 K (v0 / 270 km s^-1)^2, (7)

and

n = 4 n0, (8)

where we set the mean molecular weight as μ = 0.6. From equations (4), (6), (7), and (8), we obtain the lower limit of the collision velocity for SMS formation (equation 9; the solid curve in Fig. 1).

[Fig. 1 caption: The solid curve shows the lowest velocity required to induce collisional dissociation of H2 (equation 9). The dashed curve shows the highest velocity that meets the radiative shock condition (equation 13).]
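As a quick sanity check, the sketch below evaluates the jump conditions (7)-(8) and the boundary fit (6) as reconstructed above; the function names and the example inputs (taken from the appendix's high-density case) are ours.

```python
K_B = 1.380649e-16   # Boltzmann constant [erg/K]
M_P = 1.6726e-24     # proton mass [g]
MU = 0.6             # mean molecular weight adopted in the text

def postshock_state(v0_kms, n0_cm3):
    """Strong-shock jump conditions (equations 7 and 8):
    T = 3 mu m_p v0^2 / (16 k_B) and n = 4 n0."""
    v0 = v0_kms * 1.0e5                           # km/s -> cm/s
    temp = 3.0 * MU * M_P * v0 ** 2 / (16.0 * K_B)
    return temp, 4.0 * n0_cm3

def in_zone_of_no_return(n_cm3, temp_K):
    """Fit to the zone-of-no-return boundary (equation 6):
    T >~ 5.2e5 K (n / 100 cm^-3)^-1, valid for n <~ 1e4 cm^-3."""
    return temp_K >= 5.2e5 * (1.0e2 / n_cm3)

# The appendix's high-density example: v0 = 270 km/s, n0 = 25 cm^-3
T, n = postshock_state(270.0, 25.0)
print(f"T = {T:.2e} K, n = {n:.0f} cm^-3 ->", in_zone_of_no_return(n, T))
```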
Upper velocity limit
The upper collision velocity limit leading to SMS formation is set by the radiative shock condition. If the collision velocity is too large, the shocked gas starts to expand adiabatically before the radiative cooling sets in. In this case, the shocked gas cannot become dense enough to dissociate H2 collisionally, and the shock-induced SMS formation cannot be triggered. Therefore, the radiative cooling time needs to be shorter than the dynamical time of the shock. The dynamical time of the shock can be estimated as the crossing time of the gas core, t_dyn ∼ R_core/v0 (equation 10), while the radiative cooling time of the shocked gas is given by

t_cool = 3 n k_B T / (2 Λ_rad) = 3 k_B T / (2 n Λ̄(T)). (11)

Here, Λ_rad(n, T) = n^2 Λ̄(T) is the cooling rate (in units of erg s^-1 cm^-3). The cooling function Λ̄(T) consists of the contributions from atomic H line emission at T ∼ 10^4 K, atomic He^+ and He line emissions at T ∼ 10^5 K, and the bremsstrahlung emission ∝ T^{1/2} at T ≳ 10^6 K (Sutherland & Dopita 1993; Glover & Jappsen 2007). In our scenario, the shocked gas temperature initially ranges up to 5 × 10^5 K ≲ T ≲ 5 × 10^6 K, which corresponds to a collision velocity of 150 km s^-1 ≲ v0 ≲ 500 km s^-1 (see equation 7). For such gas, the cooling function can be well approximated by Λ̄0 ≃ 5 × 10^-24 erg s^-1 cm^3. Given that n ∝ T^-1 during the radiative contraction, the cooling time becomes shorter for a lower temperature, i.e. the gas contracts in a thermally unstable way. Thus, the radiative cooling time is essentially given by substituting Λ̄0 into equation (11) (equation 12). From equations (2-4), (10), and (12), the radiative shock condition (t_dyn ≳ t_cool) can be rewritten as an upper limit v0 ≲ 620 km s^-1 for T_vir ≃ 10^4 K halos (equation 13; the dashed curve in Fig. 1 shows the full dependence on redshift). Fig. 1 shows the collision velocity window for SMS formation as a function of redshift (the shaded region). For collisions of T_vir ∼ 10^4 K halos in this window, gas is shocked into the zone of no return and H2 molecules are destroyed by collisional dissociation. Additionally, the shocked gas radiatively cools to ∼ 10^4 K within the shock dynamical time. Once the ∼ 10^5 M⊙ gas cloud is assembled and cools, it becomes unstable due to self-gravity, and a SMS can be formed (e.g. Inayoshi, Omukai & Tasker 2014; Becerra et al. 2015). Note that since the total mass of the colliding gas in dark matter halos with T_vir ≃ 10^4 K can be as large as ∼ 10^6 M⊙, several SMSs may be formed at once in the collisions we consider. These SMSs would result in DCBHs of ∼ 10^5 M⊙ at z > 10, and as we show in the following section could potentially be the seeds of high-z QSOs.
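Continuing the sketch above (reusing K_B and postshock_state), the radiative-shock condition can be checked numerically; the crossing-time form of t_dyn and the illustrative core radius are our assumptions.

```python
KPC = 3.086e21       # cm per kpc
LAMBDA0 = 5.0e-24    # flat cooling function [erg s^-1 cm^3] for 5e5-5e6 K

def cooling_time(n_cm3, temp_K):
    """t_cool = 3 k_B T / (2 n Lambda-bar): equation (11) with the flat
    cooling function, in seconds."""
    return 3.0 * K_B * temp_K / (2.0 * n_cm3 * LAMBDA0)

def is_radiative_shock(v0_kms, n0_cm3, rcore_kpc=0.1):
    """Radiative-shock condition t_cool < t_dyn with t_dyn ~ R_core/v0."""
    temp, n = postshock_state(v0_kms, n0_cm3)
    t_dyn = rcore_kpc * KPC / (v0_kms * 1.0e5)
    return cooling_time(n, temp) < t_dyn

# Illustrative pre-shock density and an assumed core radius for a
# z ~ 15 atomic-cooling halo; the transition lands near eq. (13)'s limit
for v in (270.0, 500.0, 800.0):
    print(v, "km/s ->", is_radiative_shock(v, 25.0, rcore_kpc=0.06))
```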
DCBH ABUNDANCE FROM HIGH-VELOCITY COLLISIONS
Precisely estimating the number of collisions which result in SMSs and DCBHs is very challenging because it depends on detailed nonlinear physics. This most likely necessitates N-body simulations; however, the rarity of these collisions (we estimate ∼ 10^-9 Mpc^-3 from z = 10-20) requires simulations much larger than are feasible with current computers.
Here we address this issue by performing a simple order-of-magnitude estimate with an analytic formula based on idealized assumptions and calibrated with an N-body simulation. We find that the number density of DCBHs could be high enough to explain observations of high-z QSOs. However, we emphasize that our estimate has large uncertainties, which we discuss in §4.
Collision rate
We estimate the high-velocity protogalaxy collision rate by considering the number of dark matter halos just below the atomic cooling threshold that collide with a relative velocity in the range shown in Fig. 1. For simplicity, we consider one halo moving with a very high peculiar velocity and the other with a typical velocity (determined with the N-body simulation described below) in the opposite direction.
Making the idealized assumption that halo positions and velocities are randomly distributed (i.e. ignoring clustering and coherent velocities, which are discussed in Sec. 4.2.3), the collision rate per volume is given by a kinetic estimate of the form

dn_coll/dt ∼ n_h n_fast π b^2 v_rel, (14)

where n_h is the number density of all halos near the cooling threshold, v_fast is the velocity of the fast-moving halo necessary to form one (or several) SMS(s), n_fast = f_fast n_h is the number density of halos with peculiar velocity greater than this value (but below the maximum value), b is the impact parameter required for SMS formation, and v_rel is the relative velocity of the colliding pair. Note that these values are all initially calculated in physical units and the abundance is later converted to comoving units to compare with QSO observations. We compute the halo number density with the Sheth-Tormen mass function (Sheth & Tormen 1999) and consider a mass range of (0.5-1) × M_cool (equation 1). For the impact parameter, b, we use ten per cent of the virial radius (equation 3). We assume v_fast = v_min − v_typ, where v_typ is the typical halo peculiar velocity (assumed to be 40 km s^-1) and v_min is shown in Fig. 1 (solid curve). The fraction of fast-moving halos is given by the integral of the velocity PDF over the corresponding window,

f_fast = ∫ p(v) dv, (15)

where v_max is given in Fig. 1 (dashed curve) and p(v) is the peculiar velocity probability density function (PDF), which we estimate from an N-body simulation as described in the following subsection.
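In code, this kinetic n-sigma-v estimate (our reading of equation 14) is mostly unit bookkeeping; all numerical inputs below are placeholders.

```python
import math

MPC = 3.086e24   # cm per Mpc
KPC = 3.086e21   # cm per kpc
MYR = 3.156e13   # seconds per Myr

def collision_rate_density(n_h_mpc3, f_fast, b_kpc, v_rel_kms):
    """dn_coll/dt ~ n_h * (f_fast n_h) * pi b^2 * v_rel, with halo density
    per physical Mpc^3 in and the rate returned per Mpc^3 per Myr."""
    n_h = n_h_mpc3 / MPC ** 3          # cm^-3
    b = b_kpc * KPC                    # cm
    v = v_rel_kms * 1.0e5              # cm/s
    rate_cgs = n_h * (f_fast * n_h) * math.pi * b ** 2 * v  # cm^-3 s^-1
    return rate_cgs * MPC ** 3 * MYR   # Mpc^-3 Myr^-1

# Placeholder inputs: halo density, fast fraction, impact parameter, v_rel
print(collision_rate_density(1.0e3, 1.0e-10, 0.06, 250.0))
```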
N-body simulation and velocity PDF
To estimate the dark matter halo peculiar velocity PDF, we ran a cosmological N-body simulation with the publicly available code gadget2 (Springel 2005). The simulation has a box length of 10 h^-1 Mpc (comoving) and a resolution of 768^3 particles, corresponding to a particle mass of 1.96 × 10^5 h^-1 M⊙. Snapshots were saved at z = 20, 15, 12, and 10. We used the rockstar halo finder (Behroozi, Wechsler & Wu 2013) to locate dark matter halos and determine their masses and velocities. In Fig. 2, we plot the velocity PDF of M = (0.5-1) × M_cool dark matter halos at z = 20, 15, 12, and 10. We also plot fits with a form guided by Sheth & Diaferio (2001) and Hamana et al. (2003), who argue that the PDF is a Gaussian distribution in each velocity component (a Maxwell-Boltzmann distribution for the total velocity) at fixed halo mass and local overdensity. For halos near the cooling threshold this leads to a velocity PDF of the form

p(v) = ∫ p(δ) (1 + b_h δ) p(v|δ) dδ, (16)

where p(δ) is the cosmological overdensity PDF, b_h is the Sheth-Tormen dark matter halo bias (Sheth & Tormen 1999), and p(v|δ) is the velocity PDF at fixed δ. The overdensity PDF is assumed to be a lognormal distribution (see e.g. Coles & Jones 1991; equation 17), and the velocity PDF at fixed overdensity is assumed to be a Maxwell-Boltzmann distribution whose variance depends on the local overdensity through a fitted parameter μ (equation 18). We set σ_δ^2 = ln[1 + 0.25/(1+z)] (Hamana et al. 2003) (which determines the size of the region corresponding to δ) and fit two parameters to our data, σ_v and μ. For redshifts of z = 20, 15, 12, and 10, we find σ_v = 24.04, 27.27, 30.91, and 33.54 km s^-1 and μ = 0.8687, 1.2404, 1.0949, and 1.2081, respectively.
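A simplified sketch of equation (15)'s integral, using a plain Maxwell-Boltzmann speed PDF with the fitted σ_v instead of the full overdensity-weighted mixture, so the numbers are illustrative only:

```python
import numpy as np
from scipy.integrate import quad

def maxwell_boltzmann(v, sigma):
    """Maxwell-Boltzmann speed PDF with 1D dispersion sigma (km/s)."""
    return np.sqrt(2.0 / np.pi) * v ** 2 / sigma ** 3 \
        * np.exp(-v ** 2 / (2.0 * sigma ** 2))

def fast_fraction(v_lo, v_hi, sigma):
    """f_fast = integral of p(v) dv over the velocity window."""
    val, _ = quad(maxwell_boltzmann, v_lo, v_hi, args=(sigma,))
    return val

# sigma_v fitted at z = 15 is 27.27 km/s; the window limits are placeholders
print(fast_fraction(200.0, 600.0, 27.27))
```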
DCBH number density
Using the best fit p(v) discussed above, we find the number of DCBHs produced from z = 10-20. We assume that the LW background at z < 20 suppresses star formation and subsequent metal enrichment in halos below the atomic cooling threshold. At higher redshift we assume that star formation in minihalo progenitors prevents DCBH formation. The total number density formed as a function of redshift is given by the accumulated rate,

n_DCBH(z) = ∫_z^{20} (dn_coll/dt)(z') |dt/dz'| dz'. (19)

To get the velocity PDF at intermediate redshifts between our simulation snapshots, we linearly interpolate the results from Fig. 2. In Fig. 3, we plot the number density of DCBHs as a function of redshift. Most DCBHs come from high redshift due to the lower minimum velocity given in Fig. 1. We find a total density by z = 10 of ∼ 10^-9 Mpc^-3 (comoving). Thus, it seems possible that these DCBHs could potentially explain the abundance of high-z QSOs.
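Numerically, this accumulation only needs |dt/dz|; the matter-dominated approximation below (valid at z ≳ 10) and the cosmological parameters are our simplifications.

```python
import numpy as np

H0 = 2.2e-18       # Hubble constant [1/s] (~67 km/s/Mpc)
OMEGA_M = 0.32

def dt_dz(z):
    """|dt/dz| in the matter-dominated regime: 1 / [H(z) (1+z)] with
    H(z) ~ H0 sqrt(Omega_m) (1+z)^{3/2}."""
    return 1.0 / (H0 * np.sqrt(OMEGA_M) * (1.0 + z) ** 2.5)

def n_dcbh_by(z_final, rate_of_z, z_start=20.0):
    """n_DCBH(z_final): integrate rate(z') |dt/dz'| dz' from z_final to
    z_start; rate_of_z returns a comoving rate per unit time."""
    zs = np.linspace(z_final, z_start, 400)
    return np.trapz([rate_of_z(z) * dt_dz(z) for z in zs], zs)
```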
DISCUSSION AND CONCLUSIONS
We have shown that high-velocity collisions of metal-poor galaxies may result in the formation of supermassive stars (SMSs). When dark matter halos with a virial temperature of ∼ 10^4 K collide with a relative velocity ≳ 200 km s^-1, gas is heated to very high temperature (∼ 10^6 K) in the shocked region. The shocked gas cools isobarically via free-free emission and forms a dense sheet (≳ 10^4 cm^-3). In this dense gas, H2 molecules are collisionally dissociated, and the gas never cools below ∼ 10^4 K. Such a clump of gas with mass ∼ 10^5 M⊙, once assembled, becomes gravitationally unstable and forms a SMS, which would directly collapse into a black hole (DCBH) via general relativistic instability. We estimated the abundance of DCBHs produced by this scenario with a simple analytical argument calibrated with cosmological N-body simulations and found a number density of ∼ 10^-9 Mpc^-3 (comoving) by z = 10. This is large enough to explain the abundance of high-redshift bright QSOs.
Observational Signatures
Next, we briefly discuss the possible observational signatures of SMSs formed through high-velocity collisions of protogalaxies. The temperature of the shocked gas in the collisions we consider above is T ≳ 10^6 K (equation 7). The gas cools initially via bremsstrahlung, then atomic He^+ and He line emissions, and finally atomic H line emission. The intrinsic bolometric luminosity can be estimated as L_bol ∼ 10 M_gas,core v0^2 / t_cool,max ≳ 10^43 erg s^-1 for our representative case (see equations 5 and 12). Given that the colliding gas in dark matter halos is mostly neutral, the cooling radiation is reprocessed into various recombination lines, e.g., Lyα, Hα, and He II λ1640. The Hα and He II λ1640 emission lines are particularly interesting, since the intergalactic medium would be optically thin to them. If ∼ a few per cent of the bolometric luminosity goes into these lines, which is reasonable (see, e.g., Johnson et al. 2011, for numerical simulations of cooling radiation from hot metal-poor gas with ∼ 10^5 K), the emission could be detected from z ≲ 15 by the Near-Infrared Spectrograph (NIRSpec) on the James Webb Space Telescope (JWST) with an exposure time of ∼ 100 h. Due to the high cooling temperature, the ratio of He II λ1640 to Hα flux is expected to be large, which may make it distinguishable from other sources (e.g. population III galaxies). In principle it may also be possible to detect H Lyα emission from protogalaxy collisions. Lyα emission could constitute a large fraction of the bolometric luminosity (perhaps ≳ 10 per cent). However, even if a collision is observed in Lyα it may be difficult to distinguish from other objects such as accreting massive dark matter halos.
A detailed radiative transfer calculation is required to accurately predict the emission spectrum, which is beyond the scope of this paper. Even though protogalaxy collisions may be bright enough to observe, recent collisions are expected to be extremely rare. At most there will be ∼ a few in the whole sky, given the event rate, dn coll /dt(z = 15) ∼ 10 −11 Mpc −3 (comoving) Myr −1 and the emission duration, ∼ 0.1 Myr. Thus, it will be extremely challenging to detect the signal described above in the near future.
Impact of assumptions
Here we discuss some of the key assumptions we made, and how changes to these assumptions would affect our results.
Metal enrichment
In §2, we calculated the thermodynamics of the shocked gas after protogalaxy collisions assuming zero metallicity. This assumption is valid for gas metallicity smaller than 10 −3 Z⊙ (Inayoshi & Omukai 2012). If the metallicity is higher than this critical value, the shocked gas can cool down to below ∼ 10 4 K via metal-line emissions (CII and OI) and fragment into clumps of ∼ 10 M⊙, preventing SMS formation.
In general, the metal-enrichment of gas in massive dark matter halos proceeds in two different ways. The first is internal enrichment by in situ star formation.
Although not yet completely understood, the earliest star formation is expected to be triggered by H2 cooling in progenitor "minihalos" with Tvir < 10^4 K, which eventually assemble into the more massive dark matter halos we consider in this paper. The level of self-enrichment in minihalos is sensitive to the initial mass function (IMF) of population III stars (e.g. Hirano et al. 2014; Susa, Hasegawa & Tominaga 2014). If the IMF is extremely top-heavy, the metal enrichment is predominantly provided by pair-instability SNe. In this case, the metallicity at the gas core inside the dark matter halo could be as large as ∼ 10^-4 - 10^-3 Z⊙ at z ∼ 10 (e.g. Greif et al. 2010; Wise et al. 2012). On the other hand, if the IMF is mildly top-heavy, core-collapse SNe from ∼ 40 M⊙ stars would be the dominant source (Hosokawa et al. 2011; Stacy, Greif & Bromm 2012). In this case, the metallicity may be one order of magnitude lower (≲ 10^-4 Z⊙) at the same redshift (Heger & Woosley 2002; Nomoto et al. 2006).
In our abundance estimates of SMS formation through high-velocity collisions, we only consider dark matter halos with Tvir ≲ 10^4 K. We assume that below z = 20 the abundance of H2 required for star formation in minihalos is sufficiently suppressed by LW background radiation (e.g. Haiman, Abel & Rees 2000; Machacek, Bryan & Abel 2001; Wise & Abel 2007; O'Shea & Norman 2008). The required LW background flux for this to occur is estimated to be JLW ∼ 0.2-2 (3 × 10^-4 - 4 × 10^-2) at z = 15 (20) (Visbal, Haiman & Bryan 2014b). The anticipated LW background flux is JLW ∼ 0.1-10 (0.01-20) at z ∼ 15 (20) (e.g. Ahn et al. 2009; Johnson, Dalla & Khochfar 2013; Visbal et al. 2014), which depends on the detailed properties of Pop III stars and the efficiency with which they are produced. While there are certainly large uncertainties in the LW background, we find our assumption of minihalo star formation suppression to be reasonable.
The second way in which halos can obtain metals is through external enrichment by galactic winds from nearby massive galaxies. Semi-analytic models predict that the intergalactic medium can be polluted by this effect, leading to an average metallicity of Z ≃ 10^-4 Z⊙ by z ≃ 12 (e.g. Tornatore, Ferrara & Schneider 2007; Maio et al. 2010). However, the fraction of the intergalactic medium that has been polluted is expected to be small at the high redshifts important for our calculation (e.g. the estimated volume filling factor is ∼ 10^-4 for z > 12; Pallottini et al. 2014). Thus, external metal enrichment is unlikely to impact our assumption of zero metallicity.
In summary, it is reasonable to neglect the effect of metal cooling for dark-matter halos with a virial temperature Tvir ≲ 10^4 K at z ≳ 10, after the LW background suppresses star formation in minihalos.
Gas thermodynamics
In this paper, we derived the conditions for SMS formation in protogalaxy collisions (Fig. 1) based on the "zone of no return" shown in Appendix A. This zone is obtained from a one-zone calculation of the thermodynamics of the shocked gas. Of course, galaxy collisions are actually three-dimensional phenomena; detailed hydrodynamical simulations are necessary to confirm our scenario.
We obtained equation (6) by assuming that the shock is plane-parallel and steady. This assumption would be valid for nearly head-on collisions and timescales shorter than the shock dynamical timescale. Accordingly, we set the maximum impact parameter as b ∼ 0.1 Rvir, which corresponds to the size of the gas core of an atomic-cooling halo. However, galaxy collisions occur typically with a larger impact parameter b ∼ Rvir. A critical impact parameter for SMS formation needs to be identified by numerical simulation of protogalaxy collisions. We note that the formation rate of SMSs and DCBHs in our scenario is somewhat sensitive to this critical value (∝ b 2 ).
As mentioned in §2, the shocked gas in the zone of no return is thermally unstable. Once the instability is triggered, fluctuations in the shocked gas grow and form clumpy structures with a length scale of c_s t_cool (Field 1965). As a result of this, the structure of the shocked gas deviates from the plane-parallel sheet within a cooling time. Unfortunately, our one-zone calculation cannot capture these effects. Note that, as long as the H2 abundance is suppressed, the cooling length is kept shorter than the Jeans length, thus the thermal instability does not necessarily result in a smaller fragmentation mass. Nevertheless, the effects of the thermal instability on SMS formation need to be studied using high-resolution simulations.
We implicitly assumed that after radiative cooling of the shocked gas becomes irrelevant (i.e. t_cool ≳ t_ff), the corresponding Jeans mass is assembled, perhaps within ∼ t_ff, and the gas clump collapses due to its self-gravity. Our one-zone calculation cannot address how the mass assembly process actually proceeds in detail. Even when the mass budget is large enough (> 10^5 M⊙), the mass assembly may be halted, e.g., due to the angular momentum of the gas. A detailed numerical simulation is also required to clarify this point.
Subsequent mass accretion onto the DCBHs formed in protogalaxy collisions is also uncertain at this stage. This needs to be clarified in order to address whether such DCBHs can be the seeds of high-z QSOs. The initial mass of the DCBHs is ∼ 10 5 M⊙ whereas the total gas mass of each colliding galaxy is 10 6 M⊙. We also note that the DCBH is unlikely to be hosted by the dark matter halo, at least just after the formation, because the collision velocity of the parent halos is much larger than the virial velocity. Nevertheless, continuous mass accretion from the intergalactic medium may be expected since high-velocity collisions typically occur in over-dense regions. Additional galactic and intergalactic-scale calculations including radiative feedback from accreting BHs are required to confirm this.
High-velocity collision rate estimate
There are a number of uncertainties associated with the various assumptions we made to estimate the number density of DCBHs produced from high-velocity protogalaxy collisions. First, we note that our estimate depends strongly on the precise values of vmin. Due to the steepness of p(v) at large v, a 20 per cent decrease in vmin increases the abundance of DCBHs by more than an order of magnitude. The abundance also depends strongly on the value of the impact parameter needed to create a black hole (nDCBH ∝ b 2 ). Future hydrodynamic simulations of individual collision events are needed to constrain vmin and b.
Additionally, the small box size of our simulation may systematically reduce the abundance of halos with high peculiar velocity. This is because large-scale density fluctuations, corresponding to modes larger than the box are artificially removed. We leave it to future work to determine how much this effect could enhance the number density we compute here.
We considered the case of one fast-moving halo and one halo at typical peculiar velocity in the opposite direction. Of course there can be other combinations of peculiar velocities and collision angles which lead to a DCBH. We find that we get similar results performing the more complicated analysis of adding up the contribution from all angles and different combinations of peculiar velocities. There is a factor of a few enhancement compared to the simple calculation discussed above.
We note that our idealized assumptions of random positions and velocity directions are not expected to be accurate and estimate their impact on our number density estimate here. We expect that fast halos will preferentially be found in over dense regions, possibly regions which will soon virialize. The halo density enhancement (ignored in our estimate) is given by (1 + b h δ) and the density of DCBHs will depend on this value squared. At z = 15, in a region that is about to virialize (δ ≈ δc = 1.686), the density enhancement of DCBHs is roughly 100.
Our assumption of random velocities most likely overestimates the number density of DCBHs formed. This is because on small scales there will be some velocity coherence between nearby halos, reducing their relative velocities. To estimate the impact of this effect we recompute p(v) from our simulation, and for each halo subtract away the massweighted mean velocity of all other nearby halos in the mass range M = (0.5 − 1)M cool . We include all halos within the typical separation length of these halos, Rc = n −1/3 h . Increasing the value of Rc by a factor of two does not significantly affect our results. If this distance is taken to be significantly smaller there are not enough neighbors to compute the coherent velocity. This PDF at z = 15 is shown in Fig. 4. At high v, it is more than an order of magnitude lower than p(v) obtained without subtracting coherent velocities. The typical peculiar velocity is reduced by ∼ 25 km s −1 . The relative changes at z = 20 are similar. We find that these two effects (the high-v p(v) and the typical v) lower the abundance of DCBHs by approximately two orders of magnitude, which may roughly cancel when combined with the correction due to the density enhancement discussed above.
Despite the large uncertainties described above, the high-velocity collision of protogalaxies is an interesting pathway to form SMSs and DCBHs without extremely strong LW radiation and could explain the abundance of high-z bright QSOs. In future work, we plan to perform detailed numerical studies on the gas dynamics of colliding galaxies and the event rate of appropriate collisions to determine if these events could truly be responsible for the first SMBHs.

ACKNOWLEDGEMENTS

[...] X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for NASA under contract NAS8-03060 (KK). EV is supported by the Columbia Prize Postdoctoral Fellowship in the Natural Sciences. Our N-body simulations were carried out at the Yeti High Performance Computing Cluster at Columbia University.
APPENDIX A: ZONE OF NO RETURN: EQUATION (6)

Following Inayoshi & Omukai (2012), here we calculate the thermal evolution of the shocked gas flow discussed above. We assume that the flow is steady and plane-parallel (Shapiro & Kang 1987), which is appropriate for almost head-on collisions within the shock dynamical time. The conservation of mass and momentum between the density ρ0, pressure p0 and flow velocity v0 in the pre-shock flow and those in the post-shock flow ρ, p and V give:

ρ0 v0 = ρ V,  p0 + ρ0 v0^2 = p + ρ V^2. (A1)

After crossing a shock front, the gas loses its thermal energy via radiation following the energy equation,

de/dt = (p/ρ^2) dρ/dt − Λ_net/ρ, (A2)

where e is the specific internal energy, d/dt the Lagrangian time derivative, and Λ_net the net cooling rate per unit volume (in units of erg s^-1 cm^-3). Assuming a strong shock (i.e. ρ0 v0^2 ≫ p0), the initial temperature and density for the post-shock flow are given by equations (7) and (8) using the velocity and density of the pre-shock flow.
As long as the cooling time of the shocked gas is shorter than the free-fall time (t_ff = [3π/(32Gρ)]^{1/2}), which is the growth time-scale for gravitational instability (e.g. Larson 1985), we follow the thermal evolution solving equations (A1) and (A2). When the cooling becomes inefficient and t_cool exceeds t_ff, the contraction of the layer halts and dense clouds begin to develop inside the post-shock region. Once the mass exceeds the Jeans limit, the cloud collapses owing to its self-gravity in a runaway fashion, following equation (A3). We consistently solve the chemical reaction networks among primordial species (H, H2, e^-, H^+, H2^+, H^-, He, He^+ and He^++). We consider radiative cooling by free-free emission, atomic lines (H, He, He^+) and H2 lines as well as chemical cooling/heating.

[Figure A1 caption: Density and temperature conditions required for shocked gas to form supermassive clouds. The solid curve with an open circle shows the thermal evolution of shocked gas with a lower initial density, and the solid curve with a filled circle shows the evolution for a higher initial density. The zone of no return is shown by the shaded region. Its boundary is given by cross symbols (numerical results) and the dashed line (fit given by equation 6). The diagonal dotted lines indicate constant Jeans masses.]

Fig. A1 shows the thermal evolutionary tracks of the post-shock gas. The solid curve which starts from the open (filled) circle corresponds to the case where the initial conditions of the post-shock gas are n = 25 (100) cm^-3 and T = 10^6 K. The corresponding density and velocity of the pre-shock flow are n0 = 6.3 (25) cm^-3 and v0 = 270 km s^-1, respectively. Initially, hydrogen is fully ionized and helium is neutral. For both cases, the gas cools down to ∼ 10^4 K by free-free emission and atomic line emission (He^+, He, and H). In the case of the lower initial density (open circle), the gas temperature decreases further to ∼ 300 K by H2-line cooling. On the other hand, in the case of the higher initial density (filled circle), H2 formation is suppressed by collisional dissociation (H + H2 → 3H). The cooling becomes irrelevant when t_cool ≳ t_ff. Then, the gas cloud collapses by self-gravity once the corresponding Jeans mass is assembled. In the case of the lower initial density, t_cool ≳ t_ff occurs at n ≃ 2 × 10^5 cm^-3 and T ∼ 100 K, and the corresponding Jeans mass is a few × 100 M⊙. On the other hand, in the case of the larger initial density, t_cool ≳ t_ff occurs at n ≃ 4 × 10^4 cm^-3 and T ∼ 8000 K, and the corresponding Jeans mass is M_J ≃ 10^5 M⊙. Such a massive cloud collapses almost isothermally, mediated by H atomic cooling, and forms a proto-SMS at the center without a major episode of fragmentation and subsequent H2 formation (Inayoshi, Omukai & Tasker 2014; Becerra et al. 2015).
The shaded region in Fig. A1 represents the "zone of no return". If the gas jumps into this region by a strong shock, H2 formation is suppressed by collisional dissociation and massive clouds with ≳ 10^5 M⊙ form. The boundary is identified numerically (cross symbols), and the dashed line is the fit, T ≳ 5.2 × 10^5 K (n/10^2 cm^-3)^{-1} for n ≲ 10^4 cm^-3 (equation 6).
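As a toy counterpart to this one-zone calculation, the sketch below follows an isobaric (nT ≈ const) cooling track with the flat cooling function from §2 and stops at the ∼ 10^4 K atomic-cooling floor reached when H2 is suppressed; it omits the chemistry network, H2 cooling, and the free-fall comparison of the full calculation.

```python
K_B = 1.380649e-16   # erg/K
LAMBDA0 = 5.0e-24    # flat cooling function [erg s^-1 cm^3]

def isobaric_track(n0, T0, T_floor=1.0e4):
    """Follow (n, T) down an isobaric cooling track, n = n0*T0/T, using
    (5/2) k_B dT/dt = -n Lambda-bar per particle (isobaric specific heat).
    Returns the track as a list of (n, T) pairs."""
    T = T0
    track = [(n0, T0)]
    while T > T_floor:
        n = n0 * T0 / T
        dTdt = -2.0 * n * LAMBDA0 / (5.0 * K_B)
        dt = 0.01 * T / abs(dTdt)        # step = 1% of the cooling time
        T = max(T_floor, T + dTdt * dt)
        track.append((n0 * T0 / T, T))
    return track

# The appendix's high-density case: post-shock n = 100 cm^-3, T = 1e6 K
track = isobaric_track(100.0, 1.0e6)
print(f"end state: n = {track[-1][0]:.2e} cm^-3, T = {track[-1][1]:.0f} K")
```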
Mathematical Modeling for Leaf Area Estimation From Papaya Seedlings ‘Golden THB’
The aim of this study was to select the most suitable model for leaf area estimation of papaya seedlings cv. 'Golden THB' using linear dimensions of leaves with unilobular and trilobular morphology. Leaves of 60 seedlings at 30 days after sowing were used, produced in the nursery of Fazenda Santa Teresinha, which belongs to the company Caliman Agrícola S.A., in the municipality of Linhares, state of Espírito Santo, in March 2016. The length (L) along the midrib, the maximum width (W) of the leaf blade, the product of length and width (LW), and the observed leaf area (OLA) were measured. From these results, first-degree linear and power regression models were fitted. The proposed regression models were validated with a leaf sample from 60 seedlings produced in June 2016, thus obtaining the estimated leaf area (ELA). The following criteria were used to choose the best model: the highest coefficient of determination (R^2), non-significant differences between the means of OLA and ELA, and values of MAE and RMSE closest to zero. The leaf area of papaya seedlings cv. 'Golden THB' can be estimated through the equation ELA = -0.402619 + 0.612525(LW) for trilobular leaves and through the equation ELA = 0.623355 + 0.610552(LW) for unilobular leaves.
Introduction
Papaya (Carica papaya L.) cv. 'Golden THB' is characterized by great planting uniformity, vigorous plants and high yield, and its production is destined mainly for the external market (Serrano & Cattaneo, 2010).
Knowing the leaf area is fundamental to evaluating plant growth and development, and it is important in studies involving physiology, photosynthetic efficiency, transpiration and responses to fertilization and irrigation (Blanco & Folegatti, 2005).
The leaf area may be measured by direct or indirect methods, depending on the objective of the study. The direct methods are destructive, because the plant's leaves are removed, and they are usually expensive, since they require specific equipment. The indirect methods are non-destructive, allow successive leaf area estimates, and are less costly (Norman & Campbell, 1989).
One of the non-destructive, indirect methods to estimate leaf area is through mathematical equations based on linear dimensions such as leaf length and width, or both dimensions in combination, which show a high degree of accuracy in most cases (Gamiely, Randle, Milks, & Smittle, 1991; Blanco & Folegatti, 2005).
Mathematical models that aim at indirect leaf area estimation have been used for different plant species such as cocoa (Asomaning & Lockard, 1963), Cucumis sativus L. (Cho, S. Oh, M. M. Oh, & Son, 2007), Vicia faba L. (Peksen, 2007), Tabebuia and Handroanthus (Monteiro et al., 2017), colza (Tian et al., 2017) and Coffea canephora (Schmildt, Amaral, Santos, & Schmildt, 2015; Espindula et al., 2018). Methods have also been described to estimate the leaf area of papaya in adult plants, as mentioned by Campostrini and Yamanishi (2001).

For the data validation, a new sample with 287 leaves was used, comprising 144 trilobular and 143 unilobular leaves from 60 seedlings at 30 days after sowing produced in June 2016. The variables L, W, LW and OLA were measured according to the previously described methodology. The estimated leaf area (ELA, in cm^2) was obtained by replacing the values of L, W and LW in the equations obtained in the modeling step. A simple linear regression was generated for each proposed model, as well as the respective coefficient of determination (R^2), with ELA as the dependent variable and OLA as the independent variable. The means of OLA and ELA were compared by Student's t-test at the 5% probability level. The mean absolute error (MAE) and root mean square error (RMSE) were determined by the following equations:

MAE = (1/n) Σ |ELA_i − OLA_i|

RMSE = [(1/n) Σ (ELA_i − OLA_i)^2]^{1/2}

The best mathematical model to estimate the leaf area of papaya seedlings cv. 'Golden THB' as a function of the length (L) along the midrib, the maximum width (W) of the leaf blade, or the product of length and width (LW) was chosen considering the coefficient of determination (R^2) closest to unity, non-significant differences between the means of OLA and ELA, and values of MAE and RMSE closest to zero. The statistical analyses were performed using R software (R Core Team, 2018) with scripts developed with the ExpDes.pt package, version 1.2 (Ferreira et al., 2018).
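The analyses were run in R with the ExpDes.pt package; for illustration, a minimal Python sketch of the same fit-then-validate pipeline, with hypothetical data arrays, would look as follows.

```python
import numpy as np

def fit_and_validate(lw_model, ola_model, lw_valid, ola_valid):
    """Fit the first-degree model OLA = a + b*(LW) by least squares on the
    modeling sample, then compute MAE and RMSE (definitions above) on the
    validation sample."""
    b, a = np.polyfit(lw_model, ola_model, 1)   # slope, intercept
    ela = a + b * np.asarray(lw_valid)          # estimated leaf area
    err = ela - np.asarray(ola_valid)
    mae = float(np.mean(np.abs(err)))
    rmse = float(np.sqrt(np.mean(err ** 2)))
    return (a, b), mae, rmse

# Hypothetical measurements (cm^2): LW products and observed leaf areas
coefs, mae, rmse = fit_and_validate(
    [2.2, 10.5, 20.1, 35.3], [1.2, 6.3, 12.0, 20.7],
    [5.0, 15.0, 30.0], [3.1, 9.2, 18.4])
print(coefs, mae, rmse)
```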
Results and Discussion
In Table 1, it can be observed that, for the trilobular leaves used for the modeling, the length (L) ranged from 1.600 to 6.200 cm, with a mean of 4.113 cm. The width (W) varied from 1.400 to 5.700 cm, with an average of 3.659 cm. The product of length and width (LW) varied from 2.240 to 35.340 cm^2, with an average of 15.958 cm^2, and the observed leaf area (OLA) varied from 1.200 to 20.700 cm^2, with a mean of 9.372 cm^2. For the unilobular leaves, L varied from 2.100 to 5.300 cm, with an average of 3.446 cm; W ranged from 1.700 to 4.500 cm, with a mean of 2.703 cm; LW ranged from 3.780 to 23.850 cm^2, with a mean of 9.749 cm^2; and OLA ranged from 1.900 to 15.100 cm^2, with an average of 6.576 cm^2. All variables of the leaf sample used for validation presented values close to those used for modeling, a practice recommended by Levine, Berenson, Krehbiel, and Stephan (2012), since the values used for validation should not extrapolate beyond those used for modeling.
In relation to the coefficient of variation (CV) of the trilobular and unilobular leaf samples used in the modeling, the values ranged from 21.98 to 46.75%, which are classified as high and very high according to Pimentel-Gomes (2009). However, such values are expected in studies that aim at leaf area modeling, as they characterize different plant growth stages (Pezzini et al., 2018).
Table 1. Minimum, maximum, mean and coefficient of variation (CV) of the variables length (L), width (W), product of length and width (LW) and observed leaf area (OLA) for trilobular and unilobular leaves of papaya seedlings cv. 'Golden THB'

The accuracy of the leaf area estimation depends on the equation model used (Borghezan, Gavioli, Pit, & Silva, 2010). According to Tsialtas, Koundouras, and Zioziou (2008), only in a few cases can the same equations be used to estimate the leaf area of leaves with different morphologies, and the adjustments do not always show efficiency when a high degree of accuracy is desirable. Thus, obtaining individual equations for trilobular and unilobular leaves of papaya seedlings cv. 'Golden THB' becomes necessary.
When analyzing the behavior of the first-degree linear model for the trilobular leaves, the lowest value of R^2 was obtained using W as the independent variable and the highest using LW. For the equations with quadratic and power adjustments for trilobular leaves, the lowest value of R^2 was observed using W, and the highest using L as the independent variable (Table 2). Although the largest values of R^2 for the quadratic and power adjustments were observed using L as the independent variable, these values were not very different from those found using LW. Montero, Juan, Cuesta, and Brasa (2000), studying non-destructive methods for leaf area estimation of Vitis vinifera L., verified that the use of only one variable, such as the width, yields a method inconsistent across vegetative growth, making adjustments for different phenological stages necessary.
Thus, models used to determine leaf area that take into consideration only one linear dimension show a lower degree of efficiency, being useful only in a few cases. Equations based on a combination of dimensions across several leaf sizes, such as the product of length and width, are preferred because they show better adjustment for leaf area estimation (Espindula et al., 2018).
Regarding the behavior of the proposed models for the unilobular leaves (Table 2), the highest values of R^2 were achieved using LW as the independent variable and the lowest using W, for all equations. Schmildt et al. (2015), studying allometric models for leaf area estimation of Coffea canephora, also found higher values of R^2 using LW as the independent variable, verifying that this characteristic better represents the modeling for that species and shows better adjustment in the first-degree linear model. Therefore, based on the R^2 values of the mathematical models and validation equations closest to unity, the non-significant differences between the means of ELA and OLA, and the values of MAE and RMSE closest to zero, the first-degree linear, quadratic and power models using LW as the independent variable are the most suitable to estimate the leaf area of papaya seedlings cv. 'Golden THB' for trilobular and unilobular leaves, attesting to a high degree of accuracy and efficiency. However, due to the ease of the calculations, the first-degree linear model, represented by the equations ELA = -0.402619 + 0.612525(LW) and ELA = 0.623355 + 0.610552(LW) for trilobular and unilobular leaves, respectively, is recommended.
Conclusion
The leaf area of papaya seedlings cv. 'Golden THB' can be estimated accurately by the first-degree linear model taking into consideration the product of length and width, through the equation ELA = -0.402619 + 0.612525(LW) for trilobular leaves and the equation ELA = 0.623355 + 0.610552(LW) for unilobular leaves.
Table 3. Observed leaf area (OLA) and estimated leaf area (ELA) of the first-degree, quadratic and power equations for the independent variables length (L), width (W), and product of length and width (LW), together with the p-value, mean absolute error (MAE) and root mean square error (RMSE) for trilobular and unilobular leaves of papaya seedlings cv. 'Golden THB' used in validation. Note. *P values higher than 0.05 indicate that the observed leaf area (OLA) and the estimated leaf area (ELA) do not differ by Student's t-test.
Identity Security in Romania from Concept to Implementation in the Public Policies of Integration of Minorities / Roma Population
The preoccupation with the field of social security / identity security ("a particular form of security of human communities in the absence of which it would not be possible to survive in history, protecting the memory and collective identity, maintaining social and cultural-symbolic cohesion in a society") has been steadily increasing over the last decades, as a result of the European Union's enlargement and the interest shown by the European institutions in the protection of ethnic and religious specificities, the ethno-cultural identity of communities, and the prevention of exclusion and discrimination. The concept of social security belongs to the constructivist current (the current in which ethnicity is a phenomenon of continuous development, built in everyday life and manifested throughout life) and was developed in the early 1980s, starting with the redefinition of security by certain schools such as COPRI - Copenhagen. Social security refers to the survival of a community as a cohesive unity; its referent object is "large scale collective identities that can function independently of the state". Societal security is concerned with the ability to sustain, within acceptable conditions, the traditional elements of language, culture, identity, and cultural and religious customs. Ole Waever's identity security (social security) refers to "preserving, in acceptable conditions of evolution, the traditional patterns of language, culture, association, and national, religious and customary identity." Thus, we can say that social security refers to situations where societies perceive a threat to their identity. Regarding the situation of Romania, although during the two decades of transition to a democratic regime the Romanian state has pursued minority integration (wishing to ensure identity security for minorities), adopting some 200 decrees, setting up institutions responsible for minority inclusion, and allocating funding to support organizations (with civil society often considered the more effective actor), it is still deficient in this area. The purpose of this article is to explore the concepts of minority, ethnicity, social integration, public minority integration policies, citizenship, integration and identity security, starting from the idea that identity should be understood both as a social process and as an instrument of power. It will also review the impact of minority integration policies, focusing on the Roma minority, on identity security, and outline the possible threats / opportunities for understanding and implementing the concept of identity security in public policies for minorities / Roma.
Introduction
Any debate about security / insecurity should start from the social perception of risk, if we admit that security / insecurity is a social construction based on mass perceptions. For a sociological approach, common knowledge, responsible for how various risks are decoded, is encapsulated in the social representations specific to a social group. In security studies, the social representations to which sociology refers are conceptualized as a security imaginary, a concept that has both a formal and an informal dimension. From a formal point of view, the security imaginary refers to the elite's perception of the position of a state in the distribution of world power. This perception, strongly influenced by the experience of statehood, is reflected in the security culture of a state, through which one can draw conclusions about the doxa (automatisms of thought) embedded in the perception of the elite, which strongly influence geostrategic decisions in a state. From an informal point of view, the security imaginary focuses on how an ethno-religious group is situated within the state, as well as on its interactions with other ethno-religious groups on the territory of that state or outside it. The two levels of the security imaginary are fundamental components of any collective identity. The central claim of the article is that identity security can be studied either formally, focusing on the strategic narratives that reveal the elite's security imaginary, or informally, in order to highlight the perceptions of an ethno-religious group in relation to its relative power. More specifically, studying identity security as an informal security imaginary involves shifting the emphasis from the security policy area and security discourse to day-to-day security, starting from the premise that security is "a social construct based on certain connections, emotions, trust and intimacy."
Theoretical and methodological aspects
In this subchapter, I will present the notions of minorities, ethnicity, integration and social integration, the main sources regarding ethnicity, and the main models of integration: pluralism / multiculturalism, assimilation, marginalization, and formal inclusion.
Ethnic minority and ethnicity. The main currents of ethnicity.
Ethnicity has been defined as: "the social group a person belongs to, and either identifies with or is identified with by others, as a result of a mix of cultural and other factors including language, diet, religion, ancestry and physical features traditionally associated with race" (Bhopal, 2004, p. 442).
It is important to make a distinction between the concepts of 'race' and 'ethnicity'. Race is a socially meaningful category of people who share biologically transmitted traits that are obvious and considered important. In contrast to the idea of race, ethnicity simply means a shared cultural heritage (Goodfriend, 2010, p. 19).
Ethnic minorities are people with ethnic origins different from the majority of the public: people of first, second or later generations who can be distinguished from the majority of people living in a specific country or region through their skin color, family names, or specific habits or behavior, and who can be identified as a minority relative to most inhabitants of that country. The term ethnic minority covers a wide range of people in different situations: historical national minorities, migrants, immigrant workers, refugees and asylum seekers, people from former colonies, and people with trans-national identities.
In the European area, "ethnicity" is not perceived as a synonym of "ethnic minority", but as a defining element of the nation.
Making an inventory of the main definitions and concepts regarding ethnicity, there are four main theoretical approaches that underpin its study: primordialism, instrumentalism, materialism and constructivism.
The theory of primordialism, in relation to ethnicity, argues that "ethnic groups and nationalities exist because there are traditions of belief and action towards primordial objects such as biological factors and especially territorial location". This argument relies on a concept of kinship, in which members of an ethnic group feel they share characteristics, origins or sometimes even a blood relationship. "Primordialism assumes ethnic identity as fixed, once it is constructed". Instrumentalist theory is based on the idea that national identity, nationalism and ethnicity were created by elites and that ethnicity is a phenomenon that can be changed, built or even manipulated to achieve economic benefits and certain political goals. According to elite theory, the leaders of a modern state use and manipulate the perception of ethnic identity in order to promote their own goals and to maintain power. Thus, ethnicity, according to this approach, is determined by the struggle of elites within a particular entity, in a certain political and economic context. In line with this concept, ethnic groups are considered to be political creations, created and manipulated by elites to gain access to power and resources.
Materialist approaches to ethnicity are relatively underdeveloped in the literature. 'Crude' Marxist theories, including the work of Michael Hechter (1978), view ethnicity as an epiphenomenon, or a result, of class relations. These crude Marxist theories also suggest that violence between ethnically aligned groups is the result of economic inequalities and elite exploitation. The claims of crude Marxists received heavy empirical criticism from a wide range of scholars. It is now generally acknowledged that ethnicity is not a product of class relations and that there is no one-to-one relationship between the two categories.
Constructivist theory is based on the idea that ethnicity is a constantly changing phenomenon, not a basic human condition. The supporters of this trend claim that "ethnic groups are only products of social-human interaction, maintained only to the extent that they are sustained as social constructs". The idea of ethnicity serves as an umbrella for different communities because individuals, as part of an ethnic group, can obtain additional rights. The existence of these notions regarding the concepts of "ethnicity" and "ethnic identity" allows for a multidimensional interpretation of these phenomena. The approach through the three conceptual trends allows a better structuring of the main theoretical perceptions. Thus, according to primordialist theory, ethnicity is determined at birth and remains unchanged throughout life. Instrumentalist theory addresses ethnicity as a phenomenon based on symbols and myths that is exploited by leaders for pragmatic purposes and for achieving their own interests. The third approach is illustrated by constructivist theory, which claims that ethnic identity is something that people "build" in specific social and historical contexts to promote their own interests, ethnicity being fluid and subjective. Therefore, each of these currents shows that ethnicity and ethnic identity remain basic elements in the constitution of the nation-state.
Integration. Social integration
Integration was first studied by Park and Burgess in 1921 through the concept of assimilation. They defined it as "a process of interpenetration and fusion in which persons and groups acquire the memories, sentiments, and attitude of other persons and groups and, by sharing their experience and history, are incorporated with them in a common cultural life." While some scholars offered an assimilation theory, arguing that immigrants would be assimilated into the host society economically, socially and culturally over successive generations, others developed a multiculturalism theory, anticipating that immigrants could maintain their ethnic identities through the integration process and shape the host society with a diversified cultural heritage. Extending from the assimilation theory, a third group of scholars proposed a segmented integration theory, stressing that different groups of migrants might follow distinct trajectories towards upward or downward mobility on different dimensions, depending on their individual, contextual and structural factors.
Social integration is a complex idea, which means different things to different people. To some, it is a positive goal, implying equal opportunities and rights for all human beings. In this case, becoming more integrated implies improving life chances. To others, however, increasing integration may conjure up the image of an unwanted imposition of conformity. And, to still others, the term in itself does not necessarily imply a desirable or undesirable state at all. It is simply a way of describing the established patterns of human relations in any given society. Thus, in the latter view, one pattern of social integration may provide a more prosperous, just or humane context for human beings than another; but it is also possible for one pattern of social integration to be markedly different from another without being either better or worse.
Sociological theories of social integration
The relationship between the individual, as a social actor, and society, as an institutional order, raises the question of the role the individual actor plays within that order. Sociological theories of integration range between approaches that treat integration as the individual's incorporation into established institutions and approaches that grant actors the capacity to act upon and transform the institutional order to which they belong.
Functionalist sociologists
As a theoretical orientation within a traditionalist approach to social phenomena, functionalist analysis was mainly grounded in the work of the anthropologists B. Malinowski and A. R. Radcliffe-Brown, with a broad development in American structural functionalism, represented by T. Parsons and R. K. Merton.
Constructivist sociologists
Constructivist analyses involve the idea that the individual and the group to be integrated are equally actors and agents of action, capable of selecting, producing and communicating information in practical form, so that they become transmitters and not merely recipients of the message. Social actors are considered capable of thinking, aware of what they are doing, and free to opt for one behavior or another.
Main social integration models
Human rights always refer to relationships between members of a social group. They are perceived as a command that tells what is "normal", what is expected in relationships; the state of "normality" is subject to historical and cultural variation. Modern law describes the relationships and cooperative behavior of individuals living in a community or group. These relationships ensure the stability and continuity of the community and are therefore "fixed" and transmitted from one generation to the next through different forms of collective social memory: custom or tradition, knowledge, values and ideologies, jurisprudence, moral-legal norms, and interpretation. By favoring community groups and different cultural traditions, the various phenomena of regional integration and globalization transform sociological and anthropological analysis into direct sources of law. States that promote the rights of minorities can enjoy several advantages, such as: effective assumption of values, such as recognition and pluralism, in institutional practices; accommodation of ethno-linguistic minorities, involving an increase in internal political stability; an improved political rating in the international arena in terms of assessing the liberal nature of democracy; increased citizens' trust in the various institutions of local or central administration; and a reduction in situations of subjective discrimination.
Multiculturalism
In multiculturalism, the cultures, races, and ethnicities, particularly those of minority groups, deserve special acknowledgement of their differences within a dominant political culture. That acknowledgement can take the forms of recognition of contributions to the cultural life of the political community as a whole, a demand for special protection under the law for certain cultural groups, or autonomous rights of governance for certain cultures. Multiculturalism is both a response to the fact of cultural pluralism in modern democracies and a way of compensating cultural groups for past exclusion, discrimination, and oppression. Most modern democracies comprise members with diverse cultural viewpoints, practices, and contributions. Many minority cultural groups have experienced exclusion or the denigration of their contributions and identities in the past. Multiculturalism seeks the inclusion of the views and contributions of diverse members of society while maintaining respect for their differences and withholding the demand for their assimilation into the dominant culture. Some more-radical multicultural theorists have claimed that some cultural groups need more than recognition to ensure the integrity and maintenance of their distinct identities and contributions. In addition to individual equal rights, some have advocated for special group rights and autonomous governance for certain cultural groups. Because the continued existence of protected minority cultures ultimately contributes to the good of all and the enrichment of the dominant culture, those theorists have argued that the preservation of cultures that cannot withstand the pressures to assimilate into a dominant culture can be given preference over the usual norm of equal rights for all. Multiculturalism is closely associated with identity politics, or political and social movements that have group identity as the basis of their formation and the focus of their political action.
Those movements attempt to further the interests of their group members and force issues important to their group members into the public sphere. In contrast to multiculturalism, identity politics movements are based on the shared identities of participants rather than on a specifically shared culture. However, both identity politics and multiculturalism have in common the demand for recognition and a redress for past inequities. Multiculturalism raises important questions for citizens, public administrators, and political leaders. By asking for recognition of and respect for cultural differences, multiculturalism provides one possible response to the question of how to increase the participation of previously oppressed groups.
Pluralism
Pluralism assumes that diversity is beneficial to society and that autonomy should be enjoyed by disparate functional or cultural groups within a society, including religious groups, trade unions, professional organizations, and ethnic minorities. Arend Lijphart considers that only a certain form of democracy, the consociational one, makes it possible to maintain democracy in a plural society. In such a democracy, "the centrifugal tendencies inherent in a plural society are neutralized by the attitudes and cooperative behavior of the leaders of different segments of the population." In modern democratic society, the connection between people is a political one. Living together does not mean sharing the same religion or culture, or obeying the same authorities, but assuming the status of citizen of the same political organization. "Citizenship is the source of social bonding." Only citizens of a democratic nation see their political rights fully recognized.
Marginalization
Marginalization is the process of pushing a particular group or groups of people to the edge of society by not allowing them an active voice, identity, or place in it. Through both direct and indirect processes, marginalized groups may be relegated to a secondary position or made to feel as if they are less important than those who hold more power or privilege in society. Individuals and groups can be marginalized on the basis of multiple aspects of their identity, including but not limited to: race, gender or gender identity, ability, sexual orientation, socioeconomic status, sexuality, age, and/or religion. Some individuals identify with multiple marginalized groups, and may experience further marginalization as a result of their intersecting identities. Gerry Rodgers has identified categories or patterns of social exclusion present in various definitions, noting that their use varies depending on regional specificity, i.e., the continent where the definition is developed. The top five categories are marginalization from goods and services, the labor market, land ownership, security, and human rights. The sixth category is more vaguely formulated, namely the relationship between marginalization / exclusion and economic and social development strategies, and refers to the social costs of structural adjustment programs. In Romania, the definition proposed in the Social Policy Dictionary refers primarily to the failure to fully achieve citizens' rights, both due to structural causes of a socio-economic nature and to individual causes.
Assimilation
Assimilation is the one-way process by which a group receives, internalizes and shares the values, norms and patterns of behavior or lifestyles specific to another group with which it is in contact; through this process the first group is absorbed into the dominant culture and its cultural identity is replaced by that of the dominant group.
Social inclusion
Social inclusion is the process of improving the terms on which individuals and groups take part in society: improving the ability, opportunity, and dignity of those disadvantaged on the basis of their identity. An inclusive society should be based on mutual respect and solidarity, with equal opportunities and decent living standards for all, where diversity is seen as a source of strength and not as a divider. In every country, certain groups, whether migrants or minorities, confront barriers that prevent them from fully participating in their nation's political, economic, and social life. These groups are excluded through a number of practices ranging from stereotypes and stigmas to superstitions based on gender, race, ethnicity, religion, sexual orientation and gender identity, or disability status. Such practices can rob them of dignity, security, and the opportunity to lead a better life. There is a moral imperative to address social exclusion. Left unaddressed, exclusion of disadvantaged groups can also be costly, and the costs, whether social, political, or economic, are likely to be substantial. One study found that exclusion of the ethnic minority Roma cost Romania 887 million euros in lost productivity. In addition, exclusion also has damaging consequences for human capital development.
Citizenship
Citizenship is the status of a person recognized under custom or law as being a legal member of a sovereign state or belonging to a nation. A person may have multiple citizenships. A person who does not have the citizenship of any state is said to be stateless, while one who lives on state borders whose territorial status is uncertain is a border-lander. Nationality is often used as a synonym for citizenship in English, notably in international law, although the term is sometimes understood as denoting a person's membership of a nation (a large ethnic group) [3]. In some countries, e.g. the United States and the United Kingdom, nationality and citizenship can have different meanings.
Comparing the models of integration presented above under their legal (citizenship), social and religious aspects, the following can be observed:
Public policies for the integration of minorities / ethnic groups from the perspective of the three main institutions of integration, namely education, employment and civic participation.
Romania has created the legal framework to guarantee and secure the rights of national and ethnic minorities, the Framework Convention for the Protection of National Minorities, adopted by the Council of Europe, having been ratified since 1995. Since 1993, based on the Copenhagen criteria, Romania started the preparations to join NATO and the EU. That socio-political context allowed government policy to be reoriented towards the categories of population severely affected by the transition from the planned economy to the market economy, for example the Roma minority. The Roma minority in Romania is the most exposed to the risks of social exclusion: it is discriminated against and has unequal access to education, the labor market, decent housing conditions, and social and health services. Mainly influenced by the evolution of the Romanian and international political scene, the approach to the Roma minority was put into legislative and institutional practice and meant the enacting of solutions such as: the setting up of institutions to represent the Roma minority and to observe their rights; the drafting of public policies explicitly for the Roma or implicitly for vulnerable groups; and the attraction and management of funds from the European Commission, the World Bank, the IBRD and other international organizations. In the period 2001-2011 several public policies were drafted in which the Roma represented the target group (e.g., the national strategies for the Roma from 2001 and 2011, and the Decade of Roma Inclusion).
Education
Access to education for all members of society, irrespective of their psycho-physical, intellectual, socio-economic, family, ethnic or religious particularities, is a priority objective of education systems in most countries, but none can demonstrate that it has been fully met. In Romania, the most affected category of population is children from rural areas, especially Roma children. The kindergarten enrollment rate of Roma children is 40% lower than that of the majority population. 44% of Roma children aged 7-11 are at risk of school dropout. In 2012, approximately 400,000 Roma children of primary-school age were not going to school on a regular basis. Over 75% of Roma children do not graduate from gymnasium. Two out of ten Roma children do not go to school, and the reason most frequently invoked by parents relates to the lack of financial resources. One in six Roma parents explains children's weak participation in school through ethnic discrimination. Schools do not have efficient strategies to prevent the dropout phenomenon; they take action only when it is already too late. Moreover, as the share of Roma children in a school grows, a segregation phenomenon occurs at the class level, accompanied by a decrease in the quality of education and in the material endowments of the respective institution.
The Ministry of National Education (MEN) identified, in the 2011 Roma inclusion strategy, a set of 11 measures that refer to including preschool and school-aged children in some form of education and reducing absenteeism in pre-university education, together with measures that ensure the quality of education, with an emphasis on the management of inclusive education. Desegregation, non-discrimination, the continuation of affirmative measures and the monitoring of the educational system's structures would respond to the Europe 2020 Strategy indicator that aims to include all children in the education system by 2020.
Alongside the measures from the 2011 Roma strategy, other measures to promote children's participation in school, applied according to the Law of Education, are as follows: summer camps for children aged 3-6; "The second chance" for those who exceeded the school age; "School after school" for pupils in primary education; "Functional teaching"; "Bagel and milk" for preschool and school children; scholarships for high school students; affirmative measures for high school and university students; the network of inspectors, professors and teachers for the Romani language and the history of the Roma people; summer schools for the Romani language; school contests for the Romani language; distance learning; the school mediator; the school counselor and assistant; and scholarships for Roma students in general.
Employment
Along with the process of joining the European Union, Romania has adopted strategies and measures to ensure the achievement of the first objective of the European Employment Strategy. The Roma people from Romania have a reduced participation in the official labor market, but a high participation in the unofficial labor market, without social security mechanisms. The INS data from 2002 show that the employment rate was 36%, while another 36% were looking for a job and 28% were inactive (compared with an employment rate of 58% and an unemployment rate of 7.7% at national level). Regarding the situation of the unemployed and of people looking for a job, the share of unemployed Roma is 21%. As employed persons, the Roma mostly work on their own; only 10-15% of them are wage workers. Of these, most have no formal qualification and carry out activities that do not require one, for example cleaning staff, janitor, garbage collector or park worker. In total, among employed young Roma aged 15 and over, 38% work as unqualified workers, 32% hold qualified jobs (workers, salespersons), 9% work in agriculture and 13% have traditional Roma jobs. The economic activities that young Roma carry out are mostly temporary, seasonal or occasional, a fact that indicates massive underemployment at the level of this population category.
As part of the 2011 Roma inclusion strategy, the Ministry of Labor, Family, Social Protection and Elderly enacted 22 measures, such as active measures according to Law no. 76/2002 regarding the unemployment insurance system and incentives for employment, as updated (information, counseling, qualification courses), and measures in the field of the social economy (the draft law on the social economy is in the process of being approved) for developing businesses, setting up SMEs, schemes for micro-grants and income-generating activities, apprenticeships and tutorships, and job opportunities for women based on flexicurity, including partnerships between the MMFPSPV, through its local structures, and the relevant players on the labor market.
Civic participation
The representation of ethnic minorities is an important mechanism for accommodating diversity at national level. Active participation in political decisions, especially in areas that concern them directly, is one of the essential rights of persons belonging to national minorities. This principle is also enshrined in the most important international treaty on minorities, the Framework Convention for the Protection of National Minorities. Romania ensures the participation in the decision-making process of national minorities that would not otherwise be represented. This mechanism has been considered to have mainly symbolic value because it offers the possibility of representing national minorities in the Parliament.
The theory of identity security
The concept of societal security belongs to the constructivist trend (in which ethnicity is a phenomenon of continuous development, built in day-to-day life and manifested throughout life) and was developed in the early 1980s, starting with the redefinition of security by certain institutes, for example COPRI (Copenhagen). In "Security: A New Framework for Analysis," Buzan et al. delimited state security into five distinct sectors, conceptualized around objects and actors (military, environmental, economic, societal and political). Societal security is influenced by the other four sectors of state security (the military sector, which concerns the dual interaction of states' offensive and defensive army capabilities; the political sector, aimed at the organizational stability of states, systems of government and the ideologies that legitimize them; the economic sector, regarding access to the resources, finances and markets necessary to sustain the state at an acceptable level of welfare and power; and the environmental sector, which refers to the maintenance of the local and world biosphere as the essential support on which all human actions depend), but does not overlap with them. Societal security refers to the survival of a community as a cohesive unit; its referent object is "large scale collective identities that can operate independently of the state".
Societal security is concerned with the capacity to sustain, within acceptable conditions, traditional language, culture, identity, and cultural and religious customs. According to Buzan, "The organizational concept of the societal sector is identity." Societal insecurity exists when communities of any kind define a development, actual or potential, as a threat to their survival as a community. Societal insecurity occurs when "a society fears that it will not be able to live as such" and comes from:
Migration: the influx of people will "overcome or dilute" the identity of a group (e.g., the need to define Britishness); vertical competition: the integration of a group into a wider organization (e.g., Euroscepticism regarding EU integration, national-separatist claims); horizontal competition: the group is forced to integrate more influential identities into its own identity (e.g., minority groups in a country).
The first researcher to use the term identity security was Barry Buzan, in 1994, in the work "Identity, Migration and the New Security Agenda in Europe," with reference to "collectives and their identity". According to him, identity security emerged as a result of the interethnic conflicts of the 1990s in the former socialist countries, the states of East Africa and the former Soviet republics of Central Asia and the Caucasus. This term of identity security (societal security) referred to "the ability of a society to maintain its essential character in a context of uncertainty and real or potential threats" and to the threats that may affect the collective identity of large social groups, from peoples and nations to civilizations. Ole Waever's identity security (societal security) refers to "preserving, in acceptable conditions, the traditional patterns of language, culture, association, and national, religious, and habitual identity." Thus, we can say that societal security refers to situations where communities perceive a threat to their identity.
Conclusion
Regarding Romania, during the two decades of transition to a democratic regime, minority integration has come to be pursued as a state responsibility (seeking to ensure identity security for minorities): some 200 decrees have been adopted, institutions dealing with minority inclusion have been set up, and funding has been allocated to supporting organizations; even so, the involvement of civil society, considered to make these efforts more effective, remains deficient. The dialogue between all the targeted actors, needed to achieve these objectives, would ensure the settlement of the national minority regime in Romania on the basis of solid consensus and social acceptance, preventing the risk of a vulnerability that, in the event of political changes, could undo an important part of the achievements so far. The highlighted measures involve effort, patience and costs, but would certainly contribute to strengthening a tolerant interethnic climate based on acceptance, mutual respect and interethnic co-operation in Romania. The desideratum at the European Union level regarding ethnic integration is the further development of the objectives set in 2000: increasing the number and quality of jobs, developing flexibility and security in the context of a changing working environment, modernizing social protection, promoting gender equality, and combating poverty, discrimination and social exclusion.
|
2019-09-11T08:12:15.646Z
|
2019-05-30T00:00:00.000
|
{
"year": 2019,
"sha1": "f1a54538209f316cd3ad8ca6bd83d8bc2ab17d0c",
"oa_license": null,
"oa_url": "https://doi.org/10.26417/ejss-2019.v2i2-67",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "e61048c2bffcc18191a43d61b0bae8a8a4e12dba",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
}
|
235322749
|
pes2o/s2orc
|
v3-fos-license
|
SP1-induced lncRNA ZFPM2 antisense RNA 1 (ZFPM2-AS1) aggravates glioma progression via the miR-515-5p/Superoxide dismutase 2 (SOD2) axis
ABSTRACT Glioma is a common life-threatening tumor with high malignancy and high invasiveness. LncRNA ZFPM2 antisense RNA 1 (ZFPM2-AS1) has been confirmed to be implicated in numerous tumors, while its biological function and mechanism have not been thoroughly understood in glioma. Gene expression was measured by RT-qPCR. Cell proliferation, cell cycle, and apoptosis of glioma cells were assessed by CCK-8, colony formation, flow cytometry and TUNEL assays. The effect of ZFPM2-AS1 on tumor growth was verified by an in vivo assay. The ZFPM2-AS1-mediated mechanism was explored via ChIP, luciferase reporter, and RIP assays. In the present study, ZFPM2-AS1 was demonstrated to be a highly expressed lncRNA in glioma tissues and cells. ZFPM2-AS1 silencing suppressed cell proliferation and cell cycle progression, but facilitated cell apoptosis. In addition, the inhibitive effect of silenced ZFPM2-AS1 was also observed on tumor growth. Furthermore, we found that SP1 interacted with the ZFPM2-AS1 promoter to transcriptionally activate ZFPM2-AS1 expression. Moreover, ZFPM2-AS1 was identified as a competing endogenous RNA (ceRNA) for miR-515-5p to target SOD2. Rescue assays verified that SOD2 overexpression partially abolished the suppressive impact of ZFPM2-AS1 silencing on glioma cell growth. In conclusion, this study corroborated the regulatory mechanism of the SP1/ZFPM2-AS1/miR-515-5p/SOD2 axis in glioma, indicating that targeting ZFPM2-AS1 might be an effective way to treat glioma.
Introduction
Glioma, a common brain tumor derived from the central nervous system, comprises numerous heterogeneous subtypes [1]. Among these, high-grade glioma (such as glioblastoma) is fatal and accompanied by an unfavorable prognosis; moreover, it constitutes a majority of deaths related to brain tumors [2]. Glioblastoma leads to pathological changes in the brain vasculature, which, in turn, facilitate glioblastoma development, enhance tumor aggressiveness, and exacerbate therapeutic resistance [3]. Although great advances in glioma treatment have been achieved in recent years, the improvement in therapeutic outcomes is still limited [4]. Patients with advanced-stage glioma face a poor prognosis because of the disease's drug resistance and infiltrative nature [5,6]. Therefore, further research is needed to identify novel molecular mechanisms that might be useful to improve glioma diagnosis and prognosis, thereby prolonging long-term survival [7,8].
Long non-coding RNAs (lncRNAs), exceeding 200 nucleotides, are identified as heterogeneous transcripts that widely participate in pathophysiological processes [9]. Extensive studies have suggested that lncRNAs exert essential functions in the occurrence and development of glioma. For example, lncRNA NEAT1 sponged miR-132 to target SOX2, thus facilitating glioma cell migration and invasion [10]. LncRNA HOTAIR was recognized as a diagnostic and prognostic biomarker for glioma [11]. In addition, lncRNA MIR22HG contributed to cell growth in glioma by activating Wnt/β-catenin signaling [12]. Previously, lncRNA ZFPM2 antisense RNA 1 (ZFPM2-AS1) has been reported to accelerate cell migration and invasion of hepatocellular carcinoma through the miR-139/GDF10 axis [13]. Furthermore, ZFPM2-AS1 contributed to epithelial-mesenchymal transition (EMT) and cell migration by regulating the miR-511-3p/AFF4 axis in lung adenocarcinoma [14]. However, the function and mechanism of ZFPM2-AS1 have not yet been elucidated in glioma.
MicroRNAs (miRNAs) are a class of single-stranded RNAs (20-23 nucleotides), which can act as either oncogenes or tumor suppressors in various cancers, including glioma [15,16]. For example, miR-451 inhibited glioma cell proliferation, invasion, and apoptosis by regulating the PI3K/AKT signaling pathway [17]. MiR-4516 predicted unfavorable prognosis and acted as an oncogene in glioblastoma by targeting PTPN14 [18]. MiR-515-5p has been reported to act as a tumor suppressor in various human cancers, such as prostate cancer [19], non-small cell lung cancer [20], and breast cancer [21]. Nevertheless, the biological role of miR-515-5p in glioma remains unclear.
The present study aimed to explore the clinical significance of ZFPM2-AS1 expression in glioma, and to investigate the biological function and underlying mechanism of ZFPM2-AS1 in regulating the malignant phenotypes of glioma.
Clinical specimens
The glioma tissues and adjacent normal tissues were obtained from 30 patients who were newly diagnosed at The Affiliated Sir Run Run Hospital of Nanjing Medical University. Besides, serum samples were also collected from 30 healthy volunteers as a healthy control group. The informed consent was signed by each patient, and this study was permitted by the Ethics Committee of the Affiliated Sir Run Run Hospital of Nanjing Medical University. The collected specimens were immediately frozen by liquid nitrogen and kept at −80°C after surgical resection.
Cell lines
Human glioma cell lines (A172, LN229, U87, T98G) and normal human astrocytes (NHA) used in our study were obtained from ATCC (Manassas, VA). The cell lines mentioned above were cultured in DMEM (Invitrogen, Carlsbad, CA) supplemented with 10% FBS (Gibco, Waltham, MA) under a humidified atmosphere with 5% CO2 at 37°C.
RT-qPCR
Total RNA from tissues and cells was extracted with TRIzol (Invitrogen) and used for cDNA synthesis performed with a Reverse Transcription Kit (Toyobo, Osaka, Japan). The qPCR experiment was performed with SYBR Green Super Mix (Bio-Rad, Hercules, CA). MiR-515-5p expression was normalized to the U6 transcript, and GAPDH served as the internal reference for ZFPM2-AS1, SP1 and SOD2. The 2^(−ΔΔCT) method [22] was applied to calculate relative expression.
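As a reading aid, here is a minimal sketch of the 2^(−ΔΔCT) calculation cited above; the Ct values are hypothetical, and the reference gene corresponds to U6 (for miR-515-5p) or GAPDH (for ZFPM2-AS1, SP1 and SOD2) as described in the methods.

```python
# Minimal sketch of the 2^(-ΔΔCT) relative-expression method [22].
# Ct values below are hypothetical and only illustrate the arithmetic.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Return the fold change of a target gene by the 2^(-ΔΔCT) method."""
    d_ct_sample = ct_target_sample - ct_ref_sample     # ΔCT in the tumor sample
    d_ct_control = ct_target_control - ct_ref_control  # ΔCT in the normal control
    dd_ct = d_ct_sample - d_ct_control                 # ΔΔCT
    return 2 ** (-dd_ct)

# Example: ZFPM2-AS1 normalized to GAPDH (made-up Ct values).
fold = relative_expression(24.1, 18.0, 26.5, 18.2)
print(f"ZFPM2-AS1 fold change: {fold:.2f}")  # > 1 means higher expression in tumor
```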
Colony formation assay
A172 and LN229 cells (1000 cells/well) were planted in the 6-well plates and incubated for 14 days at 37°C. Then, colonies (over 50 cells) were immobilized in 4% formaldehyde (Sigma-Aldrich) for sequential staining via 0.5% crystal violet (Sigma-Aldrich). After staining, the colonies were manually counted.
Flow cytometry analysis
Briefly, transfected A172 and LN229 cells were collected and fixed with 75% ethanol overnight at 4°C. After that, cells were rinsed and re-suspended in PBS. Then, the cells were incubated with RNase (10 mg/ml) and PI (1 mg/ml) for half an hour at 37°C in the dark. Cell cycle analysis was conducted by flow cytometry [24].
TUNEL assay
After transfection, A172 and LN229 cells were treated with paraformaldehyde (4%) for 15 min and Triton-X 100 (0.25%) for 20 min. Then, TUNEL detection kit (Roche, Basel, Switzerland) was used to treat the cells. After DAPI staining, the cells were observed using a fluorescent microscope [25].
Xenograft tumors
Male nude mice (18-22 g, 4 weeks old) were obtained from SJA Laboratory Animal Co., Ltd. (Hunan, China) and randomly divided into two groups. A172 cells stably transfected with sh-ZFPM2-AS1#1 or sh-NC were hypodermically injected into the mice. Every 7 days, tumor volume was estimated by measuring the width and length of the tumors with a caliper. Four weeks later, each mouse was sacrificed by dislocation, and xenografted tumors were collected for weighing [26]. The study was approved by the Ethics Committee of the Affiliated Sir Run Run Hospital of Nanjing Medical University.
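The paper gives no explicit volume formula; a commonly used approximation for caliper-measured subcutaneous xenografts, shown here as an assumption rather than the authors' stated method, is V = (length × width²) / 2.

```python
# Assumed formula (not stated in the paper): ellipsoid approximation for
# subcutaneous xenografts, V = 0.5 * length * width^2, in mm^3.
def tumor_volume(length_mm: float, width_mm: float) -> float:
    """Approximate tumor volume (mm^3) from caliper length and width."""
    return 0.5 * length_mm * width_mm ** 2

print(tumor_volume(10.0, 6.0))  # 180.0 mm^3 for a 10 mm x 6 mm tumor
```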
Immunohistochemistry (IHC)
After fixation and paraffin-embedment, the tissues acquired from xenografts were cut into 4-µm sections. Then, anti-Ki67 (ab16667; Abcam) was used to incubate the sections. Later, the sections were washed with PBS, incubated with HRP-conjugated secondary antibody (Abcam), and finally photographed using a light microscope [27].
ChIP assay
Following the manufacturer's protocol, ChIP Assay Kit (Beyotime) was applied for ChIP assay in transfected A172 and LN229 cells [28]. The cells were collected, immobilized in formaldehyde, and then lysed in ChIP lysis buffer. After the crosslinking of DNAs and proteins, the formaldehyde was quenched in glycine buffer. Then, sonication was used to generate DNA fragments (200-400 bp). Afterward, the fragmented DNAs were precipitated by anti-SP1 (Abcam). Anti-IgG (Abcam) was applied as the negative control. Finally, precipitated DNAs were determined by qPCR.
RNA pull-down assay
RNA pull-down assay was conducted by applying biotin-labeled ZFPM2-AS1 (Bio-ZFPM2-AS1) as a probe and later examining the potential miRNAs by qPCR analysis [29]. Biotin-control (Bio-NC) was utilized as the control biotin-labeled lncRNA. Briefly, lysis buffer (650 μl) was added to the collected cells to obtain cell extracts, and the cell lysates (2 μg) were then mixed with Bio-ZFPM2-AS1 (the ZFPM2-AS1 biotin probe) or Bio-NC (the negative control probe). Afterward, 100 μl of Pierce streptavidin agarose beads (Baili Biotech) was added to each binding reaction, and the complexes were incubated for 45 min at room temperature. Later, the beads were collected and the precipitated RNAs were eluted and detected by RT-qPCR.
RIP assay
RIP experiments were performed using the Magna RIP kit (Millipore, Bedford, USA) according to the manufacturer's protocol [30]. Briefly, A172 and LN229 cells were lysed in RIP lysis buffer, and then incubated with magnetic beads conjugated with anti-Ago2 or anti-IgG (control). After purification, the immunoprecipitated RNA was subjected to RT-qPCR.
Statistical analysis
SPSS 20.0 (SPSS, Chicago, IL, USA) was used for all statistical analyses, and results were expressed as mean ± SD. Differences between two groups or among multiple groups were examined via Student's t-test or one-way ANOVA, respectively, and considered statistically significant when P < 0.05. In vitro experiments were repeated at least three times. Pearson's correlation analysis was applied to evaluate gene expression correlations.
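The analyses were run in SPSS, but as an illustration the following SciPy sketch reproduces the same three tests on hypothetical data; none of the arrays below are the study's measurements.

```python
# Minimal SciPy stand-ins for the SPSS analyses described above.
# All data are hypothetical.
import numpy as np
from scipy import stats

sh_nc = np.array([1.02, 0.95, 1.08])     # relative viability, control group
sh_zfpm2 = np.array([0.61, 0.55, 0.66])  # relative viability, knockdown group
rescue = np.array([0.90, 0.84, 0.93])    # knockdown + SOD2 overexpression

t, p_t = stats.ttest_ind(sh_nc, sh_zfpm2)             # two groups: Student's t-test
f, p_anova = stats.f_oneway(sh_nc, sh_zfpm2, rescue)  # multiple groups: one-way ANOVA

# Expression correlation across tissues (e.g., ZFPM2-AS1 vs. SOD2): Pearson's r.
zfpm2_expr = np.array([2.1, 3.4, 1.8, 4.0, 2.9])
sod2_expr = np.array([1.9, 3.1, 1.6, 3.8, 2.7])
r, p_r = stats.pearsonr(zfpm2_expr, sod2_expr)

print(f"t-test p = {p_t:.4f}; ANOVA p = {p_anova:.4f}; Pearson r = {r:.3f} (p = {p_r:.4f})")
```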
ZFPM2-AS1 knockdown inhibited the tumorigenesis of glioma
At first, the ZFPM2-AS1 expression level in glioma tissues was measured via RT-qPCR. Compared with the control tissues, ZFPM2-AS1 was highly expressed in glioma tissues (Figure 1a). Then, we analyzed the clinical potential of ZFPM2-AS1 in glioma patients. As presented in Table 1, the expression of ZFPM2-AS1 was associated with tumor size, Karnofsky performance score (KPS), and WHO stage. In addition, we assessed the diagnostic value of serum ZFPM2-AS1, and the results showed that the area under the curve (AUC) was 0.9214, implying that ZFPM2-AS1 might be used as an indicator for glioma diagnosis (Figure 1b). Moreover, ZFPM2-AS1 was highly expressed in glioma cell lines (A172, LN229, U87 and T98G) compared to the NHA cell line (Figure 1c). A172 and LN229 cells were selected for the following experiments owing to their higher expression of ZFPM2-AS1. To examine the biological function of ZFPM2-AS1 in glioma, loss-of-function assays were performed. RT-qPCR indicated that ZFPM2-AS1 expression was decreased in A172 and LN229 cells after sh-ZFPM2-AS1#1/2 transfection (Figure 1d). CCK-8 assay indicated that the viability of A172 and LN229 cells was hindered by silencing ZFPM2-AS1 (Figure 1e). Furthermore, fewer colonies were found in the ZFPM2-AS1-knockdown group by colony formation assay (Figure 1f). In addition, the flow cytometry results implied that ZFPM2-AS1 depletion increased the proportion of cells in G0/G1 phase, whereas it reduced the proportion of cells in S phase (Figure 1g and Supplementary Figure S1). TUNEL assay suggested that the apoptosis rate was enhanced in A172 and LN229 cells upon ZFPM2-AS1 deficiency (Figure 1h). All data indicated that ZFPM2-AS1 silencing restrained cell growth in glioma.
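The reported AUC of 0.9214 comes from a ROC analysis of serum ZFPM2-AS1; the sketch below shows how such an AUC is computed, with hypothetical labels and expression values.

```python
# Minimal sketch of the ROC/AUC computation for a serum diagnostic marker.
# Labels and expression values are hypothetical, not the study's cohort.
import numpy as np
from sklearn.metrics import roc_auc_score

labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])  # 1 = glioma patient, 0 = healthy control
serum_zfpm2 = np.array([3.2, 2.8, 3.9, 2.1, 1.0, 1.4, 0.9, 1.9])  # relative expression

auc = roc_auc_score(labels, serum_zfpm2)
print(f"AUC = {auc:.4f}")  # the study reports AUC = 0.9214 on its own cohort
```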
ZFPM2-AS1 depletion hindered tumor growth in glioma
In this section, the effect of ZFPM2-AS1 on glioma tumor growth was further analyzed in vivo. As demonstrated in Figure 2a, tumors harvested from the mice in the sh-ZFPM2-AS1#1 group were smaller than those from the mice in the sh-NC group. Moreover, tumor volume and weight were also decreased by knocking down ZFPM2-AS1 expression (Figure 2b and c). Additionally, the staining intensity of Ki-67 (a proliferation antigen) was significantly reduced with sh-ZFPM2-AS1#1 transfection (Figure 2d). Collectively, these results showed that ZFPM2-AS1 deficiency impaired tumorigenesis in glioma.
ZFPM2-AS1 was transcriptionally induced by SP1
Subsequently, we inspected the upstream mechanism that caused the upregulation of ZFPM2-AS1. Previously, numerous reports have proved that transcription factors can transcriptionally activate the expression of lncRNAs [31,32]. We then used the UCSC website (http://genome.ucsc.edu/) to predict a promising transcription factor for ZFPM2-AS1, and SP1 was identified. According to JASPAR (http://jaspar.genereg.net/), the DNA motif and binding sequence between the ZFPM2-AS1 promoter and SP1 were obtained (Figure 3a). Data from the ChIP assay further validated the interaction of SP1 with the ZFPM2-AS1 promoter (Figure 3b). Subsequently, we transfected oe-SP1 vectors into A172 and LN229 cells, and found up-regulated expression of SP1 in these cells (Figure 3c). Luciferase reporter assay uncovered that SP1 overexpression increased the luciferase activity of ZFPM2-AS1 promoter-WT, while that of ZFPM2-AS1 promoter-Mut displayed no remarkable difference (Figure 3d). Then, we detected ZFPM2-AS1 expression in A172 and LN229 cells with SP1 overexpression, and the data showed that ZFPM2-AS1 was upregulated in SP1-overexpressing A172 and LN229 cells (Figure 3e). These data revealed that SP1 activated ZFPM2-AS1 expression through transcriptional regulation.
SOD2 was targeted by miR-515-5p
To investigate the downstream target gene of miR-515-5p, 12 potential targets of miR-515-5p were predicted using StarBase (Supplementary Figure S2). Then, the expression of these genes was examined in miR-515-5p-overexpressing glioma cells, and the results showed that SOD2 expression was reduced significantly more than that of the other genes (Figure 5a). In addition, SOD2 was found to be highly expressed in glioma tissues and cells (Figure 5b and c). Luciferase reporter assay revealed decreased luciferase activity of SOD2-WT in miR-515-5p-overexpressing cells, while that of SOD2-Mut remained unchanged (Figure 5d). By RIP assay, we found that miR-515-5p and SOD2 were both abundant in the anti-Ago2-precipitated complex (Figure 5e). Finally, SOD2 expression was negatively correlated with miR-515-5p and positively correlated with ZFPM2-AS1 in glioma tissues (Figure 5f). Conclusively, miR-515-5p targeted SOD2 in glioma.
ZFPM2-AS1 accelerated glioma cell growth via increasing SOD2 expression
Finally, rescue assays were conducted to verify whether SOD2 impacted the regulatory effect of ZFPM2-AS1 on cellular processes in glioma. At first, the overexpression efficiency of SOD2 was confirmed by RT-qPCR (Figure 6a). As revealed in Figure 6b, the inhibition of glioma cell viability caused by ZFPM2-AS1 knockdown was rescued by overexpressing SOD2. Colony formation assay indicated that SOD2 overexpression counteracted the inhibitive effect of ZFPM2-AS1 silencing on cell colonies (Figure 6c). The knockdown of ZFPM2-AS1 promoted cell cycle arrest, which was reversed by SOD2 upregulation (Figure 6d and Supplementary Figure S3). Furthermore, the accelerating effect of ZFPM2-AS1 silencing on cell apoptosis was neutralized by SOD2 augmentation (Figure 6e). Overall, ZFPM2-AS1 facilitated glioma cell growth by upregulating SOD2.
Discussion
High proliferative potential and aggressive angiogenesis are typical characteristics of glioma and cause frequent recurrence and unfavorable prognosis [34]. Understanding the specific molecular events underpinning glioma development could be beneficial for earlier detection and better prognosis. Previous studies have indicated that lncRNAs participate in the progression of glioma. For example, Li et al. reported that LINC00319 promoted tumorigenesis of glioma and was associated with poor prognosis [35]. LINC01116 facilitates the proliferation, migration, and invasion of glioma cells by targeting VEGFA [36]. Herein, we inspected the expression level and functional potential of lncRNA ZFPM2-AS1 in glioma. It was found that ZFPM2-AS1 was highly expressed in glioma. In vitro and in vivo assays exhibited that ZFPM2-AS1 deficiency diminished cell proliferation and cell cycle progression, enhanced cell apoptosis, and retarded tumor growth in nude mice. Importantly, accumulating studies have demonstrated that the upregulation of lncRNAs can be attributed to the function of transcription factors [37,38]. As a key transcription factor, SP1 is widely recognized to control the expression of lncRNAs [39,40]. Herein, our experiments confirmed the interaction between SP1 and the ZFPM2-AS1 promoter. All the data confirmed the oncogenic role of ZFPM2-AS1 in glioma, and that ZFPM2-AS1 was transcriptionally induced by SP1. Numerous studies have revealed that lncRNAs and miRNAs can form a control network to exert regulatory functions in human cancers [41]. This mechanism, known as the ceRNA network, has emerged as an important modulator involved in epigenetic modification, in which lncRNAs alter tumor-related genes by sponging miRNAs [42,43]. Therefore, we assumed that ZFPM2-AS1 might act as a ceRNA to participate in glioma progression. Through bioinformatics, we found ten promising miRNAs, and miR-515-5p was then identified with the highest possibility. Previous studies uncovered that miR-515-5p served as a cancer inhibitor by suppressing cell migration and invasion in hepatocellular carcinoma [44]. MiR-515-5p was also reported to diminish cell proliferation in lung squamous cell carcinoma [45]. In our study, we observed that miR-515-5p was downregulated in glioma tissues and cells. In addition, miR-515-5p was found to interact with and be negatively associated with ZFPM2-AS1. Overall, these results implied that ZFPM2-AS1 displayed functional importance in glioma by serving as a ceRNA to regulate miR-515-5p.
Superoxide dismutase 2 (SOD2) has been reported as an oncogenic regulator in non-small cell lung cancer [46]. Moreover, SOD2 was found to improve liver cell detoxification capability [47]. In addition, SOD2 was also confirmed to play an important part in HER2-positive breast cancer [48]. Our research revealed that SOD2 was targeted by miR-515-5p, and that SOD2 overexpression rescued the inhibitive effect of silenced ZFPM2-AS1 on glioma cell growth.
Conclusions
Our study was the first to explore the function and mechanism of ZFPM2-AS1 in glioma, and discovered that ZFPM2-AS1, upregulated by SP1, promoted glioma cell growth via the miR-515-5p/SOD2 axis. This discovery indicates that ZFPM2-AS1 might be a prospective biomarker for glioma treatment.
Disclosure statement
No potential conflict of interest was reported by the author(s).
Funding
This work was supported by Nanjing Health Science and Technology Development Special Fund Project (YKK19166 and YKK19167).
|
2021-06-04T06:16:19.951Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "24abc86ebeda80269d6635b64b0d43de3f45ae6f",
"oa_license": "CCBYNC",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21655979.2021.1934241?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "845dff36c3ab152c8ccfb0951f288205d51996c4",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
240234724
|
pes2o/s2orc
|
v3-fos-license
|
Metformin Use and Survival in Patients with Advanced Extrahepatic Cholangiocarcinoma: A Single-Center Cohort Study in Fuyang, China
Aims Metformin is an oral antidiabetic agent that has been widely prescribed for the treatment of type II diabetes. In recent years, anticancer properties of metformin have been revealed for numerous human malignancies. However, there are few data available regarding the feasibility and safety of its use in an advanced extrahepatic cholangiocarcinoma (EHCC) population. This study aimed to evaluate the feasibility, safety, and value of metformin use for survival in patients with advanced EHCC. Methods All patients with advanced EHCC observed at Fuyang People's Hospital between January 2015 and November 2020 were included in the study. Case data, clinical information, and imaging results were abstracted from a self-administered questionnaire and the electronic medical record. All patients were divided into study subjects and control subjects: the study subjects were given metformin, 0.5 g, three times a day, while the control subjects received no metformin. Metformin use and survival time were ascertained by telephone, out-patient, or door-to-door visits after the subjects left the hospital. Results One hundred and thirty-three study cases and 589 controls were included in the analysis. This study showed that metformin use did not improve the overall survival of patients with advanced EHCC ([95% CI]: -17.05 to 0.375, t = -1.889, P = 0.061); however, among patients who underwent drainage treatment, survival time in the control group (n = 496) was significantly shorter than in the study group (n = 113), and the difference was statistically significant (z = -2.230, P = 0.026). Survival time differed significantly depending on whether metformin was used before or after the diagnosis of advanced EHCC (OR [95% CI], 3.432 [2.617-4.502]; P = 0.001), and there was a significant association between the duration of metformin use and survival prognosis (OR [95% CI], 2.967 [1.383-6.368]; P = 0.005). Conclusion Metformin can improve the survival of advanced EHCC patients who underwent drainage treatment, especially when used after the diagnosis of advanced EHCC and for a long duration.
Introduction
Cholangiocarcinoma (CCA) is categorized as distal, perihilar, or intrahepatic cholangiocarcinoma [1]. CCA clinical presentation depends on the anatomic location and macroscopic growth pattern [1]. Early CCA often has no special clinical symptoms. Perihilar and distal CCA often present with jaundice, which gradually deepens with time, light stool color, dark yellow urine, and skin pruritus. Intrahepatic CCA is dominated by nonspecific symptoms such as abdominal pain, fatigue, weight loss, night sweats, and cachexia; however, cirrhotic patients can be asymptomatic [1]. Histologically, 90% of CCA are adenocarcinomas, while known variants include the signet-ring type, papillary adenocarcinoma, squamous cell carcinoma, clear cell type, oat cell carcinoma, intestinal-type adenocarcinoma, and adenosquamous carcinoma [1]. Surgery is the main curative method, whereas stent implantation by endoscopic retrograde cholangiopancreatography, percutaneous transhepatic cholangial drainage, systemic chemotherapy, and radiofrequency ablation are treatment options for advanced CCA [1,2]. CCA is a malignant tumor originating in the biliary tree, and it is the second most common primary liver cancer after hepatocellular carcinoma [2,3]. The Bertuccio et al. study showed that mortality from extrahepatic CCA (ECC) levelled off or decreased [4], while EHCC appears to be one of the most rapidly increasing tumors in China [5,6]. Although some treatment approaches have been developed as therapeutics for EHCC, the prognosis of patients with unresectable or advanced EHCC is poor [7,8]. More than 50% of cases with jaundice are inoperable at the time of first diagnosis [7].
Metformin is an oral antidiabetic agent that has been widely prescribed for the treatment of type II diabetes [9,10]. This drug lowers hyperglycemia through the inhibition of hepatic glucose production. Compared to normal cells, cancer cells preferentially metabolize glucose to lactate, even in aerobic conditions. Such metabolic alterations not only promote the growth and invasion of tumor cells but also support their chemoresistance [11]. Kaewpitoon et al. suggested that metformin might influence tumorigenesis both indirectly, through the systemic reduction of insulin levels, and directly, via the induction of energetic stress [12]. Recent epidemiologic surveys indicated that metformin use was associated with reduced tumor incidence in patients with type II diabetes [13][14][15][16]. The anticarcinogenic activity of metformin has been attributed to many mechanisms, including activation of the liver kinase B1 (LKB1)/AMP-activated protein kinase (AMPK) pathway, inhibition of the unfolded protein response, inhibition of protein synthesis, induction of cell cycle arrest and/or apoptosis, activation of the immune system, and potential eradication of cancer stem cells [12,15,17]. LKB1/AMPK pathway activation inhibits the mammalian target of rapamycin (mTOR), which negatively affects protein synthesis in tumor cells [15]. Several in vivo and in vitro studies have demonstrated that metformin inhibits the proliferation of various cancer cell types, including gastric, esophageal, breast, prostate, hepatocellular carcinoma, and colon cancer cells [12]. Anticancer properties of metformin have been reported for numerous human malignancies including CCA, with antiproliferative effects in vitro [3]. Moreover, some studies further found that metformin effectively sensitized CCA cells to certain chemotherapies [18].
In this study, the effects of metformin on survival and prognosis of patients with advanced EHCC were assessed.
Study Population.
All patients with advanced EHCC observed at Fuyang People's Hospital between January 2015 and November 2020 were included in the study. We searched for EHCC cases using an imaging diagnosis system and an electronic medical record system. The diagnosis of advanced EHCC was confirmed by abdominal enhanced CT, abdominal enhanced MR, and PET-CT; all patients had confirmed spread (blood vessel, nerve, lymph node, or organ tissue involvement). After screening, 722 patients with confirmed advanced EHCC were included in the analysis. They were divided into study subjects and control subjects: the study subjects (n = 133) agreed to participate in the study and took metformin, 0.5 g, three times a day (produced by Guizhou Shengjitang Pharmaceutical Co., Ltd.), while control subjects (n = 589) received no metformin. Cases were matched by sex, age, ethnicity, and residence to subjects enrolled at the Fuyang People's Hospital between January 2015 and November 2020. This study was approved by the ethics committee of the Fuyang People's Hospital, China.
Clinical Information.
Case data, clinical information, and imaging results were abstracted from the self-administered questionnaire and electronic medical record. Relevant factors abstracted included diabetes, history of liver disease (HBV or HCV infection), tumor location, radiotherapy, other drugs that may affect the tumor (aspirin, immunomodulators, antitumor agents, Chinese herbal medicine, etc.), drainage treatment, family history of tumor, drinking history, and smoking history.
We collected the results of tests for HBV and HCV infection for all subjects. HBV infection was defined as a positive hepatitis B surface antigen, and HCV infection was defined as a positive HCV antibody or HCV RNA. We abstracted the results of tests for total bilirubin (TBil), transaminases (ALT and AST), bile duct enzymes (ALP and γ-GT), tumor markers (AFP, CA19-9, and Hsp90α), jaundice, and ascites for all cases.
Previous or current use of metformin was ascertained from the questionnaire and physician's notes, and the duration of metformin use was ascertained from follow-up of the patients or their families.
The subjects were given metformin, and the medication was recorded in detail; the date of diagnosis of advanced EHCC was taken as the starting point of observation. Metformin use and survival time were ascertained by telephone, out-patient, or door-to-door visit after the subjects left the hospital. Six months of observation was regarded as the end point of the event, with death recorded as 0 and survival recorded as 1.
Statistical Analysis
The t-test was used to compare normally distributed data among groups, expressed as mean ± SD. The Mann-Whitney U test was used to analyze differences between groups for non-normally distributed data, with skewed data expressed as median (IQR). The Pearson chi-square test (χ²) was used to analyze differences between the groups. Odds ratios (OR) and 95% confidence intervals (95% CI) were used to estimate the relationship between metformin use and survival prognosis.
Kaplan-Meier survival analysis, log-rank tests, and the Breslow test were performed to analyze the overall survival (OS) of subjects. Box plots were used to compare and analyze the differences between groups. Statistical analyses were done using SPSS software, version 20.0. All tests were two-sided (or Fisher's exact test where appropriate), with P < 0.05 defined as statistically significant.
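For readers who want to reproduce this kind of analysis outside SPSS, the sketch below shows an equivalent Kaplan-Meier estimate and log-rank comparison in Python. It is a minimal illustration, assuming the lifelines package; the CSV file and its column names (time_days, event, group) are hypothetical. Note that lifelines expects death coded as event = 1, the opposite of the 0/1 coding used in this study.

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("ehcc_followup.csv")   # hypothetical follow-up file
df["death"] = 1 - df["event"]           # recode: this study coded death as 0

metf = df[df["group"] == "metformin"]
ctrl = df[df["group"] == "control"]

# Kaplan-Meier estimate for the metformin group
km = KaplanMeierFitter()
km.fit(metf["time_days"], event_observed=metf["death"], label="metformin")
print(km.median_survival_time_)

# Log-rank test between the two groups (e.g., drainage-treated subgroups)
res = logrank_test(metf["time_days"], ctrl["time_days"],
                   event_observed_A=metf["death"],
                   event_observed_B=ctrl["death"])
print(res.p_value)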
Patient Characteristics.
One hundred and thirty-three study cases and 589 controls were included in the analysis.
Tables 1 and 2 summarize the baseline characteristics and risk factors for the study and control groups that may affect survival and prognosis. Demographics were comparable between groups, and trend analysis showed no differences in baseline characteristics or laboratory results (P > 0.05).
Study Subject Characteristics.
In the study group, 6 cases were treated with metformin before EHCC because of diabetes, while 127 cases were given metformin after EHCC. In the study group (n = 133), there was no significant difference between the age at which metformin was started and survival time (Figure 1). As expected, the survival time of patients with drainage treatment from the control group (n = 496) was significantly shorter than that of patients with drainage treatment from the study group (n = 113), and the difference was statistically significant (z = −2.230, P = 0.026) (Table 4 and Figure 2). There was no significant difference in survival time between patients without drainage treatment from the study group (n = 20) and patients without drainage treatment from the control group (n = 93) (95% CI: −9.012 to 13.442; P = 0.697) (Figure 3). Compared to countryside patients, the survival time of town patients from the study group (n = 59) and the control group (n = 291) was significantly longer (101.03 ± 44.94 vs. 132.56 ± 44.59 in the study group; 100.84 ± 41.27 vs. 112.66 ± 36.96 in the control group), and the difference was statistically significant (P < 0.01) (Table 4).
The Value of Metformin Use in Feasibility and Safety.
This study showed that only 7 patients (5.26%) actively withdrew from metformin because of intolerance, and their survival time was shorter (Table 5).
Discussion
EHCC is a highly aggressive epithelial malignancy and usually has a poor prognosis because of its insensitivity to therapies and the difficulty of detection [19], particularly for advanced EHCC. The diagnosis of EHCC is very complex and usually requires a combination of clinical symptoms, endoscopic techniques, imaging techniques, and cytopathological tests. In recent years, metformin has received growing attention due to its promising anticancer potential observed in many human tumors [3]. A number of epidemiologic studies showed that metformin use in patients with diabetes was associated with a decreased incidence of various cancers, including CCA, gastroenterological cancers, pancreatic cancer, and breast cancer [16]. To our knowledge, this is the first study of the relationship between metformin use and survival in advanced EHCC. This study showed that metformin use cannot improve the overall survival rate of patients with advanced EHCC (95% CI: −17.05 to 0.375; t = −1.889, P = 0.061, Figure 1), but the survival time of patients with drainage treatment using metformin was significantly longer than that of patients without metformin (z = −2.230, P = 0.026). In recent decades, treatment with the antidiabetic drug metformin has been associated with a decreased incidence of intrahepatic CCA. Metformin reverts the mesenchymal and epithelial-to-mesenchymal transition (EMT) traits in intrahepatic CCA by activating AMPK-FOXO3-related pathways, suggesting it might have therapeutic implications [20]. Metformin treatment reverses EMT and downregulates the proteolytic enzyme matrix metalloproteinase-2 (MMP-2), resulting in suppression of CCA cell migration and invasion. Some studies [10,20] demonstrated that metformin exerts antitumoral effects by (1) inhibiting AMP deaminase, which converts AMP into IMP, resulting in AMP accumulation with subsequent activation of AMPK; (2) activating AMPK, which plays a role in cellular energy homeostasis [21]; (3) blocking mitochondrial respiratory chain complex I (NADH dehydrogenase), which impairs ATP synthesis and increases the AMP/ATP ratio [15]; and (4) targeting the AMPK/mTORC1 pathway in cholangiocarcinoma cells [9,22]. The Trinh et al. and Saengboonmee et al. studies [3,23] showed that metformin exposure significantly reduced cancer cell proliferation, migration, and invasion [9], possibly involving the signal transducer and activator of transcription 3 (STAT3) and nuclear factor-kappa B (NF-κB) pathways and reversal of EMT marker expression. STAT3 plays important roles in cancer development and progression, and its expression was associated with shorter survival of patients with CCA. They further suggest that metformin may be useful for CCA management.
Complexes of Cdk6 and Cdk4 with cyclin D1 are required for G1 phase progression [13], whereas complexes of Cdk2 with cyclin E are required for the G1 to S transition [21]. Metformin has been demonstrated to downregulate cyclin D1 in various tumor cell lines, including stomach, colon, liver, breast, and prostate cancer lines [12]. The findings shown here indicate that these major cell cycle regulators (Cdk4, cyclin D1, and phosphorylated Rb) may be intracellular targets of the metformin-mediated antiproliferative effect in human CCA cell lines. Metformin has also been demonstrated to alter the phosphorylation of many proteins, including c-Src, β-catenin, CREB, Chk2, and Akt, in various cell lines. The findings of Fujimori et al. [13] indicate that metformin inhibits human CCA cell proliferation and cancer growth, potentially by suppressing cell cycle-related molecules through miRNA alterations. Zhang et al. [15] demonstrated that metformin treatment profoundly suppressed the proliferation of two human CCA cell lines (QBC939 and MZ-CHA-1) in a dose-dependent way. By comparing metformin-induced changes of metabolite levels between CCA cells and normal HUVEC cells,
they indicated that metformin profoundly aggravates the Warburg effect and promotes glycolysis in CCA cells [11]. In the Tang et al. study, by contrast, metformin was found to suppress the Warburg effect in CCA, promoting oxidative phosphorylation and decreasing aerobic glycolysis, thus making CCA cells vulnerable to chemotherapy [11]. Moreover, metformin specifically increases UDP-GlcNAc and BCAAs, indicating the occurrence of autophagy and cell cycle arrest in metformin-treated CCA cells. Ling et al. showed that metformin acts synergistically with arsenic trioxide to suppress intrahepatic cholangiocarcinoma through regulation of the AMPK/p38 MAPK-ERK3/mTORC1 pathways [18]. Metformin also altered miRNA expression (miR-124, miR-182, and miR-27b; let-7b, miR-221, and miR-181a) to inhibit tumor proliferation [2].
This study showed that metformin use before the diagnosis of advanced EHCC could not improve survival, which may be related to tolerance or insensitivity of advanced EHCC to metformin [9]. Metformin intake after starting chemotherapy can improve the clinical outcome in advanced cholangiocarcinomas [24]. Metformin could change the metabolic status of cancer cells and reverse the Warburg effect via the inhibition of lactate dehydrogenase A (LDHA), which is overexpressed in CCA tissues and indicates a shorter survival time [11]. However, one study [25] showed that the survival of forty-nine patients who continued taking metformin after CCA diagnosis was not different from that of one hundred and sixty-five patients never taking metformin, and that metformin use before CCA diagnosis (n = 79) also did not affect survival; that is, metformin did not improve the survival of CCA patients with diabetes mellitus. Our study is consistent with that of Yang et al., but the data of the above study are too few and need multicenter verification. This study also showed that the survival time of patients living in town was longer than that of patients living in the countryside, whether metformin was used or not, which may be related to the patients' cultural literacy, attention to the disease, access to scientific and effective modern medical intervention, and affordability.

(Figure 2: Box-plot distribution and comparative analysis of survival time between patients with drainage treatment from the study group (n = 113) and from the control group (n = 496) (z = −2.230, P = 0.026). The Kaplan-Meier method was used to estimate survival rates and compare the survival curves of drainage-treated patients from the two groups: the control-group curve lay below the study-group curve, showing that metformin use can improve the survival rate; log-rank and Breslow tests, P < 0.05.)
In conclusion, we elucidated that metformin can improve the survival and prognosis of advanced EHCC patients who have undergone drainage treatment. Metformin is an inexpensive drug, and its use has been proven safe, without severe adverse effects, in humans. Thus, our findings show that the use of metformin might be beneficial for advanced EHCC patients who have undergone drainage treatment and that it might be a potential therapeutic agent for the treatment of EHCC (Table 5).
Conclusion
In conclusion, this research article, for the first time, reports the use of metformin in advanced EHCC patients. The results demonstrate that metformin can improve the survival and prognosis of advanced EHCC patients who have undergone drainage treatment. It is a feasible, practical, and safe therapeutic agent for the treatment of advanced EHCC.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon request.
SPECTRAL BAND SELECTION FOR URBAN MATERIAL CLASSIFICATION USING HYPERSPECTRAL LIBRARIES
In urban areas, information concerning very high resolution land cover, and especially material maps, is necessary for several city modelling or monitoring applications; that is to say, knowledge concerning the roofing materials or the different kinds of ground areas is required. Airborne remote sensing techniques appear to be convenient for providing such information at a large scale. However, results obtained using most traditional processing methods based on usual red-green-blue-near infrared multispectral images remain limited for such applications. A possible way to improve classification results is to enhance the imagery spectral resolution using superspectral or hyperspectral sensors. In this study, it is intended to design a superspectral sensor dedicated to urban materials classification, and this work particularly focused on the selection of optimal spectral band subsets for such a sensor. First, reflectance spectral signatures of urban materials were collected from 7 spectral libraries. Then, spectral optimization was performed using this data set. The band selection workflow included two steps, first optimising the number of spectral bands using an incremental method and then examining several possible optimised band subsets using a stochastic algorithm. The same wrapper relevance criterion, relying on a confidence measure of the Random Forests classifier, was used at both steps. To cope with the limited number of available spectra for several classes, additional synthetic spectra were generated from the collection of reference spectra: intra-class variability was simulated by multiplying reference spectra by a random coefficient. At the end, selected band subsets were evaluated considering the classification quality reached using an RBF SVM classifier. It was confirmed that a limited band subset was sufficient to classify common urban materials. The important contribution of bands from the Short Wave Infra-Red (SWIR) spectral domain (1000-2400 nm) to material classification was also shown.
INTRODUCTION
1.1 Some needs for urban material maps

During the last decade, needs for high resolution land cover data have been growing. Indeed, such knowledge is necessary to answer several societal, regulatory and scientific needs: to produce environmental indicators to manage ecosystems and territories, to monitor environmental or human phenomena, or to be able to have a picture of an initial situation and to evaluate the impacts of public policies. Thus, to answer these needs, national mapping or environment agencies in many countries have undertaken the production of such large scale land cover databases. Nevertheless, these databases provide a general classification and may not suit some specific (often new) applications requiring a finer semantic or geometric level of detail. That is to say that, on one hand, additional land cover classes should sometimes be specified, whereas, on the other hand, some existing classes should be delineated at a finer level. Indeed, in urban areas, both semantically and spatially finer knowledge about land cover, and especially maps of urban materials, is required by several city modelling applications. The urban environment is indeed strongly influenced, in terms of ecology, energy and climate, by the materials present. These materials can be either natural or artificial. Such material maps would be useful to derive indicators to monitor the impacts of public policies, or to feed urban simulation models. At present, most applications are still experimental scientific ones, such as micro-meteorology, hydrology, pollutant flow monitoring and ground perviousness monitoring. Several possible applications requiring very high resolution knowledge about urban land cover and materials are listed in (Heldens et al., 2011) and (Shafri et al., 2012) and described below.
1.1.1 Quantification of pollutant flows

Some roofing materials can generate pollutant elements. Reducing the production of pollutants at their sources implies identifying sources and quantifying emissions. Several kinds of pollution are generated by roofing materials. First, metallic elements are generated by corrosion of roof materials before being swept away by rainwater: roofing materials could be a major source of zinc, cadmium, lead and copper during wet weather (Chebbo et al., 2001). In particular, zinc emissions are mainly in the labile form (Heijerick et al., 2002), which is bioavailable and harmful to aquatic organisms. Copper roofs have also been identified as a possible source of pollution. Last, some other kinds of roofing materials can release organic polluting elements (polycyclic aromatic compounds, organic carbon) due to a hidden bitumen layer (Lemp and Weidner, 2005). Laboratory experiments have already been done to model pollutant runoff rates for roofing materials (Robert-Sainte, 2009). Knowledge about the coverage areas of the different roofing materials is thus required so as to be able to extrapolate these results to whole drainage areas: a map of roofing materials is thus needed.
1.1.2 Monitoring of dangerous materials: asbestos-cement roofs

Another possible application in the field of urban materials concerns the monitoring of asbestos-cement roofing materials (Heldens et al., 2011, Bassani et al., 2007). Asbestos-cement based materials can indeed be dangerous for human health, especially when they are deteriorated. Therefore, it is important at least to be able to evaluate the number of buildings covered by asbestos-cement roofing sheets. Evaluating their deterioration status is also a useful issue.
1.1.4 Monitoring of ground perviousness

On one hand, it has been shown that the continuous development of impervious areas (especially in the periphery of cities), such as wide parking areas, plays an important role in the aggravation of flooding events, both in terms of magnitude and speed. Thus, having tools to monitor the extension of impervious areas and to check their compliance with new legislation would be useful. On the other hand, perviousness maps are required as input data by (micro-)hydrological models (Heldens et al., 2011).
1.1.5 Determination of road type and monitoring of road condition

Last, maps of road types (cobblestone, asphalt, ...) can be useful for some of the above mentioned applications. A more important and complex application focuses on the monitoring of road condition: such information is indeed of great interest for authorities in charge of planning road network renovation projects. Extracting this knowledge from aerial data could be a way to avoid expensive and long field investigation (Herold et al., 2004b, Mohammadi, 2012).
1.1.6 Monitoring of photo-voltaic development

On one hand, knowledge about roofing materials is a way to estimate the potential of a city to develop photo-voltaic energy (Roy, 2010, Jochem et al., 2009). On the other hand, detecting already installed panels is necessary to monitor the development of this technology.
1.2 Toward a superspectral camera dedicated to urban material applications?
Thus, very high resolution urban land cover is required to provide knowledge about the roofing materials and the different kinds of ground areas. Such information can take the form of a map of urban materials (i.e. a classification). Since no existing map contains such information, airborne remote sensing techniques appear to be convenient for obtaining such a map at a large scale. However, remote sensing of urban environments from airborne acquisitions still remains a major issue, since, on one hand, urban areas are characterised by a high variety of materials which can appear very similar on images, and, on the other hand, by a strong intra-class variability due for instance to material aging and uses (Lacherade et al., 2005). Thus, results provided by most traditional processing methods based on usual red-green-blue-near infrared multispectral images remain limited for such applications.
A possible way to improve classification results is to enhance the imagery spectral resolution using superspectral or hyperspectral sensors.
Hyperspectral imagery consists of hundreds of contiguous spectral bands. Nevertheless, most of these spectral bands are highly correlated to each other and thus contain redundant information, so using all of them for a particular classification problem is not necessary. Therefore, only a subset of well selected spectral bands should be sufficient for urban materials classification (Herold et al., 2004a). It would then be possible to design, from this optimised band subset, a superspectral aerial camera system dedicated to urban material classification. Such a superspectral system could offer some advantages compared to most hyperspectral sensors. It could first make it possible to combine the use of suitable spectral bands for a specific application with a higher spatial resolution and a larger swath. It could also be a photogrammetric system, making it possible to capture multi-stereoscopic images and thus enabling the calculation of BRDF models (Martinoty, 2005).
This paper presents experiments that were performed to define the optimal band subset for such a superspectral sensor dedicated to urban material classification. The automatic band selection framework and criterion are first presented. Second, data sets and experiments are described: experiments were performed on data sets generated from material reference reflectance spectra from available spectral libraries. These libraries and the way they were used to generate synthetic spectra are presented. Then, the obtained results are presented, evaluated and discussed.
SPECTRAL OPTIMISATION
The selection of an optimal set of spectral bands is called spectral optimisation. To achieve this task, automatic feature selection (FS) methods can be used. FS methods will here be applied to select the most relevant band subset among the original bands of a hyperspectral data set for a specific classification problem.
Feature selection: state-of-the-art
Feature selection (FS) can be seen as a classic optimisation problem involving both a metric (that is to say a FS score measuring the relevance of feature subsets) to optimise and an optimisation strategy.
FS methods and criteria are often differentiated between "filter", "wrapper" and "embedded". It is also possible to distinguish supervised and unsupervised ones, depending on whether classes are taken into account.
Filters: Filter methods compute a score of relevance for each feature independently from any classifier. Some filter methods are ranking approaches: features are ranked according to a score of importance, such as the ReliefF score (Kira and Rendell, 1992) or a score calculated from PCA decomposition (Chang et al., 1999).
Other filters associate a score to feature subsets. In supervised cases, separability measures such as Bhattacharyya or Jeffries-Matusita (JM) distances can be used in order to identify the feature subsets making it possible to best separate classes (Bruzzone and Serpico, 2000, Serpico and Moser, 2007). Higher order statistics from information theory, such as divergence, entropy and mutual information, can also be used to select the best feature subsets achieving minimum redundancy and maximum relevance, either in unsupervised or supervised situations: (Martínez-Usó et al., 2007) first cluster "correlated" features and then select the most representative feature of each group, while (Battiti, 1994, Estévez et al., 2009) select the set of bands that are the most correlated to the ground truth and the least correlated to each other.
Wrappers: For wrappers, the relevance score associated to a feature subset corresponds to the classification performance (measured by a classification quality rate) reached using this feature subset. Examples of such approaches can be found in (Estévez et al., 2009, Li et al., 2011) using an SVM classifier, (Zhang et al., 2007) using a maximum likelihood classifier, and (Díaz-Uriarte and De Andres, 2006) using Random Forests.
Embedded: Embedded FS methods are also related to a classifier, but feature selection is performed using a feature relevance score different from a classification performance rate. Some embedded approaches are regularisation models associating a fit-to-data term (e.g. a classification error rate) to a regularisation function, penalising models when the number of features increases (Tuia et al., 2014). Other embedded approaches progressively eliminate features from the model, as SVM-RFE (Guyon et al., 2002), which considers the importance of the features in an SVM model. Other approaches have a built-in mechanism for feature selection, as decision trees using only the most discriminative feature when splitting a tree node (Breiman, 2001).
Another issue for band selection is the optimisation strategy used to determine the best feature subset according to a criterion. An exhaustive search is often impossible, especially for wrappers. Therefore, heuristics have been proposed to find a near optimal solution without visiting the entire solution space. These optimisation methods can be divided into incremental and stochastic ones.
Several incremental search strategies have been detailed in (Pudil et al., 1994), including the Sequential Forward Search (SFS), starting from one feature and incrementally adding the feature yielding the best score, or, on the opposite, the Sequential Backward Search (SBS), starting from all possible features and incrementally removing the worst features. Variants such as Sequential Forward Floating Search (SFFS) or Sequential Backward Floating Search (SBFS) are proposed in (Pudil et al., 1994). Among stochastic optimisation strategies, several algorithms have been used for feature selection, including genetic algorithms (Li et al., 2011, Estévez et al., 2009), Particle Swarm Optimisation (PSO) (Yang et al., 2012) and simulated annealing (De Backer et al., 2005, Chang et al., 2011).
Proposed feature selection approach
The proposed approach (Le Bris et al., 2014) relies on generic optimisation heuristics. It works in two steps (as summarised in fig. 1):

1. First, the optimal number of spectral bands is identified using the incremental algorithm SFFS (Pudil et al., 1994). Indeed, in the context of sensor design, the first step consists in optimising the number of bands. SFFS starts from an empty band subset and incrementally adds bands to the subset, considering an FS score, and questioning the current band subset solution each time a new band is selected. Thus, this algorithm makes it possible to see the influence of the number of selected bands on the classification results (a minimal sketch of this search is given after this list).
2. Optimised band subset solutions are then proposed by a genetic algorithm (GA) for the optimal number of bands identified at the previous step. GA is a stochastic algorithm and is here used to provide several good solutions. At the end, the solution involving the bands least correlated to each other is retained as the final solution. Besides, intermediate good band subset candidates proposed by GA are used to derive band importance profiles, assessing the importance of bands considering the frequency at which they appear among these intermediate solutions.
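The following is a minimal sketch of the SFFS step in Python, assuming a generic relevance function score(band_subset) such as the Random Forests criterion of section 2.3; names and structure are illustrative and not taken from the authors' implementation.

def sffs(all_bands, score, k_max):
    """Sequential Forward Floating Search over band indices."""
    selected = []
    best = {}  # subset size -> (best score, best subset)

    def record(subset):
        sc = score(subset)
        k = len(subset)
        if k not in best or sc > best[k][0]:
            best[k] = (sc, list(subset))
        return sc

    while len(selected) < k_max:
        # Forward step: add the band maximising the score
        cand = max((b for b in all_bands if b not in selected),
                   key=lambda b: score(selected + [b]))
        selected.append(cand)
        record(selected)
        # Floating backward step: remove a band while doing so beats the
        # best score already recorded for the smaller subset size
        while len(selected) > 2:
            worst = max(selected,
                        key=lambda b: score([s for s in selected if s != b]))
            reduced = [s for s in selected if s != worst]
            if score(reduced) > best[len(reduced)][0]:
                selected = reduced
                record(selected)
            else:
                break
    return best

Plotting the best recorded score against the subset size yields exactly the kind of curve used in section 4 to choose the optimal number of bands.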
Used FS criterion
The score used to evaluate the relevance of band subsets within the previous framework is adapted from the one proposed in (Le Bris et al., 2015). It is a wrapper score that relies on the Random Forests classifier and takes into account classification confidence.
(Figure 1: Proposed feature selection approach. Flowchart: hyperspectral data + ground truth → identify the optimal number of bands using an incremental algorithm (SFFS) → band optimisation for a fixed number of bands: select several optimised band subsets using a stochastic algorithm (GA) → several band subset solutions and a band importance profile → optimal number of bands and final solution, the one minimising band correlation.)

2.3.1 Random Forests

Random Forests (RF) (Breiman, 2001) is a modification of bagging applied with decision trees. It can achieve a classification accuracy comparable to boosting (Breiman, 2001) or SVM (Pal, 2005). It does not require assumptions on the distribution of the data, which is interesting when different types or scales of input features are used. It has been successfully applied to remote sensing problems involving multispectral, hyperspectral or multisource data. This ensemble classifier is a combination of T tree predictors built from multiple bootstrapped training samples. For each node of a tree, a subset of features is randomly selected; then, the best feature with regard to the Gini impurity measure is used for node splitting. For classification, each tree gives a vote for the most popular class at each input instance, and the final label is determined by a majority vote of all trees. Thus, for each sample to classify, the number of votes obtained by each possible label can be used as a class membership measure; it is provided by Random Forests at no additional computational cost. Let C = {c_1, ..., c_nc} be the set of possible classes and v(x, c) the number of votes obtained by class c when classifying sample x. A class membership score m can then be obtained by normalising the number of votes by the number of trees: m(x, c) = v(x, c) / T. RF also provides a classification confidence measure named the unsupervised margin, defined as the difference between the two best class memberships, that is to say M(x) = m(x, c_(1)) − m(x, c_(2)), where c_(1) and c_(2) are the classes with the highest and second highest membership scores. The more confident the classifier, the larger the margin.

2.3.2 Proposed score taking into account RF confidence measures

Let X = {(x_i, y_i)}, 1 ≤ i ≤ n, be a set of ground truth samples x_i and their associated true labels y_i. A feature selection score R taking into account class membership measures, and thus classification confidence, can be defined as:

R(X) = (1/n) Σ_{i=1..n} δ(y_i, c(x_i)) · M(x_i) ∈ [−1; 1]

with δ(i, j) = 1 if i = j and −1 otherwise, and c(x) the label given to x by the classifier. This score has the advantage of measuring both the ability to correctly classify the test samples for a given feature set and the separability between classes. Indeed, the more samples are correctly classified, the more the score increases; the more confident the classifier is for correctly classified samples, the more the score increases; and the more confident the classifier is for mislabelled samples, the more the score decreases.
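Under these definitions, the wrapper criterion can be sketched as follows, using scikit-learn's RandomForestClassifier as a stand-in for the authors' implementation. The sketch relies on scikit-learn's convention that each sub-tree predicts classes encoded as indices 0..K−1; function and variable names are illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_confidence_score(X_train, y_train, X_test, y_test, n_trees=100):
    clf = RandomForestClassifier(n_estimators=n_trees).fit(X_train, y_train)
    # m(x, c): per-class vote fractions over the T trees
    votes = np.zeros((len(X_test), len(clf.classes_)))
    for tree in clf.estimators_:
        votes[np.arange(len(X_test)), tree.predict(X_test).astype(int)] += 1
    m = votes / n_trees
    # Unsupervised margin M(x): gap between the two best class memberships
    top2 = np.sort(m, axis=1)[:, -2:]
    margin = top2[:, 1] - top2[:, 0]
    # delta = +1 for correctly classified samples, -1 otherwise
    pred = clf.classes_[m.argmax(axis=1)]
    delta = np.where(pred == np.asarray(y_test), 1.0, -1.0)
    return float(np.mean(delta * margin))  # the score R to maximise

A band subset is then evaluated by calling this function on the corresponding column subset of the training and test spectra.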
DATA SET
Spectral optimisation was performed from a library of reference spectra of urban materials. These spectra were collected from the several available existing spectral libraries listed in section 3.1. Such data offer several advantages compared to an alternative such as the use of aerial hyperspectral scenes over several urban landscapes. On one hand, these spectra were captured through field or laboratory measurements, and are thus pure and "clean" reflectance measures. On the other hand, they are generally well described, without ambiguity about their class. Besides, it is a way to have spectra of rare (but thematically important) materials.
Spectral libraries
The reference spectra used in this study were collected from several available existing spectral libraries. The number of spectra per original library is shown in Table 1.
• ASTER Spectral Library: The ASTER spectral library (Baldridge et al., 2009) is made available by the Jet Propulsion Laboratory. It contains more than 2400 spectra of natural and artificial materials from 3 other spectral libraries: the Johns Hopkins University (JHU) Spectral Library, the Jet Propulsion Laboratory (JPL) Spectral Library and the USGS Spectral Library.
• SLUM: The Spectral Library of impervious Urban Materials (SLUM) (Kotthaus et al., 2014) is produced within the London Urban Micromet data Archive (LUMA). It contains reflectance spectral measures of 74 impervious materials collected in London.
• MEMOIRES and ONERA data: Many urban material spectra were made available by ONERA, especially from the spectral library MEMOIRES (Moyen d'Echange et de valorisation de Mesures de propriétés thermiques, Optiques et InfraRouges d'Echantillons et de Scènes) (Martin and Rosier, 2012). Most of them were collected in Toulouse (France).
• Santa Barbara libraries: Many urban material spectra collected (field measures only) over Santa Barbara (Herold et al., 2004a) are available. Two libraries can be distinguished: one dedicated to spectral optimisation for urban classification (Herold et al., 2004a) and the other dedicated to the analysis of road conditions (Herold et al., 2004b).
• Ben-Dor spectral library: Spectra collected in Tel Aviv by (Ben-Dor et al., 2001) for urban classification were also used.
• DESIREX: Spectra from the field measurement campaign DESIREX 08 (ESA) (Sobrino, 2008) in Madrid were also available.

First, all collected spectra were integrated into a common database. This required defining a common legend, in order to have a homogeneous spectral collection. Interesting taxonomies for urban materials have been proposed in previous works such as (Heiden et al., 2007) or (Herold et al., 2004a). These taxonomies are often hierarchical, with a last level of detail corresponding to fine information about materials such as colour or condition. However, at this step, it was intended to keep as much information as possible to describe the collected spectra rather than to have a frozen nomenclature. Thus, it was decided to store collected spectra in our database, associating several attributes to each of them:
• Material class
• Variety (e.g. "zinc" or "steel" for material "metal")
• Colour
• Condition (e.g. aging)
• Corresponding land cover (e.g. "ground" or "roof" for "gravels")
It must be kept in mind that it was not always possible to obtain all this information for most spectra.
Spectral domain: Only spectra covering both the Visible Near Infra-Red (VNIR) (400-1000 nm) and the Short Wave Infra-Red (SWIR) (1000-2400 nm) spectral domains were kept. The spectral resolution of the collected spectra was generally comprised between 1 and 5 nm, and sometimes 10 nm in the SWIR domain. Besides, all collected spectra had not been measured under the same conditions. Only reflectance spectra were considered. However, for spectral optimisation experiments, it was necessary to remove the bands affected by atmospheric absorption. Furthermore, other artefacts were present on some spectra, for instance transitions between the VNIR and SWIR sensors of an ASD spectrometer; they also had to be removed.
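As an illustration of this preprocessing step, the sketch below masks out wavelength ranges affected by atmospheric water-vapour absorption before band selection. The window bounds are typical values assumed here, not figures taken from the paper.

import numpy as np

# Approximate water-vapour absorption windows, in nm (assumed values)
ABSORPTION_WINDOWS = [(1350.0, 1460.0), (1790.0, 1960.0)]

def valid_band_mask(wavelengths_nm):
    """Boolean mask of bands kept for spectral optimisation."""
    wl = np.asarray(wavelengths_nm)
    mask = np.ones_like(wl, dtype=bool)
    for lo, hi in ABSORPTION_WINDOWS:
        mask &= ~((wl >= lo) & (wl <= hi))
    return mask

wl = np.arange(420.0, 2401.0, 10.0)   # the paper's 10 nm resampling grid
keep = valid_band_mask(wl)
# spectra_clean = spectra[:, keep]    # apply to an (n_samples, n_bands) array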
Ignored classes: Experiments focused on artificial materials. Thus, some classes were set aside from the database, even though they can be important in urban land cover. For instance, vegetation is a key element of urban landscapes, but was set aside in the following experiments, since the discrimination between vegetation and non-vegetation is easy and because it was intended to be considered in further studies specifically dedicated to its characterisation. Water was also set aside, since few spectra were available and because its aspect can be very different depending on depth, turbidity and eutrophication level. On the opposite, natural bare ground was considered, since it is very important in perviousness studies.
In the end, a synthesis of the kept spectra is presented in figure 2. It can be seen that there is a strong heterogeneity in the number of available spectra per class: some classes (asphalt, concrete, stone pavements) are well represented, while others are covered by very few spectra (such as slates and asbestos). Furthermore, the number of spectra per class is generally not sufficient to correctly evaluate intra-class variability, and thus not significant enough to perform spectral optimisation using the proposed method and to validate results on test data sets.

3.3 Generate new synthetic spectra from the data base

To cope with this insufficient number of available spectra, it was proposed to generate new spectra from the ones in the database. A random multiplicative factor was simply applied to reference spectra in order to generate more synthetic spectra from the database (DB). This partly simulates intra-class variability, even though it does not simulate the totality of intra-class variability (such as colour or aging). For each generated synthetic spectrum, the multiplicative factor was randomly selected between 0.8 and 1.2, according to the standard deviations of the classes for which a sufficient amount of spectra was available. Finer quantitative analyses are available in (Lacherade et al., 2005).
At the end, the proposed process to generate an experimental data set (also summarised in fig. 3) is as follows:

For each class c do
    Initialise the set of synthetic spectra for class c: GTc ← ∅
    Create a query to list the spectra belonging to this class
    Create the list Lc of spectra from the DB corresponding to this query
    For i from 1 to n do
        Randomly select a spectrum s from Lc
        Variability generation: apply to s a random multiplicative factor (between 0.8 and 1.2): s ← rand() · s
        Add this spectrum to the experimental data set: GTc ← GTc ∪ {s}
    EndFor
EndFor
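A direct, runnable transcription of this pseudocode is given below. The spectral database is represented as a dict mapping class name to a list of reference spectra on a common wavelength grid; this data structure is illustrative, not the authors' actual storage scheme.

import numpy as np

rng = np.random.default_rng(0)

def generate_synthetic_set(db, n_per_class, low=0.8, high=1.2):
    """Simulate intra-class variability with a random multiplicative factor."""
    synthetic = {}
    for material_class, reference_spectra in db.items():
        samples = []
        for _ in range(n_per_class):
            # Randomly select a reference spectrum of this class
            s = reference_spectra[rng.integers(len(reference_spectra))]
            # s <- rand() * s, with rand() uniform in [low, high]
            samples.append(rng.uniform(low, high) * s)
        synthetic[material_class] = np.vstack(samples)
    return synthetic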
EXPERIMENTS AND RESULTS
Experiments were performed with the following legend. It consisted of classes corresponding both to the most common materials in the database and to other important classes (e.g. slate) frequently present in urban areas. Such classes would be the basic classes of a material map, because they are almost sure to be found in an urban area.
• Slate
• Asphalt
In order to perform spectral optimisation, a data set was generated from the database according to this legend. It contained 100 training spectra and 500 test spectra, resampled at a 10 nm spectral resolution ranging from 420 to 2400 nm. Band selection was first performed within the spectral domain ranging from 420 to 2400 nm. First, the optimal number of bands was determined, owing to the SFFS algorithm. Figure 4 shows the evolution of the FS score and of several classification quality rates reached by an RBF SVM classifier depending on the number of selected bands. It can be seen that beyond a certain band subset size, selecting new bands has very little impact on the results. Thus, in the following experiments, 10 bands were selected.
Then, several 10-band subsets (presented in figure 5) were proposed by GA. A band importance profile (displayed in figure 5) was calculated from the intermediate results of GA. Some parts of the spectrum were considered particularly relevant, especially in the VNIR domain and in the 2000-2400 nm range of the SWIR domain, while, on the opposite, the 1000-1500 nm spectral domain was not considered relevant for this classification task.
At the end, the band subset with the least correlated bands was selected and evaluated.
4.2 Band selection in the VNIR domain (420-1000 nm)

The same process was applied to the 420-1000 nm spectral domain. An optimal number of 10 bands was also identified by SFFS. Then, several band subsets (presented in figure 6) were proposed by GA for 10 bands, and a band importance profile (displayed on figure 6) was calculated from intermediate results of GA. Although 10 bands were selected, a pattern of 4-5 important blobs appears along the spectrum, corresponding approximately to usual multispectral bands (blue, green, red and near infrared).

The classification performance of the previously optimised band subsets was evaluated for an RBF SVM classifier. The classifier was applied to a test data set containing 1000 samples per class. Two scenarios were considered, training the classifier using either 100 or 50 samples per class, so as to have an easy case and a more difficult one. This quantitative evaluation was first performed for the whole hyperspectral set of bands and for the subsets of 10 bands selected from the VNIR (420-1000 nm) and from the [VNIR-SWIR] (420-2400 nm) spectral domains, but also from the [VNIR-SWIR] spectral domain limited to the 420-1800 nm range. Indeed, it was interesting to evaluate the impact of a restriction to the first part of the SWIR domain, since it is less perturbed by atmospheric effects and receives more photons than the (1800-2400 nm) part. Results are presented on figure 7: 10 bands selected in the 420-2400 nm domain led to similar results as when using all the hyperspectral bands. The worst results were obtained for band subsets limited to the VNIR domain, while intermediate results were reached using bands from the 420-1800 nm range. The differences between the classification precisions reached for the different spectral configurations tended to be more significant for the difficult training scenario (i.e. when the classifier is trained from only 50 samples per class): Kappa was 0.90 for all bands, 0.90 for 10 bands from the [VNIR-SWIR] (420-2400 nm), 0.87 for 10 bands from the (420-1800 nm) range and 0.81 for 10 bands from the VNIR domain.

Further experiments were performed to assess the relevance of bands from the SWIR domain for urban materials classification: only 4 individual original bands were selected (as for usual multispectral sensors). As in previous experiments, subsets of 4 bands were selected from the VNIR (420-1000 nm), the [VNIR-SWIR1] (420-1800 nm) and the [VNIR-SWIR] (420-2400 nm) spectral domains. Their classification performances for an RBF SVM classifier were compared to the configuration of an existing multispectral sensor: the Pléiades satellite. Results are presented on figure 8. As previously, the best results were obtained using bands selected in the [VNIR-SWIR] domain. Indeed, when the classifier is trained from 50 samples per class, Kappa reached 0.82 for 4 bands from the [VNIR-SWIR] (420-2400 nm), 0.81 for 4 bands from the (420-1800 nm) range and 0.78 for 4 bands from the VNIR domain. Better results were also reached using the optimised subset of 4 bands from the VNIR domain than with the Pléiades configuration, for which Kappa reached 0.74.

Nevertheless, the obtained quantitative evaluations are really optimistic and must be considered carefully. Indeed, it must be kept in mind that some classes were represented by very few spectra in the spectral data base, and thus their variability is not completely considered. For instance, "slates", represented by few spectra in the data base, were very well classified, while on the opposite some classes, such as "asphalt", "cement/concrete" or "stone pavements", which were represented by larger amounts of reference spectra in the data base, obtained the lowest classification rates. Thus, there is also a risk of overfitting.
CONCLUSION
In this study, band selection was performed to identify optimal band subsets for urban material classification, in the context of designing a superspectral sensor dedicated to this application. Spectral optimisation was performed on data sets generated from a collection of reference reflectance spectra from several available spectral libraries.
A limited number of bands (10) was proven to be sufficient to obtain good discrimination between 9 common urban materials. The importance of the SWIR domain (and especially of the 1800-2400 nm range) was also confirmed. Nevertheless, some classes were represented by very few spectra in the spectral data base, and thus their variability cannot be completely considered. Therefore, the obtained quantitative evaluations are really optimistic and must be considered carefully. However, new urban material spectra measurement campaigns will occur within the French ANR HYEP project and will be integrated in the data base. Besides, further experiments will also be carried out using aerial hyperspectral scenes.
Figure 2: Number of available spectra for the most important material classes.
Figure 3: Synthetic spectra collection generation scheme.
Figure 4: Evolution of the FS score (top) and of the quality of the RBF SVM classification depending on the number of selected bands.
Figure 5: Selected 10-band subsets and band importance profile for the 420-2400 nm spectral domain.
Figure 6: Selected band subsets of 10 bands (top) and band importance profiles (bottom) for spectral optimisation in the VNIR (420-1000 nm) spectral domain (red frame = final solution).
Figure 7: Quantitative results reached using selected band subsets of 10 bands from different spectral domains: F-scores of the different classes and Kappa coefficient reached by RBF SVM classification. The classifier was trained using 100 samples (top) and 50 samples (bottom).
Effects of sample arm motion in endoscopic polarization-sensitive optical coherence tomography
Motion of the sample arm fiber in optical coherence tomography (OCT) systems can dynamically alter the polarization state of light incident on tissue during imaging, with consequences for both conventional and polarization-sensitive (PS-)OCT. Endoscopic OCT is particularly susceptible to polarization-related effects, since in most cases, the transverse scanning mechanism involves motion of the sample arm optical fiber to create an image. We investigated the effects of a scanning sample arm fiber on the polarization state of light in an OCT system, and demonstrate that by referencing the state backscattered from within a sample to the measured state at the surface, changes in polarization state due to sample fiber motion can be isolated. The technique is demonstrated by high-speed PS-OCT imaging at 1 frame per second, with both linear and rotary scanning fiberoptic probes. Measurements were made on a calibrated wave plate, and endoscopic PS-OCT images of ex-vivo human tissues are also presented, allowing comparison with features in histologic sections. © 2005 Optical Society of America

OCIS codes: (170.4500) Optical coherence tomography; (170.2150) Endoscopic imaging; (230.5440) Polarization-sensitive devices

References and links
1. B. E. Bouma, G. J. Tearney, "Clinical imaging with optical coherence tomography," Acad. Radiol. 9, 942-953 (2002).
2. F. I. Feldchtein, G. V. Gelikonov, V. M. Gelikonov, R. V. Kuranov, A. M. Sergeev, N. D. Gladkova, A. V. Shakhov, N. M. Shakhova, L. B. Snopova, A. B. Terent'eva, E. V. Zagainova, Y. P. Chumakov, I. A. Kuznetzova, "Endoscopic applications of optical coherence tomography," Opt. Express 3, 257-270 (1998).
3. G. J. Tearney, S. A. Boppart, B. E. Bouma, M. E. Brezinski, N. J. Weissman, J. F. Southern, J. G. Fujimoto, "Scanning single-mode fiber optic catheter-endoscope for optical coherence tomography," Opt. Lett. 21, 543-545 (1996).
4. G. J. Tearney, M. E. Brezinski, B. E. Bouma, S. A. Boppart, C. Pitris, J. F. Southern, J. G. Fujimoto, "In vivo endoscopic optical biopsy with optical coherence tomography," Science 276, 2037-2039 (1997).
5. B. E. Bouma, G. J. Tearney, "Power-efficient nonreciprocal interferometer and linear-scanning fiber-optic catheter for optical coherence tomography," Opt. Lett. 24, 531-533 (1999).
6. A. M. Rollins, R. Ung-arunyawee, A. Chak, R. C. K. Wong, K. Kobayashi, M. V. Sivak, Jr., J. A. Izatt, "Real-time in vivo imaging of human gastrointestinal ultrastructure by use of endoscopic optical coherence tomography with a novel efficient interferometer design," Opt. Lett. 24, 1358-1360 (1999).
7. P. R. Herz, Y. Chen, A. D. Aguirre, J. G. Fujimoto, H. Mashimo, J. Schmitt, A. Koski, J. Goodnow, C. Petersen, "Ultrahigh resolution optical biopsy with endoscopic optical coherence tomography," Opt. Express 12, 3252-3542 (2004).
8. Y. Pan, H. Xie, G. K. Fedder, "Endoscopic optical coherence tomography based on a microelectromechanical mirror," Opt. Lett. 26, 1966-1968 (2001).
9. P. H. Tran, D. S. Mukai, M. Brenner, Z. Chen, "In vivo endoscopic optical coherence tomography by use of a rotational microelectromechanical system probe," Opt. Lett. 29, 1236-1238 (2004).
10. P. R. Herz, Y. Chen, A. D. Aguirre, K. Schneider, P. Hsiung, J. G. Fujimoto, K. Madden, J. Schmitt, J. Goodnow, C. Petersen, "Micromotor endoscope catheter for in vivo, ultrahigh-resolution optical coherence tomography," Opt. Lett. 29, 2261-2263 (2004).
11. J. F. de Boer, T. E. Milner, M. J. C. van Gemert, J. S. Nelson, "Two-dimensional birefringence imaging in biological tissue by polarization-sensitive optical coherence tomography," Opt. Lett. 22, 934-936 (1997).
12. M. J. Everett, K. Schoenenberger, B. W. Colston, Jr., L. B. Da Silva, "Birefringence characterization of biological tissue by use of optical coherence tomography," Opt. Lett. 23, 228-230 (1998).
13. J. F. de Boer, T. E. Milner, J. S. Nelson, "Determination of the depth-resolved Stokes parameters of light backscattered from turbid media by use of polarization-sensitive optical coherence tomography," Opt. Lett. 24, 300-302 (1999).
14. G. Yao, L. V. Wang, "Two-dimensional depth-resolved Mueller matrix characterization of biological tissue by optical coherence tomography," Opt. Lett. 24, 537-539 (1999).
15. C. K. Hitzenberger, E. Götzinger, M. Sticker, M. Pircher, A. F. Fercher, "Measurement and imaging of birefringence and optic axis orientation by phase resolved polarization sensitive optical coherence tomography," Opt. Express 9, 780-790 (2001).
16. J. Moreau, V. Loriette, A-C. Boccara, "Full-field birefringence imaging by thermal-light polarization-sensitive optical coherence tomography. II. Instrument and results," Appl. Opt. 42, 3811-3818 (2003).
17. N. J. Kemp, J. Park, H. N. Zaatari, H. G. Rylander, T. E. Milner, "High-sensitivity determination of birefringence in turbid media with enhanced polarization-sensitive optical coherence tomography," J. Opt. Soc. Am. A 22, 552-560 (2005).
18. C. E. Saxer, J. F. de Boer, B. H. Park, Y. Zhao, Z. Chen, J. S. Nelson, "High-speed fiber-based polarization-sensitive optical coherence tomography of in vivo human skin," Opt. Lett. 25, 1355-1357 (2000).
19. J. E. Roth, J. A. Kozak, S. Yazdanfar, A. M. Rollins, J. A. Izatt, "Simplified method for polarization-sensitive optical coherence tomography," Opt. Lett. 26, 1069-1071 (2001).
20. M. C. Pierce, B. H. Park, B. Cense, J. F. de Boer, "Simultaneous intensity, birefringence, and flow measurements with high-speed fiber-based optical coherence tomography," Opt. Lett. 27, 1534-1536 (2002).
21. S. L. Jiao, W. R. Yu, G. Stoica, L. H. V. Wang, "Optical-fiber-based Mueller optical coherence tomography," Opt. Lett. 28, 1206-1208 (2003).
22. D. P. Davé, T. Akkin, T. E. Milner, "Polarization-maintaining fiber-based optical low-coherence reflectometer for characterization and ranging of birefringence," Opt. Lett. 28, 1775-1777 (2003).
23. S. Guo, J. Zhang, L. Wang, J. S. Nelson, Z. Chen, "Depth-resolved birefringence and differential optical axis orientation measurements with fiber-based polarization-sensitive optical coherence tomography," Opt. Lett. 29, 2025-2027 (2004).
24. B. H. Park, M. C. Pierce, B. Cense, J. F. de Boer, "Real-time multi-functional optical coherence tomography," Opt. Express 11, 782-793 (2003).
25. B. H. Park, M. C. Pierce, B. Cense, S-H Yun, M. Mujat, G. J. Tearney, B. E. Bouma, J. F. de Boer, "Real-time fiber-based multi-functional spectral-domain optical coherence tomography at 1.3 μm," Opt. Express 13, 3931-3944 (2005).
26. P. R. Wheater, H. G. Burkitt, V. G. Daniels, Functional Histology, 2nd Ed., Ch. 8 (Churchill Livingstone, New York, 1987).
27. J. Strasswimmer, M. C. Pierce, B. H. Park, V. Neel, J. F. de Boer, "Polarization-sensitive optical coherence tomography of invasive basal cell carcinoma," J. Biomed. Opt. 9, 292-298 (2004).
28. M. C. Pierce, J. Strasswimmer, B. H. Park, B. Cense, J. F. de Boer, "Advances in optical coherence tomography for dermatology," J. Invest. Dermatol. 123, 458-463 (2004).
29. B. Cense, T. C. Chen, B. H. Park, M. C. Pierce, J. F. de Boer, "Thickness and birefringence of healthy retinal nerve fiber layer tissue measured with polarization-sensitive optical coherence tomography," Invest. Ophthalmol. Vis. Sci. 45, 2606-2612 (2004).
Introduction
Fiber-optic probes compatible with medical endoscopes and catheters have enabled optical coherence tomography (OCT) imaging of previously inaccessible regions within the human body [1,2]. The high-resolution imaging capability of OCT has been demonstrated in many applications related to internal organs and structures, and when coupled with high-speed interferometer systems and real-time image display, endoscopic OCT can provide a unique diagnostic tool for the clinic.
Several different probe designs and scanning configurations have been reported, each tailored to a particular medical specialty. Standard single-mode optical fiber delivers light to the site of interest, with micro-optical components used to focus and direct the optical beam at the distal end of the probe. In many probe devices, these fibers and optics are incorporated within and affixed to a wound cable, capable of transducing proximal motion from outside the body to distal motion at the catheter or endoscope tip. Proximal linear or circumferential scanning of the wound cable and associated optics produces a similar motion of the focused beam at the sample, thereby enabling formation of the OCT image [3][4][5][6][7]. For applications in cardiology, probes of around 1 mm diameter can be scanned with an optical rotary junction, producing a radial image display, similar in format to intravascular ultrasound systems. In larger-lumen channels such as the gastrointestinal tract, circumferential scanning may result in distant surfaces appearing out of focus due to the limited depth of focus of the probe. For such situations, linear scanning devices were developed [5], producing OCT images in the familiar transverse geometry. While a few investigators have developed probes that incorporate miniature scanning mechanisms at the distal tip [8][9][10], transduction of proximal motion to the distal optics by means of a wound cable remains the most convenient method for transverse scanning in endoscopic and catheter-based OCT, especially when the probe diameter is constrained.
Recent extensions to conventional OCT imaging include polarization-sensitive (PS)-OCT, which may provide additional information on tissue functionality by determining the polarizing properties of a sample. Free-space PS-OCT systems [11-17] constructed using bulk optical components allow the polarization state of light to be controlled at each location in the interferometer. Traditionally, this has enabled uniform samples to be probed with a single circularly polarized incident state, ensuring detection of birefringence regardless of the sample's optic axis orientation. Multiple incident states can also be used to reduce the effects of speckle noise through averaging, and ensure reliable measurement in samples comprising multiple birefringent layers, including the eye [17].
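To make this concrete, the following minimal sketch (ours, not code from the paper; function and variable names are illustrative) applies the standard Mueller matrix of a linear retarder to a circular incident state and confirms that the rotation angle of the Stokes vector recovers the retardance regardless of the optic axis orientation:

```python
import numpy as np

def linear_retarder_quv(theta, delta):
    """Standard Mueller matrix (Q,U,V sub-block) of a linear retarder
    with fast axis at angle theta and retardance delta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    cd, sd = np.cos(delta), np.sin(delta)
    return np.array([
        [c * c + s * s * cd, c * s * (1 - cd), -s * sd],
        [c * s * (1 - cd),   s * s + c * c * cd, c * sd],
        [s * sd,            -c * sd,             cd],
    ])

s_in = np.array([0.0, 0.0, 1.0])  # circular state: pole of the Poincare sphere
delta = np.radians(40.0)          # retardance to be detected
for theta in np.radians([0, 25, 60, 110]):
    s_out = linear_retarder_quv(theta, delta) @ s_in
    # rotation angle of the Stokes vector = arccos of the dot product
    measured = np.degrees(np.arccos(np.clip(s_in @ s_out, -1.0, 1.0)))
    print(f"axis {np.degrees(theta):5.1f} deg -> measured {measured:.1f} deg")
# every axis orientation yields 40.0 deg: a circular input never misses birefringence
```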
Development of fiber-based PS-OCT [18-25] has relaxed issues associated with system handling and alignment, enabling construction of robust devices for clinical use. However, due to the random birefringence of conventional single-mode fiber, explicit knowledge of the polarization state of light in the system is lost, and certain fixed parameters in the bulk system, including circularity of incident light, cannot be guaranteed. In such circumstances, multiple incident polarization states and generalized analysis algorithms are required in order to correctly determine the polarizing properties of a sample. PS-OCT systems have also been constructed using polarization-maintaining (PM) fiber [22], where a linear polarization state launched along the fast or slow fiber axis is maintained on propagation to the sample. In this case, either multiple input states are again required, or a single linear input state can be used with a quarter wave plate immediately prior to the sample [22], to provide circularly polarized incident light. Due to the intrinsic birefringence of PM fiber, phase information is lost unless the sample arm comprises precisely-matched, orthogonally-spliced fiber sections, to ensure that light in orthogonal polarization channels travels equal optical path lengths [22]. As with conventional single-mode fiber-based PS-OCT, PM fiber systems remain susceptible to additional dynamic birefringence induced by fiber motion and bending.
Incorporation of a scanning fiber probe in the sample arm presents additional difficulties, as the entire sample arm fiber is in motion during acquisition of an image. This motion dynamically changes the fiber birefringence, producing a continuous variation in the polarization state of light incident on the sample during image acquisition. PS-OCT data processing algorithms assuming a known or constant incident polarization state are likely to produce erroneous results if applied to fiber-based systems with significant sample arm motion during imaging. Changes in polarization induced by scanning motion in fiber probes may also lead to image artifacts in conventional OCT systems operating without polarization diversity detection.
We investigated the effects of using linear and rotary scanning fiber-optic probes in a fiber-based PS-OCT system. Motion of the sample arm fiber is shown to change the polarization state of incident light during imaging, and is quantified in terms of the evolution of Stokes vectors on the Poincaré sphere. Previous fiber-based PS-OCT systems held the sample arm stationary or incurred only minimal amounts of bending during image acquisition, performing transverse scanning externally in free-space at the distal end of the fiber. In such circumstances, the incident polarization state can be assumed to remain constant during acquisition of an image, and an average state at the sample surface can be determined from all or some range of A-lines within an image [18,20]. Birefringence in the sample arm fiber can then be isolated by making a relative measurement between the average surface polarization state, and each state backscattered from within the sample for each A-line. We show here that when the incident state changes during imaging, fiber-based endoscopic PS-OCT imaging can still be performed. This is achieved by quantifying the variation in surface Stokes vectors between A-lines, again sampling the surface state for each A-line, and averaging Stokes parameters over the range of neighboring A-lines for which polarization changes remain small.
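A minimal sketch of this referencing scheme is given below (our illustration, not the authors' code; array names are hypothetical, and the alternating two-state modulation used in the actual system is collapsed to a single state for brevity):

```python
import numpy as np

def retardation_vs_surface(stokes, surface_idx, n_avg=5):
    """Phase retardation (deg) of each depth point relative to the
    polarization state at the tissue surface, referenced per A-line.

    stokes:      (n_alines, n_depth, 3) Q,U,V Stokes components
    surface_idx: (n_alines,) depth index of the tissue surface
    n_avg:       neighboring A-lines over which surface states are
                 averaged; kept small because the incident state drifts
                 while the fiber probe is in motion.
    """
    n_alines, n_depth, _ = stokes.shape
    ret = np.empty((n_alines, n_depth))
    for i in range(n_alines):
        lo, hi = max(0, i - n_avg // 2), min(n_alines, i + n_avg // 2 + 1)
        # average the surface state over a few neighbors only
        surf = np.mean([stokes[j, surface_idx[j]] for j in range(lo, hi)], axis=0)
        surf /= np.linalg.norm(surf)
        depth = stokes[i] / (np.linalg.norm(stokes[i], axis=1, keepdims=True) + 1e-12)
        # retardation = rotation angle between surface and depth states
        ret[i] = np.degrees(np.arccos(np.clip(depth @ surf, -1.0, 1.0)))
    return ret
```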
Methods
Two fiber-optic probes were incorporated with an existing PS-OCT system [20], which uses a semiconductor-based optical source centered at 1320 nm, resulting in a measured coherence length of 14.7 μm in air. This time-domain system acquires and displays images at a rate of 1 frame per second. The first fiber probe was a linear scanning device [5], consisting of a length of conventional single-mode optical fiber (Corning SMF-28) inside a wound, multilayer cable. The proximal end of the cable is translated by a linear motor to convey motion to the distal optics, containing a gradient index lens to focus the beam, which exits the probe in a direction perpendicular to the fiber axis. The entire probe was enclosed in a stationary transparent sheath, resulting in an overall outer diameter of 2.0 mm (6 F). The acquisition speed was 2000 A-lines per second, with a 1.2 mm axial scan range (n = 1.4).
The second probe used the same micro-optic components to focus and direct the beam perpendicular to the fiber axis, but in this case, performed circumferential scanning by means of an optical rotary junction at the proximal end [4]. Designed for intravascular use, this probe has an outer diameter of 1.0 mm (3 F), and was used with an acquisition speed of 1000 A-lines per second and a 1.5 mm axial scan range (n = 1.4). Both fiber-optic probes comprised moving fiber sections 1.4 m in length, and were extended (but not fixed) in a straight position for imaging.
Incident polarization states
In a fiber-based OCT system, the polarization state of light is generally unknown, and consequently, the polarizing properties of a sample cannot be correctly determined under all circumstances with only a single, unknown incident state. To ensure that birefringence never goes undetected, we modulate the polarization state of light from the broadband source between two different states, orthogonal in the Poincaré sphere (e.g., linear horizontal and linear at 45°), for alternate A-lines [18,20]. Figure 1 shows the endpoints of the calculated Stokes vectors describing these two polarization states of light at the detectors, as a single image is acquired. This signal was obtained from the small amount of light reflected from the surface of the gradient index lens or prism at the distal end of each probe, and therefore represents the polarization state on double-pass through the system. On the left, with the linear scanning probe held stationary, the Stokes vectors representing the two polarization states (colored green and blue) remain almost unchanged. In contrast, when the linear probe is scanned over a distance of 4 mm (center), the polarization states of light detected from the distal tip change during the 1 second required to generate a single image. This is indicated by an evolution of both Stokes vectors from their initial states, over the surface of the Poincaré sphere. In the right-hand sphere, the polarization states of light detected from the tip of the rotary probe change more rapidly, as the probe undergoes a full 360° scan.

Without motion of the sample arm fiber during imaging (Fig. 1, left), no time-dependent birefringence is induced, and in principle, the polarization states of light incident on the sample will remain constant. In practice, some spread in location of the Stokes vectors on the surface of the Poincaré sphere will arise due to the presence of noise in determining the individual Stokes parameters. In Fig. 2, we quantify how this inherent uncertainty affects our ability to measure the polarization state of backscattered light and ultimately, the polarizing properties of a sample. For each of the two polarization states shown in Fig. 1 (left), the absolute values of the angles between the mean Stokes vector and all 1024 individual Stokes vectors were determined, and displayed as a histogram in Fig. 2. The data demonstrated a mean angular deviation of 1.22° from the mean Stokes vector, with a standard deviation of 0.64°. A theoretical probability density function (Eq. (1), based on a two-dimensional Gaussian noise distribution) was fit to the experimental data, and found to be in good agreement; P(χ²; N) > 0.24. A recent paper [25] presented a theoretical expression relating the same error in Stokes vector location on the Poincaré sphere to the system SNR.

When a fiber-optic probe is scanned, bend-induced birefringence produces a change in the polarization state of light, which can be visualized as a change in the Stokes vector on the surface of the Poincaré sphere. The amount of phase retardation incurred between orthogonal components of polarized light is described by the rotation angle of the corresponding Stokes vector, and a material's optic axis orientation is defined as the axis around which this vector rotation is made.
In Fig. 3, we quantify the effect of sample arm motion on the Stokes vectors by displaying the intensity-weighted mean phase retardation angle incurred by our pair of incident states as a function of A-line pair number, while the probe is held stationary and during a linear or rotary scan. On the left, the phase retardation angle remains close to zero while the probe is stationary; in the center, the retardation increases at a rate of 0.019°/A-line pair with the linear scanning probe; while on the right, the phase retardation angle increases at a maximum rate of 1.64°/A-line pair with the rotary probe. If we model the rotary probe as an arbitrary linear retarder with a rotating optic axis, the Stokes vector is expected to map out the same change in polarization state as the probe rotates from 0°-180° as from 180°-360°, and this appears to be the case in Figs. 1 and 3. Blue arrows in the center and right-hand graphs indicate the A-line range over which a single image frame is acquired, corresponding to 775 A-lines with each input polarization state for the linear probe, and 655 A-line pairs for the rotary probe.

To ensure that both our conventional and polarization-sensitive OCT images are not perturbed by motion-induced effects in the sample arm probe, we analyze interference fringe data using the Stokes vector formalism, with measurement of the polarization state of light at the sample surface for each A-line. Polarization-sensitive images are obtained by displaying the intensity-weighted mean phase retardation angle for our pair of incident states, relative to the polarization state of light at each location across the sample surface [18].
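The drift rates quoted above can be estimated from the per-A-line surface states by a least-squares fit, sketched here (our illustration; the intensity weighting and dual-input-state bookkeeping of the full processing chain are omitted):

```python
import numpy as np

def drift_rate_deg_per_aline(surface_states):
    """Least-squares slope (deg/A-line) of the rotation angle of the
    surface Stokes vector away from its initial state.

    surface_states: (n_alines, 3) Q,U,V components of one incident state
    sampled at the probe tip or tissue surface for each A-line.
    """
    s = surface_states / np.linalg.norm(surface_states, axis=1, keepdims=True)
    # angle of each state relative to the first one, in degrees
    angles = np.degrees(np.arccos(np.clip(s @ s[0], -1.0, 1.0)))
    x = np.arange(len(angles))
    return np.polyfit(x, angles, 1)[0]  # slope of the linear fit
```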
Measurements on a calibrated wave plate
To establish the magnitude of measurement errors associated with using a scanning fiber optic probe, we performed a series of measurements on a quarter wave plate, with the linear scanning device. The achromatic wave plate comprised two cemented pieces of different birefringent crystals, designed to minimize the variation in birefringence with wavelength. The plastic outer sheath of the linear scanning fiber probe was placed in contact with the wave plate. Strong reflections from the front and rear surfaces of the wave plate are seen in a plot of backscattered intensity (black) shown in Fig. 4, with a third peak at a depth of 867 μm originating from the interface between the two materials. The accumulated double-pass phase retardation relative to the sample surface is shown in red, with Stokes parameters averaged over 64 A-lines and 5 points in depth. This curve begins at 0° at the sample surface, reaches around 85° at the cemented interface, and 174° at the lower surface of the wave plate. One would expect to obtain a phase retardation value of 180° for a λ/4 plate on double pass. However, an exact half-wave is the most difficult amount of retardance to measure, since due to wrap-around of the Stokes vector in the Poincaré sphere, values above 180° are never realized, and an actual retardation angle of (180 + θ)° will yield the value (180 - θ)°.
Averaging repeated measurements therefore produces a mean that does not converge on the expected value of 180°, but instead approaches some lower value. Table 1 presents measured double-pass phase retardation (DPPR) values, averaged over n = 10 successive B-scans, first with the probe held stationary, then scanning linearly in directions perpendicular and at 45° to the fast axis of the λ/4 plate. The results of Fig. 1 demonstrated that when the fiber probe is not scanned, the measured Stokes vectors exhibit a mean fluctuation of 1.22° on the Poincaré sphere during acquisition of the B-scan. When measuring the wave plate without scanning, a discrepancy is observed between the measured value of 174.56° and the expected value of 180°, which cannot be due to polarization effects associated with sample arm motion. This error amounts to around 3%, and is in agreement with an independent transmission measurement of the wave plate retardance, made using a Berek compensator. Deviation from exactly one-quarter wave retardance for spectral components different from the wave plate's design wavelength is also expected to result in measured DPPR values different from 180°. DPPR values obtained when the probe was scanning exhibit slightly higher errors of up to 4%, with this increase representing the magnitude of error attributable to motion-induced effects.
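The bias toward values below 180° is straightforward to reproduce numerically. In this toy example (ours, with an assumed 5° noise level), symmetric measurement noise around a true double-pass retardation of 180° is folded by the wrap-around rule stated above, so the folded mean settles a few degrees low:

```python
import numpy as np

rng = np.random.default_rng(0)
true_dppr = 180.0                                   # quarter-wave plate, double pass
noisy = true_dppr + rng.normal(0.0, 5.0, 100_000)   # assumed 5 deg measurement noise
# wrap-around: an actual angle of (180 + t) deg is read out as (180 - t) deg
folded = np.where(noisy > 180.0, 360.0 - noisy, noisy)
print(folded.mean())  # ~176 deg: the average converges below 180, never onto it
```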
Measurements on biological tissue
In Figs. 1 and 3, we presented the polarization states of light measured at the detectors, arising from reflections at the surface of the GRIN lens or prism at the distal tip of each probe. The previous sections used reflections from these optical-quality surfaces to isolate and quantify the effects of fiber motion on the polarization state of light in the system. However, when performing PS-OCT measurements in biological tissue, the incident polarization states must be obtained from reflections at the tissue surface due to the limited depth scanning range, resulting in an increased degree of uncertainty in determining the corresponding Stokes vectors. This uncertainty was previously reduced by using a pair of average surface Stokes vectors formed from all, or some range of, A-lines in an image [18,20]. A measure of the accuracy of this pair of average surface states is given by the offset from zero degrees of phase retardation calculated between the surface and first data point within the tissue. This analysis is carried out with the understanding that the tissue itself may contribute to the retardation offset, due to birefringence at the surface. For the sample shown in Fig. 5, a retardation angle of 0.82° is expected, based on a spatial dimension of 2.9 μm per depth point and a measured double-pass phase retardation rate of 0.28°/μm. In Fig. 5, the phase retardation offset is plotted as a function of the number of A-lines over which the Stokes vectors are averaged, for the tissue specimen imaged with the rotary probe, shown later in Fig. 7. Averaging is also performed over 3 and 6 points in depth for each case, where the term "points in depth" refers to the number of displayed data points in each A-line. As the number of averaged A-lines increases, the phase retardation offset is initially reduced, due to improved determination of the Stokes vectors. However, because the surface states are continuously changing while the probe is scanning, averaging the Stokes vectors over increased numbers of A-lines does not reduce the offset beyond some limiting value.
It is worth noting that OCT imaging techniques with a sufficiently large depth scan range, such as optical frequency-domain imaging (OFDI), could use the reflections from optical components at the probe tip as surface states, and still maintain a useful imaging depth in the sample. However, this requires that any intermediate elements, including the sheath, do not introduce any change in the polarization state of light at the sample. PS-OCT images obtained with the scanning fiber probes are presented in Figs. 6 and 7. Conventional images of backscattered light intensity are displayed on a logarithmic gray scale, with black representing strongly backscattering regions. In the polarization-sensitive images, we display the measured phase retardation angle between components of light resolved along the optic axes of the tissue, evolving from 0° (black) at the tissue surface, to 180° (white), to 360° (black). Before applying our polarization-sensitive algorithm [24], we averaged all measured polarization states by applying a finite-impulse-response filter to each of the Stokes parameters in the image, using two-dimensional correlation. The filter size was 4 A-lines by 6 points in depth, corresponding to averaging the backscattered polarization states over an area of 20.6 μm × 28.2 μm for the linear scanning probe, and 2.82° × 17.6 μm for the rotary probe.
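A sketch of this smoothing step is shown below (our illustration; we substitute scipy's uniform filter for the unspecified finite-impulse-response kernel, so the exact weights are an assumption):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_stokes(stokes, n_alines=4, n_depth=6):
    """Average each Stokes component over a small 2-D window
    (A-lines x depth points) before computing retardation."""
    out = np.empty_like(stokes)
    for k in range(stokes.shape[-1]):  # filter the Q, U, V planes independently
        out[..., k] = uniform_filter(stokes[..., k], size=(n_alines, n_depth))
    return out
```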
One potential application of endoscopic PS-OCT is in orthopedics, where minimally-invasive procedures involving ligaments, tendons, cartilage and other birefringent tissues are commonplace. Figure 6 presents OCT images from a meniscal tissue sample, obtained under a protocol approved by the Institutional Review Board of Massachusetts General Hospital, from a patient undergoing knee replacement surgery. The medial and lateral menisci are a pair of C-shaped pieces of fibrocartilage located at the peripheral aspect of the knee, providing stability, lubrication and shock absorption in the joint. As with articular cartilage, the menisci are susceptible to damage and degeneration with age, requiring diagnosis and intervention to repair. In Fig. 6 (top left), with the probe sheath (s) placed in light contact with the tissue, the conventional OCT image from this meniscus specimen demonstrates a fairly homogeneous appearance. In the polarization-sensitive image (top right), the specimen appears strongly birefringent, with the polarization state of light seen to evolve rapidly on propagation in the tissue, as indicated by the multiple black-white bands. Although the frequency of these bands is seen to vary with location in the image, indicating variable levels of birefringence, a mean double-pass phase retardation rate of 1.33 ± 0.04°/μm was measured within the region indicated. The corresponding histology (bottom) demonstrates the high density of collagen fibers in the meniscus, stained blue by the trichrome stain.
Another potential application of endoscopic PS-OCT is in cardiovascular imaging, where identification of vulnerable plaques and understanding the processes leading to their development is of considerable interest [1]. Figure 7 shows conventional and PS-OCT images from an ex-vivo coronary specimen, acquired using the rotary scanning probe. The main layers of the vessel wall are evident in the conventional OCT image (Fig. 7(a)). Moving outwards from the vessel lumen, the intima (i), media (m) and adventitia (a) are labeled [1], with each known to contain collagen in varying amounts with location [26]. The importance of referencing the polarization state of light backscattered from within the sample to individual states at the sample surface is illustrated by the corresponding polarization-sensitive images (b, c). Individual surface states were used in (b), and a single pair of averaged surface states was used in (c). The Stokes vectors representing these surface states are shown in the Poincaré sphere (f), with the blue and green traces describing the continuously-changing surface states used in Fig. 7(b), and the red circles indicating the averaged states used in Fig. 7(c).
The use of averaged surface states is inappropriate in endoscopic PS-OCT and can lead to artifacts in the polarization-sensitive image. As can be seen in Fig. 7(c), retardation values at the sample surface appear as grayscale levels offset from 0° (black), corresponding to values significantly higher than expected based on the quantitative results shown in Fig. 5. Throughout the image, phase retardation values are incorrect in those A-lines where a non-zero angle exists between the average surface state and the true surface state.
The correct use of individual surface states (Fig. 7(b)) enables measurement of changes in the polarization state of backscattered light within the vessel wall, as the image pixels change from black at the inner surface, towards white, then to black again. The fact that phase retardation values of 180° (white) are not reached could be due to a change in collagen fiber orientation with depth in the sample. From the data displayed in Fig. 7(b), a mean double-pass phase retardation rate of 0.28 ± 0.005°/μm was measured for this specimen. The corresponding H&E stained histology (d) shows the layers of the vessel wall, with collagen-rich tissue appearing blue in the trichrome stained section (e).
Summary
Fiber-optic implementation has enabled OCT systems to be used in a wide range of clinical studies, including endoscopic procedures, where the technology represents a unique imaging tool for many internal organ systems. Polarization-sensitive OCT has also been demonstrated as an extension to conventional OCT, providing functional information by determining the polarization properties of tissues. To date, fiber-based PS-OCT systems have typically been used in the laboratory with a stationary sample arm and transverse scanning at the distal end, or in the clinic with only minimal fiber motion during acquisition of each A-line [27-29]. When a length of the sample arm fiber is actively translated or rotated in order to achieve transverse scanning (as is typically the case for endoscopic and catheter-based imaging), extraction of polarization-sensitive data becomes more challenging due to dynamic changes in fiber birefringence.
Our method reduces the influence of dynamic fiber birefringence by making a relative measurement, comparing the detected polarization state of light backscattered from within the sample, to the detected state from the tissue surface [18]. Sampling the incident polarization state for every A-line isolates motion-induced polarization artifacts to changes occurring on this timescale. Previously, surface states were calculated by averaging individual surface states across all, or some number of, A-lines within an image. Using an average surface state reduces the error in determining the initial Stokes vector, with a corresponding improvement in determination of sample retardation and optic axis orientation. This averaging method is valid when the incident state is not changing, which is a reasonable assumption for most reported fiber-based PS-OCT systems.
However, the evolution of incident states demonstrated in Fig. 1 indicates that this averaging technique should be limited to only a small number of neighboring A-lines when a scanning sample arm fiber is used. When only a few A-lines are utilized, we have demonstrated that high-quality PS-OCT image data may be obtained from scanning fiber probes in human tissue. While the specific form of the trace of the incident Stokes vectors on the Poincaré sphere will vary under different conditions and from one probe to another, the concepts for isolating motion-induced effects as presented in this manuscript remain applicable to polarization-sensitive imaging with any scanning fiber probe. This also applies to high-speed endoscopic imaging with spectral-domain PS-OCT systems, where an increase in sample arm scan speed will be accompanied by a proportional increase in the frequency of surface state sampling. Inasmuch as most reported endoscopic OCT probes are based on the designs used here, we expect the principles laid out in this paper may be applied without loss of validity for accurate PS-OCT imaging with other endoscopic OCT systems.
This understanding and implementation of our PS-OCT algorithm in endoscopic imaging enhances the already growing application space for OCT. In otolaryngologic and orthopedic applications, the linear-scanning probe can be combined with standard endoscopic procedures to evaluate the location and integrity of collagen-rich tissues. Incorporation of polarization sensitivity with the rotary probe will enable the collagen content of intravascular tissues to be investigated.
Fig. 1. Evolution of the polarization states of light detected from the distal end of each fiber probe during image acquisition, for a stationary probe (left), linear scanning probe (center), and rotary scanning probe (right). Polarization states are displayed on the Poincaré sphere as endpoints of the calculated Stokes vectors at the detectors, on double-pass following reflection at the distal tip.
Fig. 2. Distribution of absolute values of angles between each of the 2048 Stokes vectors shown in Fig. 1 (left) and the mean Stokes vector, for each of the two incident polarization states. The solid black line shows a theoretical probability density function (Eq. 1) based on a 2-dimensional Gaussian distribution, fit to the measured data.
Fig. 3. Retardation angle (defined as the mean angle through which the Stokes vectors are rotated), for the pair of incident polarization states displayed in Fig. 1: left (stationary probe), center (linear scanning probe), and right (rotary scanning probe). Blue arrows indicate the scan range used to generate a single image.
Fig. 4. Measured backscattered intensity and accumulated double-pass phase retardation for a quarter wave plate.
Fig. 5. Phase retardation angle measured between the surface and first point in depth within the tissue sample, as a function of the number of A-lines over which Stokes vectors are averaged, for the rotary probe example shown in Fig. 7.
Fig. 6. Top: conventional (left) and polarization-sensitive (right) images of an ex vivo human meniscus specimen. The box indicates the region where the double-pass phase retardation rate is quantified (see text). Images are 4 mm wide and 1.2 mm deep. The probe sheath is indicated in the conventional OCT image (s). Bottom: corresponding histology with trichrome stain.
Fig. 7. Conventional (a) and polarization-sensitive (b, c) images of ex-vivo human coronary tissue, obtained with the rotary scanning probe. The polarization-sensitive images were generated using either individual surface states (b), or averaged surface states (c). Corresponding histologic sections, with H&E (d) and trichrome stain (e). Surface polarization states used to generate the polarization-sensitive images above (f). Scale bars = 0.5 mm.
Table 1. Measured values of double-pass phase retardation (DPPR) for a quarter wave plate.
Exploration of exposure to artificial intelligence in undergraduate medical education: a Canadian cross-sectional mixed-methods study
Background Emerging artificial intelligence (AI) technologies have diverse applications in medicine. As AI tools advance towards clinical implementation, skills in how to use and interpret AI in a healthcare setting could become integral for physicians. This study examines undergraduate medical students' perceptions of AI, educational opportunities about AI in medicine, and the desired medium for AI curriculum delivery. Methods A 32 question survey for undergraduate medical students was distributed from May–October 2021 to students at all 17 Canadian medical schools. The survey assessed the currently available learning opportunities about AI, the perceived need for learning opportunities about AI, and barriers to educating about AI in medicine. Interviews were conducted with participants to provide narrative context to survey responses. Likert scale survey questions were scored from 1 (disagree) to 5 (agree). Interview transcripts were analyzed using qualitative thematic analysis. Results We received 486 responses from 17 of 17 medical schools (roughly 5% of Canadian undergraduate medical students). The mean age of respondents was 25.34, with 45% being in their first year of medical school, 27% in their second year, 15% in their third year, and 10% in their fourth year. Respondents agreed that AI applications in medicine would become common in the future (94% agree) and would improve medicine (84% agree). Further, respondents agreed that they would need to use and understand AI during their medical careers (73% agree; 68% agree), and that AI should be formally taught in medical education (67% agree). In contrast, a significant number of participants indicated that they did not have any formal educational opportunities about AI (85% disagree) and that AI-related learning opportunities were inadequate (74% disagree). Interviews with 18 students were conducted. Emerging themes from the interviews were a lack of formal education opportunities and non-AI content taking priority in the curriculum. Conclusion A lack of educational opportunities about AI in medicine was identified across Canada in the participating students. As AI tools are currently progressing towards clinical implementation and there is currently a lack of educational opportunities about AI in medicine, AI should be considered for inclusion in formal medical curriculum. Supplementary Information The online version contains supplementary material available at 10.1186/s12909-022-03896-5.
Background
It is likely that artificial intelligence (AI) technologies will be incorporated into clinical practice, with applications such as image-based diagnostics in radiology demonstrating accuracy and efficacy rivalling specialty-trained physicians [1,2]. While many of these technologies remain investigational, the traditional roles of physicians, particularly those who perform imaging-based diagnostics, are expected to change in some capacity as AI tools progress towards clinical implementation [3,4]. Concrete skills in how to interpret and use AI and integrate AI tools into clinical workflow may become important for physicians in the near future [3]. Additionally, other key skills in medicine, such as the ethical aspects of decision making or humanization and empathy through the doctor-patient interaction, may become more important [3-5]. Professional and regulatory bodies have also begun to recognize the value of AI as a core competency for physicians, as is demonstrated by the recent establishment of the Canadian Royal College of Physicians and Surgeons Task Force Report on Artificial Intelligence and Emerging Digital Technologies [6].
Despite the paradigm shift that AI may bring, there have been few developments in formal educational opportunities about AI or machine learning (ML) for medical trainees at all levels [3,4]. Educational opportunities about AI and ML for physicians and medical learners remain optional, inconsistent between institutions, and largely focused on research applications [3]. A lack of knowledge about AI and its uses in a clinical context will plausibly pose a barrier to future uptake and effective use among physicians [3,4]. Previous studies have addressed the perceptions of undergraduate medical students about AI in medicine, noting that they believe AI will be an integral part of medical practice in the future [7,8]. These studies have also found poor self-reported knowledge about AI among medical trainees, including undergraduate medical students [7,8]. However, there have been no studies to date that identify and assess exposure to existing educational opportunities about AI in medicine among undergraduate medical students, or that gauge the interest and perceived need for AI education among these learners [7,8]. Additionally, the AI content that students would be receptive to learning about and the desired mediums of curriculum delivery have not been identified. These data could strengthen existing educational opportunities, support the inclusion of AI in formal medical curriculum, and ensure that AI curriculum is well received by undergraduate medical learners.
To address this knowledge gap, we performed a national survey among all undergraduate medical students in Canada. The survey aimed to assess undergraduate medical students' feelings about AI in medicine, identify the currently available educational opportunities about AI in medicine, explore the perceived need for AI inclusion in medical curriculum, and identify desired mediums for AI curriculum delivery. Given the heterogeneity of education about AI in medicine, follow-up interviews were performed with a sample of participants to provide further insight into educational opportunities about AI that were not captured using the survey instrument.
Methods
This cross-sectional mixed-methods study had both an online survey and an interview component. A mixed-methods approach was selected because the survey component enables data collection from a large quantity of respondents at all Canadian medical schools, while the interview component allows for a more holistic exploration of education about AI in medicine and provides narrative context. Given how variable curriculum can be between institutions, interviews are particularly valuable in exploring institutionally specific opportunities or deficits that cannot be captured by the survey instrument. The survey was distributed to all 17 Canadian medical schools through various digital mediums including social media, student portals, and undergraduate medical newsletters, as dictated by the institutional requirements (Additional file 1). A random sample of participants who opted in to interviews during the survey (responded "Yes" to the question "Would you be willing to provide a short, recorded interview about your responses at a later date?") were contacted to participate in interviews. Institutional ethics approval was obtained from the Health Sciences and Affiliated Teaching Hospitals Research Ethics Board at Queen's University (ID# 6031912).
Survey design
A 56 question survey was developed in accordance with the Consensus-Based Checklist for Reporting of Survey Studies (CROSS) guidelines and coded in Microsoft Forms (Additional file 2) [9]. The CROSS guidelines are a well-established framework for developing and reporting survey-based studies. The survey was available in both English and French. The survey was piloted by medical students on the research team who were not involved in survey instrument creation to ensure question clarity and accessibility of Microsoft Forms (PG, JDP, VV, WL); no formal pretest was performed. Assessment of inclusion eligibility occurred at the start of the questionnaire, with inclusion criteria being consent to participate and enrollment in a Canadian medical school at any time in 2021. Participation was entirely voluntary. A chance to win one of four $50 gift cards was offered as an incentive for participation. Survey distribution occurred from May 2021 to October 2021. Each institution was given a 1 month period to respond to the survey after initial distribution, with a reminder issued with 2 weeks remaining. While it is impossible to know how many students received the survey given the variable methods of distribution, it is plausible that all of the ~ 10,000 Canadian undergraduate medical students had the opportunity to respond [10]. All responses were anonymous to the research team, although email addresses were collected to contact participants for interviews and to prevent multiple responses from participants.
The first section of the survey contained six screening questions to exclude participants who did not meet inclusion criteria, and logistical questions regarding the gift card draw and the participants' willingness to participate in the interview portion. The second section of the survey contained 12 questions identifying participant demographics. The third section of the survey consisted of five questions about the participants' knowledge of AI in daily life. The fourth section consisted of 20 questions about the participants' attitudes, beliefs, and knowledge regarding AI in medicine, including 16 Likert scale questions scored from 1 (strongly disagree) to 5 (strongly agree). The final section consisted of 13 questions (12 Likert scale) about participant access to educational opportunities about AI during their medical training and their preferred formats to learn about AI.
Interview design
The interview study component was developed in accordance with the Consolidated Criteria for Reporting Qualitative Research (COREQ) guidelines for qualitative research reporting [11]. Participants attended a 10-15 minute interview with one interviewer via video conference (Zoom Video Communications, San Jose, California) between October 2021 and December 2021, after the closure of the survey. The interview consisted of six scripted interview questions that required participants to use retrospective recall to recount their experiences and feelings about their previous experience with AI in medical school and their thoughts about receiving AI education during their medical training (Additional file 3) [9,10,12,13]. The purpose of the additional interview component was to inform future research and the development of curriculum about AI in healthcare. The interview audio was recorded and transcribed verbatim digitally for analysis.
Selection of participants
In an effort to interview participants from each of the English-speaking medical schools (14 of the 17 medical schools), a stratified random sampling method was employed. A random number generator was used to select two participants from each school, who were then invited to interview. A follow-up email was sent if the participant did not respond within 1 week. If a participant failed to respond to their invitation within 2 weeks, another participant was randomly selected using the same method and invited. There were no specific inclusion/exclusion criteria and participation was voluntary. All participants were originally contacted via email by a member of the research team (AP, NC). No additional incentives or financial compensation were offered for participation.
Characteristics of study subjects
Seventeen undergraduate medical students were recruited from 11 medical schools. Nine participants were female and eight were male. Three participants were in their first year of medical school (MS1), seven were MS2, three were MS3, two were MS4, and two had most recently completed MS2 and were on leave to complete the PhD component of their MD/PhD. The average age of the participants was 26.1 (range 23-30). All participants were familiar with the research goals. No participants declined to interview or dropped out of interviews, although three students were invited to interview and did not respond.
Protocol for responses
Two interviewers performed the interviews (AP, NC), and audio from the video interviews was recorded. At the start of each interview the study goals were explained, and verbal consent was obtained. No follow-up questions were asked, the interviewers provided no additional information during the interviews, and there was no repeat questioning at a later date. No one else was present during the interviews. Interview transcripts were not returned to participants for comment or correction.
Research team and reflexivity
Project conception and survey design were performed by AP, RR, NC, JJN, and FYM. AP is a male medical student with a research background in medical education, AI, and ophthalmology. RR is a male medical student with a research background in medical education, plastic surgery, and quality improvement. NC is a female medical student who holds an MPH and has research experience in public health. JJN is a Brazilian female medical student who has research experience in medical education and AI. FYM is a staff radiation oncologist who holds a PhD in radiation oncology. FYM has extensive research experience in oncology and AI. Two of the interview participants were personally known to AP and RR, while all participants were unfamiliar to the second interviewer and the rest of the research design team (NC, FYM). Members of the interview and research team had previously expressed in their published work that they believe AI training in medicine is important [3].
Analysis
Statistics were performed in SPSS 27.0 (IBM Corp., Armonk, NY). Quantitative demographic data were reported descriptively, as a count and a percentage. Likert scale data were reported as the percentage of participants that either agreed or disagreed with each given statement, and the count of relevant responses. Each of the statements in the results is based on a single Likert scale question. Participants who failed the screening questions or did not provide responses to any of the questions were removed from the data. Box plot figures of Likert scale data were created using R (R Core Team, Vienna, Austria).
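For illustration, the agree/disagree percentages reported below collapse the 5-point scale as in this sketch (ours, not the study's analysis code; variable names are hypothetical):

```python
import numpy as np

def likert_summary(responses):
    """Percent agree (scores 4-5) and percent disagree (scores 1-2)
    for a single Likert question scored 1-5."""
    r = np.asarray(responses)
    agree = 100.0 * np.isin(r, (4, 5)).mean()
    disagree = 100.0 * np.isin(r, (1, 2)).mean()
    return agree, disagree

# e.g. likert_summary([5, 4, 4, 2, 3, 5]) -> (66.7, 16.7)
```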
The text transcriptions of interview responses were analyzed sentence by sentence using emergent thematic analysis to explore the experiences of participants with AI education in medicine and their perceived need for AI education [14]. Qualitative analysis software (NVIVO 12 Pro, QSR International, Melbourne, Australia) was used to organize, sort, and code the data. Two independent, blinded coders (AP, NC) performed emergent coding consisting of assigning all phrases of the interview data into codes, regardless of their expected relevance to future themes [14,15]. Codes were initially developed as the smallest unit of analysis, with similar codes being grouped together to form subthemes, and subthemes further grouped together to form themes. The process of theme development was iterative, with frequent revisions as patterns became apparent. Finalization of themes occurred after review and discussion by the two coders. Emerging patterns were noted based on coding frequencies. No coding diary was maintained during the analysis process, with no formal comment on bias or emerging themes; instead email communication and regular meetings between team members supported accurate, unbiased coding, ameliorating personal biases or preconceptions. The inter-coder reliability was 82%.
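Assuming that inter-coder reliability here refers to simple percent agreement (the statistic is not named explicitly, so this is our assumption), it can be computed as:

```python
def percent_agreement(codes_a, codes_b):
    """Share of coded segments assigned the same code by both coders;
    segment lists are hypothetical placeholders."""
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return 100.0 * matches / len(codes_a)
```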
Survey (quantitative analysis)
In total, 486 responses were obtained and 475 met the inclusion criteria, with survey respondent demographics outlined in Table 1.

Artificial intelligence in medicine (survey section 4, Fig. 1)

Only 39% (34 strongly agree, 148 agree) of respondents were able to describe AI, machine learning, neural networks, and deep learning, and 63% did not understand AI research methods. Student perceptions of AI in medicine included: the belief that AI has improved medicine (74% agree; 102 strongly agree, 234 agree), that AI is commonly used in medicine (59% agree; 54 strongly agree, 217 agree), and that AI will revolutionize medicine in the future (74% agree; 150 strongly agree, 186 agree). Respondents agreed that artificial intelligence will be cost-effective (64%; 96 strongly agree, 199 agree) and optimize physicians' work (77% agree; 96 strongly agree, 256 agree); however, students did not believe that some or all physicians would be replaced by AI (66% disagree; 121 strongly disagree, 181 disagree) and were not frightened by the development of AI (53% disagree; 62 strongly disagree, 182 disagree). Medical students were unsure if AI would particularly affect their specialty of choice (31.7% agree, 35.1% disagree), but agreed that they will need to understand AI throughout their career (68.3% agree; 104 strongly agree, 212 agree) and that they would use applications of AI during their careers (72.9% agree; 110 strongly agree, 223 agree).

Artificial intelligence in medical education (survey section 5, Fig. 2)
Respondents believed that AI should be formally taught in medical education (67% agree; 99 strongly agree, 205 agree), but indicated they had not received such training in formal curriculum previously (85% disagree; 181 strongly disagree, 196 disagree). Medical students had not received training in AI through education external to formal medical school curriculum (66% disagree; 101 strongly disagree, 191 disagree), or through research or work experiences (70% disagree; 122 strongly disagree, 197 disagree). Some students had independently learned about AI (41.9% agree, 42.4% disagree). Survey respondents disagreed that learning opportunities regarding AI in medicine have been adequate (74% disagree; 87 strongly disagree, 244 disagree). Students agreed that their understanding of programming or mathematics was a barrier to understanding AI (47% agree; 70 strongly agree, 141 agree). Students agreed that it is important to study AI in medicine (62% agree; 70 strongly agree, 214 agree) and that, given the chance, they would like to learn more about AI (78% agree; 150 strongly agree, 210 agree).
Interviews (qualitative analysis)
Three major themes emerged in the qualitative analysis: a lack of existing learning opportunities about AI, the need to incorporate AI learning in medical curriculum, and positive sentiment about the future of AI in medicine. Subthemes that emerged included the value of elective or informal learning about AI, a scarcity of formal learning opportunities, the desired formats of future AI education, barriers to development of AI curriculum, excitement about the future of AI in medicine, fear about misuse or poor stewardship of AI, and specific uses for AI including use in specific disciplines.
Learning opportunities
The theme of existing learning opportunities about AI in medicine developed over many references across all participants. The vast majority of existing AI educational opportunities were elective or informal learning, with the most commonly identified opportunities being education prior to medical school, research, non-institutional courses, independent reading, and discussions with peers. One participant noted "When I was an undergrad, I did a course called 'Engineering in Medicine' and there was a big overview of the uses of AI and different types of image processing using AI." Learning opportunities in education prior to medicine, particularly in interviewees who had done engineering degrees, or in research endeavors, were common; "… My first real introduction to AI in healthcare was when I was working as a research student the summer before I started medical school… we had a guest lecturer who came in and spoke about her work… it was an analysis of breast cancer pathology samples using AI and machine learning to do it a lot faster." Non-institutional courses and workshops were also a common medium to learn about AI; "Outside of the curriculum there are a lot of student led workshops that teach fundamentals of artificial intelligence". All participants discussed formal institutional learning opportunities about AI, with all participants but one indicating that there were no formal educational opportunities about AI available to them. One student said "There's no exposure [to AI in medical school]. I think we learn basic stats but nothing more," a sentiment that was shared by almost all of the participants. Some quotes indicated that AI was referenced off-hand; "There hasn't been anything in our formal curriculum… [AI] might have got mentioned in passing in one of our radiology lectures." One MD/PhD candidate noted that some AI training was part of their postdoctoral training, while an undergraduate medical student noted that "The [University of Toronto] Center for AI Research and Education in Medicine has student representatives, so through them I've got some exposure…." No participants mentioned the inclusion of AI in formal undergraduate medicine lectures.
The need for medical curriculum about AI
Despite a lack of formal learning opportunities, the importance of learning about AI in medicine was noted by the majority of participants. Many participants noted that "[learning about AI] is important, because I think that it's going to be a reality in how a physician practices medicine and should be something we should learn," while many references noted that "I don't have a great understanding of what AI is capable of or what it even is. It definitely could have implications in different fields of healthcare, and I think we all need to be prepared in the future of medicine." Students often discussed their desired learning formats and learning needs (82 references). While some students indicated that "… having like a lecture from engineers or computer scientists would probably be really helpful," many believed smaller scale changes would also be useful; "I think just an addendum to an existing lecture… Even if it was just a few slides where AI is relevant to that field, I think it would be really helpful." The idea of learning outside of traditional didactic lectures was also proposed, with one student suggesting "…something more accessible like a video or an audio podcast." While most participants did believe inclusion in curriculum was important, a small number of participants indicated that it should not be a priority. One student said "I think some people don't really need to learn about AI or ML," while another mentioned "I think that… my time would be better spent understanding the body and having [AI tools and results] interpreted for me by somebody who is an expert." The theme of barriers to inclusion of AI in medical curriculum was established by a small number of participants. Notably, many participants noted that AI shouldn't take priority over other missing topics in curriculum. One participant said "I feel as though the preclerkship curriculum is already pretty packed with a lot of very relevant things… There are things I'm going to need to know as a clerk that I don't feel like we've adequately covered." Notably, no student had directly encountered this barrier, but rather it was speculated that curricular prioritization would be an issue. A lack of technical understanding of mathematics, programming, and computer science was also thought to be a barrier to learning about AI, illustrated by one participant who joked: "I'm an old timer, I have issues with figuring out how to use my USB… I wouldn't know where to start."
Sentiment
Varying sentiments about AI were expressed by participants, with some references to excitement around the future of AI while other participants expressed concern or fear about AI or misuse of AI. Most of the positive references to the future of AI noted novel applications or the ability for technology to improve physician workflow. There were a variety of concerns brought up by participants, including "… concerns regarding the ethical issues surrounding AI" or the accuracy of AI tools. While infrequently noted, there was some concern about job security; "maybe there'll be less jobs for physicians in certain fields that AI is more applicable to, like radiology or pathology. " There were many references to specific uses of AI, with imaging-based AI tools and specific professions such as radiology and pathology being the most commonly referenced subthemes. Some participants also thought there would be applications for patient use, such as "symptom checkers, [where the patient can] input the symptoms, and it spits out possible diagnoses. "
Discussion
Our findings demonstrated that the majority of surveyed medical students believe that AI is important to the future of medicine and desire learning opportunities about AI. We also found that despite these attitudes, there remains a lack of educational opportunities across Canada at the institutions of study participants. With the rapid progression of AI tools towards clinical implementation and more prevalent use of AI in medical research, educational opportunities about AI need to be considered for inclusion in formal medical curriculum. Further, as the skillsets required to use AI may be different than those traditionally possessed by physicians, the desired learning formats, content interests, and perceived learning barriers of medical learners must inform the inclusion of AI content in medical curriculum.
Our findings are consistent with previous studies of medical learners, which have also identified limited knowledge of AI among medical trainees [7,8,16,17]. A recent survey by Teng et al. (2022) found that medical students had limited knowledge about AI, suggesting that this indicated a need for urgent education. They noted that this growing knowledge gap would likely become a barrier to the development and use of AI in medicine, something that has been supported by other literature [1,18]. Interestingly, this survey found that healthcare learners were optimistic about AI in their fields, although they were not sure it would be relevant in their field [7]. We also found a degree of cognitive dissonance among participants, as they believed that AI would revolutionize medicine while simultaneously believing AI would not directly affect them or their future practice. These findings could reflect poor understanding of AI applications, sensational reporting of AI in media or medical literature, or the limited exposure to AI in a clinical setting. Some recent survey cohorts of medical students have found that their cohort is worried that AI may replace physicians in the future, while other surveys have reported this to be a non-issue for their study cohort [8,16]. Gong et al. (2019) found that this anxiety has discouraged students from considering imaging-based diagnostic specialties, such as radiology. Our findings are more congruent with a European survey performed by Pinto dos Santos et al. (2021), and did not support anxiety related to physician replacement among medical students. While fears about AI may vary between study cohorts, anxiety regarding the use of AI in a clinical context could be ameliorated by curriculum. Finally, as AI applications progress towards clinical implementation, a lack of understanding could present challenges in effective uptake by physicians [7,19]. Teaching could address this, in addition to other changing requirements such as the ethical and humanistic role of physicians [3]. Urgent development in medical curriculum is required to accommodate this growing need [3,7,17]. As both this study and previous surveys have confirmed that medical students want AI incorporated into their formal medical curriculum, any such changes should be well received by the undergraduate medical learner population [7,17].
Previous literature has identified potential objectives for AI teaching, suggesting educational objectives including identifying what technology is appropriate in a specific clinical context, the humanistic and ethical components of AI, and identification of quality improvement applications of AI [3,7,20]. This study adds an assessment of existing educational opportunities, preferred formats of AI education among medical students, and potential barriers to uptake. We found that there is no existing formal curriculum about AI at any of the medical schools in Canada. Educational opportunities are similarly limited outside of Canada [21]. Our interviews identified one notable barrier to the inclusion of AI in formal curriculum, being non-AI content taking priority for curricular inclusion. However, as our respondents identified workshops as their preferred learning format, this barrier could be mitigated by use of non-longitudinal educational formats. These would be both more amenable to learners and more easily implemented in a crowded curriculum. Although survey respondents did not believe technical knowledge would be a barrier to uptake of AI, interview participants did express concern about a lack of mathematical or computer science knowledge preventing effective learning about AI. Given the large spectrum of educational backgrounds and experience with technology, it is prudent for medical AI curriculum to refrain from exploring complex technical detail.
The study limitations include non-response and participant bias. While we did receive responses from all medical schools in Canada, our respondent population makes up only ~ 4.5% of the total undergraduate medical student population in Canada [10]. This was in part due to the variable ability of undergraduate medical faculties to support survey dissemination, with some sending the survey in a newsletter, others posting it on the student portal, and others being unable to facilitate distribution. It is likely that participant bias affected study outcomes, and that respondents were more likely to possess interest in or knowledge of AI and have a stronger technical understanding than non-respondents. Another limitation is that medical students in later stages of training (e.g. 3rd and 4th year) and male respondents were underrepresented in the survey results. Finally, some aspects of the study design had the potential to introduce bias or error. No formal validation process for the survey was employed, outside of the internal pilot to ensure question clarity; this could have reduced face validity or construct validity. We were also unable to comment on how many participants were recruited through each medium, making study reproduction more challenging. Additionally, our survey instrument was lengthy and included questions that were beyond the scope of our research question. The length could also have contributed to participant non-response.
A lack of understanding of AI has been demonstrated over multiple studies, as has the need for curriculum to address AI in medicine. With this survey providing insight into preferred formats of AI education and barriers to its delivery, informed development of AI curriculum is possible. We recommend trialling a condensed workshop or lecture, as students reported that they would be most receptive to learning in these formats. Medical education has traditionally been slow to adapt to technological changes, leaving students ill-prepared to use technology in clinical practice [22]. However, as policy both in Canada and internationally begins to acknowledge the importance of AI in medicine, financial and institutional support for educational efforts will grow [23]. Future research should seek to develop educational content in the formats indicated above and trial it in a medical student population.
Conclusion
A lack of educational opportunities about AI in medicine was identified across Canada among the participating medical students. Given that medical students overwhelmingly believe AI is important to the future of medicine and desire to learn about it, the development and inclusion of AI in undergraduate medical education should be considered. As AI tools are likely to become widely used in the future, teaching the future generation of physicians how AI will integrate into clinical workflow will set them up for success, improving the thoughtful implementation of these tools in medical practice and subsequently improving patient care [24].
|
2022-11-28T15:02:47.099Z
|
2022-11-28T00:00:00.000
|
{
"year": 2022,
"sha1": "606dd81853f3d2683c100c633d2f63539c8f55d5",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "606dd81853f3d2683c100c633d2f63539c8f55d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
10720358
|
pes2o/s2orc
|
v3-fos-license
|
Adora2b-elicited Per2 stabilization promotes a HIF-dependent metabolic switch critical for myocardial adaptation to ischemia
Studies of metabolic adaptation during environmental stress have broad applications to human disease. Adenosine signaling has been implicated in cardiac adaptation to limited oxygen availability. Serendipitously, a wide search for adenosine receptor A2b (Adora2b)-elicited cardio-adaptive responses identified the circadian rhythm protein period2 (Per2). Subsequent pharmacologic and genetic studies confirmed Adora2b-dependent stabilization of Per2 during myocardial ischemia. Functional studies of myocardial ischemia in Per2−/− mice revealed larger infarct sizes and abolished cardio-protection by ischemic preconditioning. Metabolic studies during myocardial ischemia uncovered a limited ability of Per2−/− mice to utilize carbohydrates via oxygen-efficient glycolysis. These metabolic alterations were associated with a failure in Per2−/− mice to stabilize hypoxia-inducible-factor Hif1a. Moreover, cardiac stabilization of Per2 via light exposure transcriptionally enhanced glycolysis and provided period-specific cardio-protection from ischemia. Together, these studies identify Per2 as a key regulator of ischemia tolerance through reprogramming of cardiac metabolism and implicate Per2 as a novel therapeutic modality during acute myocardial ischemia.
INTRODUCTION
Metabolic adaptation during environmental stress is currently an area of intense investigation, as metabolic alterations have broad applications to human disease 1,2 . For instance, myocardial ischemia leads to the activation of pathways directed towards enhancing myocardial oxygen efficiency 3 . In fact, a metabolic switch from more "energy-efficient" utilization of fatty acids to more "oxygen-efficient" utilization of glucose as the main source for energy generation is pivotal to allow the myocardium to function under ischemic conditions. 4,5 Extracellular adenosine is a signaling molecule implicated in cellular adaptation to hypoxia 2 . In the extracellular compartment, adenosine stems from phosphohydrolysis of AMP via the ecto-5'-nucleotidase (NT5E) 6 and signals through four adenosine receptors (ARs: ADORA1, ADORA2A, ADORA2B, ADORA3) 7 . During conditions of hypoxia, adenosine generation is significantly enhanced, and activation of ARs plays a critical role in counterbalancing deleterious effects of hypoxia 1,8 . Particularly during conditions of myocardial ischemia, adenosine signaling events have been implicated in cardio-protection from ischemia. Similarly, cardio-protective responses elicited by ischemic preconditioning (IP) are abolished following pharmacological inhibition or genetic ablation of extracellular adenosine production or signaling 9 . In the present studies we identified the circadian rhythm protein Period 2 (Per2) as an important mediator of Adora2b-elicited cardio-protection, enhancing the glycolytic capacity of the ischemic heart.
Adenosine signaling events mediate cardiac adaptation to ischemia
Previous studies have implicated adenosine receptor signaling in myocardial adaptation to ischemia or hypoxia in mice 10 . Here, we studied these pathways in cardiac tissues obtained from patients suffering from ischemic heart disease (Supplementary Table S1). In comparison to cardiac tissues derived from healthy hearts, we found a selective induction of the ADORA2B (Fig. 1a). Together with previous studies in gene-targeted mice 10 , these findings in human patients implicate extracellular adenosine signaling via the ADORA2B in cardio-protection from ischemia.
Identification of the circadian rhythm protein Per2 as an Adora2b target-gene
Given the prominent role of adenosine receptor signaling in IP, we next pursued microarray studies comparing transcriptional responses elicited by IP treatment (Supplementary Fig. S1) in wild-type or Adora2b −/− mice (Fig. 1b, Supplementary Fig. S2-4, Supplementary Table S2, http://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE19875). The gene with the highest differential readout was the circadian rhythm protein Per2. Per2 is a member of the Period family of genes expressed in a circadian rhythm pattern in the suprachiasmatic nucleus 11 . In addition to Per2, the microarray studies also demonstrated a similar regulatory pattern for Per1, while other members of the circadian rhythm family were not Adora2b-dependently induced (Supplementary Fig. S3-5, Supplementary Table S2). However, studies in Per1 −/− mice failed to identify a functional role for Per1 in myocardial ischemia (Supplementary Fig. S6 and Fig. S7). We therefore focused on Adora2b-elicited alterations of Per2. Indeed, the circadian expression pattern of cardiac Per2 mRNA over a 24h zeitgeber period was abolished in Adora2b −/− mice, similar to the induction of Per2 transcript and protein levels following ischemic preconditioning of the heart (Fig. 1c, d, e, f, Supplementary Fig. S1). Studies in isolated cardiac myocytes exposed to in vitro hypoxic preconditioning (HPC, Supplementary Fig. S8) demonstrated enhanced Adora2b and Per2 transcript in wild-type, but not in Adora2b −/− myocytes (Fig. 1g). Moreover, we found elevated PER2 transcript and protein levels in cardiac tissues of patients with ischemic heart disease (n=10 per group; Fig. 1h, Supplementary Fig. S9, Supplementary Table S1). Studies in a cultured endothelial cell line (HMEC-1, Supplementary Fig. S10) revealed binding of cAMP response element binding protein (CREB) to the PER2 promoter upon ADORA2B agonist treatment (BAY 60-6583 10 , Fig. 1i). Studies with truncated PER2 promoter constructs identified a CREB binding site responsible for ADORA2B-inducibility of the PER2 promoter that is conserved between mice and humans (Fig. 1j, Supplementary Fig. S11). Together, these studies indicate that Adora2b/ADORA2B signaling transcriptionally induces Per2/PER2 transcript and protein levels.
ADORA2B signaling events attenuate proteasomal degradation of PER2 via CULLIN deneddylation
The rapid kinetics of PER2 stabilization following adenosine receptor activation, in conjunction with previous reports indicating posttranslational mechanisms in regulating PER2 protein levels 12 , prompted us to investigate additional post-translational mechanisms of ADORA2B-dependent regulation of PER2. As differences between PER2 expression in control or ADORA2B agonist treated HMEC-1 were maximal after 6h (Fig. 2a), we examined the influence of inhibition of transcription by actinomycin or inhibition of translation with cycloheximide at 6h (Fig. 2b and c, Supplementary Figure 12a). These studies demonstrated a combination of transcriptional and post-translational mechanisms in ADORA2B-dependent PER2 stabilization.
Next, we pursued the ADORA2B as an inhibitor of proteasomal PER2 degradation (Fig. 2d, left). As a first step, we pretreated HMEC-1 with the proteasomal inhibitor AM114, which resulted in prominent PER2 stabilization (Fig. 2d, right). Previous studies have indicated that post-translational degradation of Per2 involves the SCF E3 ubiquitin ligase complex, resulting in polyubiquitination and subsequent degradation by the 26S proteasome 13 . This SCF complex is active only when CUL1 is covalently modified by the ubiquitin-like protein NEDD8 14 . Indeed, immunoprecipitation of PER2 and immunoblotting for ubiquitin demonstrated attenuated PER2 ubiquitination following ADORA2B agonist treatment (Fig. 2e). Given that ADORA2B signaling has been shown to deneddylate CUL1 15 , we pursued ADORA2B-dependent alterations of the neddylation status of CUL1. Here, we confirmed that deneddylation of CUL1 was enhanced following ADORA2B agonist treatment (Fig. 2f, Supplementary Fig. S12c,d). Furthermore, pretreatment with an ADORA2B antagonist (PSB1115) blocked ADORA2B-agonist-dependent deneddylation of all CULLINs (Fig. 2g).
Impaired myocardial adaptation to ischemia in Per2 −/− mice
We next studied previously characterized mice gene-targeted for Per2 (Supplementary Fig. S13a) 11 . Studies of myocardial ischemia in Per2 −/− mice revealed enhanced tissue injury with myocardial ischemia, and abolished cardio-protection by IP (Fig. 3a, b). Moreover, treatment with the Adora2b agonist BAY 60-6583 10 led to a significant reduction of infarcted tissue or Troponin I levels in wild-type mice, but no cardio-protection in Per2 −/− mice, indicating that Adora2b-dependent cardio-protection is abolished in Per2 −/− mice (Fig. 3b,c). Baseline electron microscopic imaging of the cardiac ultrastructure in Per2 −/− mice demonstrated isolated mitochondrial swelling and glycogen accumulation, with no major structural alterations of the myofibrillar apparatus (Fig. 3d). Baseline cardiac glycogen levels were elevated (Supplementary Fig. 13b), while long chain fatty acids were decreased (Supplementary Fig. 13c). Consistent with these findings, Per2 −/− mice exhibited elevated protein levels for glycogen synthase 1 (Supplementary Fig. 13d) and carnitine palmitoyltransferase 1 (Supplementary Fig. 13e). However, baseline cardiac function assessed by echocardiography was unaltered (Supplementary Fig. 13f). Consistent with recent studies on the role of Per2 in fatty acid metabolism, 17 we observed decreased long chain fatty acids with enhanced Cpt1 protein levels following IP treatment (Fig. 3e). Since these studies indicate enhanced fatty acid metabolism at baseline and following myocardial ischemia in Per2 −/− mice, we next utilized nuclear magnetic resonance (NMR) studies to characterize the metabolic role of Per2. We exposed Per2 −/− mice or controls to ischemia alone (60 min), or to IP treatment prior to ischemia, and analyzed cardiac tissues. While baseline creatine phosphate levels were similar, ischemia-associated creatine phosphate depletion was significantly enhanced in Per2 −/− mice, and conservation of creatine phosphate levels by IP treatment was abolished (Fig. 3f). Parallel measurements of lactate levels demonstrated that ischemia-induced increases of cardiac lactate were impaired in Per2 −/− mice (Fig. 3g, h), indicating a role for Per2 in glycolytic utilization of carbohydrates during myocardial ischemia.

Impaired glycolysis during myocardial ischemia in Per2 −/− mice

Analysis of glycolytic enzymes revealed IP-elicited induction of their transcript levels in wild-type mice, which was completely abolished in Per2 −/− mice (Supplementary Fig. S14). To further characterize metabolic alterations in Per2 −/− mice, we next used liquid chromatography-tandem mass spectrometry studies following the infusion of 13C-glucose to assess glucose metabolism during ischemia (I) or reperfusion (IR). While we observed no difference in global 13C-glucose flux at baseline, during ischemia or at reperfusion (Fig. 4a, Supplementary Fig. 15a), detailed analysis of glycolytic flux revealed that ischemia-associated increases of 13C-fructose-1,6-bisphosphate levels were completely abolished in Per2 −/− mice, indicating that hypoxia-elicited enhancement of glycolysis during ischemia involves Per2 (Fig. 4b, Supplementary Fig. 15b). Moreover, we observed that the ischemia-induced elevation of 13C-pyruvate or 13C-lactate was abolished in Per2 −/− mice (Fig. 4c,d, Supplementary Fig. 15c). In addition, while ischemia in wild-type mice attenuated glucose oxidation, glucose TCA flux in Per2 −/− mice was increased (Fig. 4e).
Finally, IP treatment of wild-type mice was associated with an additional reduction in TCA cycle flux, which was abolished in Per2 −/− mice (Fig. 4f, Supplementary Fig. 15d).
While glycolytic utilization of carbohydrates is an important adaptive mechanism during ischemia 18 , increased glycolysis during reperfusion is considered detrimental as it frequently indicates mitochondrial dysfunction. 19 Indeed, tissue reperfusion attenuated glycolysis, reflected as reduced production of 13C-fructose-1,6-bisphosphate, 13C-pyruvate or 13C-lactate in wild-type mice (Fig. 4b-d). Although ischemia alone failed to enhance glycolysis in Per2 −/− mice, their glycolytic flux was increased during reperfusion (Fig. 4e,f). Studies on the effect of IP pretreatment on reperfusion metabolism in wild-type mice demonstrated a further reduction of glycolysis 20 and restoration of glucose metabolism comparable to baseline conditions (Fig. 4f, Supplementary Fig. 15b-d). In contrast, Per2 −/− mice maintained lactate production, indicating uncoupling of glycolysis from glucose oxidation as a sign of mitochondrial dysfunction 21 (Supplementary Fig. 15c). Since we did not observe differences in glucose uptake between wild-type and Per2 −/− mice during ischemia or reperfusion, we next analyzed the effect of metabolic changes on cardiac glycogen. Here, ischemia significantly reduced glycogen stores in both wild-type and Per2 −/− mice, even though Per2 −/− mice started from a higher glycogen baseline level (Fig. 4g, Supplementary Fig. 15f). While reperfusion led to the restoration of glycogen in wild-type mice, this was abolished in Per2 −/− mice. Together, these data indicate that Per2 −/− mice are severely compromised in effectively utilizing carbohydrates during ischemia or reperfusion (Fig. 4h).
Hypoxia-inducible factor 1 alpha (Hif1a) links adenosine-mediated Per2 stabilization to cardiac metabolism during ischemia
We next addressed transcriptional mechanisms controlling glycolysis during limited oxygen availability, as IP treatment of wild-type mice was associated with a robust induction of glycolytic enzymes, which was completely abolished in gene-targeted mice for Per2 (Supplementary Table S6). Based on the notion that Hif1a plays a key role in transcriptional control of the glycolytic pathway, 22 we pursued the functional status of Hif1a in gene-targeted mice for Per2 using a Hif1a reporter mouse 23 . Surprisingly, we observed a diurnal kinetic for cardiac Hif1a protein levels (Fig. 5a), as well as for transcript levels of its isoforms Hif1.1 and Hif1.2 and the glycolytic enzymes Pdk1 and Ldh (Supplementary Fig. S16) over a 24h time period. Moreover, genetic deletion of Per2 in Hif1a reporter mice abolished cycling of Hif1a (Fig. 5a). In addition, stabilization of Hif1a with IP treatment in wild-type mice was abolished in Per2 −/− mice (Fig. 5b), while mice with induced deletion of Hif1a in cardiac myocytes (Fig. 5c, Supplementary Fig. S17) retained their ability to stabilize Per2 upon IP treatment (Fig. 5c). Similarly, hypoxic Hif1a stabilization was abolished in isolated myocytes from Per2 −/− mice (Fig. 5d). Moreover, IP treatment of Hif1a reporter mice was associated with increased reporter activity, which was abolished following genetic deletion of Per2 (Fig. 5e). We next assessed transcriptional regulation of glycolytic enzymes in oxygen-stable HIF1A overexpressing HMEC-1 cells 24 with or without siRNA-mediated PER2 knockdown. Oxygen-stable HIF1A or treatment with the ADORA2B agonist BAY 60-6583 was associated with elevated transcript levels of glycolytic enzymes. While treatment with BAY 60-6583 further enhanced glycolytic transcripts in oxygen-stable HIF1A overexpressing cells, this was abolished after PER2 knockdown (Fig. 5f). Additional studies utilizing co-immunoprecipitation indicated a direct protein-protein interaction between Hif1a and Per2 in cardiac tissues following exposure to IP (Fig. 5g).
As we had previously shown that Adora2b signaling plays a critical role in the transcriptional induction and protein stability of Per2, we next examined Hif1a in the hearts of gene-targeted mice for the Adora2b. Similar to the above findings in Per2 −/− mice, we observed lower expression of the transcript levels for the Hif1.1 and Hif1.2 isoforms in gene-targeted mice for the Adora2b, in conjunction with abolished circadian expression over a 24h period (Supplementary Fig. S18). Moreover, hypoxia-induced stabilization of Hif1a was abolished in Adora2b −/− mice (Fig. 5h,i). Finally, we observed a similar defect in the transcriptional induction of the glycolytic enzymes with cardiac IP treatment of Adora2b −/− mice as previously seen in Per2 −/− mice (Fig. 5j). Together, these data indicate that Adora2b-dependent control of Per2 plays an important role in the hypoxia-elicited induction of the glycolytic machinery during myocardial ischemia.
Stabilization of cardiac Per2 by light exposure mediates cardio-protection from ischemia
We next attempted to achieve enhanced cardiac Per2 stabilization via light exposure 25,26 . Therefore, we exposed mice for 0 to 4h to daylight (13,000 lux, Fig. 6a) and assessed cardiac levels of Per2 protein. We found time-dependent increases in cardiac Per2 protein levels with daylight exposure (Fig. 6b) compared to mice maintained at room light (200 lux, Fig. 6b, Supplementary Fig. 19a). Light exposure of wild-type mice over 4h was associated with induction of cardiac transcript levels for glycolytic enzymes (Fig. 6c). In contrast, light-dependent induction of glycolytic enzymes was abolished in Per2 −/− mice.
We next tested a potential association of light exposure-elicited stabilization of Per2 with cardio-protection (Fig. 6d,e). We observed a time-dependent attenuation of myocardial infarct sizes and plasma troponin I levels in wild-type mice pre-exposed to intense light. In contrast, the reduction of myocardial infarct sizes (Fig. 6d, Supplementary Fig. 19b) or plasma troponin I levels (Fig. 6e) following intense light exposure was abolished in gene-targeted mice for Per2. Consistent with these findings, myocardial infarct sizes showed a diurnal variation, with the smallest infarct sizes and lowest troponin I levels around midnight (Fig. 6f, Supplementary Fig. 19c). Taken together, these findings indicate that exposure to intense light enhances cardiac Per2 levels and is associated with Per2-dependent cardio-protection from ischemia (Fig. 6g).
DISCUSSION
Myocardial adaptation to conditions of limited oxygen availability involves a metabolic switch towards more oxygen-efficient utilization of carbohydrates 27 . The present studies demonstrate that Adora2b/ADORA2B-dependent stabilization of Per2/PER2 plays an important role in this cardio-adaptive response. We observed that Per2 −/− mice showed diminished levels of cellular energy stores during ischemia, while they failed to generate lactate, indicating a metabolic phenotype. In fact, Per2 −/− mice lacked the capacity to enhance oxygen-efficient glycolysis, ultimately resulting in depletion of energy-rich phosphate levels and increased myocardial cell death during ischemia. Together, these studies indicate a previously unrecognized role for Per2 as a metabolic master-switch during cardio-adaptation to ischemia, driving the utilization of oxygen-efficient carbohydrate-dependent metabolic pathways (Fig. 6g).
We observed that Adora2b signaling increased Per2 expression and protein stability. Consistent with our findings, Adora2b signaling has previously been implicated in regulating target genes. One study demonstrated that Adora2b signaling activates MAPK or p38 via cAMP/Creb pathways, 28 both of which have been implicated in Per2 regulation. 29 Consistent with the notion that Adora2b signaling alters intracellular cAMP responses, recent studies identified CLOCK-independent regulation of Per2 via cAMP-dependent signaling 30 . Other studies have shown that Adora2b signaling regulates the protein stability of adenosine target genes. In fact, studies on the role of hypoxic preconditioning of the lungs demonstrated that Adora2b signaling has protective and anti-inflammatory effects by regulating the post-translational stability of NFκB 15 .
Our studies demonstrate that cardiac Per2 stabilization can be achieved by daylight exposure. Indeed, stabilization of Per2 within the suprachiasmatic nuclei of the hypothalamus involves light-induced Creb phosphorylation 31 . Moreover, a previous study examined whether profiles of Per1 or Per2 proteins in peripheral organs are affected by the photoperiod 26 . For this purpose, the authors maintained rats under different photoperiods and found that the timing of light exposure significantly affected the circadian profile of Per1 or Per2 protein levels in lungs and hearts 26,32 . At present, the mechanisms linking central circadian rhythm regulation and peripheral Per2 stabilization are under investigation and could involve hormonal pathways 33 , or cyclic alterations in 5'-AMP or adenosine 34 . Indeed, a recent study identified 5'-AMP in the initiation of hypo-metabolism in mammals 35 . In addition, it is intriguing to think about light-dependent stabilization of Per2 and cardio-protection from ischemia in the context of well-documented variations in the frequency of the onset of acute myocardial infarction 36 . In fact, two recent studies found that patients have larger infarct sizes in the early morning hours 37,38 . For the first time, it was shown that myocardial infarct size and left ventricular function after acute myocardial infarction have a circadian dependence on the time-of-day onset of ischemia. 38 However, a different study in humans on cardiac PER2 oscillation reports a 12h delay compared to murine Per2 39 . As such, observations in patients indicating larger infarct sizes in the morning hours 37 would be in contrast to the present findings. However, this study included a high proportion of patients with coronary heart disease and cardiomyopathy compared to healthy controls. As such, it is not clear if these findings are a reflection of a circadian pattern in healthy patients or in patients with heart disease. Moreover, epidemiologic studies in humans indicate that factors other than time of day (e.g. sun exposure, exercise, social factors) are critical for synchronization of circadian rhythms 40 . Therefore, the present findings from mice cannot simply be extrapolated to the circadian rhythmicity of myocardial ischemia in humans. In fact, additional studies will be necessary to define the expression levels and circadian rhythmicity of PER2 in the human heart, as well as its functional role in human heart disease.
The critical role of anaerobic glycolysis in providing ATP in severe ischemia has been well documented and observed in different gene-targeted mice. As such, GLUT4 −/− or AMPK −/− mice 41,42 exhibited reduced lactate production, which was associated with increased tissue injury after low-flow ischemia and diminished regeneration of high-energy phosphate compounds on reperfusion. These findings are in line with the present studies showing larger infarct sizes in Per2 −/− mice, which failed to utilize glycolysis during ischemia and to restore glycogen levels during reperfusion. Similar to GLUT4 −/− mice, Per2 −/− mice also showed higher glycogen levels at baseline but were unable to sufficiently utilize carbohydrates. While diminished uptake of exogenous glucose contributes to the phenotype in GLUT4 −/− mice, we did not observe differences in 13C6-glucose uptake between wild-type and Per2 −/− mice.
The present studies indicate that Per2 enhances glycolytic capacity during ischemia, and genetic data implicate Hif1a as the molecular regulator of this adaptation. These findings are consistent with recent studies of myocardial ischemia in hypoxia-inducible factor prolyl 4-hydroxylase-2 hypomorphic mice 43 . These mice show increased protein levels of Hif1a and Hif2a, in conjunction with cardio-protection from ischemia. While the present studies in gene-targeted mice for Per2 demonstrate attenuated stabilization of Hif1a, larger infarct sizes and attenuated glycolytic flux, the above studies demonstrate that Hif1a overexpression by Phd hypomorphism is associated with increased lactate levels and glycolytic capacity. 43 While Ldh regulation by Hif1a has been shown earlier 44 , studies on cardiac Ldh implicated Clock:Bmal1 as the regulating transcription factor 45 . Clock:Bmal1 has also been shown to regulate the circadian pattern of Per2. However, photic induction of Per2 in the SCN is suggested to be Creb dependent. 46 Similarly, in the present study, enhanced Adora2b signaling during ischemia led to Creb induction and Per2 stabilization. Together, such findings support the concept that multiple pathways can function to regulate the circadian network. 47 Consistent with our findings for Hif1a as a circadian protein with highest protein levels in the late evening (ZT12-ZT18), we found a significant reduction in infarct sizes at ZT12 and ZT18 compared to ZT0. Surprisingly, a different study on diurnal variations in myocardial ischemia/reperfusion tolerance revealed increased infarct sizes at ZT12 compared to ZT0. 48 The authors used a closed chest model for myocardial ischemia, and a potent opioid (buprenorphine) was used for anesthesia. The differences between both studies could potentially be explained by the fact that the open chest model used in the current study has been associated with increased inflammation, which may have influenced experimental results. 49 Moreover, a study of permanent cardiac occlusion in Per2 −/− mice revealed attenuated infarct sizes following Per2 deletion, which could be due to differences in the model system or methods (e.g. identification of an area at risk) 50 . In line with our findings, other studies on the role of Per2 during ischemia and reperfusion reported impaired endothelial progenitor cell function and auto-amputation of the distal limb when Per2 −/− mice were subjected to hind-limb ischemia. 51,52 The present findings of Per2-dependent regulation of cardiac metabolism provide some level of specificity for Per2, as gene-targeted mice for Per1 did not show a phenotype in myocardial ischemia, nor did they show alterations in their ability to stabilize Hif1a. This is consistent with previous studies showing non-redundant roles for Per1 and Per2 in the mammalian circadian clock 53 . Studies on clock, cryptochrome 1 or timeless following IP showed no induction of their transcript levels, while protein stabilization occurred in an Adora2b-independent fashion (Supplementary Fig. S4). These results are consistent with previous findings indicating posttranslational mechanisms in the regulation of the mammalian circadian clock. 12 Taken together, the present results identify Adora2b-dependent stabilization of Per2 as an endogenous mechanism allowing the ischemic myocardium to adapt its metabolism towards oxygen-efficient utilization of carbohydrates.
Future challenges will involve understanding the axis between light exposure and cardiac Per2 stabilization, as well as defining approaches to stabilize cardiac Per2 levels in a therapeutic setting. Moreover, the use of a germline Per2 −/− mouse in the current studies does not allow conclusions regarding the contributions of different tissues to the observed phenotype. Therefore, additional challenges will include studies in mice with tissue-specific Per2 deletion.
Human cardiac tissue
Patient heart samples were obtained from patients undergoing orthotopic cardiac transplantation. Clinical samples screened are given in Supplementary Table S1. Collection and use of patient samples were approved by the appropriate IRB of each Institution in addition to the study having Colorado Multiple Institutional Review Board (COMIRB) approval.
Mice
Experimental protocols were approved by the Institutional Review Board at the University of Colorado Denver, USA. They were in accordance with the Protection of Animals and the National Institutes of Health guidelines for use of live animals. Adora2b −/− mice were generated by Deltagen 10 ; Per2 −/− mice 11 , BL6C57, Hif1a loxp/loxp 54 , Myh6-cre/Esr1 55 , mPer2Luc 56 and ROSA26 ODD-Luc 23 mice were obtained from the Jackson Laboratories.
Murine Model for cardiac ischemia
The murine model for in situ ischemia and IP of the heart was performed using a hanging weight system 57 .
Transcriptional analysis
Total RNA was isolated from human heart tissue or human endothelial cells (HMEC) and transcript levels were determined by real-time RT-PCR (iCycler; Bio-Rad Laboratories Inc.) 58 .
In vitro preconditioning
Cellular preconditioning was performed on adult cardiomyocytes that were plated on either 6- or 24-well plates following a modified in vivo protocol optimized for cells 15 .
PER2 promoter studies
Full length PER2 promoter constructs and truncations were sub-cloned into the pGL4 luciferase reporter vector. HMEC-1 cells were co-transfected with the pGL4 construct expressing firefly luciferase and with the pRL-TK plasmid expressing Renilla luciferase. To measure promoter activity, the activity of firefly luciferase was corrected against Renilla luciferase activity. To control for circadian activity, cells were co-transfected with the CREB dominant negative vector from Clontech.
Electron Microscopy
The samples were imaged with an FEI Tecnai G2 Spirit Biotwin TEM (Hillsboro, OR) at an operating voltage of 120 kV.
Glycogen and Long Chain Fatty Acid (LCFA) measurements
Glycogen and LCFA were determined using Glycogen Assay Kit and Free Fatty Acid Quantification Kit from Biovision.
Echocardiography
For echocardiography, mice were anesthetized with 2% isoflurane and cardiac function was assessed by 2D-transthoracic echocardiography using a Visual Sonics Vevo 770 high resolution ultrasound imager equipped with a 35-MHz transducer. The heart rate was maintained above 500 beats/min throughout. 60
PK and LDH activity
Tissues were homogenized and enzyme activity was determined using a Pyruvate Kinase Assay Kit and an LDH Assay Kit from Biovision.
NMR Analysis on Cell and Tissue Extracts
All 1H-NMR spectra were obtained on a Bruker 500 MHz DRX NMR spectrometer using an inverse Bruker 5-mm TXI probe.
Determination of 13C glucose and 13C metabolites using liquid chromatography-tandem mass spectrometry (UPLC-MS)
Isotopically labeled 13C-glucose was purchased from Cambridge Isotope Labs. All UPLC-MS data were acquired with a Waters Acquity UPLC system coupled to a Waters Synapt HDMS quadrupole time-of-flight mass spectrometer. Experimental details are given in the "Supplementary Information".
Co-Immunoprecipitation studies
Co-IP studies were performed using the Thermo Scientific Pierce Co-Immunoprecipitation (Co-IP) Kit.
Data analysis
Data were compared by two-factor ANOVA with Bonferroni's post-test, or by Student's t-test where appropriate. Values are expressed as mean ± SD from 3-6 animals per condition. For analysis of changes in transcript levels, a one-way ANOVA was carried out and multiple comparisons between control and treatment groups were made using Dunnett's post-test. Data are expressed as mean ± SD. P<0.05 was considered statistically significant. For metabolic analysis, 3 repeats were performed. All numerical data are presented as mean ± SD from the replicate experiments. Unpaired t-tests and/or one-way analysis of variance (ANOVA) were used to determine differences between groups. The significance level was set at p<0.05 for all tests. For all statistical analyses, GraphPad Prism 5.0 software for Windows XP was used. The authors had full access to and take full responsibility for the integrity of the data. All authors have read and agree to the manuscript as written.
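To make the described pipeline concrete, the two-factor ANOVA with a Bonferroni-corrected post-test can be reproduced with standard open-source tools. The sketch below is illustrative only — the authors used GraphPad Prism — and the column names and numbers are hypothetical placeholders, not the study's data.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multitest import multipletests
from scipy import stats

# Hypothetical long-format table: one row per animal, with the measured
# outcome (e.g., infarct size) and the two factors of the design.
df = pd.DataFrame({
    "outcome":   [28, 31, 25, 12, 10, 14, 45, 48, 43, 44, 41, 47],
    "genotype":  ["WT"] * 6 + ["Per2KO"] * 6,
    "treatment": (["ischemia"] * 3 + ["IP"] * 3) * 2,
})

# Two-factor ANOVA (genotype x treatment) with type II sums of squares.
model = smf.ols("outcome ~ C(genotype) * C(treatment)", data=df).fit()
print(anova_lm(model, typ=2))

# Pairwise t-tests with Bonferroni correction as a simple post-test.
group = df.genotype + "/" + df.treatment
pairs = [("WT/ischemia", "WT/IP"), ("WT/ischemia", "Per2KO/ischemia")]
pvals = [stats.ttest_ind(df.outcome[group == a], df.outcome[group == b]).pvalue
         for a, b in pairs]
print(multipletests(pvals, method="bonferroni")[1])  # Bonferroni-adjusted p-values
```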
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

[Figure legend fragments recovered from extraction: Fig. 2 — ADORA2B-dependent post-translational PER2 stability in synchronized HMEC-1: PER2 protein levels following proteasomal inhibition (AM114, 10 µM); PER2 immunoprecipitation and ubiquitin immunoblot after treatment with vehicle or the ADORA2B agonist BAY 60-6583, with densitometry (n=3); blots for total and neddylated CUL1; deneddylation after BAY 60-6583 (10 µM) alone or with pretreatment with the ADORA2B antagonist PSB 1115 (1 µM); CSN5 siRNA (siCSN5) versus non-specific control siRNA; and Per2/neddylated Cullin blots in cardiac myocytes from wild-type or Adora2b −/− mice after hypoxic preconditioning (representative blots of three independent experiments). Figs. 3-4 — Per2 −/− mice or littermate controls matched in age, weight and gender exposed to 60 min of in situ myocardial ischemia followed by 2h of reperfusion, with or without IP pretreatment (4 cycles of 5 min ischemia/5 min reperfusion) or Adora2b agonist (BAY 60-6583) treatment; infarct sizes expressed as percent of the area at risk, with troponin I measurements (mean±SD; n=6); 13C-glucose (Cambridge Isotopes) administered intra-arterially either 30 min before ischemia (ischemia group: I) or at the onset of reperfusion (reperfusion group: R). Fig. 5 — Western blot analysis for Hif1a or Per2 protein from the area at risk after IP (representative blots of three), and Hif1a protein in isolated adult cardiomyocytes from wild-type or Per2 −/− mice exposed to ambient hypoxia (1%, 4h). Fig. 6 — following exposure to 12h of darkness, mice exposed to intense daylight (13,000 lux) for the indicated times versus room-light controls; cardiac glycolytic enzyme transcripts (Pfkm: 6-phosphofructokinase-m; Pgk1: phosphoglycerate kinase 1; Pk: pyruvate kinase; Pdk1: pyruvate dehydrogenase kinase, isozyme 1); light therapy over the indicated time periods followed by in situ myocardial ischemia (60 min) and 2h of reperfusion, with myocardial injury assessed by plasma troponin I and infarct staining (n=6 per group, mean±SD); in situ myocardial ischemia over a 24h period with troponin or infarct sizes (scale bar, 50 µm) correlated to Per2 protein levels in Per2 reporter mice (*p<0.05, n=8, mean±SD); and a schematic model of Adora2b-dependent Per2 stabilization and its role in regulating anaerobic glycolysis and cardiac metabolism during myocardial ischemia.]
|
2017-11-08T19:09:49.434Z
|
2012-04-15T00:00:00.000
|
{
"year": 2012,
"sha1": "a67ee9c6355263624f451e2f8dbe8c0ad195f0af",
"oa_license": "implied-oa",
"oa_url": "https://europepmc.org/articles/pmc3378044?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "62358e7edba2778966bd5af73f07f3a3a9febce1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
249265544
|
pes2o/s2orc
|
v3-fos-license
|
Some Fixed Point Results for Two Pairs of Mappings on Integral and Rational Settings
In 2000, P. Hitzler and A. K. Seda (Hitzler & Seda 2000) obtained a very important generalization of topology which they named dislocated topology. The corresponding generalized notion of metric obtained from dislocated topology was named dislocated metric. The fixed point theorem for a single map satisfying a contractive condition of integral type with a summable Lebesgue integrable mapping in complete metric space was first established by Branciari (Branciari 2002) in 2002. B. E. Rhoades (Rhoades 2003) further extended the theorem of Branciari (Branciari 2002) with a very general contractive condition. Extensions and generalizations for rational and integral type mappings in various spaces can be found in the literature of fixed point theory. This article establishes some common fixed point results satisfying integral and rational type contractive conditions with the common limit range property for two pairs of maps in dislocated metric space. We also establish a common fixed point result in dislocated metric space under compatibility and reciprocal continuity of mappings.
INTRODUCTION
In 1886, H. Poincaré first introduced the notion of a fixed point. In 1922, an important and remarkable result was presented by S. Banach (Banach 1922) for a contraction mapping in a complete metric space, which is famous as the Banach Contraction Principle (BCP). After the establishment of the BCP, various generalizations by several authors have been obtained in the literature of fixed point theory. Now, the theory of fixed points has become one of the most crucial and dynamic areas of research in nonlinear analysis. A remarkable generalization of the BCP was obtained by A. Meir and E. Keeler (Meir & Keeler 1969) with (ϵ-δ) notions.
The concept of compatible maps was initiated by G. Jungck (Jungck 1986). R. P. Pant (Pant 1999) introduced the concept of reciprocally continuous mappings in metric space. P. Hitzler and A. K. Seda (Hitzler & Seda 2000) obtained a generalization of topology which they named dislocated topology. The corresponding generalized notion of metric from dislocated topology is the dislocated metric. However, the concept of a dislocated metric space had already appeared in (Matthews 1986) by S. G. Matthews under the notion of metric domains.
Branciari (Branciari 2002) obtained a fixed point theorem for a single map satisfying a contractive condition of integral type with a summable Lebesgue integrable mapping in complete metric space. B. E. Rhoades (Rhoades 2003) extended the theorem of Branciari (Branciari 2002) with a more general contractive condition. The purpose of this article is to establish some fixed point theorems using the common limit range property for compatible maps, weakly compatible maps and reciprocally continuous maps satisfying integral and rational type contractive conditions in dislocated metric space.
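For the reader's convenience, Branciari's integral-type contractive condition — the starting point that the results below generalize — can be stated as follows (a standard formulation of the cited theorem, restated here):

```latex
\int_{0}^{d(Tx,\,Ty)} \varphi(t)\,dt \;\le\; c \int_{0}^{d(x,\,y)} \varphi(t)\,dt
\qquad \text{for all } x, y \in X,
```

where T is a self-map of a complete metric space (X, d), c ∈ [0, 1), and φ: [0, ∞) → [0, ∞) is a Lebesgue integrable mapping, summable on each compact subset, satisfying ∫₀^ε φ(t) dt > 0 for each ε > 0. Under these hypotheses T has a unique fixed point.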
PRELIMINARIES
We start with the following definitions, lemmas and theorems.

Definition 1 (Hitzler & Seda 2000) Let X be a non-empty set and let d: X × X → [0, ∞) be a function satisfying the following conditions:
(i) d(x, y) = 0 implies x = y;
(ii) d(x, y) = d(y, x) for all x, y ∈ X;
(iii) d(x, y) ≤ d(x, z) + d(z, y) for all x, y, z ∈ X.
Then d is called a dislocated metric on X, and the pair (X, d) is called a dislocated metric space.

Assume further that for each ε > 0 there exists δ > 0 such that, for all x, y ∈ X, condition (4) holds; then for each x_0 ∈ X, the sequence {y_n} in X defined by the corresponding iteration rule is a Cauchy sequence.
MAIN RESULTS
We establish a common fixed point theorem for two pairs of weakly compatible maps satisfying an integral and rational type contractive condition with the common limit range (CLR) property in dislocated metric space.
Theorem 4 Let (X, d) be a dislocated metric space and let A, B, S, T: X → X be mappings satisfying the following conditions:
where φ is a Lebesgue integrable mapping which is summable and non-negative such that ∫₀^ε φ(t) dt > 0 for each ε > 0, and where the rational expression in condition (6) is built from the distances d(Ax, Sx), d(By, Ty), d(Sx, By) and d(Ax, Ty).

Proof: Assume that the pair (A, T) satisfies the (CLR_A) property, so there exists a sequence {x_n} ⊆ X such that (9) holds for some x ∈ X. Since A(X) ⊆ S(X), there exists a sequence {y_n} ⊆ X such that lim_{n→∞} Ax_n = lim_{n→∞} Sy_n = Ax. We show that (10) holds. From relation (6) we have (11). Taking the limit as n → ∞ in (11), we get (12). Then, taking the limit as n → ∞ in (13), we conclude that (14) holds, which is a contradiction.
Au=Bv=Tu=w
This proves that u is the coincidence point of the maps A and T.
ATu = TAu ⟹ Aw = Tw
We show that Aw=w.
From relation (6) we obtain d(w, z) = 0; hence w = z. This establishes the uniqueness of the common fixed point.
We can obtain the following corollaries with the help of the above theorem.
Corollary 1 Let (X, d) be a dislocated metric space and let A, B, S: X → X be mappings satisfying the following conditions: A(X) ⊆ S(X) and B(X) ⊆ S(X)
where φ is a Lebesgue integrable mapping which is summable and non-negative such that ∫₀^ε φ(t) dt > 0 for each ε > 0.
Corollary 2 Let (X, d) be a dislocated metric space and let A, S, T: X → X be mappings satisfying the following conditions: A(X) ⊆ S(X) and A(X) ⊆ T(X)
where φ is a Lebesgue integrable mapping which is summable and non-negative such that ∫₀^ε φ(t) dt > 0 for each ε > 0.
Corollary 3 Let (X, d) be a dislocated metric space and let A, S: X → X be mappings satisfying the following conditions: A(X) ⊆ S(X)
where φ is a Lebesgue integrable mapping which is summable and non-negative such that ∫₀^ε φ(t) dt > 0 for each ε > 0, and ϕ is such that ϕ(t) < t for each t > 0.
Suppose that the mappings in one of the pairs (A, S) or (B, T) are compatible and reciprocally continuous. Arguing from the contractive condition, we obtain an inequality which is a contradiction, so d(Az, AAz) = 0 ⟹ Az = AAz. Hence, Az = AAz = SAz. Thus Az is a common fixed point of the mappings A and S.
Similarly, we obtain that Bw (= Az) is a common fixed point of the mappings B and T.
Uniqueness
If possible, let u and v (u ≠ v) be two common fixed points of the maps A, B, S and T. Now, by virtue of relation (17), the quantity d(u, v) is bounded by ϕ(3k d(u, v)) < 3k d(u, v), which is a contradiction. This shows that d(u, v) = 0 ⟹ u = v.
The proof is similar when the mappings B and T are assumed to be compatible and reciprocally continuous. This completes the proof of the theorem.
We can establish the following corollaries with the help of the above theorem.
|
2022-06-02T15:11:48.196Z
|
2021-12-31T00:00:00.000
|
{
"year": 2021,
"sha1": "615c34ec51102d01b08d1c4e06c3d156f78cbf0e",
"oa_license": "CCBYNC",
"oa_url": "https://www.nepjol.info/index.php/NJST/article/download/43346/33351",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "558dd41eb3a48a43c870e1dfdf2b979a436ba1be",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
}
|
15406760
|
pes2o/s2orc
|
v3-fos-license
|
Heaviside transform of the effective potential in the Gross-Neveu model
An unconventional way of handling the perturbative series is presented with the help of the Heaviside transformation with respect to the mass. We apply the Heaviside transform to the effective potential in the massive Gross-Neveu model and carry out a perturbative approximation of the massless potential by dealing with the resulting Heaviside function. We find that accurate values of the dynamical mass can be obtained from the Heaviside function already at finite orders, where just a few diagrams are incorporated. We prove that our approximants converge to the exact massless potential in the infinite-order limit. The small mass expansion of the effective potential can also be obtained in our approach.
Introduction
Even if the proof of dynamical massless symmetry breaking requires genuine nonperturbative approaches, it does not necessarily mean that the perturbative expansion is totally useless. There is the possibility that non-perturbative quantities in the massless limit may be approximately calculated via a perturbative approach. The purpose of this paper is to explore this possibility and show a concrete affirmative result by re-visiting the Gross-Neveu model 1 .
Let us consider the effective potential of the Gross-Neveu model. As is well known, ordinary massless perturbation expansion gives infra-red divergences and to cure the problem one must sum up all the one-loop diagrams. Then the summed result reveals the non-trivial vacuum configuration of <ψψ > and the dynamical generation of the mass.
The point we would like to note is whether such a non-perturbative effect requires, in the approximate evaluation, the infinite sum of perturbative contributions. To resolve the issue, we deal with a truncated series V_pert, without conventional loop summation, and study the approximate calculation of the effective potential V at m = 0.
A naive approximation would go as follows: to get around the infrared singularity, we turn to the massive case and probe V_pert(σ, m) at small m.
Since the limit, m → 0, cannot be taken in V pert (σ, m), we may choose some nonzero m (= m * ) and approximate the effective potential V (σ, m = 0) by V pert (σ, m * ).
However, the problem is that V_pert(σ, m) is not valid for small enough m. This is where the Heaviside function comes in. Our suggestion to resolve the problem is to consider the Heaviside transformation of V(σ, m) with respect to the mass 2,3 .
The Heaviside transform of the effective potential, V̂, is a function of σ and of x, which is conjugate to m. Then, the key relation is that lim_{m→0} V(σ, m) = lim_{x→∞} V̂(σ, x).
Of course this is valid only when both limits exist, and it does not apply to V_pert and its Heaviside function, V̂_pert, because those functions diverge in the limits. However, there arises the possibility that V̂(σ, ∞), and hence V(σ, 0), may be well approximated by putting some finite value of x into V̂_pert. This is because V̂_pert has a convergence radius much larger than that of V_pert. Although V̂_pert shares similar infra-red problems with V_pert, we will find that V̂_pert is much more convenient in this kind of massless approximation. Actually, we will demonstrate that, at finite perturbative orders where just a few Feynman diagrams are taken into account, the accurate dynamical mass is obtained via the Heaviside transform approach.
Throughout this paper, we use dimensional regularization 4 . We confine ourselves to the leading order of the large N expansion, and N is omitted for the sake of simplicity.
Heaviside transform with respect to the mass
In this section we summarize basic features of the Heaviside transform and illustrate our strategy by taking a simple example.
Let Ω(m) be a given function of the mass m. The Heaviside transform of Ω(m) is given by the Bromwich integral,

Ω̂(x) = (1/2πi) ∫_{s−i∞}^{s+i∞} e^{mx} Ω(m)/m dm,   (1)

where the vertical straight contour should lie to the right of all the possible poles and cuts of Ω(m)/m (in (1), the real parameter s specifies the location of the contour). Since Ω(m)/m is analytic in the domain Re(m) > s, Ω̂(x) is zero when x < 0. It is known that the Laplace transformation (of the second kind) gives back the original function as

Ω(m) = m ∫ e^{−mx} Ω̂(x) dx.   (2)

Since Ω̂(x) = 0 for x < 0, the region of the integration effectively reduces to [0, ∞).
It is easy to derive the relation

lim_{m→0} Ω(m) = lim_{x→∞} Ω̂(x),   (3)

where both limits are assumed to exist. As noted before, the point of our scheme consists in utilizing Ω̂ to approximate the massless value of Ω, Ω(0), by relying upon (3).

To illustrate our strategy based on (3), let us consider a simple example. Given the following truncated series in 1/m,

f_L(m) = Σ_{k=0}^{L} (−1)^k / m^{k+1},   (4)

we try to approximate the value of f(m) = f_∞(m) = (1 + m)^{−1} at m = 0, f(0) = 1, by using information contained only in the truncated series (4). Since the convergence radius, ρ, of f_∞(m) is unity, we cannot have an approximation better than 1/2 from f_L(m). However, the situation changes if we deal with its Heaviside function.
The Heaviside transform of f_L(m) is given by

f̂_L(x) = Σ_{k=0}^{L} (−1)^k x^{k+1}/(k+1)! · θ(x),   (5)

where we used H[1/m^{k+1}] = x^{k+1}/(k+1)!. From (5) it is easy to find that f̂(x) = (1 − e^{−x})θ(x) and that (3) holds for f and f̂. For our purpose it is crucial that ρ = ∞ for f̂_∞ while ρ = 1 for f_∞. The infinite convergence radius enables us to probe the large-x behavior of f̂ by f̂_L to arbitrary precision by increasing the perturbative order. Due to the truncation, however, f̂_L diverges as x → ∞.
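As a quick sanity check (ours, not part of the original paper), the pair f(m) = (1 + m)^{−1} and f̂(x) = (1 − e^{−x})θ(x) can be verified symbolically against the inversion formula (2) and the agreement condition (3); a minimal sketch using sympy:

```python
import sympy as sp

m, x = sp.symbols("m x", positive=True)

# Heaviside function of f from the text; theta(x) = 1 on the integration range x > 0.
f_hat = 1 - sp.exp(-x)

# Inversion formula (2): f(m) = m * Integral_0^oo exp(-m*x) * f_hat(x) dx.
f = sp.simplify(m * sp.integrate(sp.exp(-m * x) * f_hat, (x, 0, sp.oo)))
print(f)  # -> 1/(m + 1)

# Agreement condition (3): lim_{m->0} f(m) equals lim_{x->oo} f_hat(x).
print(sp.limit(f, m, 0), sp.limit(f_hat, x, sp.oo))  # -> 1 1
```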
Then, in approximating f̂(∞) and therefore f(0), we stop taking the limit and input some finite value into x. The input value of x, say x*, should be taken as large as possible within the reliable perturbative region in x. At this point we understand that the good convergence property of f̂_L is one of the advantages of the Heaviside function.
Since the upper limit of the perturbative region is not a rigorously defined concept, we determine the input value x* in a heuristic way. Our suggestion to fix x* is as follows: the series (5) is valid for small x but breaks down at large x. The breakdown appears as the domination of the highest term in f̂_L, which leads to the unlimited growth or decrease of the function (see Fig. 1). Thus x* is located somewhere around the beginning of the dominating behavior. Then, for odd L and large even L, we find a plateau region just before the domination, and that region represents the end of the perturbative regime. Thus, we choose the stationary point in the plateau region as representing the typical violation of the perturbation expansion. Hence we fix x* by the stationarity condition,

d f̂_L(x)/dx |_{x=x*} = 0.   (7)

The condition (7) reads as

d f̂_L(x)/dx = Σ_{k=0}^{L} (−x)^k/k!  (x > 0),   (8)

and reduces to

Σ_{k=0}^{L} (−x*)^k/k! = 0.   (9)

The solution exists for odd L and it varies with L. We find from (9) that the solution x* tends to ∞ as L → ∞; more precisely, the solution scales for large L as given in (10). We have explicitly carried out the numerical experiment to several higher orders and obtained the results for L = 1, 5, 9, 13, 17, which converge in the L → ∞ limit in accordance with the scaling relation (10).
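To make the stationarity prescription concrete, the following numerical sketch (ours, not the authors' code) builds f̂_L from (5), solves condition (9) for its largest real root x*, and evaluates f̂_L(x*) as the approximation to f(0) = 1:

```python
import math
import numpy as np

def f_hat(x, L):
    # f_hat_L(x) = sum_{k=0}^{L} (-1)^k x^{k+1} / (k+1)!   (theta(x) = 1 for x > 0)
    return sum((-1) ** k * x ** (k + 1) / math.factorial(k + 1) for k in range(L + 1))

def x_star(L):
    # Stationarity condition (9): sum_{k=0}^{L} (-x)^k / k! = 0.
    # np.roots expects coefficients ordered from the highest power downwards.
    coeffs = [(-1) ** k / math.factorial(k) for k in range(L + 1)][::-1]
    roots = np.roots(coeffs)
    real = roots[np.abs(roots.imag) < 1e-9].real
    return real.max()  # input the largest real root, as in the text

for L in (1, 5, 9, 13, 17):
    xs = x_star(L)
    print(f"L = {L:2d}   x* = {xs:7.4f}   f_hat_L(x*) = {f_hat(xs, L):.6f}")
# The printed approximants move towards the exact massless value f(0) = 1 as L grows.
```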
Up to now we have concentrated on approximating the massless value. We here point out that our scheme is capable of constructing the small m expansion of Ω(m), that is, the scheme allows the approximation of the function itself when m is small.
Consider in general the approximation of the derivatives at m = 0, which is needed when one constructs the small m expansion of Ω(m). The coefficients Ω^(k)(0) (k = 1, 2, 3, · · ·) can be approximated as follows. The starting formula expresses Ω^(k)(0) through quantities α_k(x) constructed from the Heaviside transform; here H above the arrow represents the Heaviside transformation. Hence, from the agreement condition (3), we find that Ω^(k)(0) is recovered from α_k(x) in the limit x → ∞. Now, in our perturbative approach we use Ω_L (the series truncated at order L) for the real Ω. Then, we show that we can simulate α_k(∞) by α_k(x*_k), where x*_k may be fixed following the same reasoning we presented for the case of the f(0) approximation.
Namely, we guess that the breakdown of the perturbative expansion is represented by the plateau region, if it exists, just before the unlimited growth of the size of the function. Therefore we use the stationarity condition (17) and find that x*_k satisfies the same condition as that for x*. Thus the solution of (17) is universal for all k and fixes the coefficients of the small m expansion to all orders. This is desirable since the uncertainty connected with the choice of x* and x*_k is minimized. We note that for α_k(x*) (k = 1, 2, 3, · · ·) the integration is necessary, and the θ(x) and δ functions should be kept in the integrand in general.
As an example let us calculate the small m expansion of f(m). The coefficient α_k is given at order L by (18). By substituting x* at L = 17 into α_k, we obtain the satisfactory approximant (19).

Application to the effective potential

Having prepared the basic analysis, we turn to a model field theory which is of our main interest. Consider the Gross-Neveu model at the leading order of the large N expansion 1 . The Lagrangian is given within dimensional regularization 4 in (20), where the MS scheme 5 was used for the subtraction. It is well known that the model generates the dynamical fermion mass, m_dyn = Λ, where Λ denotes the renormalization group invariant scale in the MS scheme.
At the leading order of the 1/N expansion, the effective potential is given by the sum of diagrams shown in Fig. 2. The straightforward calculation gives the series (22). We note that although naive power counting with respect to N implies that contributions with many σ-legs correspond to higher orders in 1/N, they must be included since the vacuum value of σ is of order √N.
The series (22) converges only when |gσ/m| < 1 and hence the small m behavior relevant to the dynamical mass generation is not known from (22). However, the Heaviside transformation enlarges the convergence radius and enables us to study the large x behavior of the corresponding Heaviside function, V̂(σ, x), as we can see below.
To obtain V̂(σ, x) we need to know the transforms of m^k (k = 0, 1, 2, · · ·), m log m, m² log m and 1/m^k. Here the following formula, (23), is basic. For example, from (1) we have (24); the use of (23) on (24) then leads to (25) and (26). The transformation of m^k is easily obtained from these as (27) and (28). The δ functions are needed when one carries out Laplace integrals for the Heaviside functions. This is because the δ function terms cancel out the divergences coming from the first terms of (25) and (26), for example. Since the integration over x is, however, not necessary as long as the approximation in the massless limit is concerned, we omit, for a while, the δ functions and set θ(x) = 1 in the transformed functions. Now, using the results (24), (25), (26), (27), (28) and H[m^{−k}] = x^k/k!, we find the series (29). Note that, due to the appearance of 1/k! in H[1/m^k], the series converges for any large x. Therefore the large x behavior of V̂ can be easily accessed by increasing the order of the expansion. This is one of the advantages of V̂ over V.
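As a small consistency check on the basic transform H[m^{−k}] = x^k/k! quoted above, one can verify it against the inversion formula (2) for a few concrete values of k (our sketch, not from the paper):

```python
import sympy as sp

m, x = sp.symbols("m x", positive=True)

# Verify that m * Integral_0^oo exp(-m*x) * x**k / k! dx reproduces m**(-k).
for k in range(1, 5):
    expr = m * sp.integrate(sp.exp(-m * x) * x**k / sp.factorial(k), (x, 0, sp.oo))
    print(k, sp.simplify(expr))  # -> m**(-1), m**(-2), ...
```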
We turn to the approximation of the massless effective potential by the perturbative series at order L, V̂_L. At L-th order, we have just the first L + 1 terms of (29) and find (30). The input x* will be determined as in the previous section. Actually, the breakdown appears as the domination of the last term in V̂_L, which shows up as its unlimited behavior for large x. This can be seen in Fig. 3. Before the domination the function exhibits a stationary behavior for odd L and large even L. We find that the plateau region represents the end of the reliable perturbative regime and thus we fix x* by the stationarity equation (31). If there are several solutions we should input the largest one into x*, for the obvious reason. Now, the condition (31) gives x* as constant/gσ for odd L †. For odd L, the substitution of the solution into V̂_L gives the optimized potential V_opt(σ). For example, for L = 3 we have the solution gσx* = 1.59607, and this gives the optimized potential (32). The dynamical mass is given from V_opt in the standard way (33). The above result is quite good. Thus, via the Heaviside transform approach, the dynamical mass is approximated from perturbative information alone.
The small mass expansion can also be obtained. Our task is just to substitute the solution of (31) into the approximate coefficients (34). To perform the integration, we need the full form of V̂_L including the θ and δ functions.
The full form is given in (35). From (35), α_k is given at L-th order as (36), a sum of terms (−1)^n X^{n−1}/(n!(n−1)), where X = x* gσ. At L = 11, for example, we obtain the approximant (37), which is quite accurate, as comparison with the exact result (38) shows.

Before closing this section, we prove that our approximants for the massless potential converge to the exact result in the L → ∞ limit. That is, lim_{L→∞} V̂_L(σ, x*) = V(σ, 0). From (30) we find that lim_{L→∞} ∂V̂_L/∂x can easily be summed up and, using lim_{L→∞} V̂_L = V̂ (given by (29)), we arrive at (39). The perturbative truncation of (39) is given by expanding exp[−gσx] to the relevant orders. Since ρ = ∞ for the series expansion of (39), the solution of the truncated version of (39) = 0 approaches ∞ in the L → ∞ limit. More precisely, we find the scaling of the solution for large L, (40). Now consider the remainder R̂_L, defined by R̂_L = V̂_∞ − V̂_L. Since V̂_L + R̂_L = V̂_∞ and ρ = ∞ for V̂_∞, it is sufficient to show that R̂_L(σ, x*) vanishes as L → ∞. Note here that x* depends on L and behaves at large L as (40). Now, using Stirling's formula and (40), we obtain a bound that vanishes in this limit, which proves the convergence.
Discussion
One reason for the success of our approximate calculation is that the transformed function has an infinite radius of convergence. The other reason is that, as the order increases, V̂_L(σ, x) quickly approaches its value at x = ∞ for fixed σ. If one uses the closed form (44), one finds the reason by expanding (44) for large gσx: the corrections are exponentially suppressed. By contrast, the original function has the power-like expansion shown in (38).
Thus it is clear that the approximation of the massless potential is more convenient in V̂_L, since it approaches the "massless" value much faster than V_L does. The reason why the transformed function behaves so well is not known to us.
We have shown that the perturbative series at finite orders produces the approximate massless effective potential and the dynamical mass. The deformation of the effective potential was made by a Heaviside transform with respect to the mass, and the stationarity prescription for fixing the input value x* was found to work well. It was also shown that our scheme is capable of approximating the small-m behavior. Thus the Heaviside transform drastically improves the status of the perturbative approximation of physical quantities. We are currently studying a full approximation method by which the general case, where the explicit mass is not small, can be treated; the results of this investigation will be reported elsewhere. (In the figures, for simplicity, we have set gσ = 1 and Λ = 1.)
Long-term Patient-reported Quality of Life and Pain After a Multidisciplinary Clinical Pathway for Elderly Patients With Hip Fracture: A Retrospective Comparative Cohort Study
Introduction: There is an increase in the incidence of hip fractures in the ageing population. The implementation of multidisciplinary clinical pathways (MCP) has proven to be effective in improving the care for these frail patients, and MCP tends to be more effective than usual care (UC). The aim of this study was to analyze potential differences in patient-reported outcome among elderly patients with hip fractures who followed MCP versus those who followed UC. Materials and Methods: This retrospective cohort study included patients aged 65 years or older with a low-energy hip fracture, who underwent surgery in the Maastricht University Medical Center, Maastricht, the Netherlands. Two cohorts were analyzed: the first contained patients who underwent UC in 2012 and the second contained patients who followed MCP in 2015. Collected data regarded demographics, patient-reported outcomes (Short Form 12 [SF-12] and the Numeric Rating Scale [NRS] to measure pain), and patient outcome. Results: This cohort study included 398 patients, 182 in the MCP group and 216 in the UC group. No differences in gender, age, or American Society of Anesthesiologists classification were found between the groups. No significant differences were found in SF-12 and NRS data between the MCP group and the UC group. In the MCP group, significantly lower rates of postoperative complications were found than in the UC group, but mortality within 30 days and one year after the hip fracture was similar in both groups. Discussion: Although the effects of hip fractures in the elderly on patient-reported outcome, pain, and quality of life have been addressed in several recent studies, the effects of MCP on long-term outcome were unclear. Conclusion: A multidisciplinary clinical pathway approach for elderly patients with a hip fracture is associated with a reduced time to surgery and reduced postoperative complications, while no differences were found in quality of life, pain, or mortality.
Introduction
The ageing population is growing rapidly and, with it, the incidence of hip fractures among elderly patients is increasing as well. 1,2 The high rate of comorbidities in this frail group is associated with high mortality rates. [3][4][5][6] Moreover, both the functional outcome and life expectancy decrease in elderly patients with hip fractures. Several studies have shown a decrease in quality of life, mobility, and ability to perform activities of daily living in this population. [7][8][9][10][11] Implementation of a multidisciplinary clinical pathway (MCP), 12 which has been developed to optimize medical care in various patient groups, [13][14][15][16][17][18][19] is one of the few effective measures to improve the outcome. In fact, the usage of MCP for elderly patients with hip fractures tends to be more effective than usual care (UC), 20,21 resulting in lower rates of postoperative complications and a decrease in the 30-day mortality rate. [22][23][24] Although the effect of MCP on postoperative complications and mortality was established before, the comparison between UC and MCP on patient-reported outcome (patient-reported quality of life and pain) has not been studied. The aim of this study was therefore to compare patient-reported outcome in elderly patients with a surgically treated hip fracture following UC versus those following MCP.
Materials and Methods
This retrospective cohort study included patients aged 65 years or older with a surgically treated low-energy proximal femur fracture (femoral neck and/or pertrochanteric fracture; AO/OTA type 31 A [trochanteric fracture] and 31 B [femoral neck fracture]) who underwent surgery in the Maastricht University Medical Center, Maastricht, the Netherlands. Patients with a high-energy hip fracture (defined as motor vehicle and motorcycle accidents, a collision on a moped or bicycle at >35 km/h, a pedestrian struck by a motor vehicle at >10 km/h, or a fall from 2 times the body height), patients with an AO/OTA type 31 C (femoral head) proximal femoral fracture, patients with >2 fractures, and patients not living in the hospital area were excluded. Data from both cohorts were separately collected and analyzed. The first cohort comprised all patients treated during 2012, before the implementation of MCP; it is therefore referred to as the UC cohort. The second cohort comprised all patients treated during 2015, 2 years after the implementation of MCP, and is therefore referred to as the MCP cohort. Surgical treatment was performed according to the Dutch Guidelines. 25 The usual care protocol includes standard traditional treatment by an orthopedic trauma surgeon at the trauma unit with follow-up at the out-patient clinic. Physiotherapy is prescribed when the patient is discharged home. Multidisciplinary clinical pathways address the management of care that patients need from arrival in the emergency department until they are discharged to the rehabilitation unit or a nursing home. The multidisciplinary team consists of an orthopedic trauma surgeon, a geriatrician, an anesthesiologist, and a physiotherapist. These disciplines are all actively involved in the decision-making process regarding the care that patients need from the first presentation at the emergency department until they are discharged from the hospital. Additional medical specialties remain available for consultation depending on the comorbidities of the patient. The aim of the team is to perform surgical treatment within 24 hours of admission and to achieve discharge within 4 days. To achieve this goal, agreements have been set in place with rehabilitation facilities to transfer the surgically treated patients to a patient-centered destination as soon as possible. This may be either a rehabilitation center or a nursing home with rehabilitation facilities. The postoperative protocol for both groups, MCP and UC, was early mobilization and early full weight bearing.
Data were retrospectively collected from the medical records by 2 independent researchers. Demographics included age at time of fracture, gender, ASA (American Society of Anesthesiologists; assessing the fitness of patients before surgery, type 1-6), 26 Charlson-comorbidity score (classifying prognostic comorbidity, a higher score represents additional comorbidities), 27 time to operating theatre (in hours), type of fracture (femoral neck fracture or pertrochanteric fracture), type of surgical procedure (prosthesis, intramedullary nail or dynamic hip screw), and length of stay (in days). Furthermore, patient-reported questionnaires were sent by mail to all surviving individuals after a minimum of 2 years to follow-up on their quality of life and pain. To ensure a sufficient response rate, all eligible participants were contacted by telephone prior to sending the questionnaires by regular mail. During this telephone call, informed consent was obtained. If the questionnaires were not returned within 30 days after sending, a new telephone call followed to improve participation.
The primary outcome measure was the patient-reported outcome questionnaire, which included 2 items, the quality of life and pain. The quality of life was measured with the Short Form 12 (SF-12). 28 The SF-12 consists of 12 items that assess 8 dimensions of health: physical functioning, role-physical, bodily pain, general health, vitality, social functioning, role-emotional, and mental health. The SF-12 measures various aspects of physical and mental health from which physical composite score (PCS) and mental composite score (MCS) can be calculated, ranging from a minimum of 0 to a maximum of 100. The intensity of pain was measured with the Numeric Rating Scale (NRS; 0 is no pain and 10 is worst pain). 29 The secondary outcome parameters were complications, delirium, the 30-day mortality, and one-year mortality rate. Postoperative complications (eg, complications related to the fracture and general complications not related to the fracture) were defined as any adverse event that required intervention; these were recorded as either present or non-present. Data on the occurrence of postoperative delirium were collected separately.
The medical ethics committee of Maastricht University Medical Center, Maastricht, the Netherlands approved this study and informed consent for sending the questionnaire was given by all patients.
Statistical Analysis
Statistical analysis was performed with IBM SPSS Statistics (Version 23.0, Armonk, NY). Descriptive statistics were used to describe the demographic data and baseline characteristics for the entire study population. Independent samples t-tests were used for normally distributed continuous data and χ² tests for categorical variables. Results are presented as either mean (standard deviation) or as frequencies and percentages. In case of non-parametric data, the median with the interquartile range is described. The level of statistical significance was set at an α of .05.
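For readers without SPSS, the two workhorse tests named above are equally available in open-source tooling. The sketch below uses hypothetical per-patient values simulated from the reported summary statistics (the raw data are not public), together with the reported complication proportions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous outcome: time to surgery (hours) in the two cohorts,
# simulated from the reported means/SDs (MCP 18.2 (9.3) vs UC 25.3 (13.9)).
mcp = rng.normal(18.2, 9.3, size=182)
uc = rng.normal(25.3, 13.9, size=216)
t, p_t = stats.ttest_ind(mcp, uc)          # independent-samples t-test
print(f"t = {t:.2f}, p = {p_t:.4g}")

# Hypothetical categorical outcome: postoperative complications (yes/no),
# counts reconstructed from the reported 45.1% (MCP) and 82.9% (UC).
#                 complication  no complication
table = np.array([[82, 100],               # MCP (45.1% of 182)
                  [179, 37]])              # UC  (82.9% of 216)
chi2, p_c, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p_c:.4g}")
```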
Baseline Characteristics
This cohort study included 398 patients, 216 in the UC group and 182 in the MCP group. Patients in the UC group had more comorbidities (mean Charlson score 7.0 [2.6] vs 6.1 [1.9]; P < .01) and were more likely to have a femoral neck fracture than those in the MCP group. No differences in gender, age, or ASA classification were found between the groups (P > .05). Characteristics of patients in the MCP and UC groups are summarized in Table 1.
In-Hospital Outcome
The mean time to surgery was significantly shorter in the MCP group than in the UC group: 18.2 (9.3) hours versus 25.3 (13.9) hours (P < .01). The number of patients who had to wait more than 24 hours was also significantly lower in the MCP group than in the UC group: 17.6% versus 44.9% (P < .01). There was a significant difference in the type of surgical procedures performed in the MCP versus UC groups, with more prostheses and intramedullary nails in the UC group for femoral neck and pertrochanteric fractures, respectively ( Table 2).
The mean length of hospitalization was significantly shorter in the UC group than in the MCP group: 12.3 (7.3) days versus 15.1 (15.7) days (P = .02).
Patient-reported Outcome
After at least 2 years of follow-up, only 49.9% of the patients were still alive. Fourteen of the remaining 159 patients were unable to complete the questionnaire due to cognitive impairment. The final overall response rate for the patient-reported questionnaire (SF-12 and the NRS) was 65.6% (95 of 145 participants). The response rate was similar in the MCP group and the UC group (69% vs 60.7%; P = .30). The patient-reported outcome as measured with the SF-12 showed similar scores for both the MCP and the UC cohort. Quality of life, also measured with the SF-12, and pain, measured with the NRS, were likewise similar for both cohorts (Table 3).
Complications and Mortality Outcome Measures
Postoperative complications, defined as complications requiring intervention, were common in the overall study population (65.6%). The incidence of postoperative complications was significantly lower in the MCP group (45.1%) compared to the UC group (82.9%, P < .01; Table 4). Postoperative delirium occurred less frequently in the MCP group than in the UC group (19.2% vs 45.4%; P < .01). Mortality rate within 30 days and one year after admission was 8.0% and 35.4%, respectively. The difference between the MCP group and the UC group was not significant.
Discussion
Although the effects of hip fractures in the elderly on patient-reported outcome, pain, and quality of life have been addressed in several recent studies, 23,30-33 the effects of MCP on long-term outcome were unclear. This retrospective cohort study found that the use of MCP for elderly patients with a hip fracture was associated with a reduced time to surgery and reduced postoperative complications. Nonetheless, patient-reported long-term quality of life and pain are similar for patients treated according to MCP or UC. Surprisingly, neither the 30-day nor the 1-year mortality rate was affected by the implementation of MCP.
The reduced time to surgery that was seen in our study was also found by other authors. 19,34,35 This reduced time to surgery was associated with a significant reduction of the complication rate in our study. These findings are in line with several studies that described a significant increase in complications after 24 hours 36,37 and even a significant increase in mortality after 48 hours. 38 Although the reduced time to surgery in our study showed an effect on the complication rate, an effect on neither the 30-day nor the one-year mortality was seen.
A postoperative complication rate up to 59% has been reported in elderly people with hip fractures. 17,19,34,39 We found comparable rates (45%) of postoperative complications in the MCP group, and the complication rate was lower than that in the UC group. This finding is in line with 2 other studies that reported similar differences. 20,40 Delirium was scored as a separate complication in our study, since it is the most common complication in elderly patients with hip fractures. In our study, the overall incidence of delirium was 33.4%, but the incidence was significantly lower in the MCP group. This could be a direct result of the MCP approach, in which a geriatrician is consulted to impose preventive measures in each patient.
The MCP approach aims to shorten the length of stay, but our study shows a significantly longer length of stay in the MCP group compared to the UC group. This contradicts all other studies regarding MCP, which have shown a significant reduction in length of stay. [17][18][19]34,35,39,41,42 A possible explanation for this difference is the discharge destination of the patients. Instead of going home, most patients are transferred to geriatric rehabilitation; this makes the discharge date dependent on the availability of such a center and delays discharge in many cases. The MCP for elderly patients with hip fractures might be beneficial and cost-effective regarding hospital care, as the significant reduction in complications could make the MCP cost-effective. However, from the patient perspective, the MCP has little benefit, as no significant differences in patient-reported outcome or mortality were observed.
In the interpretation of our data, some limitations have to be taken into account. The retrospective nature of the study limits the data quality, and the use of a questionnaire introduces a risk of selection bias. However, the retrieved data were found to be well documented. The response rate for the patient-reported outcome, which included the SF-12, was substantial and comparable in both groups, making the comparison between the groups in our opinion justified. It is evident that a prospective cohort study in elderly patients with hip fractures is needed to address the (cost-)effectiveness, the functional outcome, and the long-term patient-reported outcome of the MCP strategy. The main practical concern of this study was the long follow-up period in this frail patient group.
Conclusion
Although this retrospective comparative cohort study shows that the MCP approach for elderly patients with a hip fracture is associated with a reduced time to surgery and reduced postoperative complications, no differences were found in long-term patient-reported quality of life or pain. Moreover, there was no significant difference in 30-day and one-year mortality.
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
Transcatheter arterial embolization for advanced gastric cancer bleeding
Abstract To investigate computed tomography and angiography findings and clinical outcomes after transcatheter arterial embolization for acute upper gastrointestinal bleeding from advanced gastric cancers. From January 2005 to December 2014, 58 patients with pathologically proven gastric cancer were treated at our institution with transcatheter arterial embolization due to acute upper gastrointestinal bleeding recalcitrant to endoscopic treatment. The electronic medical records for each patient were reviewed for clinical presentation, endoscopy history, computed tomography and angiographic findings, blood transfusion requirements, and follow-up results. Angiography findings were positive in 13 patients (22.4%): contrast extravasation was found in 9 patients and pseudoaneurysm in 4 patients. All patients with positive angiograms underwent selective embolization treatment. Those with negative angiography findings underwent empirical embolization. Gelfoam, n-butyl cyanoacrylate, coils, or a combination of these were used as embolic agents. The overall clinical success rate was 72.4% (42/58), and the success rate for patients with positive angiography was 53.8% (7/13). The median survival was 97.5 days (range, 7–1415 days), and the 1-month survival rate was 89.6% (52/58). The 1-month survival rate of the clinical success group was 95.2% (40/42), which was significantly higher than that of the clinical failure group (P = .04). The clinical success group also required significantly fewer transfusions (2.43 units, range 0–24 units) (P = .02). Transcatheter arterial embolization is a highly effective treatment for advanced gastric cancer with active bleeding. It should be considered as an additional treatment, especially when endoscopic or surgical treatment fails or when these approaches are difficult.
Introduction
Bleeding from advanced gastric cancer accounts for 1% to 8% of the total prevalence of acute upper gastrointestinal bleeding (UGIB). [1][2][3] Such bleeding may cause delays in scheduled chemotherapy, increased transfusion requirements, and even death. [4,5] Esophagogastroduodenoscopy (EGDS) is the treatment of choice for UGIB because it enables a specific diagnosis of the cause of the bleeding, and hemostasis can be achieved using various techniques. However, it may fail to stop the bleeding because the exact focus of bleeding may be masked by profuse blood in the stomach or blood oozing diffusely from the tumor mass. [6][7][8] Surgical treatment can be performed in cases of endoscopic failure, but this leads to high morbidity and mortality. [9][10][11][12] Due to advances in angiographic devices and embolic materials, embolization is becoming accepted as the first-line treatment modality for this condition. [2] However, only a few studies have been conducted on this subject, and these were based on a relatively small number of patients. The present study aimed to investigate clinical outcomes after transcatheter arterial embolization (TAE) for UGIB in advanced gastric cancer.
Study population
The institutional review board approved this study, and informed consent was waived due to the retrospective nature of the research. We retrospectively reviewed the electronic medical records of 58 patients with pathologically proven advanced gastric cancer (46 males, 12 females; mean age ± standard deviation [SD], 62.5 ± 12.79 years; range, 22-87 years), who were treated at our institution with TAE for UGIB due to gastric cancer from January 2005 to December 2014.
Endoscopy and computed tomography
Endoscopy is the first-line diagnostic and therapeutic modality for patients with suspected acute arterial UGIB, including those with advanced gastric cancer, at our institution. However, if active bleeding or pseudoaneurysm is found on computed tomography (CT), angiography can be performed prior to endoscopy at the discretion of the primary physician.
Contrast-enhanced CT is performed before angiography when recurrent bleeding occurs after endoscopic treatment, when UGIB is still suspected even after negative endoscopic findings, or when endoscopy is not applicable.
An experienced radiologist reviewed the CT images retrospectively, and the findings were divided into 4 categories: presence of contrast extravasation (category 1), presence of arterial pseudoaneurysm (category 2), prominent tumor feeding vessel (category 3), and no visible abnormal findings (category 4). Categories 1 to 3 were defined as positive CT findings, while category 4 was defined as negative.
Angiography and embolization
Emergency angiography was performed before TAE on all patients in this study. Celiac and superior mesenteric arteriography were performed using a 5-F Rösch-Hepatic or Cobra catheter (Cook, Bloomington, IN, USA). If there were no definite signs of bleeding, further coaxial selective angiography was performed using a 2.4-F or 2.0-F microcatheter (Renegade HI-FLO [Boston Scientific, Natick, MA, USA] or Progreat ɑ [Terumo, Somerset, NJ, USA]) in the left gastric, right gastric, short gastric, posterior gastric, gastroduodenal, or pancreaticoduodenal arteries to rule out false-negative results.
The angiographic findings were classified into 3 categories: presence of contrast extravasation (category 1), presence of arterial pseudoaneurysm (category 2), and other tumor staining and/or no visible abnormal findings (category 3). Categories 1 and 2 were defined as positive, while category 3 was defined as negative.
Transcatheter arterial embolization was performed in all cases. Although the choice of embolic material was at the operator's discretion, all procedures were conducted in accordance with the following strategy. In patients with positive angiographic findings for active bleeding or pseudoaneurysm, superselective embolization was performed with n-butyl cyanoacrylate (NBCA) or microcoils. The tip of the microcatheter was inserted into the target artery as close as possible to the focus of bleeding. Under continuous fluoroscopic monitoring, 5% dextrose solution was used to flush the microcatheter. Then a mixture of NBCA and lipiodol (ratio of 1:2 to 1:3) was infused using a 1-ml syringe. When the degree of selection was insufficient for safe NBCA injection, TAE using microcoils was performed instead. For patients with angiographic findings of hypervascular tumor staining, we used microcatheters to select the feeding vessels, which were embolized using Gelfoam particles. However, when angiography showed no abnormal findings, empirical embolization was performed on the left gastric artery (LGA) as the main target vessel and on additional gastric arteries that, based on CT findings, were suspected of being tumor feeders. [13]
Endpoints
The primary endpoint of this study was the clinical success rate, which was defined as the patient's survival without recurrent bleeding on the 14th day after embolization. [13] Recurrent bleeding was diagnosed based on comprehensive consideration of follow-up diagnostic studies, including endoscopy, angiography, and contrast-enhanced CT, and on the clinical assessment of the physician based on symptoms related to bleeding, such as hematemesis/hematochezia/melena, hemodynamic instability, and decreased hemoglobin levels. Technical success, the 1-month survival rate, and a reduced requirement for red blood cell (RBC) transfusions were the secondary endpoints. In cases without active arterial bleeding, technical success was defined as either tumor devascularization or stasis of arterial flow in the target vessels. In cases where the angiography revealed active arterial bleeding, technical success was defined as the disappearance of extravasation or complete exclusion of the pseudoaneurysm. [14] One-month survival was defined as being alive on the 30th day after embolization. We reviewed the blood transfusion history of each patient and divided the transfusions into 3 groups based on the number of packed RBCs received from the time of admission to the time of the procedure (early transfusion), from the end of the procedure to 24 hours after the procedure (mid-transfusion), and from 24 hours after the procedure to discharge (late transfusion).
Statistical analysis
The data were tested for normal distribution using the Kolmogorov-Smirnov test. Normally distributed variables were compared with the independent t test and presented as the mean ± SD. Group comparisons of categorical variables were performed using the Chi-Squared test or, for small cell values, Fisher exact test. All statistical analyses were performed using SPSS version 19.0 (SPSS, IBM, Chicago, IL, USA), with P < .05 indicating statistical significance. Table 1 summarizes the clinical and radiological data for all patients categorized by clinical success or clinical failure.
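As a sketch of the decision rules described above (normality screening before the t-test, and Fisher's exact test for small cell counts), using the reported success counts and otherwise hypothetical values:

```python
import numpy as np
from scipy import stats

# Normality screen for a continuous variable (hypothetical transfusion counts).
# Note: using fitted parameters in a KS test is a rough screen, not a formal
# Lilliefors-corrected test.
rng = np.random.default_rng(1)
transfusions = rng.exponential(scale=2.4, size=42)
ks_stat, ks_p = stats.kstest(transfusions, "norm",
                             args=(transfusions.mean(), transfusions.std()))
print(f"KS p = {ks_p:.3f} -> "
      f"{'non-parametric summary' if ks_p < 0.05 else 't-test acceptable'}")

# 2x2 table with small cells, built from the reported success counts:
# clinical success by angiogram result (7/13 positive vs 35/45 negative).
#                    success  failure
table = np.array([[7, 6],
                  [35, 10]])
odds, p_fisher = stats.fisher_exact(table)
print(f"Fisher exact p = {p_fisher:.3f}")  # the paper reports P = .22 here
```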
Endoscopy and CT
Fifty-three of the 58 patients underwent endoscopy before TAE. Two patients did not undergo preprocedural endoscopy because active bleeding or a pseudoaneurysm had already been found on a CT scan, while another 3 had vital signs that were too unstable to undergo endoscopy.
Technical and clinical success
Technical success was achieved in 100% of the procedures, while the overall clinical success rate was 72.4% (42/58). The clinical success rate of selective embolization for angiographically positive patients was 53.8% (7/13), and that of empirical embolization for angiographically negative patients was 77.8% (35/45). However, there was no significant difference in clinical success rates between the 2 groups (P = .22).
Complications
One patient with positive CT findings and an equivocal angiographic finding of aneurysmal changes underwent wide embolization using NBCA and microcoils in relatively large vessels. This completely embolized the blood supply to the gastric wall (including the collateral vessels), which had been weakened by the cancer. This eventually led to a procedure-related complication of stomach wall perforation, and the patient underwent total gastrectomy (Fig. 3).
Discussion
The incidence of major bleeding from advanced gastric cancer is approximately 5%. [4,15,16] Major bleeding can delay scheduled chemotherapy, increase the need for blood transfusions, or even lead to severe morbidity or mortality. Endoscopic management is less successful at controlling UGIB from malignant tumors than it is at controlling bleeding from other benign causes. [4,5] This is likely because UGIB involves a large area of the arterial bed, which is invaded and eroded by the malignant tumor. [8,17,18] When endoscopic hemostasis fails to stop the bleeding, TAE can be an important and useful second-line treatment.
In this study, we achieved a >70% embolization success rate for patients in whom endoscopic hemostasis had failed, which corresponds well to the success rate of 48% to 79% in the literature. [19,20] Our findings show a statistically significant decrease in the amount of transfused blood after successful TAE, which indicates that the clinical success of TAE appears to be related to favorable 1-month survival outcomes.
Preprocedural CT scans were performed in as many of our study patients as possible. Since the exact location of a tumor in the stomach can be identified from a CT scan before the procedure, selective embolization can be performed by locating the corresponding feeder. In patients with negative angiography, this approach can be more effective than empirical embolization of the LGA, which is a well-known method. [19,20] In addition, meandering vessels that are encased or invaded by tumor are considered pseudoaneurysmal due to weakened vessel walls, and embolization with permanent embolic materials, such as NBCA or microcoils, can be helpful. This technique can improve the quality of embolization without affecting normal gastric circulation. In this study, 5 patients had active bleeding or exposed vessels within the tumor, which were clearly revealed on a CT scan despite negative angiography. In these cases, we performed additional superselective embolization using NBCA. However, not all enrolled patients underwent CT examinations in this study; therefore, generalization regarding the effect of the treatment on survival rate is limited. Further studies with larger sample sizes are required to validate our results.
As described above, patients with negative angiograms underwent empirical embolization using Gelfoam. However, we did not consider the natural clinical course or evaluate clinical efficiency for these patients by comparing them with patients who did not receive treatment. Although many reports argue for the usefulness of empirical embolization for UGIB, negative angiograms themselves could be associated with the favorable-outcome group (less severe and temporary bleeding). Therefore, our ability to judge the direct clinical efficacy of empirical embolization in this group was limited. Further research may be required to clarify this issue and validate our findings. This study has several limitations. First, although the patient pool was larger than in previous studies, it was still not large enough for a single statistical analysis. Second, we did not conduct a comparative analysis of the natural clinical course in the negative angiogram group. Third, there may be some bias because this was a retrospective study, and it was difficult to standardize the treatment, including the embolic materials. Lastly, the operator decided the endpoint via subjective evaluation of certain factors during the procedure, such as flow stasis and tumor devascularization, which may have affected the success rate.

(Fig. 3, continued: (F) After the microcatheter tip was located at the aneurysmal portion, additional embolization was performed using NBCA mixed with lipiodol (white arrow). (G) A selective angiogram of the gastroduodenal artery (GDA) showed equivocal findings with aneurysmal changes (white arrows), and embolization was performed using NBCA mixed with lipiodol. (H) Completion angiography revealed that the abnormal vessels with aneurysmal changes at the LGA and GDA were successfully embolized with a glue cast (white arrow). However, the patient underwent total gastrectomy due to stomach wall perforation.)
In conclusion, TAE is a very effective treatment for acute bleeding in advanced gastric cancer. It should certainly be considered when an endoscopic or surgical approach is difficult.
Rapid microwave-assisted synthesis of nitrogen-doped carbon quantum dots as fluorescent nanosensors for the spectrofluorimetric determination of palbociclib: application for cellular imaging and selective probing in living cancer cells
The current study introduces a spectrofluorimetric methodology for the assessment of palbociclib without the need for any pre-derivatization steps for the first time. This approach relied on the palbociclib quenching effect on the native fluorescence of newly synthesized nitrogen-doped carbon quantum dots (N-CQDs). An innovative, facile, and rapid microwave-assisted pyrolysis procedure was applied for the synthesis of N-CQDs using available and economic starting materials (the carbon source is orange juice and the nitrogen source is urea) in less than 10 minutes. Full characterization of the prepared QDs was carried out using various techniques. The prepared N-CQDs exhibited good fluorescence emission at 417 nm after excitation at 325 nm with stable fluorescence intensity and good quantum yield (29.3%). They showed spherical shapes and narrow size distribution with a particle size of around 2–5 nm. Different experimental variables influencing fluorescence quenching were examined and optimized. A good linear correlation was exhibited alongside the range of 1.0 to 20.0 μg mL−1 with a correlation coefficient of 0.9997 and a detection limit of 0.021 μg mL−1. The proposed methodology showed good selectivity allowing its efficient application in tablets with high percentage recoveries and low percentage RSD values. The mechanism of quenching was proved to be static by applying the Stern–Volmer equation at four different temperatures. The method was validated in accordance with ICHQ2 (R1) recommendations. Intriguingly, N-CQDs demonstrated good biocompatibility and low cytotoxicity, which permitted cellular imaging and palbociclib detection in living cancer cells. Therefore, the proposed method may have potential applications in cancer therapy and related mechanism research.
Introduction
Palbociclib (PLB) is 6-acetyl-8-cyclopentyl-5-methyl-2-[(5-piperazin-1-ylpyridin-2-yl)amino]pyrido[2,3-d]pyrimidin-7-one (Fig. 1). 1 It is a pyridopyrimidine, a secondary and tertiary amino compound, and a member of the cyclopentanes. As breast cancer is considered the second most common cancer globally and the most common cancer in women, 2 the synthesis and analysis of new anti-breast cancer drugs is of urgent need. PLB has recently been approved by the FDA for the management of endocrine-resistant metastatic breast cancer combined with endocrine therapy. 3 In 2015, the randomized phase II PALOMA-1 trial defined for the first time the activity and efficacy of PLB as an anti-breast cancer medication inhibiting cyclin-dependent kinases. 3 This targeted therapy is promising and effective for stopping cancer progression. 4 It is sometimes combined with other drugs such as an aromatase inhibitor or fulvestrant. PLB is available as Ibrance® capsules or film-coated tablets in different doses of 75, 100, and 125 mg.
Few methods have been reported for the determination of PLB in pharmaceutical dosage forms and biological samples, mainly using UPLC, 5 HPLC, 6,7 and LC-MS/MS 8,9 methods. As far as we know, no spectrofluorimetric methods have yet been adopted for its determination.
The construction of a novel multifunctional fluorescence platform such as carbon quantum dots (CQDs) is gaining great attention in the determination of different pharmaceutical compounds, metals, and biological compounds, as well as in cellular imaging. [10][11][12][13][14][15][16][17][18][19][20][21] They are similar in size and photo-electrochemical properties; however, they vary in internal structure and surface chemical groups. They are mono-disperse spherical nanoparticles with a carbon-based skeleton and a large number of oxygen-containing groups on the surface. 22 Additionally, heteroatom-doped CQDs were developed to enhance the electrical and optical characteristics of CQDs. Fluorine, boron, sulfur, phosphorus, and nitrogen are the commonly used doping elements. 21,23,24 Different approaches have been reported for the synthesis of CQDs, including hydrothermal synthesis, 20,24-26 microwave-assisted synthesis, 27-29 chemical oxidation, 30 and carbonizing organics methods. 31 To date, no spectrofluorimetric methods have been reported for PLB assay, and the published methods for its estimation require high-cost instruments and large amounts of organic solvents. Therefore, the main objective of this study was to construct a novel spectrofluorimetric method for the determination of PLB based on the merits of quantum dots, including biocompatibility, good luminescence, facile synthesis, cost-effective starting materials, water-solubility, low toxicity levels, high sensitivity, and easy measurements. 32 In the current work, nitrogen-doped carbon quantum dots (N-CQDs) were prepared by a rapid and facile microwave-assisted pyrolysis approach in less than 10 minutes utilizing orange juice (as a carbon source) and urea (as a nitrogen source) for the first time. Herein, PLB quantitatively quenches the fluorescence of the prepared quantum dots. This quenching was investigated in order to design a spectrofluorimetric method for its estimation. The novelty of this study is that it is the first spectrofluorimetric approach for the determination of PLB without the need for any pre-derivatization steps or sophisticated techniques. Since the studied drug does not exhibit native fluorescent properties, the importance of the proposed study is magnified. In addition to the outstanding features of N-CQDs, they demonstrated good biocompatibility and low cytotoxicity, which permitted cellular imaging and PLB detection in living cancer cells. Consequently, the developed method is expected to have substantial significance and potential applications in cancer therapy.
Instruments
-A double-beam spectrophotometer (PG Instrument, UK) was utilized in spectrophotometric measurements.
-A Cary Eclipse fluorescence spectrophotometer operated with a xenon flash lamp from Agilent Technologies (Santa Clara, CA 95051, United States) was used. It was operated at 750 V.
-FT-IR spectra were obtained using the Thermo Fisher Scientific Nicolet iS10 FT-IR spectrometer (Thermo Fisher Scientific, Waltham, MA, USA). The instrument had a Ge/KBr beam splitter and a DTGS detector covering 4000 to 1000 cm−1. The measurements were acquired with a resolution of 4 cm−1 in 32 scans.
Materials, reagents, and chemicals
-Palbociclib was obtained from Pfizer, Freiburg, Germany.
-HepG2 cell line was obtained from Nawah Scientic Company, Egypt. -All materials and reagents were of analytical grade. -Double distilled water was utilized throughout the work.
Preparation of stock solution and buffers
-A stock solution of PLB at a concentration of 100.0 μg mL−1 was prepared in methanol. Subsequent dilutions of the stock solution were prepared using double-distilled water. The solutions remained stable for at least 14 days when kept in the refrigerator.
-A Britton-Robinson buffer (0.02 M) was prepared in distilled water to cover the pH range 2-12.
Fabrication of N-CQDs
The N-CQDs were prepared by dissolving 3 g of urea in 50 mL of orange juice and then heating the mixture for 10 minutes in a domestic microwave until it was completely charred. Next, the product was left to cool, diluted with water to 100 mL, and centrifuged at 6000 rpm for 15 min to eliminate suspended particles. The clear layer was filtered and the volume was adjusted to 200 mL with water to prepare the QD stock solution. The working solution was obtained by transferring 10 mL of the QD stock solution into a 100 mL volumetric flask and completing to the mark with double-distilled water (Scheme 1: synthesis of N-CQDs and application for the determination of PLB). The prepared solutions were kept in the refrigerator for further use.
Spectrouorimetric measurements
After optimizing the different parameters, 2 mL of Britton-Robinson buffer (pH 2, 0.02 M) was added to a set of 5 mL measuring flasks, followed by the addition of 125 μL of N-CQDs. Then, aliquots of the studied drug covering the range 1.0 to 20.0 μg mL−1 were added and the flasks were made up to the mark with distilled water. The fluorescence measurements were performed at room temperature at 325/417 nm as the excitation/emission wavelengths, respectively. The fluorescence quenching was measured and plotted vs. the drug concentration in μg mL−1 to construct the calibration curve and carry out the regression analysis.
Quantum yield measurements
The quantum yield (QY) of N-CQDs was determined by adopting the single-point method using eqn (1): 33,34

Φ = Φ_st × (F/F_st) × (A_st/A) × (η²/η_st²)    (1)

where F is the integrated measured emission intensity, Φ denotes the QY, A represents the absorbance, and η represents the refractive index of the solvent; the subscript "st" refers to the standard. Quinine sulfate (QS) was prepared in 0.1 M H2SO4 and employed as the standard (QY: 0.54 at 350 nm). In the aqueous solutions, η_N-CQDs/η_st = 1.
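For illustration, the single-point calculation of eqn (1) can be scripted as follows; the intensity and absorbance readings below are placeholders, not the study's raw measurements:

```python
def quantum_yield(F_sample, A_sample, F_std, A_std,
                  phi_std=0.54, n_sample=1.33, n_std=1.33):
    """Single-point quantum yield:
    phi = phi_std * (F/F_std) * (A_std/A) * (n/n_std)**2
    F: integrated emission intensity, A: absorbance at the excitation
    wavelength, n: solvent refractive index (equal here, so the ratio is 1)."""
    return phi_std * (F_sample / F_std) * (A_std / A_sample) * (n_sample / n_std) ** 2

# Placeholder readings (keep A < ~0.05 in practice to limit inner-filter effects).
phi = quantum_yield(F_sample=5.2e5, A_sample=0.040, F_std=1.0e6, A_std=0.042)
print(f"QY = {phi:.1%}")   # ~29% with these illustrative numbers
```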
Assay of pharmaceutical preparation
A set of laboratory-prepared PLB tablets (Ibrance® Tablets, 100 mg PLB/tablet) was obtained by mixing with 10 mg each of microcrystalline cellulose, lactose monohydrate, colloidal anhydrous silica, sodium starch glycolate type A, and magnesium stearate while maintaining the drug's pharmaceutical concentration (100 mg PLB/tablet). An amount of the powder corresponding to 100.0 mg PLB was transferred into a small flask and 50 mL of methanol was added. The flask was subjected to sonication in an ultrasonic bath for 15 min. The solution was filtered into a clean, dry 100 mL measuring flask, which was filled to volume with the same solvent. Finally, volumes within the linear range (4.0, 8.0, 12.0, 16.0 μg mL−1) were transferred into a set of 5 mL measuring flasks and completed with double-distilled water. The steps mentioned in Section 2.5 were followed and percentage recoveries were computed from the calibration plot or the derived regression equation.
Cell viability by MTT assay
The seeding of cancer cells was performed in a 96-well plate (100 μL per well). After overnight incubation at 37 °C and 5% CO2, the cells were incubated with either serial dilutions of the N-CQDs (0.03, 0.015, 0.0075, 0.00375, 0.001875, 0.0009375%) or doxorubicin (50, 25, 12.5, 6.25, 3.125, 1.65 μM). After incubation for 48 hours, 3-(4,5-dimethylthiazoyl)-2,5-diphenyltetrazolium bromide (MTT) (5 mg mL−1 in phosphate-buffered saline (PBS)) was added, followed by incubation of the plate for 4 hours. Then, to dissolve the formazan crystals, an acidified sodium dodecyl sulphate (SDS) solution (10% SDS + 0.01 N HCl in 1× PBS) was utilized. A Biotek plate reader (Gen5™) was used to measure the absorbance after 14 hours of incubation at λ 570-630 nm. 35,36

Cellular bioimaging

HepG2 cells were seeded on a coverslip in a 6-well plate (2 × 10^5 cells/mL, 2 mL in each well). After overnight incubation, cells were treated with 0.01% of N-CQDs for 6 hours alone or with the quencher PLB (10 nM). After that, the cells were carefully washed with PBS and fixed with ice-cold methanol for 30 min at room temperature (RT). Untreated control cells were stained for 15 minutes at RT with 0.5 μM ethidium homodimer. After washing with PBS, the coverslip with the fixed cells was mounted on a glass slide and visualized using a Leica fluorescence microscope (Leica DMI 8, Leica Application Suite X; Leica, Germany).
Results and discussion
PLB is the first CDK4/6 inhibitor to be approved for use in humans. It has been widely used for the treatment of breast cancer. Therefore, our motivation was to investigate a novel method for its determination. As the heteroatom doping of CQDs has gained much attention, in this work we demonstrate a simple, new, and rapid microwave-assisted pyrolysis strategy for N-CQD synthesis using orange juice and urea as sources of carbon and nitrogen, respectively (Scheme 1). The prepared N-CQDs possess strong fluorescence emissions. Interestingly, this fluorescence could be selectively quenched by PLB, which forms the basis for an innovative methodology for its sensitive spectrofluorimetric analysis for the first time.
Characterization of N-CQDs
The prepared QDs were investigated by extensive characterization using different spectroscopic and microscopic techniques. The optical images of the N-CQDs solution under UV light and visible light are presented in Fig. 2A. The solution of N-CQDs demonstrated a dark orange color in visible light and a strong blue fluorescence under UV light, with a long-lasting homogeneous phase, no obvious precipitation, and stable fluorescence intensity for more than four weeks. Spectrofluorimetric measurements showed that the QDs exhibited high fluorescence intensity at 417 nm following excitation at 325 nm (Fig. 2A) and displayed a high QY (29.3%) utilizing QS as a reference. The emission of the synthesized QDs demonstrated excitation dependency across the 310 to 380 nm range, and the optimum fluorescence emission was obtained at 325 nm, as presented in Fig. 2B. The UV spectrum (Fig. 3A) was scanned to inspect the optical features of the QDs, and two characteristic bands were recorded at λmax of 213 and 275 nm, corresponding to π-π*/n-π* transitions 37 (Fig. 3B). 39 In addition, HRTEM images indicated that the N-CQDs were well separated without any apparent aggregation, with spherical shapes and sizes in the range of 2-5 nm (Fig. 4).
Investigation of the quenching mechanism of N-CQDs
As illustrated in Fig. 5, upon adding increasing concentrations of PLB, the native fluorescence of N-CQDs was quantitatively quenched, which can be attributed to damage to the surface passivation layer of the QDs by the cited drug. 40,41 Fluorescence quenching mechanisms fall into two main categories: dynamic and static. The difference between dynamic and static quenching can be examined by lifetime observations or, better yet, by their temperature dependency. In dynamic quenching, higher temperatures cause faster diffusion and a rise in the Stern-Volmer quenching constant (KSV), whereas in static quenching higher temperatures cause the complexes to dissociate and the quenching constant to decrease. 42 To determine the quenching mechanism, the Stern-Volmer equation (2) was employed: 43

F0/F = 1 + KSV[Q] = 1 + kq·τ0·[Q]    (2)

where F is the fluorescence intensity of the N-CQDs-PLB system and F0 that of N-CQDs alone, KSV is the Stern-Volmer quenching constant, [Q] is the PLB concentration, kq denotes the quenching rate constant, and τ0 represents the average lifetime of the fluorophore (10−8 s).
The fluorescence quenching efficiency (F0/F) was plotted against [Q], and KSV values were calculated at four temperature settings (298, 303, 313, 323 K) (Fig. 6). The KSV values were found to be 4.01 × 10^4, 3.641 × 10^4, 3.506 × 10^4, and 3.158 × 10^4 L mol−1 at 298, 303, 313, and 323 K, respectively. As observed, the KSV values decreased upon increasing the temperature, indicating that the quenching proceeds by the static process. Additionally, from the obtained KSV values, kq values were calculated and found to be 4.01 × 10^12, 3.641 × 10^12, 3.506 × 10^12, and 3.158 × 10^12 L mol−1 s−1, respectively; these values are significantly higher than the maximum diffusion rate constant (2.0 × 10^10 L mol−1 s−1), further confirming the static quenching process. 44 This mechanism involves the formation of a non-emissive N-CQDs/PLB complex, as evidenced by the changes observed in the N-CQDs UV spectra after the addition of PLB. When PLB was added, a new absorption peak appeared at 370 nm, indicating complex formation and confirming the static quenching mechanism (Fig. 7).
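The KSV values above come from straight-line fits of eqn (2) at each temperature. A minimal sketch of one such fit (the concentrations and intensities are illustrative placeholders chosen to mimic the 298 K value):

```python
import numpy as np

# Illustrative Stern-Volmer data at one temperature: F0/F vs [Q] (mol/L).
q = np.array([0.0, 5e-6, 1e-5, 2e-5, 3e-5, 4e-5])          # PLB concentration
f0_over_f = 1.0 + 4.0e4 * q                                 # ideal static quenching
f0_over_f += np.random.default_rng(3).normal(0, 0.01, q.size)  # measurement noise

ksv, intercept = np.polyfit(q, f0_over_f, 1)                # slope = K_SV
tau0 = 1e-8                                                 # fluorophore lifetime (s)
kq = ksv / tau0                                             # quenching rate constant
print(f"K_SV = {ksv:.3e} L/mol, intercept = {intercept:.3f}, k_q = {kq:.3e} L/(mol s)")
# k_q >> 2.0e10 L/(mol s) (diffusion limit) supports a static mechanism.
```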
Optimization of experimental parameters inuencing the uorescence sensing
In order to reach the maximum sensitivity of the method, the following parameters were studied.

Effect of pH

The pH of a solution is well known to influence not just the fluorescence intensity of QDs, but also the interactions between QDs and target species. 45 Using the Britton-Robinson buffer, the impact of pH on the quenching of N-CQD fluorescence by PLB was investigated. It was found that pH 2 was the ideal pH for fluorescence quenching (F0 − F), and as the pH increased, the quenching decreased (Fig. 8a). Subsequently, the buffer volume was optimized by testing several volumes in the range of 0.5-4.0 mL. The optimum volume for maximal fluorescence quenching with PLB was found to be 2 mL (Fig. 8b).
Effect of incubation time

The influence of incubation time on the interaction between N-CQDs and PLB was investigated from 1 to 60 minutes. The reaction between PLB and N-CQDs was found to be fast, taking less than 1 min to complete, and the fluorescence signals remained constant for more than 60 min, giving the suggested approach another advantage (Fig. 8c).
Validation of the method
The designed approach was validated in accordance with the International Council for Harmonisation (ICH) guidelines. 46 Different parameters were considered, including linearity, range, accuracy, precision, robustness, and selectivity.
The fluorescence quenching (F0 − F) was plotted against the PLB concentration (μg mL−1), and a linear correlation was found in the range of 1.0 to 20.0 μg mL−1 (Fig. 5), with a linear regression equation in which C represents the concentration of PLB in μg mL−1. The analytical performance data for the developed methodology, shown in Table 1, verify the acceptable sensitivity of the developed approach.
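For illustration, the calibration regression and the ICH-style detection limits (LOD = 3.3σ/slope, LOQ = 10σ/slope, with σ taken here as the standard error of the intercept) can be computed as follows; the quenching responses are placeholders on the stated 1.0-20.0 μg mL−1 range:

```python
import numpy as np
from scipy import stats

# Placeholder calibration: quenching (F0 - F) vs PLB concentration (ug/mL).
conc = np.array([1.0, 2.0, 4.0, 8.0, 12.0, 16.0, 20.0])
quench = np.array([21.0, 41.5, 83.0, 165.0, 248.0, 330.0, 414.0])

res = stats.linregress(conc, quench)
print(f"slope = {res.slope:.3f}, intercept = {res.intercept:.3f}, r = {res.rvalue:.4f}")

# ICH Q2(R1): LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope,
# with sigma the standard error of the intercept.
sigma = res.intercept_stderr
print(f"LOD = {3.3 * sigma / res.slope:.3f} ug/mL, "
      f"LOQ = {10 * sigma / res.slope:.3f} ug/mL")
```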
The method's accuracy was assessed using mean percentage recoveries, tested in triplicate runs with varied concentrations covering the PLB linearity range (Table 2). High recovery percentages (98.02-101.2%) were observed, demonstrating the method's high accuracy. Inter-day and intra-day precision were investigated at 3 concentration levels of PLB (4.0, 10.0, and 18.0 μg mL−1) and presented as % RSD and % error. The cited drug had low % RSD values (less than 1.12%) and % error (less than 0.65%), indicating that the developed approach was reasonably precise (Table 3).
To investigate the robustness of the proposed approach, the impact of minor variations in the experimental factors on the fluorescence sensing of PLB was monitored. These factors included the volume of N-CQDs (125.0 μL ± 1.0 μL), the pH (2 ± 0.1), and the buffer volume (2.0 mL ± 0.1 mL). It was verified that these small changes did not significantly affect the quenching of the fluorescence intensity of N-CQDs by PLB, as presented in Table 4. The suggested methodology was utilized to analyze PLB in its tablets with high percentage recoveries (98.07-101.73%) and low % RSD values (1.561%) without interference from common excipients, demonstrating the method's selectivity (Table 5).
Method applications
Analysis of PLB in tablets

PLB was determined using the designed approach in its tablet formulation. The concentrations of the cited drug were computed by referring to the regression equation. The % recoveries of the studied concentrations of PLB were acceptable (98.07-101.73%), as represented in Table 5, reflecting the method's selectivity in determining the cited drug with no interference from excipients.
Cytotoxicity of N-CQDs and cellular imaging

Before undertaking any biological applications, it is essential to assess the biocompatibility of N-CQDs. As shown in Fig. 9, the MTT assay outcomes indicated that the N-CQDs showed good biocompatibility and low toxicity over the tested concentration range, with an IC50 of 0.2017%, using doxorubicin as a control (Table 6). The viability of HepG2 cells was more than 85% even at a high concentration of N-CQDs after incubation for 48 hours. Therefore, the bioimaging experiment was performed. Fig. 10 shows the obtained confocal microscopy images of HepG2 cells, where the cells exhibited no background auto-fluorescence under laser excitation at 360 nm. As can be observed, after incubating HepG2 cells with 0.015% of N-CQDs for 6 hours, the cells displayed blue fluorescence when excited at 360 nm. In addition, after incubation with the N-CQDs, no morphological damage to the cells was found, confirming their minimal cytotoxicity. All of this demonstrated that the N-CQDs are well suited to live-cell imaging. Based on the PLB-induced fluorescence quenching, the N-CQDs were then used to detect PLB in living cells. Exogenous PLB was added to the N-CQDs-pretreated HepG2 cells. Fig. 10 indicates that no intracellular fluorescence was observed after adding 10 nM PLB to the growth medium for 6 hours at 37 °C. According to this result, the proposed N-CQDs could be employed as fluorescent probes to detect PLB in living cells. Consequently, the developed method could have substantial significance and prospective applications in cancer therapy.
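An IC50 such as the 0.2017% quoted above is typically extracted by fitting a four-parameter logistic (Hill) curve to the viability-concentration data. A minimal sketch with purely synthetic data centred near that value (the study's real readings are in Fig. 9 and Table 6):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(c, bottom, top, ic50, hill):
    """Four-parameter logistic (Hill) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (c / ic50) ** hill)

# Purely synthetic viability data spanning the sigmoid, centred near the
# reported IC50 of ~0.2 % (w/v); illustrative only.
rng = np.random.default_rng(7)
conc = np.logspace(-3, 0, 8)                          # 0.001 - 1 % (w/v)
viability = four_pl(conc, 10, 100, 0.20, 1.2) + rng.normal(0, 1.5, conc.size)

popt, _ = curve_fit(four_pl, conc, viability, p0=[0, 100, 0.1, 1.0])
bottom, top, ic50, hill = popt
print(f"fitted IC50 = {ic50:.3f} % (w/v), Hill slope = {hill:.2f}")
```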
Conclusion
The current study introduces, for the first time, a sensitive and rapid spectrofluorimetric approach for the determination of PLB. The suggested approach relies on employing N-CQDs as fluorescent probes for the quantitation of the studied drug, based on the remarkable quenching effect of PLB on the fluorescence emission of N-CQDs, without the need for any pre-derivatization steps. Orange juice as a carbon source and urea as a nitrogen source were used as available and economical starting materials for the rapid microwave-assisted synthesis of N-CQDs in less than 10 minutes. A good linear correlation was exhibited over the concentration range of 1.0 to 20.0 μg mL−1 with a detection limit of 0.021 μg mL−1 and a correlation coefficient of 0.9997. The designed system exhibited the merits of high selectivity and reproducibility, and allowed the assay of PLB in prepared tablets with satisfactory percentage recoveries (98.07-101.73%). Moreover, the N-CQDs demonstrated low cytotoxicity and high biocompatibility, permitting cellular imaging and the detection of PLB in living cells. ICH Q2(R1) guidelines were used to validate the proposed technique.
Conflicts of interest
There are no conflicts of interest to declare.
An alkaloid initiates phosphodiesterase 3A–schlafen 12 dependent apoptosis without affecting the phosphodiesterase activity
The promotion of apoptosis in tumor cells is a popular strategy for developing anti-cancer drugs. Here, we demonstrate that the plant indole alkaloid natural product nauclefine induces apoptosis of diverse cancer cells via a PDE3A-SLFN12 dependent death pathway. Nauclefine binds PDE3A but does not inhibit the PDE3A’s phosphodiesterase activity, thus representing a previously unknown type of PDE3A modulator that can initiate apoptosis without affecting PDE3A’s canonical function. We demonstrate that PDE3A’s H840, Q975, Q1001, and F1004 residues—as well as I105 in SLFN12—are essential for nauclefine-induced PDE3A-SLFN12 interaction and cell death. Extending these molecular insights, we show in vivo that nauclefine inhibits tumor xenograft growth, doing so in a PDE3A- and SLFN12-dependent manner. Thus, beyond demonstrating potent cytotoxic effects of an alkaloid natural product, our study illustrates a potentially side-effect-reducing strategy for targeting PDE3A for anti-cancer therapeutics without affecting its phosphodiesterase activity.
Immunoprecipitation and western blotting. Representative data are shown from two independent experiments (with n=3 wells each time) as mean + SD, and the p values from two-tailed unpaired Student's t-tests are indicated. *p < 0.05, **p < 0.01, ***p < 0.001; ns, not significant.
(a and b) mCherry or mCherry-SLFN12 vectors were transfected into SLFN12-KO HeLa cells for 36 hours. Cell morphology was determined using confocal microscopy (n=3 independent experiments) (a) and cell viability was assessed by measuring ATP levels (b) (n=3 independent experiments, with 4 replicates each time). Scale bars, 10 µm. (c and d) Dox (1 µg/mL) was added to drive expression of Flag-SLFN12 for the indicated times (c) or at the indicated concentrations for 48 hours (d). Cell viability was determined by assessing ATP levels (n=4 wells). Expression of SLFN12 was analyzed using an anti-Flag antibody. Data are from independent experiments (mean + SD), and the p values from two-tailed unpaired Student's t-tests are indicated. *p < 0.05, **p < 0.01, ***p < 0.001; ns, not significant. (e) Nauclefine (Nauc) decreased the protein level of Bcl-2 24 hours after treatment in a PDE3A-dependent manner. (f) Dox-induced expression of SLFN12 decreased the protein level of Bcl-2. The western blotting results were repeated twice.
Supplementary Figure 10. HeLa-toxic nauclefine did not affect mouse body weight.
(A) HeLa cells stably expressing a luciferase gene. The luciferase substrate was added to facilitate assessment of the number of HeLa-luc cells, and cell luminescence was determined.
(g) Expression of PDE3A was knocked down in MCF-7, EKVX, and H4 cells. Nauclefine (500 nM) was used to treat the MCF-7 and EKVX cells for 36 hours, and to treat the H4 cells for 24 hours. Protein expression was analyzed by western blotting with antibodies against the PDE3A and SLFN12 proteins. GAPDH was used as an internal control. Experiments were repeated.

The indolizinone natural products nauclefine, angustine, and 20-bromonauclefine were constructed using a common cascade cyclization strategy. This process involves a key hydroamination of an internal alkyne followed by lactamization. The internal alkyne was prepared by Sonogashira coupling of iodobenzene 19A/B and 2-ethynyl-3-aminoethylindole 19C. The resulting 20-bromonauclefine product from cyclization was converted to angustine via Stille cross-coupling with a vinyl tin substrate. Subditine (16) was synthesized by oxidative cleavage of the vinyl substituent of 15, which was generated from the Stille cross-coupling of vinyl tin and bromo-methylpyridine 17. This key cyclization intermediate 17 was constructed by intramolecular cyclization of enamide 24, which was the condensation product of 5-bromo-6-methylnicotinoyl chloride 23 and imine 22.
Supplementary note
Nauclefine was synthesized starting from di-tert-butyloxycarbonyl-protected 2-ethynyl tryptamine. After Sonogashira coupling with methyl 4-iodonicotinate and subsequent deprotection, we obtained the cascade reaction precursor, whose amino group engaged in the crucial nucleophilic addition to the alkyne in Cs2CO3/MeOH with excellent regioselectivity (exo-type), followed by concomitant cyclization to generate the C ring of nauclefine 3. We employed the same strategy to accomplish the synthesis of angustine (14). Prior to the cascade cyclization that afforded 20-bromonauclefine (21), the precursor 20B was produced by Sonogashira coupling of 19B and 19C. Finally, the vinyl group was introduced by PPh3-catalyzed Stille coupling to yield angustine (14) (Scheme 1). For the synthesis of subditine, the C ring of the subditine skeleton was established following Lavilla's method [30].
As shown in Scheme 1 in the supplementary note, the condensation of compound 22 with 5-bromo-6-methylnicotinoyl chloride 23 gave 24, accompanied by isomerization of the olefinic bond, which then underwent 6π electrocyclization at 190 °C in vacuo to afford 17 in 48% yield. Subsequently, we installed a vinyl group on the pyridine ring by a Stille coupling reaction (15), the structure of which was verified by X-ray single-crystal diffraction.
Further dihydroxylation and oxidation (in one pot) under strict temperature control yielded the target subditine (Supplementary Fig. 12).
Oxygen- and moisture-sensitive reactions were carried out under a nitrogen atmosphere.
Under an N2 atmosphere at 0 °C, Et3N (6 mL) was added dropwise to a solution of 23 (250 mg, 1.1 mmol, 1.0 eq.) in CH2Cl2 (10 mL); the reaction was then warmed to RT and stirred for an additional 1 h. After that, 22 (250 mg, 1.34 mmol, 1.2 eq.) was added to the reaction, which was then stirred at 45 °C for 2 h, evaporated in vacuo, and purified by silica gel column chromatography (15% ethyl acetate-petroleum ether) to afford crude 24 (230 mg, about 50%; decomposes quickly at room temperature) as a yellow solid. Rf = 0.63 (30% ethyl acetate-petroleum ether).
Statin Therapy and Mortality in HIV-Infected Individuals; A Danish Nationwide Population-Based Cohort Study
Background Recent studies have suggested that statins possess diverse immune modulatory and anti-inflammatory properties. As statins might attenuate inflammation, statin therapy has been hypothesized to reduce mortality in HIV-infected individuals. We therefore used a Danish nationwide cohort of HIV-infected individuals to estimate the impact of statin use on mortality before and after a diagnosis of cardiovascular disease, chronic kidney disease or diabetes. Methods We identified all Danish HIV-infected individuals (1,738) who initiated HAART after 1 January 1998, and achieved virological suppression within 180 days. Date of first redemption of a prescription of statin was obtained from the Danish National Prescription Registry. We used Poisson regression analysis to assess adjusted mortality rate ratios (aMRR). First, time was censored at date of virological failure (VL >500 copies/ml). Second, time was not censored at virological failure. All analyses were adjusted for potential confounders. Results In the analyses confined to observation time without virological failure (+ censoring) statin therapy was associated with a non-statistically significant reduced rate of death (aMRR 0.75; 95% CI: 0.33–1.68). No difference was observed in the analysis with no censoring (aMRR 1.17; 95% CI: 0.66–2.07). Use of statin seemed to reduce mortality in individuals after a diagnosis of comorbidity {(+ censoring: aMRR: 0.34; 95% CI: 0.11–1.04), (−censoring: aMRR: 0.64; 95% CI: 0.32–1.29)}. No difference in rate of death could be detected before first date of diagnosis of comorbidity {(+ censoring: aMRR: 1.12; 95% CI: 0.34–3.62), (−censoring: aMRR: 0.90; 95% CI: 0.28–2.88)}. Conclusion Statin therapy might reduce all-cause mortality in HIV-infected individuals, but the impact on individuals with no comorbidity seems small or absent. An unambiguous proof of a causal relation can only be obtained in a randomized controlled trial, but the sample size predicted may be prohibitive for its conduct.
Introduction
HMG-CoA reductase inhibitors (statins) are cholesterol-lowering drugs used extensively in the primary and secondary prevention of cardiovascular disease [1]. Recent studies, though, have suggested that statins possess cholesterol-independent or pleiotropic effects, including diverse immune modulatory and anti-inflammatory properties [2,3]. A wide range of beneficial effects has thus been hypothesized.
It is well established by several large clinical trials [4][5][6][7][8][9] that statin therapy can reduce the risk of coronary and cerebrovascular events, and decrease mortality due to coronary artery disease. In recent years a large number of cohort studies have investigated the impact of statin therapy on mortality for a wide range of other medical conditions. As such, studies on chronic obstructive pulmonary disease [10][11], sepsis [12][13][14], multiple sclerosis, non-ischemic heart failure and rheumatoid arthritis [15][16][17] have indicated potential protective effects of statin therapy, with reductions in both all-cause and cause-specific mortality. However, as small negative studies are absent (suggesting publication bias), and very few results from randomized controlled trials (RCTs) are available, the evidence in general seems weak.
Recently the potential role of statins in HIV infection has also been the subject of debate. Despite successful suppression of HIV replication with highly active antiretroviral therapy (HAART), HIV-infected individuals may have persistent inflammation, which can lead to a higher risk of age-associated non-AIDS morbidity [18] and mortality. As statins might attenuate inflammation [2,[19][20][21][22]], statin therapy could potentially have beneficial effects on mortality in HIV-infected individuals beyond the known impact on cardiovascular disease. In a recent study, Moore et al. [23] found that, in HIV-infected individuals on HAART, statin therapy was associated with a lower risk of all-cause mortality (adjusted mortality rate ratio (aMRR): 0.33; 95% CI: 0.14-0.76). Evaluation of drug effects in cohort studies may be substantially hampered by unmeasured confounding and confounding by indication [24]. However, currently there are no data from RCTs to prove this possible effect in HIV-infected individuals. As RCTs are expensive and time consuming, Moore et al. proposed that further observational cohort studies should investigate the potential protective effect of statin therapy in HIV-infected individuals [23]. We therefore conducted a nationwide cohort study using similar strategies for data analysis to determine the impact of statin therapy on mortality in HIV-infected individuals. We further applied additional strategies of data analysis.
Setting
As of 1 January 2010 Denmark had a population of 5.5 million, with an estimated HIV prevalence of 0.1% among adults [25][26]. Treatment of HIV infection is restricted to eight specialized centers, where patients are seen on an outpatient basis at intended intervals of 12 weeks. Antiretroviral treatment is provided free of charge. During the follow-up period of the study, national criteria for initiating HAART were HIV-related disease, acute HIV infection, pregnancy, CD4 cell count <300 cells/µl, and, until 2001, plasma HIV-RNA >100,000 copies/ml.
Data sources
We used the unique 10-digit civil registration number assigned to all individuals in Denmark at birth or upon immigration to link data from the following registers: The Danish HIV Cohort Study (DHCS). DHCS, which has been described in detail elsewhere [27], is a nationwide, prospective, population-based cohort study of all Danish HIV-infected individuals treated at Danish hospitals since 1 January 1995. DHCS is still ongoing, consecutively enrolling new HIV-infected patients and immigrants with HIV infection. Data are updated yearly on demographics, vital status, AIDS-defining events, and dates of and information on initiation of or changes in antiretroviral treatment. CD4 cell counts and viral loads (VL) are extracted electronically from laboratory data files.
The Danish Civil Registration System (DCRS). DCRS, established in 1968, is a national registry which stores information on vital status, residency, and migration for all Danish residents [28].
Study period
The study period was 1 January 1998 through 31 December 2009.
Study population
From DHCS we included all Danish HIV-positive patients older than 16 years at HIV diagnosis, who started HAART on 1 January 1998 or thereafter and before 31 December 2009, and within 6 months of that date had an undetectable VL (<50 copies/ml). Individuals who immigrated to Denmark after HAART initiation were excluded (Figure 1). The index date was defined as the date of HAART initiation. HAART was defined as a treatment regimen of at least three antiretroviral drugs or a treatment regimen including a combination of a non-nucleoside reverse transcriptase inhibitor and a boosted protease inhibitor and/or integrase inhibitor. Structured treatment interruptions have generally not been used in Denmark, which is why an individual who initiated HAART was considered on HAART for the rest of the observation period.
Outcome
The primary outcome was time to death from any cause as registered in the Danish Civil Registration System (DCRS).
Exposure
Date of first redemption of a prescription of a statin was included as a time-updated covariate. The following statins were included in the analysis: simvastatin, lovastatin, pravastatin, fluvastatin, atorvastatin, cerivastatin, rosuvastatin and a combination of the drugs simvastatin and ezetimibe (Anatomical Therapeutic Chemical Classification code (ATC): C10AA01-07, C10BA02). Statins are reimbursable and only available on prescription in Denmark. They are used according to national guidelines [32] that are almost identical to the ESC/EAS Guidelines [33]. Names of and ATC codes for statins are further provided in Appendix S1. As with HAART, an individual who initiated a statin was considered on statins for the rest of the observation period.
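Operationally, treating the first statin redemption as a time-updated covariate means splitting each individual's follow-up at the redemption date, so person-time before that date is counted as unexposed and person-time from that date onwards as exposed. A minimal sketch of that split is shown below; the function and field names are illustrative, not those of the Danish registries.

from datetime import date

def split_followup(start, end, statin_date):
    """Split one person's follow-up into (exposure state, person-years).

    Person-time before the first statin redemption counts as unexposed;
    person-time on or after it counts as exposed. statin_date may be None
    for individuals who never redeemed a statin prescription.
    """
    def years(a, b):
        return (b - a).days / 365.25

    if statin_date is None or statin_date >= end:
        return [("no_statin", years(start, end))]
    if statin_date <= start:
        return [("statin", years(start, end))]
    return [("no_statin", years(start, statin_date)),
            ("statin", years(statin_date, end))]

# Example: HAART initiated 1 March 2000, statin redeemed 15 June 2004,
# follow-up ends (death, emigration or 31 December 2009) 1 January 2008.
print(split_followup(date(2000, 3, 1), date(2008, 1, 1), date(2004, 6, 15)))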
Covariates and confounder control
We first introduced the following covariates to control for potential confounding (adjustment 1): age (included as a time-updated variable), gender, race, HIV-transmission group, hepatitis C status, calendar year of HAART initiation, AIDS-defining illnesses prior to HAART, ART use before initiating HAART, CD4 cell count and viral load at HAART initiation (log10(VL copies/ml)), and a cholesterol level before or up to 1 year after HAART initiation (<5, 5-8, >8 mmol/L). We further used an extended model (adjustment 2) in which comorbidity and the interaction term between statin use/non-use and comorbidity were also included. In these analyses comorbidity was defined by the date an individual was first diagnosed with one of the following comorbid conditions (coronary artery disease, cerebrovascular disease, peripheral artery disease, chronic kidney disease, or a redeemed prescription of an antidiabetic drug (surrogate marker of diabetes mellitus)) as defined in DNHR and DNPR, and introduced as a time-updated variable. ICD8, ICD10 and ATC codes are provided in Appendices S2 and S3.
Statistical analysis
We computed time from HAART initiation until date of death from any cause, emigration, loss to follow-up or 31 December 2009, whichever occurred first. Table 1 presents a summary of the models used in the study. We first performed the analyses in accordance with the study by Moore et al. (model A), in which individuals who developed virological failure were censored at the first date of a VL measurement higher than 500 copies/ml after the date of first achieved undetectable VL (<50 copies/ml). Subsequently a second analysis (model B) was performed with no censoring due to virological failure. We computed mortality rates (MR) per 1,000 person-years and used Poisson regression analysis to compute aMRRs, as a measure of the relative risk, and 95% confidence intervals (CI). To evaluate the impact of statins on the rate of death, we included date of statin initiation as a time-updated variable as described above and compared the rate of death before statin initiation to the rate after statin initiation. All analyses were adjusted for potential confounding factors as mentioned above. Due to a clinically important interaction between statin use/non-use and comorbidity, MRRs were estimated for time before or with no comorbidity and after a diagnosis of comorbidity. SPSS version 19.0 (SPSS Inc., Chicago, Illinois, USA) and STATA software, version 11.0 (Stata Corporation, College Station, Texas, USA) were used for data analyses. Data from DNHR and DNPR were obtained with approval from the Danish Registry Board. The study was approved by the Danish Data Protection Agency (record number 2008-41-1781).

Results

Figure 1 presents a summary of the study design. The study cohort consisted of 1,738 HIV-infected individuals who initiated HAART on 1 January 1998 or after this date and within 180 days of that had a VL <50 copies/ml. Two main analyses were performed (Table 1). In the first analysis (model A), time of follow-up was censored in 396 (22.8%) individuals at the date of virological failure (VL >500 copies/ml). In the second analysis (model B), individuals were not censored due to virological failure.
In model A, 145 (8.3%) HIV-infected individuals initiated a statin, of whom 124 (7.1%) started after HAART initiation. These analyses gave rise to a total of 7,952 person-years of follow-up (PYR). Of these, 7,528 PYR were before and 424 PYR after statin initiation. In total 109 (6.3%) individuals died, of whom 7 (6.4%) had initiated a statin drug prior to death. In model B, 169 (9.7%) individuals initiated a statin (148 (8.5%) after HAART initiation) and contributed a total of 9,865 PYR (9,358 PYR before and 506 PYR after statin initiation). 171 HIV-infected individuals died, of whom 15 had initiated a statin. Additional characteristics of the HIV-infected individuals are provided in Table 2. The generic types of statin initiated were simvastatin (53.3%), pravastatin (31.2%), atorvastatin (5.9%) and rosuvastatin (9.5%).
Discussion
Table 1. Models used in the study for HIV-infected individuals who initiated HAART 1 January 1998 or thereafter, and within 6 months of that date had an undetectable VL (<50 copies/ml).

A1 (Model A, Adjustment 1): censored at first VL >500 copies/ml after first VL <50 copies/ml; adjusted for age intervals (time-updated), gender, race, HIV-transmission group, hepatitis C status, calendar year of HAART initiation, AIDS-defining illnesses prior to HAART, ART use before initiating HAART, and CD4 cell count, viral load and cholesterol at HAART initiation.

A2 (Model A, Adjustment 2): censoring as for A1; adjusted as in model A1, but also including first date of comorbidity (cardiovascular disease, chronic kidney disease and diabetes) as a time-updated covariate and the interaction term between statin use/non-use and comorbidity.

Despite higher rates of comorbidity (cardiovascular disease, chronic kidney disease and diabetes) in HIV-infected statin users, we observed a trend towards a reduction in all-cause mortality in association with statin therapy in the analyses where patients with virological failure (model A) were censored. In analyses with no censoring (model B), statin users died at the same rate as non-users. Although use of statins reduced mortality in individuals after a diagnosis of comorbidity, the impact on individuals with no comorbidity seemed minimal or absent. The strengths of our study include use of a nationwide population-based cohort with a long observation period and complete follow-up. As we had access to Danish registries of high quality, we could identify all redeemed prescriptions of statins dispensed at Danish community pharmacies as well as valid data on date of death. Furthermore, the availability of electronically collected data on VL, CD4 cell counts and history of antiretroviral treatment from DHCS minimized potential selection and information bias. As the main aim of the study was to investigate whether we could confirm the findings by Moore et al. [23], we conducted the analysis using model A1 (Table 1) with only minor differences compared to their study. However, model A might have been biased by informative censoring [34], which is why model B was conducted. Both models A1 and B1 might have been biased by confounding by indication [24]. We therefore made additional analyses adjusted for comorbidity treated as a time-updated variable and included the clinically important interaction term between comorbidity and statin use/non-use in the model (models A2 and B2).
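The structure of models A2/B2, a Poisson rate model with a person-years offset and a statin-by-comorbidity interaction, can be sketched as follows. The person-time table is invented for illustration, and the real models also include the full confounder set listed in Table 1.

import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Illustrative aggregated person-time (counts are invented, not study data).
df = pd.DataFrame({
    "statin":      [0, 0, 1, 1],
    "comorbidity": [0, 1, 0, 1],
    "deaths":      [120, 40, 6, 5],
    "pyr":         [8500.0, 900.0, 300.0, 165.0],
})

# Poisson model for death counts with log(person-years) as offset;
# the statin:comorbidity term mirrors the interaction in adjustment 2.
fit = smf.glm("deaths ~ statin * comorbidity", data=df,
              family=sm.families.Poisson(),
              offset=np.log(df["pyr"])).fit()

# exp(b_statin) estimates the MRR for statin use without comorbidity;
# exp(b_statin + b_interaction) the MRR after a comorbidity diagnosis.
print(np.exp(fit.params))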
Our study has some limitations. We did not consider the specific type of statin therapy or the degree of exposure (dosage and duration), thus assuming that statin-associated benefits were a class effect. However, as statins possess different potencies for HMG-CoA reductase inhibition, owing to differences in tissue permeability and metabolism, this could lead to differences between studies. We analyzed the redemption of prescriptions of statins under the assumption that drug acquisition was a reasonable surrogate for consumption; however, adherence problems might exist. Furthermore, we did not consider cessations or changes of either statin or HAART. In the multivariate analyses we included the baseline CD4 cell count, viral load and cholesterol level; however, we are aware that the absence of the dynamics of these parameters on HAART could have affected the results. Furthermore, as we had no dynamic data on high-density lipoprotein, smoking status, body mass index or blood pressure, we could not adjust for these potential confounders. Due to the study design, we had to rely on hospital registry-based discharge diagnoses in order to identify comorbidity. Importantly, we used the same source of data to ascertain comorbidity for all study subjects. As registration of diagnoses is restricted to hospital contacts, we used the first date of redemption of an antidiabetic drug as a surrogate marker of diabetes, as ICD codes are not valid for identification of diabetes.
If individuals under study are unlikely to obtain the prescribed medication from sources not captured by the database, the measure can be considered to have a high specificity [31]. A recent study [35] validated the diagnostic codes (ICD-10) used to ascertain the Charlson comorbidity index against the diagnoses assigned by the treating physician and found a consistently high positive predictive value (PPV), above 95% for myocardial infarction, peripheral artery disease, cerebrovascular disease and chronic kidney disease. Angina pectoris, for which the validity may be somewhat lower, was however not included in this analysis. Furthermore, information on patients with comorbidity that was not diagnosed at the hospital, as well as prehospital death, was not covered. Other studies have found high [36] to moderate PPV [37][38]; however, these studies were meant to validate the diagnoses assessed by the treating physician. Finally, despite adjustments for a number of confounders, we cannot exclude bias due to unmeasured and residual confounding. In a recent study Moore et al. [23] found that in HIV-infected individuals statin therapy was associated with a statistically significant 3-fold reduction in mortality (adjusted hazard ratio (aHR): 0.33; 95% CI: 0.14-0.76). We were unable to reproduce this highly significant reduction in mortality. We presume some methodological problems might bias the results by Moore et al. [23]. Although we conducted model A almost as done in the study by Moore et al. [23], there are some differences that could potentially contribute to the difference in results. First, accessibility to the healthcare service (hospital and general practitioner) in Denmark is quite high and free of charge. Moreover, as statins are reimbursable, the prescriptions of these drugs rely mainly on an objective risk assessment [32][33] and are not affected by the quality of health care insurances. Second, the Johns Hopkins HIV clinical cohort, used by Moore et al. [23], consists of patients who present themselves for HIV care at the institutions and agree to participate in the cohort study, whereas our cohort is population-based and nationwide. Third, baseline characteristics of our HIV-infected population differed from those of the population used by Moore et al. [23] (fewer females, more Caucasians, fewer IDUs, less HCV). Also the nadir CD4 cell count, which is a marker of poor clinical outcome, might differ between groups. Fourth, the median time of follow-up was substantially shorter in the study by Moore et al. [23] (1.6 years; median time on statin: 2 years) than in our study (model A) and fewer people died (85 in total, 7 on statin, 78 not on statin). This could rely on differences in the number of individuals censored; however, the latter data are not accessible in the study by Moore et al. [23]. Fifth, the type of statin used differed largely, as the indication for prescribing statins (primary/secondary prophylaxis) might have done. Sixth, Moore et al. [23] required that patients had received statins for at least 30 days; however, as both HAART and statin use were based on prescribing those drugs and not the filling of a prescription, adherence problems could be a larger issue than in our study. Seventh, Moore et al. [23] censored follow-up in individuals with virological failure. This could seem like a sensible method to control for confounding, but as indicated above, it could lead to serious bias. We applied an additional strategy (model B), in which time was not censored at virological failure, and found no difference in mortality in association with statin therapy. Although model B would not be affected by informative censoring, there may be a substantial amount of patient follow-up time with a non-suppressed viral load, which is strongly associated with increased mortality and could result in bias towards the null. Eighth, Moore et al. [23] adjusted the analysis for a number of important covariates including hemoglobin. Although hemoglobin has been found to be a strong independent prognostic marker of death [39], we presume that the lack of this covariate in our analysis cannot explain the difference in results. Ninth, Moore et al. [23] did not take effects of comorbidity into account.

Table 3. Mortality rate ratio (MRR) of HIV-infected individuals initiating HAART after 1 January 1998 with a VL <50 copies/ml within 180 days of HAART initiation, with censoring of individuals with virological failure (VL >500 copies/ml), comparing time on statin with time not on statin.
The reduced mortality after statin initiation found by Moore et al. [23] and in our study (model A1) might rely on confounding from several factors such as alcohol consumption, smoking, obesity and non-AIDS morbidity (cardiovascular as well as non-cardiovascular). Factors that might affect the allocation of a patient to statin therapy, such as ethnicity, abuse, compliance and health-seeking behavior in general, may also confound the results. In some studies the association between statin use and a lower risk of adverse outcomes has been proposed to be due to a healthy user/adherer effect [40][41]. Initiation of and adherence to statin therapy could therefore be a surrogate marker for higher medical attention and a healthier lifestyle [40][41][42]. We cannot exclude that these factors could bias our study, the study by Moore et al. [23] and that by Overton et al. [39], in which case the effect of statins would be overestimated. Despite major focus on cardiovascular disease in HIV-infected individuals, individuals not on statins might have unrecognized heart disease or risk factors. Furthermore, as the indications for the use of statins have been expanded during the last decade, high-risk patients might have been substantially undertreated in the early years [1,8].
In a large meta-analysis of 14 RCTs of statin effects (90,056 individuals with cardiovascular disease, diabetes or risk factors) [9], Baigent et al. found a significant reduction in all-cause mortality and death due to vascular diseases, but no difference regarding non-vascular causes of death between statin users and non-users (RR 0.95, 95% CI 0.91-1.01). In line with this, we found very little if any impact of statin therapy on overall mortality in the time before or with no diagnosis of comorbidity (models A2 and B2). As our analysis did not have the power to address subgroups of individuals with e.g. immunologic non-response (i.e., persistently low CD4 cell counts despite years of suppressive HAART), we cannot exclude that statins might be effective for a subgroup of patients with substantially elevated levels of immune activation.
Given the observational study design, an RCT of an appropriate size is needed to achieve a valid evaluation of statin effects on mortality in HIV-infected individuals. However, using the mortality rates seen in our study, more than 25,000 patients would have to be recruited into each arm of an RCT to detect a 10% reduction in mortality with 3 years of follow-up (alpha: 0.05, power: 0.80). Statins are already widely used, and although these drugs are rather safe and tolerable, 3 recent meta-analyses [43][44][45] have found that statin therapy might be associated with a modest, dose-dependent, 9-12% higher risk of new-onset diabetes mellitus. The clinical importance of this potential risk seems to be outweighed by the cardiovascular benefit in individuals for whom statin therapy is recommended [43][44][45]. But if new indications emerge, in which statin therapy is used for patients at low cardiovascular risk, the risk might not outweigh the benefits [45].
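The figure of more than 25,000 patients per arm can be reproduced with the standard normal-approximation formula for comparing two proportions, assuming a control-arm three-year mortality of roughly 5% (consistent with the rates observed here) and a 10% relative reduction. The inputs below are assumptions for illustration only.

from math import sqrt
from scipy.stats import norm

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size for a two-sided comparison of two proportions
    (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return numerator / (p1 - p2) ** 2

# Assumed three-year mortality of 5.2% (about 17 deaths/1,000 PYR over
# three years) and a 10% relative reduction in the statin arm.
p_control = 0.052
p_statin = 0.90 * p_control
print(round(n_per_arm(p_control, p_statin)))  # on the order of 27,000 per arm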
In conclusion, statin therapy might have a beneficial effect on all-cause mortality in HIV-infected individuals, but the impact in individuals with no cardiovascular disease, chronic kidney disease or diabetes is small or absent. An RCT is needed to provide evidence-based proof of a causal relation. However, as patients at high risk of cardiovascular disease obviously cannot be included in such a trial, the sample size needed may be prohibitive for its conduct.
Supporting Information
Appendix S1. ATC codes of cholesterol-reducing drugs.
Differential Cytotoxic Potential of Silver Nanoparticles in Human Ovarian Cancer Cells and Ovarian Cancer Stem Cells
The cancer stem cell (CSC) hypothesis postulates that cancer cells are composed of hierarchically-organized subpopulations of cells with distinct phenotypes and tumorigenic capacities. As a result, CSCs have been suggested as a source of disease recurrence. Recently, silver nanoparticles (AgNPs) have been used as antimicrobial, disinfectant, and antitumor agents. However, there is no study reporting the effects of AgNPs on ovarian cancer stem cells (OvCSCs). In this study, we investigated the cytotoxic effects of AgNPs and their mechanism of causing cell death in A2780 (human ovarian cancer cells) and OvCSCs derived from A2780. In order to examine these effects, OvCSCs were isolated and characterized using positive CSC markers including aldehyde dehydrogenase (ALDH) and CD133 by fluorescence-activated cell sorting (FACS). The anticancer properties of the AgNPs were evaluated by assessing cell viability, leakage of lactate dehydrogenase (LDH), reactive oxygen species (ROS), and mitochondrial membrane potential (mt-MP). The inhibitory effect of AgNPs on the growth of ovarian cancer cells and OvCSCs was evaluated using a clonogenic assay. Following 1–2 weeks of incubation with the AgNPs, the numbers of A2780 (bulk cells) and ALDH+/CD133+ colonies were significantly reduced. The expression of apoptotic and anti-apoptotic genes was measured by real-time quantitative reverse transcriptase polymerase chain reaction (qRT-PCR). Our observations showed that treatment with AgNPs resulted in severe cytotoxicity in both ovarian cancer cells and OvCSCs. In particular, AgNPs showed significant cytotoxic potential in ALDH+/CD133+ subpopulations of cells compared with other subpopulation of cells and also human ovarian cancer cells (bulk cells). These findings suggest that AgNPs can be utilized in the development of novel nanotherapeutic molecules for the treatment of ovarian cancers by specific targeting of the ALDH+/CD133+ subpopulation of cells.
Introduction
Ovarian cancer is the fifth most common cancer among all types of cancer, and the second most common gynecological malignancy. According to the American Cancer Society (ACS), 22,280 women will receive a new diagnosis of ovarian cancer, and 14,240 women will die from ovarian cancer in 2016 [1]. Most cases are diagnosed at an advanced stage [2]. Primary treatment, consisting of surgery followed by platinum-based chemotherapy, is performed in women with ovarian cancer [3,4]. Although most women respond to primary treatment, chemoresistance eventually develops. The recurrence of cancer is due to a high degree of heterogeneity within ovarian tumors, a key feature of ovarian cancer, and between different ovarian cancer subtypes. In addition, a paucity of widely expressed, therapeutically targetable genetic changes restricts effective treatment options [5]. Combination chemotherapy is initially beneficial for ovarian cancer patients, but eventually resistance develops [6]. In addition, ovarian cancer cells are a heterogeneous population of cells, with increased tumorigenicity and differentiating capacity compared with other cancer stem cells (CSCs) [7]. CSCs have been isolated from various cancer cells based on either differential expression of cell surface markers or differential biochemical properties [8][9][10]. Aldehyde dehydrogenase (ALDH) has been proposed, together with CD133, to identify the CSC population in hepatocellular carcinoma [11], and ALDH+ cells seem to be capable of directly generating tumors in vivo [10]. Among different subpopulations of CSCs, ALDH+ and CD133+ populations of cells were able to form three-dimensional spheres more efficiently than their negative counterparts. Further, ALDH+, CD133+, and ALDH+/CD133+ cells are capable of forming tumors rapidly [9]. Choi et al. [12] reported that the ALDH+/CD133+ subpopulation of cells could generate all four types of ALDH+/− CD133+/− cell populations and had a clear branched differentiation hierarchy. Therefore, targeting CSCs is a vital aspect of cancer therapy.
Increasing evidence suggests that CSCs contribute to acquired chemotherapy resistance across a broad range of malignancies, and a better understanding of CSCs could aid in the design of new therapies that improve the efficacy of chemotherapy [13]. CSCs are capable of unlimited self-renewal, which gives rise to long-term tumorigenicity and drug resistance [14,15]. CSCs are able to grow and spread to maintain tumorigenic potential [12]. The CSC population in ovarian cancer cells is defined by cell markers, including ALDH enzymatic activity and the stem cell marker CD133 [16,17], suggesting the potential role of ALDH+/CD133+ cells as the ovarian cancer cells of origin [18]. The chemoresistance of cancer stem cells is a result of several factors, including enhanced ALDH activity, ATP-binding cassette (ABC) transporter expression, B-cell lymphoma-2 (BCL2)-related chemoresistance, an enhanced DNA damage response, and activation of key signaling pathways [19].
Nanoparticles have become widely utilized because of their unique properties and diverse applications in industry, cosmetics, biotechnology, and nanomedicine. Silver nanoparticles (AgNPs) are among the most commercialized nanoparticles worldwide. AgNPs are used as antibacterial, anticancer, and antiangiogenic agents because of their unique properties, such as optical and catalytic features, and they have potential for use in the creation of novel and advanced functional materials [20][21][22]. Therefore, the unique toxicity profiles of AgNPs may also offer an opportunity to exploit specific vulnerabilities in cancer, provided that an appropriate disease target can be identified (likely CSCs). The potential therapeutic efficiency of any anticancer drug is based on targeting specific cells; particularly, distinguishing between cancer cells and normal cells based on the differential sensitivities of the two cell types [23]. The significant challenges in the treatment of ovarian cancer are due to multiple ovarian histophenotypes, various possible sites of disease origin, and differential hierarchical contributions of multiple CSC populations [17]. Therefore, the identification, functional characterization, and therapeutic targeting of ovarian CSCs are necessary. To the best of our knowledge, there is no study on the cytotoxic effects of AgNPs on different subpopulations of ovarian CSCs. Therefore, we designed a study with the following objectives: first, to isolate different subpopulations of CSCs from human ovarian cancer cells using surface markers such as ALDH+/CD133+; second, to evaluate the cytotoxic potential of AgNPs on bulk cancer cells (A2780) and different subpopulations of ovarian cancer stem cells (OvCSCs); and third, to assess the effect of AgNPs on OvCSC self-renewal capacity, using the colony formation assay, and to elucidate the mechanisms of apoptosis induced by AgNPs in bulk cancer cells (A2780) and a specific subpopulation of OvCSCs, ALDH+/CD133+ cells.
Characterization of Silver Nanoparticles (AgNPs)
The aim of this experiment was to understand the anticancer effect of AgNPs in ovarian cancer cells and OvCSCs. Characterization of AgNPs was performed according to methods previously described [24]. First, we performed preliminary characterization using UV-VIS spectroscopy (Mecasys Co., Seoul, Korea). The UV-VIS absorption spectra were measured, and a peak was observed in the range of 350-550 nm, with a strong maximum located at 420 nm (Figure 1A), which is a typical characteristic feature of AgNPs. To confirm the crystalline nature of the particles, the X-ray diffraction (XRD) pattern of the AgNPs was evaluated; it is shown in Figure 1B. The XRD results clearly showed that the AgNPs were crystalline in nature, and four prominent peaks were observed. Transmission electron microscopy (TEM) is a valuable tool for analysis of the surface morphology and shape of nanoparticles. As shown in Figure 1C, the diameter and morphology of AgNPs were analyzed by TEM. The TEM image shows well-dispersed, uniform, spherical particles. We measured the particle size distributions from transmission electron microscopy images of more than 200 particles, and the distribution is presented. The average observed particle diameter was 47.5 nm (Figure 1D). Although the average size was 47.5 nm, the AgNP colloidal suspension contained differently sized particles, with diameters mostly between 42 nm and 57 nm. The size distribution was further confirmed by dynamic light scattering (DLS), which is used to evaluate the particle size and size distribution of nanomaterials in solution [21]. DLS analysis showed that the AgNPs had an average size of 50 nm (Figure 1E). However, the sizes were larger than those obtained using TEM because of Brownian motion.
Figure 1. The absorption spectrum of AgNPs exhibited a strong broad peak at 420 nm; such a band is assigned to the surface plasmon resonance of the particles (A); X-ray diffraction (XRD) pattern of silver nanoparticles (B); TEM images of AgNPs (C); particle size distributions from transmission electron microscopy images (D): several fields were photographed and used to determine the diameter of AgNPs, and the average observed diameter was 47.5 nm; size distribution analysis of AgNPs using dynamic light scattering (DLS) (E).
AgNPs Induce Dose- and Time-Dependent Effects on Cell Viability in Human Ovarian Cancer Cells
Before examining the effect of AgNPs on OvCSCs, we first examined the cytotoxic effects of AgNPs on A2780 cells (bulk) using a cell viability assay. A2780 cells are the parental cells used for isolation of OvCSCs. To determine the effect of AgNPs on A2780 cells, the cells were exposed to different concentrations of AgNPs ranging from 20 ng/mL to 10,000 ng/mL for 12 and 24 h, and cell viability was then assessed using the cell counting kit (CCK-8) assay (Figure 2). The results of the CCK-8 assay, which measures the water-soluble formazan dye produced by the metabolic activity of live cells, showed that cell viability decreased after exposure to AgNPs in a time- and dose-dependent manner (Figure 2A). After 12 h of treatment, AgNPs were found to be cytotoxic to the cells at concentrations from 200 ng/mL, but this effect was significant only at 10,000 ng/mL. When the same cells were treated with 20-10,000 ng/mL for 24 h, significant cytotoxicity was observed even at 50 ng/mL (Figure 2B). This suggests that the effect of AgNPs is clearly influenced by the incubation time and dose. Finally, we determined the minimum inhibitory concentration of AgNPs at 24 h, which was found to be 1000 ng/mL (Figure 2B). Interestingly, at concentrations above 1000 ng/mL the toxicity did not increase significantly further; it plateaued at the same level.
Figure 2. The viability of A2780 human ovarian cancer cells determined after 12 h (A) and 24 h (B) of exposure to different concentrations of AgNPs using the CCK-8 assay. The results are expressed as the mean ± standard deviation of three independent experiments. The viability of treated cells compared to untreated cells was analyzed using Student's t-test (* p < 0.05).
Isolation and Characterization of Cancer Stem Cells (CSCs)
To determine the cytotoxic potential of AgNPs on different subpopulations of OvCSCs from A2780 cells, we first gated on CD133 expression and then checked the expression of ALDH in the CD133− and CD133+ cell populations. The P9 gate yielded the OvCSC population shown in Figure 3. To characterize the tumorigenic potential of different subpopulations of cells dually stained for ALDH expression and activity, ALDH+/CD133+, ALDH−/CD133+, ALDH+/CD133−, and ALDH−/CD133− cells were isolated from ovarian cancer cell lines. We characterized OvCSCs by the expression of the potential CSC markers CD133 and ALDH, as these are characteristic markers for identification and isolation of CSCs from ovarian or other solid tumors [8,10,12]. ALDH was highly expressed and is the only potential stem cell marker expressed in all primary tumor specimens as well as in limited cellular subpopulations of human primary tumor cells [10]. Hence, it could be a potentially useful CSC marker in ovarian cancer. ALDH+/CD133+ cells show an increased ability to generate tumor xenografts compared with ALDH+/CD133− or ALDH+ cells alone [25]. ALDH+/CD133+ cells also tend to elicit larger tumors, and to do so more rapidly, than ALDH+/CD133− cells [25]. Based on the literature and taking this into account, we sorted different subpopulations, including ALDH+/CD133+, ALDH−/CD133+, ALDH+/CD133−, and ALDH−/CD133− cells, from ovarian cancer cell lines and used them for further studies.
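The quadrant logic behind sorting the four ALDH/CD133 subpopulations can be expressed as a simple classification of events on the two fluorescence channels. The intensities and gate thresholds below are illustrative only; in a real experiment the gates are set against DEAB-treated and isotype controls.

import numpy as np

# Hypothetical per-event fluorescence intensities (arbitrary units).
aldh = np.array([120, 800, 90, 950, 60, 700, 1100, 80])
cd133 = np.array([50, 60, 900, 1000, 40, 850, 70, 950])

# Illustrative gate thresholds (set from controls in practice).
ALDH_GATE, CD133_GATE = 300, 400

def quadrant(a, c):
    """Assign one event to an ALDH+/- CD133+/- quadrant."""
    a_lab = "ALDH+" if a > ALDH_GATE else "ALDH-"
    c_lab = "CD133+" if c > CD133_GATE else "CD133-"
    return a_lab + "/" + c_lab

labels = [quadrant(a, c) for a, c in zip(aldh, cd133)]
for name in ("ALDH+/CD133+", "ALDH-/CD133+", "ALDH+/CD133-", "ALDH-/CD133-"):
    print(name, labels.count(name))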
Effect of AgNPs on OvCSCs
CSCs are believed to occupy a limited percentage of solid tumors, and CSCs could be responsible for cancer relapses despite complete clinical remission with initial treatment. Many researchers are concentrating on the identification and development of new anticancer drugs with apoptosis-inducing properties, with a focus on CSCs. AgNPs are known to inhibit cancer cell viability in several cancer cell lines, such as human breast, lung, and ovarian cancer cells [20,21,26]. Therefore, in this study we selected AgNPs as a potential alternative therapeutic agent for OvCSCs. AgNPs have a dual role: at lower concentrations, they can enhance cell survival and differentiation, and at higher concentrations, they can inhibit cell viability in neuronal cells [27]. For instance, AgNPs with an average size of 20 nm and at concentrations up to 2 µg/mL promoted osteogenic differentiation of urine-derived stem cells by inducing actin polymerization and activation of RhoA, whereas AgNO3 had no such effects [28].
To determine the effect of AgNPs on the survival of the four different subpopulations of OvCSCs, we first examined the dose-dependent effect of AgNPs. In order to assess the sensitivity or resistance of OvCSCs, the four populations of cells were incubated with different concentrations of AgNPs (20-10,000 ng/mL) for 24 h. As shown in Figure 4, dose-dependent inhibition of cell viability was observed in each subpopulation over the concentration range of 20-10,000 ng/mL, with IC50 (half-maximal inhibitory concentration) values ranging from 1000 to 2000 ng/mL. The findings suggest that the subpopulations exhibited enhanced cell viability after treatment with AgNPs at concentrations at least up to 100 ng/mL, except for the ALDH+/CD133+ and ALDH−/CD133+ subpopulations, and a significant inhibitory effect was observed between 20 and 10,000 ng/mL, depending on the subpopulation. For example, when ALDH+/CD133+ cells were treated with AgNPs at concentrations from 20 to 200 ng/mL, significant inhibition of cell viability was observed, and a dramatic effect was observed between 500 and 10,000 ng/mL, with an IC50 of ~1000 ng/mL (Figure 4A). Compared with bulk cells, significantly higher toxicity was seen in a dose-dependent manner above 1000 ng/mL. In ALDH+/CD133− cells, a similar effect on cell viability was observed upon treatment with AgNPs at concentrations up to 200 ng/mL, and a significant inhibitory effect was observed at concentrations between 500 and 10,000 ng/mL, with an IC50 of ~1200 ng/mL (Figure 4B). In ALDH−/CD133+ cells, dose-dependent inhibition of cell viability was observed in the AgNP range of 500-10,000 ng/mL, with an IC50 of ~1500 ng/mL; however, at lower concentrations there was no significant inhibitory effect, and a positive effect on cell viability was seen up to 500 ng/mL (Figure 4C). In ALDH−/CD133− cells, dose-dependent inhibition of cell viability was observed with AgNP treatment in the range of 200-10,000 ng/mL, with an IC50 of ~1500 ng/mL (Figure 4D). ALDH+/CD133+ cells showed greater sensitivity to AgNPs, and a significant inhibitory effect on cell viability was observed compared with the other tested subpopulations; interestingly, ALDH+/CD133+ cells were more sensitive even at lower concentrations of AgNPs. Altogether, the results suggest that ALDH+/CD133+ cells are a promising target cell type for inhibiting the viability of ovarian cancer stem cells. Generally, CSC survival is governed by several signaling pathways, such as the Notch, Hedgehog, Wnt, Her2, and IL-6 and IL-8 signaling pathways. Wnt signaling could be a possible target for the loss of viability in ALDH+/CD133+ cells; however, the mechanism of sensitivity is not known. The differential IC50 value of each subpopulation indicates the differential sensitivity of each cell type to AgNPs. Previous studies by Choi et al. [12] showed that only ALDH+/CD133+ cells could generate all four ALDH+/− CD133+/− cell populations and identified a clear branched differentiation hierarchy. Therefore, further studies focused only on the ALDH+/CD133+ subpopulation of cells.
Figure 4. Dose-dependent effect of AgNPs on the viability of ALDH+/CD133+ (A), ALDH+/CD133− (B), ALDH−/CD133+ (C), and ALDH−/CD133− (D) cells. The results are expressed as the mean ± standard deviation of three independent experiments. The viability of treated cells compared to untreated cells was analyzed using Student's t-test (* p < 0.05).
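IC50 values like those quoted above are typically extracted by fitting a four-parameter logistic curve to the dose-response data. A sketch of such a fit is given below; the dose and viability numbers are invented for illustration and are not the paper's raw data.

import numpy as np
from scipy.optimize import curve_fit

def four_pl(dose, top, bottom, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)

# Hypothetical AgNP doses (ng/mL) and percent viability (illustrative).
dose = np.array([20, 50, 100, 200, 500, 1000, 2000, 5000, 10000], float)
viability = np.array([98, 95, 90, 82, 68, 50, 38, 30, 28], float)

# Fit the curve; p0 gives rough initial guesses for the four parameters.
params, _ = curve_fit(four_pl, dose, viability, p0=[100, 25, 1000, 1])
top, bottom, ic50, hill = params
print(f"estimated IC50 ~ {ic50:.0f} ng/mL")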
Cytotoxic Effects of AgNPs on A2780 and OvCSCs
The above experiments showed that low concentrations of AgNPs decrease the cell viability of ALDH+/CD133+ and ALDH+/CD133− cells and increase the cell viability of ALDH−/CD133+ and ALDH−/CD133− cells; therefore, we selected the respective IC50 value for each type of stem cell and tested the cytotoxic effects of AgNPs by assessing CCK-8, lactate dehydrogenase (LDH) release, reactive oxygen species (ROS) generation, and mitochondrial membrane potential (mt-MP). In particular, we selected ALDH+/CD133+ cells because they show more sensitivity than the other subpopulations. In the following experiments, we used A2780 cells as a positive control, as they are the parental cells for all four subpopulations. Using their respective IC50 values for AgNPs, both bulk cells (A2780) and ALDH+/CD133+ cells were treated with AgNPs for 24 h, and cell viability was examined (Figure 5A). When the A2780 and ALDH+/CD133+ cells were treated with AgNPs, there was a reduction in viability at the IC50 concentration of 1000 ng/mL in both the parental cells and the ALDH+/CD133+ cells. An interesting observation in this experiment was that both bulk cells and ALDH+/CD133+ cells appeared equally sensitive to AgNPs, indicating that AgNPs have significant cytotoxicity towards both cancer stem cells and bulk cells. Anthothecol-encapsulated poly(lactic-co-glycolic acid) (PLGA) nanoparticles inhibited cell proliferation and colony formation and induced apoptosis in pancreatic CSCs and cancer cell lines, but had no effect on normal human pancreatic epithelial cells [29].
LDH is a well-known marker for cell membrane integrity and cell viability, and its
Cytotoxic Effects of AgNPs on A2780 and OvCSCs
The experiments above showed that low concentrations of AgNPs decrease the viability of ALDH+/CD133+ and ALDH+/CD133- cells but increase the viability of ALDH-/CD133+ and ALDH-/CD133- cells. We therefore selected the respective IC50 value for each stem-cell type and tested the cytotoxic effects of AgNPs by assessing CCK-8 viability, lactate dehydrogenase (LDH) release, reactive oxygen species (ROS) generation, and mitochondrial membrane potential (mt-MP). In particular, we selected ALDH+/CD133+ cells because they were more sensitive than the other subpopulations. In the following experiments, we used A2780 cells as a positive control, as they are the parental cells of all four subpopulations. Using the respective IC50 values for AgNPs, both bulk cells (A2780) and ALDH+/CD133+ cells were treated with AgNPs for 24 h, and cell viability was examined (Figure 5A). When A2780 and ALDH+/CD133+ cells were treated with AgNPs at the IC50 concentration of 1000 ng/mL, viability was reduced in both the parental cells and the ALDH+/CD133+ cells. An interesting observation in this experiment was that bulk cells and ALDH+/CD133+ cells appeared equally sensitive to AgNPs, indicating that AgNPs have significant cytotoxicity towards both cancer stem cells and bulk cells. By comparison, anthothecol-encapsulated poly(lactic-co-glycolic acid) (PLGA) nanoparticles inhibited cell proliferation and colony formation and induced apoptosis in pancreatic CSCs and cancer cell lines, but had no effect on normal human pancreatic epithelial cells [29].

Figure 5. Effect of AgNPs on various cytotoxicity parameters in bulk cells and ALDH+/CD133+ cells. Bulk cells and ALDH+/CD133+ cells were incubated with AgNPs (1000 ng/mL) for 24 h. Cell viability was determined using the cell counting kit (CCK-8) assay (A); lactate dehydrogenase (LDH) activity was measured at 490 nm using the LDH cytotoxicity kit (B); reactive oxygen species (ROS) generation was determined by 2',7'-dichlorofluorescein diacetate (DCFDA) (C); mitochondrial transmembrane potential was determined using the cationic fluorescent indicator JC-1 (D). The results are expressed as the mean ± standard deviation of three independent experiments. The treated groups showed statistically significant differences from the control group by Student's t-test (* p < 0.05).
LDH is a well-known marker of cell membrane integrity and cell viability; its accumulation results from the breakdown of the plasma membrane and the alteration of its permeability at the stage of secondary necrosis, in the late stage of apoptosis [30,31]. To assess the cytotoxic response to AgNPs, the amount of LDH leaked into the cell culture medium was measured at 24 h in bulk cells (A2780) and ALDH+/CD133+ cells. Both cell types released LDH into the medium (Figure 5B); however, LDH leakage from ALDH+/CD133+ cells was higher than from bulk cells, showing the greater sensitivity of this subpopulation.
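For reference, LDH cytotoxicity kits of this type are typically evaluated with the control-normalized formula sketched below. The exact control wells are not specified in the text, so both the control terms and the absorbance values are assumptions for illustration only.

```python
# A minimal sketch of the standard LDH-release calculation (an assumed form;
# the kit's exact control wells are not specified in the text). The 490 nm
# absorbance values are hypothetical placeholders.
def ldh_cytotoxicity(sample, spontaneous, maximum):
    """Percent cytotoxicity relative to spontaneous- and maximum-release controls."""
    return 100.0 * (sample - spontaneous) / (maximum - spontaneous)

a2780 = ldh_cytotoxicity(sample=0.62, spontaneous=0.20, maximum=1.10)
cscs = ldh_cytotoxicity(sample=0.81, spontaneous=0.20, maximum=1.10)
print(f"A2780: {a2780:.1f}% | ALDH+/CD133+: {cscs:.1f}%")
```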
Next, we examined cytotoxic effects using the ROS generation assay. As expected, ALDH+/CD133+ cells produced higher amounts of ROS, whereas bulk cells (A2780) produced less (Figure 5C). The increase in ROS levels in cancer cells is partially due to their higher metabolic rate. The lower ROS levels in bulk cells can be attributed to the drug-resistant or chemoresistant CSC population they contain, which may use redox regulatory mechanisms to promote cell survival and tolerance to anticancer agents [32]. Possible reasons for the lower ROS levels in bulk cells are reduced ROS production, enhanced ROS-scavenging systems, and the slow division of the CSCs within the bulk population [33].

mt-MP reflects the functional status of the mitochondrion and is related to cancer malignancy [34]. Recent studies suggest that mitochondrial features differ in CSCs with respect to mt-MP and ROS [35,36]. To probe the mechanisms of ROS-mediated toxicity in bulk cells and OvCSCs, we assessed mt-MP. Several studies in cancer cells have shown that AgNP-induced ROS plays an important role in the formation of the mitochondrial permeability transition pore (MPTP), which eventually activates the mitochondria-dependent cell death pathways [21,37]. Thus far, there have been no reports comparing the effects of AgNPs on mt-MP in cancer cells and OvCSCs. We therefore analyzed mt-MP in AgNPs-treated bulk cells and ALDH+/CD133+ cells using the mitochondrial fluorescent dye JC-1, which stains mitochondria in a membrane potential-dependent manner. As shown in Figure 5D, bulk cells and ALDH+/CD133+ cells exposed to 1000 ng/mL AgNPs for 24 h exhibited a significant decrease in the ratio of the aggregate to the monomer form. These results suggest that AgNPs have a significant impact on mt-MP in bulk cells as well as in ALDH+/CD133+ cells. mt-MP is an indicator of the functional status of mitochondria, which is thought to correlate with a cell's differentiation status, tumorigenicity, and malignancy [36]. Mitochondrial permeabilization is responsible for the release of apoptotic proteins, such as cytochrome c and the second mitochondria-derived activator of caspase (Smac), from the intermembrane space into the cytosol [38]. Altogether, the data suggest that AgNPs lower mt-MP and, in turn, induce apoptosis in both bulk cells and OvCSCs. Based on the cytotoxicity assays, the ALDH+/CD133+ subpopulation appears more sensitive than bulk cells; we therefore selected ALDH+/CD133+ cells for further study.
AgNPs Inhibit Colony Formation
To investigate whether AgNPs could impair colony formation, both bulk A2780 cells and the ALDH+/CD133+ subpopulation were tested. The same number of cells of each type was seeded and cultured with AgNPs for ~14 days, after which colony-formation ability was assessed by counting the number of colonies under a microscope after crystal violet staining. AgNPs-treated bulk A2780 cells formed significantly fewer colonies than untreated A2780 cells (Figure 6A). Similarly, ALDH+/CD133+ cells treated with AgNPs formed fewer colonies than untreated cells, and the reduction in colony number was significantly greater than in bulk cells (Figure 6B). Altogether, the data from the cytotoxicity and clonogenicity assays showed that AgNPs were more cytotoxic to ALDH+/CD133+ OvCSCs than to whole A2780 cells. To gain further evidence, we performed a quantitative analysis by dissolving the crystal violet completely in methanol and measuring the absorbance at 590 nm; the relative absorbance reflected the efficiency with which AgNPs inhibited colony formation. The data demonstrate that AgNPs treatment of A2780 cells is an effective method for reducing the OvCSC population in heterogeneous ovarian tumors. In particular, specific targeting of ALDH+/CD133+ cells by AgNPs is a suitable, efficient, alternative approach to cancer therapy, although the mechanism underlying the sensitivity of ALDH+/CD133+ cells is not known.

Figure 6. Effect of AgNPs on the clonogenicity of A2780 (bulk cells) and ALDH+/CD133+ cells. A2780 (A) and ALDH+/CD133+ (B) cells were seeded in RPMI-1640 with 10% fetal bovine serum (FBS) at a density of ~500 cells/well on 48-well plates pre-coated with Matrigel. After ~14 days, colony-formation ability was assessed by counting the number of colonies under a microscope after crystal violet staining. Representative images were photographed. For quantitative analysis of A2780 (C) and ALDH+/CD133+ (D) colony formation, the crystal violet was completely dissolved in methanol and the absorbance was measured at 590 nm. The results are expressed as the mean ± standard deviation of three independent experiments. The treated groups showed statistically significant differences from the control group by Student's t-test (* p < 0.05).
AgNPs Induce Differential Apoptotic Responses in Bulk Cells (A2780) and OvCSCs (ALDH + /CD133 + )
The cell viability, cytotoxicity, and colony-formation assays suggested that OvCSCs were more sensitive than A2780 cells at the respective IC50 concentrations of AgNPs. Based on these outcomes, A2780 and ALDH+/CD133+ cells were used to examine the mechanism of apoptosis. To address this issue, the expression of pro-apoptotic genes (p53, caspase-3, bax, bak, and c-myc) and anti-apoptotic genes (bcl-2 and bcl-xl) was analyzed by real-time reverse transcription polymerase chain reaction (RT-PCR) in bulk cells and ALDH+/CD133+ cells exposed to AgNPs for 24 h. In AgNPs-treated A2780 cells, p53, bax, bak, and c-myc were up-regulated (upward arrows in Figure 7B) and bcl-2 was down-regulated (downward arrow in Figure 7B) compared with untreated A2780 cells (Figure 7A,B), while the expression of β-actin remained unchanged. AgNPs may induce oxidative stress in A2780 cells by generating higher levels of ROS and triggering the p53-mediated apoptotic pathway, whereas the later, caspase-3-mediated step of apoptosis showed no effect in bulk cells. In AgNPs-treated ALDH+/CD133+ cells, up-regulation of caspase-3, bax, bak, and c-myc (upward arrows in Figure 7D) was observed relative to untreated ALDH+/CD133+ cells (Figure 7C,D); interestingly, there was no significant effect on p53 or bcl-xl. This indicates that AgNPs regulate apoptosis in a differential manner in bulk cells and ALDH+/CD133+ cells. The overall sequence of events leading to apoptosis in AgNPs-treated A2780 and ALDH+/CD133+ cells is illustrated in Figure 7A-D. The data suggest that AgNPs induce apoptosis through oxidative stress, with Bcl-2 playing an important role in mitochondrial outer-membrane permeabilization and the loss of mitochondrial membrane potential.
Figure 7. Relative mRNA expression of apoptotic genes analyzed by qRT-PCR in A2780 and ALDH+/CD133+ cells. A2780 cells were treated with AgNPs (1000 ng/mL) for 24 h and gene expression was analyzed (A); the mechanism of cell death in A2780 cells is illustrated schematically (B). ALDH+/CD133+ cells were treated with AgNPs (1000 ng/mL) for 24 h and gene expression was analyzed (C); the mechanism of cell death in ALDH+/CD133+ cells is illustrated schematically (D). The results are expressed as the mean ± standard deviation of three separate experiments. The treated groups showed statistically significant differences from the control group by Student's t-test (* p < 0.05). In the schematics of AgNPs-induced apoptosis, bold arrows indicate high gene expression, solid arrows moderate expression, dotted arrows low expression, and T-bars inhibition.
The process of apoptosis is positively regulated by the tumor-suppressor p53, which induces the expression of many pro-apoptotic genes, including death receptors and multiple pro-apoptotic Bcl-2 family members [39]; p53 also suppresses the proliferation and self-renewal of neural stem cells [40]. The Bcl-2 family proteins play a pivotal role in mitochondria-mediated apoptosis [41]. The anti-apoptotic members prevent cytochrome c release by forming heterodimeric complexes with pro-apoptotic Bcl-2 family proteins, whereas Bax facilitates the release of apoptogenic molecules from mitochondria into the cytosol and accelerates apoptotic cell death [42-44]. The Bcl-2 protein family thus plays an integral role in maintaining the balance between cell survival and apoptosis. Based on our findings, a possible mechanism for the inhibition of Bcl-2 by AgNPs is the induction of mitochondrial dysfunction and energy depletion in CSCs, which in turn unbalances oxidant and antioxidant levels in the cells. Inhibition of Bcl-2 and Bcl-xl by ABT-737 in tyrosine kinase inhibitor (TKI)-resistant blast-crisis (BC) chronic myeloid leukemia (CML) promotes apoptosis in quiescent CD34+ CML stem cells [45]. In addition to suppression of anti-apoptotic Bcl-2 family members, activation of pro-apoptotic Bax is required for apoptosis through the mitochondria; for example, a berberine liposome induces apoptosis via down-regulation of Bcl-2 and up-regulation of Bax in colon CSCs [46]. The loss of mt-MP may promote the release of cytochrome c and the activation of mitochondria-derived caspases. The results of our experiments suggest that AgNPs up-regulate the expression of p53 in bulk cells and of caspase-3 in the ALDH+/CD133+ subpopulation. Similarly, 20(S)-ginsenoside Rg3 inhibits the proliferation of colon CSCs and induces apoptosis through caspase-9 and caspase-3 pathways.
AgNPs Characterization
AgNPs were obtained from Nano High Tech (Seoul, Korea) as a clear colloidal aqueous suspension at a concentration of 1 mg/mL. The AgNPs were primarily characterized by ultraviolet-visible (UV-VIS) spectroscopy; UV-VIS spectra were recorded using an OPTIZEN POP spectrophotometer (Mechasys, Seoul, Korea), and all other characterization was performed as described previously [24].
Cell Culture and Exposure to AgNPs
The A2780 cell line was kindly provided by Prof. Ronald Buckanovich, Division of Gynecologic Oncology, Department of Obstetrics and Gynecology, University of Michigan Medical Center, Ann Arbor, MI, USA. The cells were cultured in RPMI-1640 supplemented with 10% fetal bovine serum (FBS), 100 U/mL penicillin, and 100 µg/mL streptomycin, and were maintained at 37 °C in an incubator with humidified air containing 5% CO2. Attached cells were fully disaggregated by trypsinization between passages. For treatments, the culture medium was replaced with medium containing AgNPs at the specified concentrations (0-10,000 ng/mL); after incubation for an additional 24 h, the cells were collected and analyzed for cell viability and the other cytotoxicity assays.
Flow Cytometry Analysis and Fluorescence-Activated Cell Sorting (FACS)
FACS was performed according to the method described previously [10], with suitable modifications. Single-cell suspensions were counted and incubated with CD133 primary antibodies, and ALDH enzymatic activity was then defined using the ALDEFLUOR kit according to the protocol (Stem Cell Technologies, Vancouver, BC, Canada). For each sample, half of the cell/substrate mixture was treated with 50 mmol/L diethylaminobenzaldehyde (DEAB), and the cells were incubated for 45 min. Gating for viability was established using propidium iodide (PI) exclusion, and ALDEFLUOR/DEAB-treated cells were used to define the negative gates. FACS was performed with ≥1 × 10⁵ cells using a BD FACSCanto II (Becton Dickinson, Franklin Lakes, NJ, USA) or FACSAria (Becton Dickinson) under low pressure in the absence of UV light. In all experiments, the ALDEFLUOR-stained cells treated with DEAB served as ALDH-negative controls. The ALDH+/CD133+, ALDH-/CD133+, ALDH+/CD133-, and ALDH-/CD133- subpopulations were separated from the A2780 ovarian cancer cells on a FACSAria (Becton Dickinson). After sorting, all cell subpopulations were cultured in RPMI-1640 basic culture medium for 2 h; the cells were then treated with different nanomaterials, such as GO (50 µg/mL), rGO (20 µg/mL), rGO-Ag nanocomposite (10 µg/mL), and AgNPs (15 µg/mL), for 24 h.
Cell Viability (CCK-8 Assay)
The CCK-8 assay was performed according to the method described previously [26,31]. Cells were seeded in a 96-well plate, cultured in DMEM supplemented with 10% FBS for 24 h, and then incubated with various concentrations of AgNPs for 24 h.
Mitochondrial Transmembrane Potential Assay (JC-1)
The mt-MP assay was performed following the manufacturer's protocol. Cells were treated with 1000 ng/mL AgNPs, and mt-MP was then measured using the cationic fluorescent indicator JC-1 (Molecular Probes, Eugene, OR, USA).
Clonogenic Assay
The clonogenic assay was performed as previously described, with modifications [47]. A2780 whole cells and sorted cells were plated on 48-well plates at a density of 100 cells per well and allowed to adhere for 18 h. AgNPs (1000 ng/mL) were then added to each well, and the plates were incubated for a maximum of 14 days at 37 °C. Colonies containing at least 50 cells were counted manually. All data are expressed relative to the control.
RT-PCR Analysis
Total RNA was extracted from cells treated with AgNPs using an Arcturus PicoPure RNA isolation kit (ABioscience, San Diego, CA, USA) according to the manufacturer's instructions. RNA was reverse transcribed into cDNA using a Reverse Transcription Kit (Roche) in a final volume of 20 µL according to the manufacturer's instructions. Quantification of the gene transcripts (caspase-3, p53, c-myc, bax, bak, bcl-xl, and bcl-2) was carried out in three replicates by real-time reverse transcriptase quantitative polymerase chain reaction (RT-qPCR) on a LightCycler apparatus using LightCycler® FastStart DNA Master SYBR Green I with an ABI Applied Biosystems machine. The primer sequences for each gene are shown in Table 1. Relative gene expression was quantified and analyzed by the 2−ΔΔCt method. In all experiments, GAPDH mRNA was used as the internal standard.
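The 2−ΔΔCt calculation named above is simple enough to spell out. The sketch below implements it with GAPDH as the internal standard; the Ct values are hypothetical placeholders for illustration, not measurements from this study.

```python
# A minimal sketch of the 2^-ddCt relative-expression calculation, with GAPDH
# as the internal standard; the Ct values are hypothetical placeholders.
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Fold change of a target gene (AgNPs-treated vs. untreated) by 2^-ddCt."""
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_treated - d_ct_control))

# e.g., an up-regulated pro-apoptotic gene such as bax (placeholder Ct values)
print(f"bax fold change: {fold_change(24.1, 18.0, 26.0, 18.1):.2f}")
```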
Conclusions
Sequential self-renewal and differentiation of cancer stem cells are responsible for tumor recurrence after radiation or chemotherapy, and current therapies fail to eliminate CSCs. The cytotoxic effect of AgNPs on ovarian cancer stem cells is an unexplored area, and studying it can shed light both on mechanisms of toxicity and on potential therapeutic agents. In the present study, we selected A2780 cells as the model for isolating CSCs. A2780 cells are known to express aldehyde dehydrogenase (ALDH) activity, a reported CSC marker in several solid tumors including ovarian cancer; a small number of ALDH+ cells is capable of tumor initiation and propagation, and these cells generate tumors that recapitulate the original tumor cell composition. In addition, A2780 cells are poorly differentiated, highly tumorigenic, and heterogeneous, with certain phenotypic subsets showing CSC-like properties, compared with other ovarian cancer cell lines; they also demonstrate resistance to chemotherapy and increased angiogenic capacity [10]. Choi et al. [12] reported that the ALDH+/CD133+ subpopulation of cells can generate all four ALDH+/- CD133+/- populations within a clear branched differentiation hierarchy. We therefore selected A2780 ovarian cancer cells to investigate the effect of AgNPs on both bulk cells (A2780) and the CSCs derived from them. The viability assays showed that AgNPs elicited different responses in A2780 ovarian cancer cells and in the four CSC subpopulations derived from them, and the cytotoxicity assays clearly indicated that ALDH+/CD133+ cells are more sensitive than bulk cells. The evidence gained from the cytotoxicity assays supports the view that the interaction of nanoparticles with the cell membrane triggers ROS generation and oxidative stress, alterations in metabolic pathways, and apoptosis. Treatment of cells with AgNPs had a significant effect on cell viability, LDH leakage, ROS generation, and loss of mt-MP; of the two cell types, OvCSCs appear more sensitive than bulk cells. Further evidence shows that AgNPs inhibited colony formation in both bulk cells and OvCSCs, with the more severe effect observed in OvCSCs. The results indicate that AgNPs can be used to target ALDH+/CD133+ cells specifically, providing a possible approach for cancer therapy without side effects. This is the first study providing evidence for specific targeting of the ALDH+/CD133+ subpopulation of CSCs by AgNPs and for the differential regulation of apoptosis by AgNPs in bulk cells and ALDH+/CD133+ cells. However, the mechanism underlying the sensitivity of ALDH+/CD133+ cells is still unknown; further studies are warranted and should focus on CSC-specific signaling pathways, surface markers, and mechanisms of apoptosis.
Development and Evaluation of In Situ Gel Formation for Treatment of Mouth Ulcer
Objectives: Mouth ulcers are among the most prevalent oral conditions and can be caused by a range of circumstances. Many formulations, such as solutions, suspensions, and ointments, are available commercially. However, because their effect is not long-lasting, no medication can be regarded as totally effective for treating mouth ulcers. The use of bioadhesive approaches can boost therapeutic efficacy, and the sol-to-gel conversion is advantageous because a solution is easier to administer than a pre-formed gel. The major goal of this study was to develop and test in situ gels for treating mouth ulcers using choline salicylate and borax as model medicines. Materials and Methods: Because a thermosensitive polymer was employed in this formulation, the sol-to-gel change was thermally reversible, and the frequency of administration was reduced by using the mucoadhesive polymer carbopol. Gelation temperature, pH, gel strength, spreadability, in vitro mucoadhesion, and in vitro drug release were measured for all formulations. Results: The experiments indicated that the viscosity of the sols and the gel strength increased with increasing temperature, i.e., the gel can be created at the site of application owing to body temperature. When poloxamer 407 was used at a concentration of 14 to 16 percent w/v, the gelling temperature was close to body temperature (35-38 °C), but the gelling temperature rose when carbopol 934P was added. All formulations had a pH between 5.5 and 6.8 and viscosities of less than 1000 cps, allowing simple administration of the formulation to a mouth ulcer. Conclusion: A correctly developed in situ gel for oral ulcers can therefore extend the residence time at the application site and minimize the frequency of administration. These findings show that the developed system is a viable alternative to traditional drug delivery systems and can improve patient compliance.
INTRODUCTION
Among the numerous routes of administration employed in new drug delivery systems, localized drug delivery to oral cavity tissues has been examined for the treatment of periodontal diseases, bacterial and fungal infections, aphthous ulcers, and other disorders.1 The oral mucosa is the "skin" that covers most of the mouth cavity besides the teeth. It serves a multitude of purposes, its main one being to act as a barrier:2 it protects deeper tissues such as fat, muscle, nerves, and blood vessels from mechanical trauma such as chewing. Oral mucosal disease is among the most common diseases affecting people. Mouth ulcers are painful round or oval sores that develop in the mouth, usually on the inside of the cheeks or lips.
Mouth ulcers are also called recurrent aphthous stomatitis (RAS), aphthae, aphthosis, and canker sores. The word aphthous is derived from the Greek word "aphtha", which signifies an ulcer; despite the redundancy, these oral sores are still referred to as aphthous ulcers in the medical literature.3 RAS has an etiology that is unknown or unclear.4 Idiopathic RAS, rather than being a singular entity, may be the presentation of several illnesses with quite distinct etiologies. Nutritional deficiencies (such as of iron and vitamins, especially B12 and C), poor dental hygiene, infections, stress, indigestion, mechanical injury, food allergies, hormonal imbalance, and skin illness are all common causes of mouth ulcers. Hematinic deficits and blood disorders, gastrointestinal disorders, and immune deficiencies (such as in people with human immunodeficiency virus infection or neutropenia) may also predispose to RAS, as may microbial illness and the chronic prescription of nonsteroidal anti-inflammatory drugs, alendronate, nicorandil, and other cytotoxic drugs. In some circumstances, quitting smoking might trigger or worsen RAS.4,5

Various topical therapy techniques can be used to treat mouth ulcers effectively. However, problems arise from the drug's short retention time at the site, which may be the cause of limited therapeutic efficacy and should be addressed.5,6

Advantages of in situ-forming polymeric drug delivery systems, such as ease of administration and better patient comfort, have attracted interest. These systems increase the residence time at the application site, and such deformable dosage forms have fewer adverse effects than other dosage forms because they conform to the contour of the surface on which they are placed. In situ-forming polymeric formulations are drug delivery systems that are in sol form before administration but gel in situ after delivery. Recent advances in polymer chemistry and hydrogel engineering have facilitated the development of in situ-forming hydrogels for drug delivery applications. In situ gels have the properties of linear polymer solutions outside the body, allowing easy administration, but they gel within the body, producing prolonged drug-release patterns. Both physical and chemical crosslinking techniques have been used to accomplish in situ gelation; hydrogel precursor solutions can be injected and then polymerized in situ through intelligent design of monomers/macromers with the desired functionalities. Thanks to the in situ sol-gel transition, administration can be completed with a minimum of invasiveness.7

Choline salicylate (ChS), the medication employed in this study, is an analgesic; by acting locally on oral mucosal cells, it reduces pain severity.8 Commercially available ChS gel provides pain relief, but only for a short time, since it can be washed away from the site by salivation and tongue movement, and accidental swallowing causes adverse effects such as stomach ulcers and increased blood drug concentrations. It is therefore necessary to investigate formulations that enhance the drug residence time and availability at the application site. Borax is a homeopathic medication with antibacterial properties that has been used to treat mouth ulcers since ancient times; it also keeps the oral mucosa dry, allowing the ulcer to heal more quickly. As a result, it can serve both to treat the mouth ulcer and as a preservative in the formulation.9
An attempt was made to develop a thermo-reversible in situ gel containing ChS and borax for treating mouth ulcers, to evaluate the formulation for various parameters, and to investigate the effect of the formulation on residence time, gelling temperature, and the polymers' mucoadhesive properties. Poloxamer 407 and carbopol 934P were employed as polymers: poloxamer 407 acts as a temperature-sensitive gelling agent, while carbopol 934P is a pH-sensitive mucoadhesive polymer.10
Objective
The main goal of this research is to develop and evaluate a thermoreversible in situ gel for treating mouth ulcers and to find the best formula for improving patient compliance.
MATERIALS AND METHODS
This study certifies that the project titled "Development and evaluation of in situ gel formation for treatment of mouth ulcer" was approved by the IAEC at Appasaheb Birnale College of Pharmacy, Sangli (reference no: IAEC/ABCP/13/2015-16), issued on 07-11-2015.
ChS solution BP was obtained from Shreenath Chemicals, Bhoisar, Mumbai. Poloxamer 407 (PF127) was purchased from Sahyadri Chemicals, Islampur, Maharashtra, and carbopol 934P was provided as a gift sample by Corel Pharma Chem, Ahmedabad. Borax was obtained from Raj Chemicals, Mumbai, and sodium hydroxide, methanol, ferric chloride, hydrochloric acid, and acetic acid were obtained from S.D. Fine-Chem Limited, Mumbai. All other materials used were of analytical grade.
Software required for research work
Design Expert software (Stat-Ease, Inc.) was used for the research work.
Analytical UV-visible method development and validation
A simple UV-visible spectroscopic method was developed for ChS by following the procedure given below.
Preparation of stock solution I
Since ChS solution BP contains 50% ChS, 2 mL (1000 mg) of ChS solution BP was mixed with 100 mL of phosphate-buffered saline (PBS) of pH 6.8 to obtain 10 mg/mL; this was further diluted to obtain a 100 µg/mL drug concentration.
Aliquots of 1, 2, 3, 4, and 5 mL were withdrawn from stock solution I (100 µg/mL) and diluted to 10 mL with PBS pH 6.8 in 10 mL volumetric flasks to obtain drug concentrations of 10, 20, 30, 40, and 50 µg/mL. The absorbance was measured at 238 nm using PBS of pH 6.8 as the blank.
The method was validated according to International Council for Harmonisation (ICH) guidelines for parameters including accuracy, precision, limit of quantification (LOQ), limit of detection (LOD), and percent relative standard deviation (RSD).
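For concreteness, the linearity and sensitivity calculations behind such a validation can be sketched as below. Only the wavelength (238 nm) and the 10-50 µg/mL range come from the text; the absorbance values are placeholders, and LOD/LOQ use the standard ICH expressions 3.3σ/S and 10σ/S.

```python
# A minimal sketch of the calibration fit and ICH sensitivity estimates
# (LOD = 3.3*sigma/S, LOQ = 10*sigma/S). Only the wavelength (238 nm) and the
# 10-50 ug/mL range come from the text; the absorbances are placeholders.
import numpy as np

conc = np.array([10, 20, 30, 40, 50], dtype=float)            # ug/mL
absorbance = np.array([0.112, 0.221, 0.335, 0.441, 0.553])    # at 238 nm

slope, intercept = np.polyfit(conc, absorbance, 1)
residuals = absorbance - (slope * conc + intercept)
sigma = residuals.std(ddof=2)          # SD of the regression residuals

print(f"slope = {slope:.4f} AU/(ug/mL)")
print(f"LOD = {3.3 * sigma / slope:.2f} ug/mL, LOQ = {10 * sigma / slope:.2f} ug/mL")
```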
Formulation of in situ gel
Preparation and optimization of thermo-reversible PF 127 aqueous solution11,12

The gel was prepared using the cold technique. Poloxamer solutions at concentrations from 10% to 20% (w/v) were prepared by dissolving the polymer in 50 mL of distilled water at temperatures below 5 °C. To guarantee complete dissolution of the polymer, the solutions were stored in a refrigerator for 24 h. The gelation temperature of each concentration was then determined by visual inspection: a beaker holding 20 mL of cold poloxamer solution was kept in a water bath, a magnetic bead was placed in the beaker, and a calibrated thermometer was suspended so that its tip was in the solution without touching the beaker floor or disturbing the bead's spin. The system was agitated at 100 rpm with a magnetic stirrer while the temperature was allowed to rise at a rate of 2 °C/min, and the gelation temperature was recorded when the magnetic bead stopped rotating due to gel formation. Concentrations that gelled close to body temperature (35-37 °C) were chosen for further optimization with the other components.
Optimization of other ingredients with PF 127 concentration
The effect of other ingredients on the gelling temperature of poloxamer solution was studied.
Effect of carbopol 934P on gelling temperature
Carbopol 934P was prepared at concentrations ranging from 0.1 to 0.5% (w/v): a weighed amount of polymer was combined with a small amount of water and allowed to swell overnight. These carbopol solutions were mixed with the poloxamer solution using a magnetic stirrer, and the gelation temperature was recorded.

a. Effect of other ingredients on the gelation temperature of the poloxamer 407 and carbopol 934P mixture: weighed quantities of the drug and the other ingredients were mixed into the solution containing poloxamer 407 and carbopol 934P, and changes in gelation temperature were noted.
b. Formulation of batches based on design of experiment:
Depending on the gelation temperature at or near body temperature, the concentrations were optimized and the experiment was designed using a 2² factorial design.
Selection of independent variables
The gelation temperature of the in situ gel at body temperature depends on the concentrations of both polymers. The independent variables, the concentrations of the two polymers, were therefore selected on the basis of gelation temperature and mucoadhesive properties, with the low level coded as -1 and the high level as +1 (Table 1).
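A minimal sketch of the resulting 2² layout is shown below. The true low/high levels are those of Table 1, which is not reproduced here, so the numeric concentrations in the sketch are placeholders for illustration only.

```python
# A minimal sketch of the 2^2 full factorial layout behind batches F1-F4.
# The true low/high levels are those of Table 1 (not reproduced here); the
# percentages below are placeholders for illustration only.
from itertools import product

levels = {  # hypothetical actual values, % w/v
    "poloxamer 407 (X1)": {-1: 16.0, +1: 18.0},
    "carbopol 934P (X2)": {-1: 0.1, +1: 0.3},
}

for run, (x1, x2) in enumerate(product((-1, +1), repeat=2), start=1):
    p407 = levels["poloxamer 407 (X1)"][x1]
    c934 = levels["carbopol 934P (X2)"][x2]
    print(f"run {run}: X1 = {x1:+d} ({p407}% w/v), X2 = {x2:+d} ({c934}% w/v)")
```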
Evaluation of formulation
Prepared batches of the formulation were evaluated for the following parameters. Appearance: the prepared gel was visually inspected under light against white and black backgrounds for clarity.
pH of the gel: a digital glass-electrode pH meter was used to measure the pH of the gel by placing the electrode directly into the gel.13

Gelation temperature: a beaker holding 20 mL of the formulation in its cold solution form was kept in a water bath. A magnetic bead was placed in the beaker, and a calibrated thermometer was suspended so that its tip was in the solution without touching the beaker floor or disturbing the bead's spin. The temperature was allowed to rise at a rate of 2 °C/min while the system was agitated at 100 rpm.16-18

Thermoreversible study: the thermoreversibility investigation was conducted using a constant-temperature bath in which the in situ gel compositions were kept, with the instrument adjusted to a temperature of 4-5 °C.18 The temperature was allowed to decline until the gel transformed into a sol, and the viscosity was recorded as a function of temperature.
The viscosity of all prepared formulations was measured using a Brookfield viscometer (Brookfield viscometer RTV) with spindle no. 62 at a speed of 10 rpm. The rheological properties were also studied by measuring the viscosity of all formulations at speeds of 10, 50, and 100 rpm with spindle no. 62, and the shear rate (s⁻¹) was calculated for each speed using the following formula:
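The instrument-specific conversion formula itself is not reproduced above. Independent of it, the shear-thinning behaviour reported later can be checked from viscosity-versus-speed readings alone, as in the sketch below: for a power-law fluid the apparent viscosity scales as rate^(n-1), and shear rate is proportional to spindle speed, so a log-log slope below zero means n < 1 (pseudoplastic). The viscosity readings are hypothetical placeholders.

```python
# A minimal sketch checking shear-thinning from viscosity-vs-speed readings.
# For a power-law fluid, apparent viscosity ~ K * rate^(n-1); shear rate is
# proportional to spindle speed, so the log-log slope equals (n - 1).
# The viscosity readings below are hypothetical placeholders.
import numpy as np

rpm = np.array([10.0, 50.0, 100.0])
viscosity_cps = np.array([850.0, 420.0, 300.0])

slope, _ = np.polyfit(np.log(rpm), np.log(viscosity_cps), 1)
n = slope + 1.0  # flow-behaviour index
print(f"flow-behaviour index n = {n:.2f} -> {'pseudoplastic' if n < 1 else 'not shear-thinning'}")
```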
Drug content
The percentage ChS BP content was determined by dissolving 0.5 g of the gel in 100 mL of pH 6.8 PBS and scanning the resulting solution with a UV-visible spectrophotometer set to 238 nm; the calibration curve was used to calculate the drug content.12,17,18

Determination of mucoadhesive force

The mucoadhesive force was determined according to the description of Desai and Shirsand (2018).20 The assembly, which involved two glass vials, was made in-house: one vial is hung in a downward position, while the other is placed on the floor in an upward position; the upper vial is fastened to one end of a thread, and a pan is tied to the other end.14,18 A piece of goat buccal tissue was glued to both glass vials with the mucosal side facing out, and the vials were kept at 37 °C for 10-15 min before the test. About 1 g of gel was applied to the lower vial, the upper vial was brought into contact, and 1 g of weight was added to the pan. The weight was gradually increased until the two vials separated, and the mucoadhesive force (g) was taken as the smallest weight that could separate them. The bioadhesive force was then determined using the equation below.
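The equation itself did not survive in the text as reproduced here. The form commonly used with this two-vial assembly, and assumed in the sketch below, expresses the detachment stress as m·g/A; both the formula and the numeric inputs are assumptions for illustration.

```python
# Assumed form of the referenced equation (commonly used with this two-vial
# assembly, not quoted from the paper): detachment stress (dyne/cm^2) =
# m * g / A, where m is the minimum detaching mass (g), g = 980 cm/s^2,
# and A is the contact area (cm^2).
def detachment_stress(mass_g, area_cm2, g=980.0):
    """Mucoadhesive (detachment) stress in dyne/cm^2."""
    return mass_g * g / area_cm2

# hypothetical reading: 12 g detaches the vials over a 2.5 cm^2 mucosa patch
print(f"{detachment_stress(12.0, 2.5):.0f} dyne/cm^2")
```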
In vitro drug release study
A Franz diffusion cell was used to conduct an in vitro drug (ChS BP) release study of the in situ gel. In the donor compartment, 1 mL of formulation F3 (equal to 1 g of gel) was deposited, and the receptor compartment was filled with freshly prepared PBS (pH 6.8); a cellophane membrane was fitted between the chambers. One cell, used as a blank, was filled with PBS solution only. The units were then placed on a magnetic stirrer with a thermostat, and the medium was maintained at a constant temperature of 37 ± 0.5 °C. At each 1 h interval, 1 mL of sample was withdrawn and the same amount of PBS from the blank was transferred into the sample cell to maintain sink conditions. The withdrawn sample was diluted to 10 mL with PBS pH 6.8, and the concentration of ChS BP was measured using a UV-visible spectrophotometer at 238 nm with PBS pH 6.8 as the blank. The calibration curve was used to determine the percent cumulative ChS BP release, and the release data were fitted to kinetic models, including the Korsmeyer-Peppas model, to identify the best-fit model and the (Fickian or non-Fickian) diffusion mechanism.15,18
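One detail worth making explicit is the sampling correction: because 1 mL is withdrawn and replaced each hour, the drug removed in earlier samples must be added back when computing cumulative release. A minimal sketch follows; the receptor volume, dose, and hourly concentrations are hypothetical placeholders.

```python
# A minimal sketch of the cumulative-release bookkeeping for a Franz cell in
# which 1 mL is sampled and replaced each hour. The receptor volume, dose,
# and hourly concentrations are hypothetical placeholders.
def cumulative_release_pct(concs_ug_ml, v_receptor_ml, v_sample_ml, dose_ug):
    """Percent cumulative release, correcting for drug removed at each sampling."""
    released, withdrawn_ug = [], 0.0
    for c in concs_ug_ml:
        amount_ug = c * v_receptor_ml + withdrawn_ug  # in cell + already withdrawn
        released.append(100.0 * amount_ug / dose_ug)
        withdrawn_ug += c * v_sample_ml               # removed with this sample
    return released

hourly_conc = [150, 290, 420, 530, 610]  # ug/mL at 1..5 h (placeholders)
print(cumulative_release_pct(hourly_conc, v_receptor_ml=20.0, v_sample_ml=1.0, dose_ug=20000.0))
```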
Drug diffusion kinetic study
The in vitro release data of the formulations were evaluated kinetically to determine the drug-release kinetics. Microsoft Excel 2013 was used to fit the models: zero-order, first-order, Higuchi, and Korsmeyer-Peppas models were investigated, and the best-fit model was chosen on the basis of its comparatively high correlation coefficient.18
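The same model comparison can be reproduced outside a spreadsheet with a few linearized regressions, as sketched below. The release values are placeholders, and the Korsmeyer-Peppas fit should, as usual, be restricted to roughly the first 60% of release.

```python
# A minimal sketch of the spreadsheet model comparison: zero-order, first-order,
# Higuchi, and Korsmeyer-Peppas fits by linear regression. Release values are
# placeholders; the Peppas fit is normally restricted to the first ~60% release.
import numpy as np

t = np.array([1, 2, 3, 4, 5], dtype=float)       # h
q = np.array([39, 58, 72, 84, 91], dtype=float)  # % cumulative release

def linfit(x, y):
    slope, icept = np.polyfit(x, y, 1)
    pred = slope * x + icept
    r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return slope, r2

print("zero order  r2 = %.3f" % linfit(t, q)[1])
print("first order r2 = %.3f" % linfit(t, np.log(100 - q))[1])
print("Higuchi     r2 = %.3f" % linfit(np.sqrt(t), q)[1])
n, r2_kp = linfit(np.log(t), np.log(q))          # slope of the log-log plot = n
print("Korsmeyer-Peppas n = %.2f (r2 = %.3f)" % (n, r2_kp))
```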
Statistical optimization of in situ gel formulation
Gelation temperature, gel viscosity, drug diffusion at 1 h, and the time required for 90% drug diffusion are the major variables governing the performance of the prepared in situ gel formulation. Gel formation at oral temperature is fundamental to the prepared in situ gel. Drug release from the gel is inversely proportional to the viscosity of the gel; viscosity is therefore a major variable to consider during the design of in situ gel formulations. Salivation in the oral cavity restricts the sustained release of gel formulations, since the gel may be washed out with saliva; thus, drug release at 1 h and the time required for 90% drug release must be considered. Both factors help to decide the dosing frequency of the formulation.
For statistical optimization of the in situ gel, the following criteria for selecting a suitable feasible region were decided (Table 2).
Antimicrobial test
An antimicrobial study was conducted to assess the antibacterial activity of the medication borax and to determine whether the formulation had sufficient antimicrobial properties. The test was conducted using the well diffusion method against Gram-negative (Escherichia coli) and Gram-positive (Staphylococcus aureus) bacteria.
MacConkey agar at 5% (w/v) for E. coli and mannitol agar at 11.1% (w/v) for S. aureus were prepared and sterilized. The liquid was then poured into sterile glass plates and allowed to set, and after solidification the bacterial strains were spread aseptically over the agar. Each agar plate had three wells: one for the test formulation (F3), one for the standard (ZYTEE), and one for the plain borax solution. The samples were placed in the wells and kept in the refrigerator for 15-20 min to allow the materials to diffuse into the agar. The plates were then incubated at 37 °C for 24 h, and the zone of inhibition was assessed after the incubation period.13,15,16

Animal model study

This study examined how the produced formulation affected the healing of an oral ulcer in rats. Fifteen healthy female Wistar albino rats (weighing 130-150 g) were chosen and separated into three groups of five animals each. Before anaesthesia, a filter paper disc 5 mm in diameter soaked in 50% acetic acid was placed on the tongue of each rat for 60 s to form a circular ulcer. The test group received the optimized formulation (F3), the standard group received ZYTEE gel (a commercial ChS product), and the control group received no treatment.21
RESULTS AND DISCUSSION

Analytical UV-visible method development and validation
The λmax of ChS in PBS pH 6.8 was found to be 238 nm. The drug follows linearity in the concentration range of 10-50 µg/mL, with a correlation coefficient of 0.9903 (Table 3). The accuracy of the method was checked by recovery experiments performed at three levels, i.e., 80%, 100%, and 120%; the percentage recovery was found to be in the range of 98.54-99.98%, and the low %RSD values indicate the accuracy and reproducibility of the method. The precision of the method was studied as intraday and interday variation and repeatability; %RSD values <2 indicate that the method is precise (Table 3). The ruggedness of the proposed method was studied with the help of two analysts.
Formulation of in situ gel
Preparation and optimization of thermo-reversible PF 127 aqueous solution: solutions of poloxamer 407 at concentrations of 10% w/v to 20% w/v were prepared in distilled water, and the gelation temperatures of the solutions were found as depicted in Table 4. Concentrations of 15% (w/v) to 20% (w/v) were considered optimum for the formulation.
Optimization of other ingredients with PF 127 concentration
Effect of carbopol 934P on gelling temperature: solutions at the optimum poloxamer concentrations were mixed with 0.1% (w/v) carbopol solution, and the gelling temperatures observed are shown in Table 5.
An increase in gelling temperature was observed on addition of carbopol 934P; the poloxamer concentration was therefore increased so that the gel would form near body temperature. The observed gelation temperatures are given in Table 6.
Effect of other ingredients on the gelation temperature of the poloxamer and carbopol 934P mixture: the other ingredients, namely the drug ChS (8%), borax (1%), and propylene glycol, were added to the poloxamer 407 and carbopol 934P solutions and the gelling temperatures were observed (Table 7); there was no significant difference upon the addition of the other ingredients.
The formulation of batches based on the design of experiment
Different formulation batches, F1 to F4, were prepared based on the design of experiment using a 2² factorial design (Table 7).
Evaluation of formulation
Appearance: in both the solution and gel forms, all formulations were found to be clear and transparent. A clear, translucent gel formed on a mouth ulcer will increase patient compliance because it mimics the natural oral mucosa, allowing daytime application.
pH of the gel: the pH of all formulations was found to be between 5.5 and 6.8 (Table 10). To avoid irritating the mucosa and further damaging the ulcer, the pH of a formulation intended to treat mouth ulcers must be close to neutral; in general, any formulation applied to the mucosa should have a pH of 4.5 to 7.
Gelation temperature: the temperature at which the solution form of the formulation transforms entirely into a semisolid is known as the gelation temperature, and it is the most important requirement for an in situ gel formulation. An in situ gel formulation for oral ulcers should change quickly from sol to gel close to body temperature (37 °C), and the resulting gel should not erode or dissolve. The gelling temperatures of the prepared mixtures were determined to be between 34 and 38 °C (Table 10).
The gelling temperature and gel integrity, on the other hand, are determined mostly by the polymer content. Formulation F2 formed the weakest gel, at 38 °C, whereas formulation F1 generated a strong gel at 35 °C. This may be because the F2 formulation had the lower concentration of both polymers, while the F1 formulation had the larger concentration of both polymers.
From the observed gelling temperatures, it can be concluded that the concentration of poloxamer 407 had a proportional effect on the gelling temperature, whereas the gelling temperature increased when carbopol 934P was added, in direct proportion to the carbopol 934P concentration.
Thermoreversible study: just as an increase in temperature causes the sol-to-gel phase transition in the in situ gel formulation, a decrease in temperature causes the gel-to-sol phase transition; the process is the polar opposite of the sol-gel mechanism. As the temperature rises, the micelles generated at the CMC come into contact with one another, resulting in polymerization and thus gel formation. As the temperature drops, micelle packing and micelle entanglement diminish and the network breaks down: the gel form of the formulation begins to transform into a solution and, at a certain point, is totally transformed into a solution.
The temperature at which the gel reverts to a sol is known as the gel-to-sol temperature.
The gelation phenomenon is aided by a mechanism based on micelle packing and entanglement, as well as by conformational changes in the orientation of the methyl groups in the side chains of the poly(oxypropylene) chains constituting the micelle core and by the expulsion of the hydrating water from the micelle.
It was discovered from the phase diagram in Figure 1 that, when the polymer concentration increased, the gelation temperature decreased, while the sol temperature increased.
In comparison with the other formulations, formulation F1 comprises a larger concentration of polymers, resulting in lower gelation and solution temperatures. Similarly, formulation F2 has the lowest polymer concentration, so it takes more heat to create a gel; compared with the other formulations, however, it converts to the sol form quickly and at high temperature.
As can be seen from the phase diagram (Figure 2), the smallest polymer concentration has the highest gelation temperature and a low sol temperature. The micelles created from the smallest amount of polymer were unstable, and breaking the hydrogen bonds formed during temperature-induced aggregation required the least energy; the energy needed to break these bonds is provided by external heat.
Viscosity and rheological properties: this is one of the most significant requirements for an in situ gel formulation. To remain for a long time at the site of application, an in situ gel formulation should have a viscosity of more than 100 cps when it is applied, and of less than 1000 cps so that it remains easy to administer before it converts to the gel after administration.
The viscosity of formulations F1, F2, F3, and F4 was found to be polymer-concentration dependent: viscosity followed the order F1 > F3 > F4 > F2, increasing with the concentrations of poloxamer 407 and carbopol 934P. Table 10 provides the viscosities (centipoise) of the prepared formulations, and Figures 3a and 3b display the shear rate (s⁻¹) and shear stress (dyne/cm²) of all batches.
Viscosity was found to vary with the shearing rate; that is, the ratio of shear stress to shear rate was not constant, and viscosity dropped as the shear rate increased. The prepared in situ gel is therefore a non-Newtonian fluid, and its shear-thinning behaviour shows that it is pseudoplastic in nature.
Drug content: as stated in Table 10, the percent ChS BP content of all formulations was in the range of 98 to 100%. The small discrepancies in drug content may be attributable to human error during dilution or to losses during preparation of the formulation.
Determination of mucoadhesive force:
Mucoadhesion is an interfacial phenomenon involving two materials, one of which is the mucus layer of the mucosal tissue, to which the medication is held for a prolonged time by interfacial forces. The stronger the mucoadhesive force, the longer the retention time.
Various studies have shown that the polyoxyethylene groups in poloxamer 407 are responsible for its mucoadhesion via hydrogen bonding; when it forms a gel, however, the cross-linking between poloxamer 407 chains increases, rendering the polyoxyethylene groups unavailable for mucoadhesion. According to the diffusion-interlocking hypothesis, as crosslink density rises, chain mobility falls, and the effective chain length that can penetrate the mucus layer therefore falls, lowering the mucoadhesive strength. The addition of carbopol 934P thus leads to an increase in mucoadhesion. Carbopol is a synthetic mucoadhesive agent that adheres to the mucosa through its -COOH groups. Formulations F3 and F4 contain higher concentrations of carbopol and show strong bioadhesion compared with the other formulations (Table 10).
In vitro diffusion study: an in vitro diffusion study was conducted using a Franz diffusion cell with a pore size of 40 µm and a cellophane membrane. Figure 4 displays the percentage cumulative ChS BP diffusion obtained for all formulations. Formulation F2 showed the fastest diffusion, while formulation F1 showed the slowest diffusion from the gel: in the case of F2, 90% of the drug had diffused by 3.5 h, whereas in the case of F1 only 80% of the drug had diffused by the 5th hour. This may be because F2 had the lower concentration of both polymers, while F1 had the higher concentration of both polymers.
In general, the drug diffusion rate decreases as the crosslinking of the polymer in a formulation such as a gel increases. Based on the findings, it can be concluded that as the polymer concentration grew, the drug diffusion rate decreased; the diffusion of the drug is thus a polymer-concentration-dependent process. The aim was to prepare an in situ gel exhibiting 40% drug release after 1 h and 90% drug release after 4 h; the F1 formulation was therefore not determined to be optimum (Figure 4).
Diffusion kinetic study: according to the data from the diffusion studies, the prepared in situ gel showed substantial initial drug release (a burst effect), which then decreased as gelation progressed. This biphasic pattern is a common feature of matrix diffusion kinetics. As the concentration of polymer grew, the initial burst effect decreased, as in the case of F1, which contains high concentrations of both polymers (Table 11).
The Korsmeyer-Peppas model is commonly used to confirm the drug-release mechanism from a matrix; the "n" value (the release exponent of the Korsmeyer-Peppas model) characterizes the release mechanism. For each formulation, a graph of log CDR versus log time was plotted to determine the diffusion mechanism of the created in situ gel according to the Korsmeyer-Peppas model. For all formulations, the correlation coefficients of the straight lines were in the range of 0.954 to 0.992.
The n value was recorded for all formulations and used to judge the diffusion mechanism of the formulations. Since n values of 0.7 and 0.57 were found, formulations F1 and F4 follow an anomalous, non-Fickian diffusion mechanism; with n values of 0.43 and 0.48, respectively, F2 and F3 followed a quasi-Fickian diffusion mechanism (Table 11).
The dissolution data were also fitted to the Higuchi model to see whether the drug release was diffusion controlled. For each formulation, a graph of percentage CDR versus the square root of time was drawn; the correlation coefficients of all the straight lines were in the range of 0.943 to 0.996. As a result, all formulations followed Higuchi's diffusion model (Table 11).
Statistical optimization of in situ gel formulation: the primary process-parameter analyses revealed that the components poloxamer 407 (X1) and carbopol 934P (X2) had a substantial impact on gelation temperature, viscosity, drug diffusion, and the time required for 90% drug diffusion. These two variables were therefore used in the subsequent statistical optimization. All dependent variables were recorded for all four formulation batches.
The software Stat-Ease Design Expert 10 was used to derive conclusions based on the magnitude of each coefficient and the mathematical sign (positive or negative) it carried.
Optimization of polymer concentrations for gelation temperature:
Concerning Y1 (gelation temperature), the data clearly indicated a strong dependence on the selected variables X1 and X2:
Y1 = 36.56 - 0.98X1 - 0.49X2 + 0.042X1X2
The multiple linear regression analysis revealed that both coefficients b1 (-0.98) and b2 (-0.49) carried a negative sign, indicating that the gelation temperature decreases when the individual concentration of poloxamer 407 or carbopol 934P increases.
The interaction of the two polymers, on the other hand, had a positive effect on gelation temperature and micellar aggregation.
The gel phase can occur only when the concentration of poloxamer 407 exceeds the critical micellar concentration, resulting in micelle formation. When the material is immersed in cold water, the hydrophobic sections of the pluronic are kept apart by hydrogen bonding between the POP chains and water.
As the temperature is raised, this hydrogen bonding is broken and hydrophobic interactions cause a gel to form. Carbopol 934P was added in escalating quantities to lower the gelation temperature further: as the concentration of the mucoadhesive polymer increased, the gelation temperature decreased. The ability of mucoadhesive polymers to reduce the gelation temperature is probably linked to the increased viscosity following polymer dissolution and to their ability to adhere to the polyoxyethylene chains contained in poloxamer 407 molecules. This would promote dehydration, increasing the entanglement of neighbouring molecules and intermolecular hydrogen bonding, and thereby lowering the gelation temperature. When bioadhesive agents were combined with poloxamer 407, the effect on gelation temperature showed that adding carbopol 934P increased micelle packing and entanglement, resulting in a drop in gelation temperature. The relationship between the formulation variables (X1 and X2) and Y1 was further clarified using a response surface: Figure 5c displays the effects of X1 and X2 on Y1, showing that the gelation temperature was reduced as the amounts of poloxamer 407 and carbopol 934P increased (Table 12).
Optimization of polymer concentrations for viscosity: Multiple linear regression showed that viscosity depends strongly on X1 and X2. The fitted full-model equation relating viscosity to the selected factors is:
Y2 = 661.33 + 114.41X1 + 238.39X2 + 33.51X1X2
Both X1 and X2 have positive coefficients, so viscosity is projected to rise with increasing X1 and X2; both factors act favourably on viscosity individually and in combination. The larger coefficient of X2 compared with X1 shows that X2 influences viscosity more strongly than X1. The relationship between the selected factors and the viscosity response is illustrated by the surface plot in Figure 5d (Table 12).
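The two fitted polynomials above can be evaluated directly to predict responses inside the design space. The sketch below does only that; since the paper does not state whether the factor levels in these equations are coded or actual concentrations, the input values are placeholders (coded units are assumed):

```python
def gelation_temp(x1: float, x2: float) -> float:
    """Fitted model Y1: gelation temperature as a function of
    poloxamer 407 (x1) and carbopol 934P (x2) factor levels."""
    return 36.56 - 0.98 * x1 - 0.49 * x2 + 0.042 * x1 * x2

def viscosity(x1: float, x2: float) -> float:
    """Fitted model Y2: viscosity (cps) as a function of the same factors."""
    return 661.33 + 114.41 * x1 + 238.39 * x2 + 33.51 * x1 * x2

# Placeholder factor levels (coded units assumed, -1 .. +1)
for x1, x2 in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    print(f"x1={x1:+d}, x2={x2:+d} -> "
          f"Tgel={gelation_temp(x1, x2):.1f} degC, "
          f"viscosity={viscosity(x1, x2):.0f} cps")
```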
Optimization of polymer concentrations for drug diffusion at 1 h:
The data clearly indicated that the drug diffusion values at 1 h depend substantially on the specified independent variables, namely the poloxamer 407 and carbopol 934P concentrations. The transformed factors are related to the response (release at 1 h) by the fitted full-model equation:
Y3 = 39.15 - 7.89X1 - 3.83X2 - 1.12X1X2
The coefficients b1 and b2 for the prediction of release at 1 h were found to be significant at p = 0.05. Both b1 (-7.89) and b2 (-3.83) carry a negative sign according to the multiple linear regression analysis, so increasing the amount of poloxamer 407 or carbopol 934P in the formulation reduces the release level after one hour. The coefficient of poloxamer 407 is larger in magnitude than that of carbopol 934P, indicating that poloxamer 407 has the stronger effect on the 1 h release (Table 12). The link between the formulation variables poloxamer 407 (X1) and carbopol 934P (X2) was further explored using a response surface plot (Figure 5a).
Optimization of polymer concentrations for the time required for 90% drug diffusion: For Y4, the multiple regression analysis showed that the coefficients b1 (+45) and b2 (+40) bear positive signs, indicating that the time required for 90% drug diffusion increased as the concentrations of both poloxamer 407 and carbopol 934P increased. Y4 exhibited a correlation coefficient of 1.000 across batches F1 to F4, and both X1 and X2 had p values of 0.0001 (less than 0.05), confirming that the independent factors have a substantial impact on the time required for 90% drug diffusion (Table 12). This behaviour can be attributed to the increase in cross-linking at higher polymer concentrations, which lowers drug diffusion from the polymeric network of the in situ gel.
Analysis of variance
The R² values for gelation temperature (Y1), viscosity (Y2), CDR at 1 h (Y3), and time required for 90% drug release (Y4) were 0.9822, 1.000, 0.9959, and 0.9255, respectively, indicating that the dependent and independent variables are well correlated.
Antimicrobial test
Antimicrobial medicines are also used to treat mouth ulcers; they inhibit microbial growth on the ulcer, allowing it to heal more quickly. Borax has antibacterial, antifungal, and antiallergic properties and can therefore serve as both an antiulcer agent and a preservative. The zones of inhibition obtained with the improved formulation (F3) in sol form, the conventional ZYTEE gel, and glycerol-borax, shown in Figure 6 and Table 13, demonstrate activity against both Gram-negative (E. coli) and Gram-positive (S. aureus) bacteria.
There is a negligible difference between zones of inhibition of the standard and the formulation in gel form, which shows that the formulation has preservative properties similar to those of the standard.
Animal model study
In most cases, an oral ulcer heals on its own within 7 to 10 days. Formulations developed to treat mouth ulcers speed up the healing process, requiring less time than natural healing, and reduce the pain associated with the ulcer; as a result, the comfort of a patient with an oral ulcer improves.
Wistar albino rats were used as the animal model in this investigation. The effect of the developed formulation (F3) on the healing of oral ulcers in rats was compared with that of a conventional ChS gel (ZYTEE); the ulcer-healing properties of the formulation were found to be comparable to those of the reference (Figure 7). Assessment was based on daily visual observation of the ulcers. Within 5 days, all animals in the test group receiving the formulation were free of ulcers; similarly, all animals in the standard-treated group were cured on the fifth day after therapy began. In contrast, on the fifth day three of the five untreated control animals still had an ulcer, and they took eight days to recover completely. The developed in situ gel formulation containing ChS is therefore effective for treating mouth ulcers.
CONCLUSION
Using the thermoreversible polymer poloxamer 407 and the mucoadhesive polymer carbopol 934P, a thermoreversible in situ gel containing ChS and borax for the treatment of mouth ulcers was successfully created.
Compatibility studies determined that the drug and polymers are compatible. When poloxamer 407 was used at a concentration of 14 to 16% (w/v), the gelling temperature was close to body temperature (35-38 °C); however, when carbopol 934P was added, the gelling temperature was raised. Carbopol may cause micelle aggregation, size, and entanglement to decrease, resulting in an increase in gelation temperature. The addition of ChS and borax had no effect on the gelation temperature. The in situ gel was thus developed on the basis of gelation temperature, and the pH, thermoreversibility, viscosity, mucoadhesion, drug content, in vitro drug diffusion, drug diffusion kinetics, statistical formulation optimization, antimicrobial activity, and animal model performance of the optimized formulations were all examined.
The formulations were confirmed to be thermoreversible. All formulations had a pH between 5.5 and 6.8, a range considered safe for mucosal drug delivery, and viscosities below 1000 cps, allowing simple administration of the formulation to a mouth ulcer. Rheological tests revealed that the in situ gel is a shear-thinning pseudoplastic with non-Newtonian flow, which is considered a beneficial characteristic for an in situ gel.
The content uniformity of all the formulations was excellent; the insignificant discrepancies between them could be attributable to human error or processing losses. Mucoadhesion was good in all formulations: F3 and F4, with higher carbopol concentrations, had better mucoadhesive properties than F1 and F2. In vitro drug diffusion showed that F4 had the lowest diffusion rate and F2 the highest; as viscosity rises drug diffusion decreases, and viscosity is proportional to the concentration of both polymers. All formulations followed the Higuchi model of drug diffusion; according to the Korsmeyer-Peppas model, F1 and F4 showed non-Fickian and F2 and F3 quasi-Fickian diffusion mechanisms. The formulation containing 0.4% (w/v) carbopol 934P and 20% (w/v) poloxamer 407, i.e. F3, was found to be the most desirable. Antimicrobial testing of the optimized formulation F3 in sol form revealed satisfactory zones of inhibition against Gram-negative and Gram-positive microorganisms, so the formulation can be concluded to have good preservative properties; in gel form it showed a smaller zone of inhibition, implying that borax diffusion is reduced in the gel phase of the formulation. The formulation thus has antibacterial properties and can be used to treat mouth ulcers. In the animal model study, formulation F3 was as effective as the standard (ZYTEE) in healing mouth ulcers, and the stability study showed the formulation to be stable under accelerated temperature and humidity conditions.
As a result, a correctly developed in situ gel for oral ulcers can extend the residence time at the application site and reduce the frequency of administration.
Future prospects
In situ gelling systems have garnered considerable interest over the past decade. In situ gels meet the key requirement of a successful controlled-release product and increase patient compliance. The steady, prolonged drug release from an in situ gel, together with its good stability and biocompatibility, makes it a very reliable dosage form. The use of mucoadhesive compounds and polymers that can both gel in situ and interact with the mucosa and/or mucus improves formulation performance even further. Administered as a solution, the system gels at the site of action. Finally, in situ treatments are simple to use and reduce the size, pain, and discolouration of lesions. However, further research on stability and storage conditions must be carried out. Building on the formulation developed here, an in situ gel spray form could be developed for ease of administration in the oral cavity.
Shear rate (s⁻¹) was calculated for the concentric-cylinder geometry of the viscometer from
Shear rate = 2ωR_c²R_b² / [x²(R_c² − R_b²)]
where R_c = radius of the container (cm), R_b = radius of the spindle (cm), ω = angular velocity of the spindle (rad/s) = 2πN/60, x = radius at which the shear rate is calculated (normally the same value as R_b; cm), and N = spindle speed (rpm). Observed values: R_c = 1.5 cm; R_b = 1.25 cm. Shear stress (dyn/cm²) was then calculated as shear stress = viscosity (poise) × shear rate (s⁻¹).
Figure 1. UV spectra of ChS BP. UV: ultraviolet; ChS: choline salicylate.
Figure 2. Thermoreversible gel-to-sol phase diagram of the prepared in situ gel formulations.
Figure 4. In vitro drug diffusion study of the prepared in situ gel formulations.
Figure 6. Zones of inhibition of the prepared in situ gel formulation batch F3 (sol form) against (a) Escherichia coli and (b) Staphylococcus aureus.
Figure 7. Oral ulcer healing in the animal model study.
Table 3. Results for analytical UV-visible method development and validation. SD: standard deviation; LOD: limit of detection; LOQ: limit of quantification.
Table 10. Observations of various evaluation tests.
Astronomy & Astrophysics, Letter to the Editor
ALMA survey of massive cluster progenitors from ATLASGAL: Limited fragmentation at the early evolutionary stage of massive clumps?
The early evolution of massive cluster progenitors is poorly understood. We investigate the fragmentation properties, from 0.3 pc to 0.06 pc scales, of a homogeneous sample of infrared-quiet massive clumps within 4.5 kpc selected from the ATLASGAL survey. Using the ALMA 7m array we detect compact dust continuum emission towards all targets and find that fragmentation at these scales is limited. The mass distribution of the fragments uncovers a large fraction of cores above 40 M⊙, corresponding to massive dense cores (MDCs) with masses up to ∼400 M⊙. Seventy-seven percent of the clumps contain at most 3 MDCs, and we also reveal single clumps/MDCs. The most massive cores form within the more massive clumps, and a high concentration of mass on small scales reveals a high core formation efficiency. The masses of the MDCs greatly exceed the local thermal Jeans mass, and we lack observational evidence of a sufficiently high level of turbulence or strong enough magnetic fields to keep the most massive MDCs in equilibrium. If already collapsing, the observed fragmentation properties with a high core formation efficiency are consistent with the collapse setting in at parsec scales.
Introduction
The properties and the evolution of massive clumps hosting the precursors of the highest-mass stars currently forming in our Galaxy are poorly known. Massive clumps at an early evolutionary phase, thus prior to the emergence of luminous massive young stellar objects and UC-H II regions, are excellent candidates to host high-mass protostars in their earliest stages (e.g. Zhang et al. 2009; Bontemps et al. 2010; Csengeri et al. 2011a,b; Palau et al. 2013; Sánchez-Monge et al. 2013). Large samples have only recently been identified based on large-area surveys (e.g. Butler & Tan 2012; Tackenberg et al. 2012; Traficante et al. 2015; Svoboda et al. 2016; Csengeri et al. 2017), which show that the early evolutionary stages are short-lived (e.g. Motte et al. 2007; Csengeri et al. 2014), as star formation proceeds rapidly. Using the Atacama Large Millimeter/submillimeter Array (ALMA), here we present the first results of a statistical study of early-stage fragmentation, to shed light on the physical processes at the origin of high-mass collapsing entities and to search for the youngest precursors of O-type stars.
The sample of infrared quiet massive clumps
Based on a flux-limited sample of the 870 µm APEX Telescope LArge Survey of the GAlaxy (ATLASGAL, Schuller et al. 2009; Csengeri et al. 2014), Csengeri et al. (2017) identified the complete sample of massive infrared-quiet clumps with the highest peak surface density (Σcl ≥ 0.5 g cm⁻², measured in the 19.2′′ ATLASGAL beam) and low bolometric luminosity, Lbol < 10⁴ L⊙, corresponding to the ZAMS luminosity of a late O-type star. Their large mass reservoir and low luminosity suggest that infrared-quiet massive clumps correspond to the early evolutionary phase, some already exhibiting signs of ongoing (high-mass) star formation such as EGOs and Class II methanol masers. Here we present the sample of 35 infrared-quiet massive clumps located within d ≤ 4.5 kpc, which could be conveniently grouped on the sky as targets for ALMA. They cover 70% of all the most massive and nearby infrared-quiet clumps from Csengeri et al. (2017), and are thus a representative selection of a homogeneous sample of early-phase massive clumps in the inner Galaxy.
Observations and data reduction
We present observations carried out in Cycle 2 with the ALMA 7m array, using 9 to 11 of the 7m antennas with baselines ranging between 8.2 m (9.5 kλ) and 48.9 m (53.4 kλ). We used a low-resolution wide-band setup in Band 7, yielding 4 × 1.75 GHz effective bandwidth with a spectral resolution of 976.562 kHz. The four basebands were centred on 347.331, 345.796, 337.061, and 335.900 GHz, respectively. The primary beam at this frequency is 28.9′′. Each source was observed for ∼5.4 min in total. The system temperature, Tsys, varies between 100 and 150 K. The targets were split according to Galactic longitude into five observing groups (Table 1).
The data were calibrated using standard procedures in CASA 4.2.1. To obtain line-free continuum images, we first identified the channels with spectral lines towards each source and, excluding these, averaged the remaining channels. We used a robust weight of 0.5 for imaging and the CLEAN algorithm for the deconvolution, and corrected for the primary beam attenuation. The synthesized beam varies between 3.5′′ and 4.6′′, taking the geometric mean of the major and minor axes. The noise was measured in an emission-free area close to the centre of the maps, including the side-lobes. The achieved median rms noise level is 54 mJy/beam and varies among the targets due to a combination of the restricted bandwidth available for continuum, dynamic range, or mediocre observing conditions. In particular for groups 4 and 5, the observations were carried out at low elevation, resulting in an elongated beam and poor uv-sampling. The observing parameters are summarized per group in Table 1 and per source in Table 2.
(Fig. 1 caption, right panel: Line-free continuum emission at 345 GHz from the ALMA 7m array. Contours start at 7σ rms noise and increase on a logarithmic scale; white crosses mark the extracted sources (see Table 2); the synthesized beam is shown in the lower left corner.)
Results and analysis
Compact continuum emission is detected towards all clumps (see Fig. 1 for an example, and Fig. A.1 for all targets). We find sources that remain single (∼14%) at our resolution and sensitivity. Fragmentation is, in fact, limited towards the majority of the sample: 45% of the clumps host up to two, and 77% up to three, compact sources. Only a few clumps host more fragments.
We identify and measure the parameters of the compact sources using the Gaussclumps task in GILDAS (Continuum and Line Analysis Single-Dish Software, http://www.iram.fr/IRAMFR/GILDAS), which performs 2D Gaussian fitting. A total of 124 fragments are systematically identified down to a ∼7σ rms noise level within the primary beam, where the noise is measured towards each field. This gives, on average, Nfr = 3 sources per clump, corresponding to a population of cores at the typically achieved physical resolution of ∼0.06 pc.
We can directly compare the integrated flux in compact sources seen by the ALMA 7m array with the ATLASGAL flux densities measured over the primary beam of the array, as both datasets have similar centre frequencies (341.4 GHz for the ALMA dataset versus ∼345 GHz for the LABOCA filter; a spectral index of −3.5 gives a 10% change in flux for a difference of up to 10 GHz in centre frequency, which is below our absolute flux uncertainty). We recover between 16% and 47% of the flux; the rest of the emission is filtered out above the ∼19′′ largest angular scale to which the ALMA 7m array observations are sensitive.
To estimate the mass, we assume optically thin dust emission and use the same formula as in Csengeri et al. (2017),
M = S870µm d² / [κ870µm Bν(Td)],
where S870µm is the integrated flux density, d is the distance, κ870µm = 0.0185 cm² g⁻¹ (Ossenkopf & Henning 1994, accounting for a gas-to-dust ratio of 100), and Bν(Td) is the Planck function. While on the ∼0.3 pc scales of clumps Csengeri et al. (2017) adopt Td = 18 K, on the smaller scales of cores heating due to the embedded protostar may result in elevated dust temperatures that are poorly constrained. Following the model of Goldreich & Kwan (1974), we estimate Td = 15-38 K for the luminosity range of 10²-10⁴ L⊙ at a typical radius of half the deconvolved FWHM size, 0.025 pc. We thus adopt Td = 25 K, which results in up to a factor of two uncertainty in the mass estimate.
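As a sanity check on the magnitudes involved, the mass formula can be evaluated numerically. This is a minimal sketch under the stated assumptions (optically thin emission, κ870µm = 0.0185 cm² g⁻¹ of gas, Td = 25 K); the input flux of 1 Jy is an illustrative placeholder rather than a value from the paper's tables:

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
K_B = 1.380649e-23   # Boltzmann constant [J/K]
C = 2.99792458e8     # speed of light [m/s]
M_SUN = 1.98892e30   # solar mass [kg]
PC = 3.0857e16       # parsec [m]

def planck(nu_hz: float, t_k: float) -> float:
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2 * H * nu_hz**3 / C**2 / np.expm1(H * nu_hz / (K_B * t_k))

def dust_mass_msun(s_jy: float, d_pc: float, t_d: float = 25.0,
                   nu_hz: float = 345e9, kappa_cm2_g: float = 0.0185) -> float:
    """Gas mass from optically thin dust emission: M = S d^2 / (kappa B_nu(T))."""
    s_si = s_jy * 1e-26            # Jy -> W m^-2 Hz^-1
    d_m = d_pc * PC
    kappa_si = kappa_cm2_g * 0.1   # cm^2/g -> m^2/kg
    return s_si * d_m**2 / (kappa_si * planck(nu_hz, t_d)) / M_SUN

# Illustrative core: 1 Jy at the sample's mean distance of 2.6 kpc
print(f"M = {dust_mass_msun(1.0, 2600.0):.0f} Msun")
```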
The extracted cores have a mean mass of ∼63 M⊙, corresponding to massive dense cores (MDCs, as in Motte et al. 2007), and about 40% of the sample hosts cores more massive than 150 M⊙. In terms of physical properties they are similar to SDC335-MM1 (Peretto et al. 2013), which is here the most massive core, with ∼400 M⊙ within a deconvolved FWHM size of 0.054 pc (our mass estimate for SDC335-MM1 can be reconciled with Peretto et al. 2013 using a dust emissivity index of β ∼ 1.2 between 93 GHz and 345 GHz; a similarly low value of β is also suggested by Avison et al. 2015). In these clumps the second brightest sources are also typically massive, on average 78 M⊙, suggesting a preference to form more massive cores. Except for one clump, no core is detected below 35 M⊙, well above the typical detection threshold given the mean 7σ rms mass sensitivity of 11.2 M⊙ at the mean distance of 2.6 kpc; this may indicate a lack of intermediate-mass (10-40 M⊙) cores. Similar findings have been reported towards a handful of other young massive sources by Bontemps et al. (2010) and Zhang et al. (2015). Clumps with single sources host strictly massive cores with MMDC > 40 M⊙, and about half of them reach the highest mass range of MMDC > 150 M⊙.
We show the mass distribution of cores as ∆N/∆log M ∼ M^α in Fig. 2, and indicate the 10σ rms completeness limit of 50 M⊙ set by the highest noise in the poorest-sensitivity data. The distribution tends to be flat up to the completeness limit and then decreases at the highest masses. The distribution of the maximum MDC mass per clump (hatched histogram) shows that the majority of the clumps host at least one massive core, while a few host at most intermediate-mass fragments. A least-squares power-law fit to the highest mass bins above the completeness limit gives α = −1.01 ± 0.20, which is steeper than the distribution of CO clumps (α = −0.6 to −0.8, Kramer et al. 1998) and tends to be shallower than the low-mass prestellar CMF (André et al. 2014) and the stellar initial mass function (IMF) (α = −1.35 to −1.5, André et al. 2010), although at the high-mass end the scatter of the measured slopes is more significant (Bastian et al. 2010). Using Monte Carlo methods we test the uncertainty of α due to the unknown dust temperature, simulating a range of Td between 10 and 50 K drawn from either a normal distribution with a mean of 25 K or a power-law distribution. We fitted the slope in the same way as above and repeated the tests until the standard deviation of the measured slope converged. In good agreement with the observational results, the normal temperature distribution gives αMC = −1.01 ± 0.11, constraining the error of the fit and suggesting an intrinsically shallower slope than the IMF. A power-law temperature distribution in the same mass range with an exponent of −0.5 could, however, reproduce the slope of the IMF, assuming that the brightest sources are intrinsically warmer. Alternatively, a larger degree of fragmentation of the brightest cores on smaller scales could also reconcile our result with the IMF.
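The Monte Carlo slope test can be sketched as follows. The code draws synthetic core masses (log-uniformly, purely for illustration), perturbs them by the temperature-dependent factor Bν(25 K)/Bν(T) with an assumed spread of 8 K around the 25 K mean (the paper does not state the width used), and refits the high-mass slope; it is a schematic of the procedure, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(42)

def planck_ratio(t_k, nu_hz=345e9):
    """B_nu(25 K) / B_nu(T): mass rescaling when T replaces the fiducial 25 K."""
    h, k = 6.626e-34, 1.381e-23
    occ = lambda t: 1.0 / np.expm1(h * nu_hz / (k * t))  # frequency factor cancels
    return occ(25.0) / occ(t_k)

def fit_slope(masses, m_complete=50.0, nbins=6):
    """Least-squares slope of dN/dlogM above the completeness limit."""
    m = masses[masses > m_complete]
    hist, edges = np.histogram(np.log10(m), bins=nbins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    good = hist > 0
    return np.polyfit(centers[good], np.log10(hist[good]), 1)[0]

# Illustrative mass sample: 124 cores, log-uniform between 35 and 400 Msun
masses = 10 ** rng.uniform(np.log10(35), np.log10(400), size=124)

slopes = []
for _ in range(2000):
    t = np.clip(rng.normal(25.0, 8.0, size=masses.size), 10.0, 50.0)
    slopes.append(fit_slope(masses * planck_ratio(t)))
print(f"alpha_MC = {np.mean(slopes):.2f} +/- {np.std(slopes):.2f}")
```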
Limited fragmentation from clump to core scale
The thermal Jeans mass in massive clumps is low (MJ ∼ 1 M⊙ at ncl = 4.6 × 10⁵ cm⁻³, T = 18 K), which would be expected to lead to a high degree of fragmentation. In contrast, the infrared-quiet massive clumps observed here exhibit limited fragmentation, with Nfr = 3 on average, from clump to core scales; we even find single clumps/MDCs at our resolution. This is intriguing, also because these most massive clumps of the Galaxy are expected to form rich clusters. The selected clumps with the highest peak surface density could therefore correspond to a compact phase in which the large degree of fragmentation needed to form a cluster has not yet developed. We find that the mass surface density (Σ) increases towards small scales (Fig. 3, cf. Tan et al. 2014), corresponding to a high concentration of mass. Eighty percent of the clumps host MDCs above 40 M⊙, and the mass of the most massive fragment scales with the mass of its clump. Two models are shown with arrows in Fig. 3: 1) clumps with a uniform mass distribution forming low-mass stars correspond to a roughly constant mass surface density; 2) clumps with all the mass concentrated in a single object correspond to an n(r) ∼ r⁻² density profile. The majority of the sources fit the steeper-than-uniform density profile better.
The early fragmentation of massive clumps thus does not seem to follow thermal processes, and shows fragment masses largely exceeding the local Jeans mass (see also Zhang et al. 2009; Bontemps et al. 2010; Wang et al. 2014; Beuther et al. 2015; Butler & Tan 2012). The significant concentration of mass on small scales also manifests itself in a high core formation efficiency (CFE), the ratio of the total mass in fragments to the total clump mass from Csengeri et al. (2017), adopting the same physical parameters (Fig. 4). The CFE suggests an increasing concentration of mass in cores with increasing average clump volume density (ncl), a trend that has been seen, although inferred from smaller scales, towards high-mass infrared-quiet MDCs in Cygnus-X, towards low-mass cores in ρ Oph (Motte et al. 1998), and towards a sample of infrared-bright MDCs (Palau et al. 2013). Although the CFE shows variations at high densities with ncl > 10⁵ cm⁻³, exceptionally high CFEs of over 50% are only reached towards the highest average clump densities.
Which physical processes influence fragmentation?
What can explain why the thermal Jeans mass does not represent the observed fragmentation properties well in the early stages? A combination of turbulence, magnetic fields, and radiative feedback could increase the mass scale necessary for fragmentation. Using the Turbulent Core model (McKee & Tan 2003) for cores with MMDC > 150 M⊙ at the average radius of 0.025 pc, we estimate from their Eq. 18 a turbulent line-width of ∆vobs ≈ 6 km s⁻¹ at the surface of the cores, which is a factor of two higher than the average ∆vobs at the clump scale (Wienen et al. 2015). The magnetic critical mass at the average clump density corresponds to Mmag < 400 M⊙ for the magnetic field values of ∼1 mG typically observed towards massive clumps (e.g. Falgarone et al. 2008; Girart et al. 2009; Cortes et al. 2016; Pillai et al. 2016), following Eq. 2.17 of Bertoldi & McKee (1992). This suggests that moderately strong magnetic fields could explain the large core masses; however, at the high core densities of ncore = 4 × 10⁷ cm⁻³, considerably stronger fields, of the order of B > 10 mG, would be required to keep the most massive cores subcritical. Although radiative feedback could also limit fragmentation (e.g. Krumholz et al. 2007; Longmore et al. 2011), infrared-quiet massive clumps are at the onset of star formation activity, and we lack evidence for a potential deeply embedded population of low-mass protostars needed to heat up the collapsing gas.
Can global collapse explain the mass of MDCs?
The rather monolithic fashion of the collapse suggests that fragmentation is at least partly determined already at the clump scale, in agreement with observational signatures of the global collapse of massive filaments (e.g. Schneider et al. 2010; Peretto et al. 2013). If entire cloud fragments undergo collapse, equilibrium may not be reached on small scales, leading to the observed limited fragmentation and a high core formation efficiency at early stages. Mass replenishment from beyond the clump scale could fuel the formation of the lower-mass population of stars, leading to an increase in the number of fragments with time and allowing Jeans-like fragmentation to develop at more evolved stages (e.g. Palau et al. 2015).
At the scale of cloud fragments, if collapse sets in at a lower density of ncloud = 10² cm⁻³, the initial thermal Jeans mass could reach MJ ∼ 50 M⊙ assuming T = 18 K, at a characteristic λJeans of about 2.3 pc. This is consistent with the extent of globally collapsing clouds; the involved mass range is, however, not sufficient to explain the mass reservoir of the most massive cores. Considering the turbulent nature of molecular clouds in the form of large-scale flows, their shocks could compress larger extents of gas to higher densities depending on the turbulent Mach number (cf. Chabrier & Hennebelle 2011), leading to an increase in the initial mass reservoir. The inhibition of fragmentation and the observed high CFE are thus consistent with a collapse setting in at parsec scales. The origin of the initial mass reservoir, however, still poses a challenge to current star formation models.
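The quoted Jeans scales can be reproduced from standard expressions; definitions in the literature differ by factors of order unity, so the sketch below, which takes λJ = cs√(π/Gρ) and MJ as the mass of a sphere of diameter λJ with an assumed mean molecular weight µ = 2.33, recovers the quoted ∼2.3 pc and ∼50 M⊙ only to within such factors:

```python
import numpy as np

G = 6.674e-11     # gravitational constant [m^3 kg^-1 s^-2]
K_B = 1.381e-23   # Boltzmann constant [J/K]
M_H = 1.673e-27   # hydrogen mass [kg]
PC = 3.086e16     # parsec [m]
M_SUN = 1.989e30  # solar mass [kg]
MU = 2.33         # mean molecular weight per particle (assumed)

def jeans_scales(n_cm3: float, t_k: float):
    """Jeans length [pc] and Jeans mass [Msun] for density n and temperature T."""
    rho = MU * M_H * n_cm3 * 1e6            # mass density [kg/m^3]
    c_s = np.sqrt(K_B * t_k / (MU * M_H))   # isothermal sound speed [m/s]
    lam = c_s * np.sqrt(np.pi / (G * rho))  # Jeans length [m]
    mass = 4.0 / 3.0 * np.pi * rho * (lam / 2.0) ** 3
    return lam / PC, mass / M_SUN

for n, t in [(1e2, 18.0), (4.6e5, 18.0)]:
    lam_pc, m_sun = jeans_scales(n, t)
    print(f"n = {n:.1e} cm^-3, T = {t:.0f} K -> "
          f"lambda_J = {lam_pc:.2f} pc, M_J = {m_sun:.1f} Msun")
```

At the clump density quoted earlier (4.6 × 10⁵ cm⁻³ at 18 K) the same expressions give MJ ≈ 1 M⊙, consistent with the value quoted above.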
Towards the highest mass stars
The mass distribution of MDCs could be reconciled with the IMF either if multiplicity prevailed on scales smaller than 0.06 pc, or if the temperature distribution scaled with the brightness of the fragments. Similar results have been found towards MDCs in Cygnus-X by Bontemps et al. (2010), but also towards Galactic infrared-quiet clumps, such as G28.
Dynamics of recovery sleep from chronic sleep restriction
Abstract Sleep loss is common in our 24/7 society with many people routinely sleeping less than they need. Sleep debt is a term describing the difference between the amount of sleep needed, and the amount of sleep obtained. Sleep debt can accumulate over time, resulting in poor cognitive performance, increased sleepiness, poor mood, and a higher risk for accidents. Over the last 30 years, the sleep field has increasingly focused attention on recovery sleep and the ways we can recover from a sleep debt faster and more effectively. While there are still many unanswered questions and debates about the nature of recovery sleep, such as the exact components of sleep important for recovery of function, the amount of sleep needed to recover and the impacts of prior sleep history on recovery, recent research has revealed several important attributes about recovery sleep: (1) the dynamics of the recovery process is impacted by the type of sleep loss (acute versus chronic), (2) mood, sleepiness, and other aspects of cognitive performance recover at different rates, and (3) the recovery process is complex and dependent on the length of recovery sleep and the number of recovery opportunities available. This review will summarize the current state of the literature on recovery sleep, from specific studies of recovery sleep dynamics to napping, “banking” sleep and shiftwork, and will suggest the next steps for research in this field. This paper is part of the David F. Dinges Festschrift Collection. This collection is sponsored by Pulsar Informatics and the Department of Psychiatry in the Perelman School of Medicine at the University of Pennsylvania.
Introduction
The prevalence of chronic sleep restriction in our 24/7 society is increasing, with adults sleeping an average of <7 h on most nights [1,2]. Chronic sleep restriction can be defined as obtaining less sleep than an individual needs for multiple nights in a row. This is distinct from total sleep deprivation, which describes no sleep at all for an extended period of time [3]. As well as sleep duration, sleep behaviors are also changing, with frequent cycling between short sleep on workdays and extended sleep on days off and weekends as individuals try to recover [4]. Experiments in healthy humans have demonstrated that chronic sleep restriction results in cognitive performance deficits and increased sleepiness that accumulate over days [5-8]. These deficits can accumulate to levels similar to those found after several days of total sleep deprivation [8]. While many studies have confirmed that impairment of neurobehavioral function and of broader physiological systems accrues with sleep loss, the dynamics of the recovery process are only beginning to be understood.
It was nearly 60 years ago that Kleitman [9] suggested that "sleep debts" are "liquidated" by extending recovery sleep duration (p. 317); however, research has yet to address gaps in knowledge pertaining to recovery sleep and its dynamics. For example, it is not yet known at what rates homeostatic sleep drive builds up with sleep restriction and dissipates with recovery sleep. Further, research to date has not demonstrated which components of sleep architecture are important for recovery of function, nor developed recommendations regarding behavioral changes that might ensure an adequate number of days off duty for recovery from work schedules. This is likely due in part to the ways recovery has been studied and to the complexity of the recovery process after chronic sleep restriction. Initially, recovery was studied only after total sleep deprivation, but the recovery dynamics are quite different after chronic sleep restriction [5]. To study recovery, participants must first undergo a period of chronic sleep restriction, which requires lengthy, complex laboratory studies; as a result, few studies of this type are undertaken. Understanding recovery dynamics is important, not only for individuals who need to resolve a sleep debt, but also to ensure organizations create work schedules that allow for adequate recovery time.
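To make the notion of buildup and dissipation rates concrete, the homeostatic process of the classic two-process model is often written as a pair of exponentials. The sketch below is a schematic illustration only, using the commonly quoted textbook time constants (roughly 18.2 h for buildup, 4.2 h for dissipation) rather than values established by the studies reviewed here:

```python
import numpy as np

# Homeostatic sleep pressure S (process S of the two-process model):
# it rises toward an upper asymptote during wake and decays during sleep.
# Time constants are illustrative textbook values, not parameters from this review.
TAU_RISE = 18.2  # h, buildup of sleep pressure during wakefulness
TAU_FALL = 4.2   # h, dissipation of sleep pressure during sleep

def step_pressure(s: float, hours: float, awake: bool) -> float:
    """Advance normalized sleep pressure S (0..1) by `hours` of wake or sleep."""
    if awake:
        return 1.0 - (1.0 - s) * np.exp(-hours / TAU_RISE)
    return s * np.exp(-hours / TAU_FALL)

# One week of restriction: 18 h of wake followed by a 6 h sleep opportunity
s = 0.1
for day in range(7):
    s = step_pressure(s, 18.0, awake=True)   # waking day
    s = step_pressure(s, 6.0, awake=False)   # restricted sleep
    print(f"day {day + 1}: morning sleep pressure S = {s:.2f}")

s = step_pressure(s, 14.0, awake=True)       # final waking day
s = step_pressure(s, 10.0, awake=False)      # one 10 h recovery sleep
print(f"after a 10 h recovery sleep: S = {s:.2f}")
```

Notably, this first-order form dissipates almost completely each night even under 6 h sleep opportunities, which is one reason simple homeostatic models struggle to reproduce the across-days accumulation of impairment documented in the chronic restriction studies reviewed below.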
David Dinges' seminal work investigating recovery sleep has greatly influenced the understanding of sleep-wake dynamics and recovery sleep. This review will feature and highlight his research and focus on human studies of recovery sleep dynamics. The review will summarize both seminal and recent literature on human studies of sleep restriction and recovery, and discuss the recovery value of short sleeps such as naps, split sleep opportunities, and banking sleep.
Impact of sleep debt on cognitive performance
Chronic sleep loss without adequate recovery sleep leads to what is referred to as "sleep debt" [10,11]. Sleep debt is common in many segments of society including new parents [12], shift workers [13], long-haul truck drivers [14], nurses [15], commercial pilots [16], and astronauts [17]. Chronic sleep loss is associated with behavioral risks that include increased errors, traffic accidents, injuries, poor team performance, and burnout [18,19].
Webb and Agnew [10] suggested that a "sleep debt" underpinned their finding of increased sleep duration following the restriction of sleep to either 2 or 4 h a night. They suggested an increased sleep duration may occur in response to a sleep debt, alluding to the corrective process now known as recovery sleep. These results built on the findings of Dement et al. [20] who found that supplemental REM sleep was critical to the reversal or recovery of the effects of restricted REM sleep.
Dinges et al. [7] conducted the first study of its kind to examine the impact of chronic sleep restriction on cognitive performance, sleepiness, and mood. Under controlled laboratory conditions, participants were subjected to restricted sleep (5 h per night) over 7 days. Results demonstrated that deficits in cognitive performance, sleepiness, and mood accumulated over days of chronic sleep restriction and that these deficits failed to be corrected after one recovery sleep opportunity of 10 h. This study highlights the complex relationship between sleep debt and recovery sleep. Continuing this work, Dinges et al. [8] conducted another chronic sleep restriction laboratory study comparing the cognitive performance consequences of either 4, 6, or 8 h' sleep per day for 14 days. This study would become a seminal work revealing that even relatively moderate amounts of sleep restriction for a short period could result in cognitive performance deficits similar to two nights without any sleep. This study also found that subjective sleepiness ratings largely stabilized after 2-3 days, despite continued cognitive performance decline [8]. These observations suggest that the sleep debt, which affects a large portion of the population, comes at a neurobehavioral cost that may be difficult to detect.
Further, Belenky et al. [5] conducted a similar study examining cognitive performance under different sleep doses (3, 5, 7, or 9 h) for 7 days and 3 subsequent days of 8 h' recovery sleep. While cognitive performance declined in the 3-h group, it appeared to plateau in the 7-h group. Evaluation of cognitive performance during the recovery period yielded interesting findings: following the first recovery night, the 3-h group demonstrated an improvement in cognitive performance but only to a level consistent with that of the 5-h and 7-h groups. There was sustained cognitive impairment relative to baseline for each of the 3-h, 5-h, and 7-h groups implying that 3 days of 8 h' sleep was insufficient for complete recovery [5].
Weekend recovery sleep and repeated periods of restriction and recovery
The 5-days-of-work, 2-days-of-rest structure of the week is a socially ingrained work/rest interval [21]. This work configuration has resulted in widespread reports of shorter, restricted weekday sleep with longer, extended sleep on at least one weekend night (or day off from work), representing a cyclic schedule of chronic sleep restriction and recovery [4]. While this is a common sleep pattern for millions of people [4], it has rarely been studied.
One of the few studies to systematically examine the amount of sleep needed to recover from a typical work week of chronic sleep restriction was by Banks et al. [22]. They sought to investigate the magnitude of recovery that could be achieved in a single night after five nights of sleep restriction. Neurobehavioral performance was assessed during 5 days of sleep restriction to 4 h' time in bed a night, and then after a recovery night of either 0, 2, 4, 6, 8, or 10 h' time in bed for sleep. A control group with a 10-h sleep opportunity each night of the study was also examined. Analyses assessed recovery to baseline for all groups, including control. It was found that the ability to maintain wakefulness and cognitive throughput improved as sleep duration, sleep stages, and sleep intensity increased across the recovery sleep doses (see Figure 1). However, vigilant attention, subjective sleepiness, and subjective mood did not follow this same pattern. Participants did not fully recover compared to their baseline or the control group even with 10 h' time in bed for sleep. This work suggests that different metrics may recover at different rates, with different recovery trajectories. The lack of full recovery has implications for individuals who are re-exposed to further periods of sleep restriction, as is typical with the cyclic weekday/weekend pattern. These results also suggest that the more time available for sleep after sleep restriction, the greater the recovery of neurobehavioural function. The 10 h' time in bed recovery condition had significantly more total sleep time (sleep duration), stage 2 sleep, and percentage of slow wave energy (sleep intensity) on the recovery night than on the baseline night (see Figure 2). It has previously been suggested that sleep intensity and sleep duration are only "marginally related," and that "sleep loss is primarily recovered by increasing sleep intensity and not necessarily by sleep duration" [23]. The data from Banks et al. [22] do not support this and suggest that both sleep intensity and sleep duration are important for the recovery of neurobehavioural function following chronic sleep restriction.
The role of preexisting sleep debt in the subsequent response to sleep restriction was addressed in a preliminary study by Banks et al. [24]. This study investigated whether a single night of sleep restriction to 4 h' time in bed following partial recovery from a sleep debt resulted in the same degree of neurobehavioral deficit as that found after a single night of sleep restriction to 4 h' time in bed following a period without sleep debt. Healthy individuals participated in a laboratory-controlled protocol comprising two nights of baseline sleep of 10 h' time in bed; five nights of sleep restriction to 4 h' time in bed a night; a recovery night of between 8 and 12 h' time in bed; and a final night of sleep restriction to 4 h' time in bed. Change scores were calculated between the second baseline night and the first night of sleep restriction (assessment 1; acute sleep restriction after no sleep debt), and between the recovery night and the subsequent night of sleep restriction (assessment 2; acute sleep restriction after sleep debt). The impairment of vigilant attention at assessment 2 (acute sleep restriction after sleep debt) was nearly twice that at assessment 1 (acute sleep restriction after no sleep debt). Thus, when recovery from sleep debt is incomplete, neurobehavioral vulnerability to further sleep restriction appears to be disproportionately increased. This pattern of weekday sleep restriction and weekend sleep extension has also been examined in the context of metabolic [25] and immune function [26], as an array of negative health outcomes is known to result from chronic sleep restriction [27]. Depner et al. [28] examined whether ad libitum weekend recovery sleep would prevent metabolic dysregulation upon re-exposure to chronic sleep restriction. During the weekend, participants slept approximately 1 h more than at baseline, but during the chronic sleep restriction following the weekend, the circadian phase was delayed, and after-dinner energy intake and body weight increased. Overall, they found that weekend recovery sleep did not protect against metabolic disruption during sleep restriction the subsequent week. There were residual effects of the first period of sleep restriction on the second period, regardless of the intervening weekend recovery sleep.
Simpson et al. [26] examined this same model of cyclic weekday restriction and weekend recovery sleep on stress and immune function. They examined the impact of three periods of sleep restriction to 4 h' time in bed a night for 5 days (weekdays) with 2 days recovery of 8 h per night (weekend) on physiological markers of stress. Results showed that physiological stress responses remained activated with repeated exposures to sleep restriction and "weekend" recovery. Immune function was increased during sleep restriction and remained increased after recovery sleep in weeks one and two. These results provide evidence that patterns of sleep restriction and recovery have implications for immune function and given the awareness that chronic low-grade inflammation can increase risk for cardiovascular and metabolic disease [29] these patterns of insufficient sleep may pose a significant health risk.
Collectively, evidence from the above studies suggest that even after extended periods of recovery sleep, recent exposure to chronic sleep loss can make an individual more vulnerable to the effects of re-exposure to sleep restriction. Weekends and time off appear to not provide much protection when cycling between periods of short sleep and longer recovery sleep. Despite intermittent opportunities for recovery sleep, individuals exposed to work schedules that regularly restrict sleep may become increasingly vulnerable to the adverse effects of sleep loss on performance. Therefore, prior sleep-wake history may greatly impact an individual's response to future sleep loss.
"Banking" sleep and extending sleep to maximize recovery
Banking sleep is characterized by extending habitual sleep duration in advance of a period of sleep restriction [30]. In a seminal study, Rupp et al. [30] sought to examine the impact of extended habitual sleep duration in advance of a period of chronic sleep restriction on performance. Participants had a week of either habitual (7 h a night) or extended (10 h a night) sleep opportunities before undergoing sleep restriction of 3 h a night for 7 days. This was followed by 5 days of 8 h' time in bed a night for recovery sleep. Participants in the extended sleep condition before the chronic sleep restriction showed less performance impairment compared to those in the habitual sleep condition. The additional sleep in the extended condition improved the participant's resiliency to the sleep restriction. Further, performance deficits were more quickly resolved in extended sleep group during the recovery phase, suggesting that both performance impairment under sleep restriction and the time course of subsequent recovery are influenced by prior sleep.
These findings demonstrate that there is a long-term effect of prior sleep history that can increase resilience or vulnerability to sleep restriction. Indeed, Banks et al. [24] found that when recovery from sleep debt is incomplete, cognitive performance during subsequent sleep restriction appears to be disproportionately increased (i.e. increased vulnerability).
Recovery for workers around the clock
Shiftworkers, particularly nightshift personnel, often face the distinct challenges of daytime sleep and circadian misalignment. Sleep during the day is difficult because of the circadian system's drive for wakefulness, and is often shorter and of reduced quality [31]. Shiftworkers therefore face increased vulnerability to sleep debt accrual combined with a suboptimal opportunity to recover the debt. Additionally, the opportunity for recovery is often restricted as a byproduct of successive long (12 h+) shifts in operational environments. In a field study of nurses working three sequential 12-h night shifts, Geiger-Brown et al. [32] found an average sleep duration of 5.4 h between shifts, which was extended by only 0.67 h, on average, after the third shift, demonstrating the vulnerability to sleep debt and the barriers to recovery in nightshift personnel. This is problematic, as Jay et al. [33] reported that neurobehavioral functions are not properly recovered when recovery sleep durations are restricted.
In a novel split-sleep dose-response study involving a range of scenarios with chronically reduced nocturnal sleep augmented with a diurnal nap, Mollicone and colleagues [34] showed that cognitive performance declined at the same rate regardless of whether sleep was consolidated or split into a nocturnal anchor sleep and a nap. Cognitive performance was primarily a function of total time in bed per 24 h, with less total time in bed consistently resulting in a greater accumulation of performance impairment and subjective sleepiness across days. Provided total sleep time is the same, sleep can be split into two periods or consolidated into one. These findings have implications for individuals with work schedules that rarely permit long nocturnal sleep episodes: for them, the results suggest that splitting sleep can provide adequate recovery to maintain performance.
Supplementing sleep in anticipation of sleep loss is a common practice among shiftworkers; Geiger-Brown et al. [32] found that nearly 75% of nightshift nurses reported napping prior to their first night shift. Although prophylactic napping, a concept originally proposed by Orne and Dinges [35], was for some time not considered possible "because sleep could not be stored" (p. 131) [36], considerable real-world evidence supports the usefulness of the approach, as does the experimental work on banking sleep by Rupp and colleagues [30] described above. Seminal work by Dinges et al. [35,37] investigated the recovery benefit of prophylactic naps in sleep-deprived adults. The research included five different 2-h nap opportunities during 2.5 days without sleep; one of these involved a nap on the first afternoon, after only 6 h of wakefulness (i.e. before 46 h of sustained wakefulness). Subjects who had taken a prophylactic nap on the first afternoon demonstrated improved reaction times on the psychomotor vigilance task, although the benefit was not evident until 10 h after the nap (i.e. during the first full day of sleep loss). Once sleep deprivation was present, the positive effects of a nap on performance were evident within an hour after the nap and were sustained for between 6 and 30 h [35]. Dinges concluded that afternoon naps, including those taken prophylactically before sustained wakefulness, have beneficial effects on performance and sleepiness for up to 12 h.
Conclusions and Summary
In summary, as sleep loss is common and many individuals do not get adequate sleep, it is important to understand the dynamics of the recovery process. It is clear from the literature reviewed here that recovery from chronic sleep restriction is a complex process that is not achieved with one or two nights of extended sleep. The pattern of weekend catch-up sleep does not permit full recovery of lost sleep or neurobehavioural function, and does not provide protection against re-exposure to chronic sleep restriction. When recovery from chronic sleep restriction is incomplete, impairment during subsequent sleep restriction appears to be disproportionately increased (i.e. there is increased vulnerability to the impact of sleep restriction). This indicates that prior sleep history is an important factor in how an individual will respond to sleep restriction. It is also clear from the reviewed literature that both sleep duration and sleep intensity are important for recovery; it is not evident that one sleep stage or component of sleep is more vital for recovery than another. This is echoed in split-sleep studies, where the total amount of sleep obtained over a 24-h period is the important factor for maintaining performance. Indeed, naps and short sleeps can supplement recovery when extended consolidated sleep is not possible. Recovery is important to reverse the negative effects of sleep restriction and to maintain neurobehavioral function and health. Effectively managing recovery to ensure people do not develop a significant sleep debt, particularly workers who have limited opportunities for recovery sleep, could have major impacts on wellbeing, increase productivity, and help reduce road crashes and workplace accidents. Relatedly, studies examining the impact of chronic sleep restriction and the associated recovery sleep dynamics in sleep disorder patient populations are in critical demand.
Sustained Benefit Lasting One Year from T4 Instead of T3-T4 Sympathectomy for Isolated Axillary Hyperhidrosis
INTRODUCTION Level T4 video-assisted thoracoscopic sympathectomy proved superior to T3-T4 treatment for controlling axillary hyperhidrosis at the initial and six-month follow-ups of these patients. OBJECTIVE To compare the results of two levels of sympathectomy (T3-T4 vs. T4) for treating axillary sudoresis over one year of follow-up. METHODS Sixty-four patients with axillary hyperhidrosis were randomized to denervation of T3-T4 or T4 alone and followed prospectively. All patients were examined preoperatively and were followed postoperatively for one year. Axillary hyperhidrosis treatment was evaluated, along with the presence, location, and severity of compensatory hyperhidrosis and self-reported quality of life. RESULTS According to patient reports after one year, all cases of axillary hyperhidrosis were successfully treated by surgery. There were no instances of treatment failure. After six months, compensatory hyperhidrosis was present in 27 patients of the T3-T4 group (87.1%) and in 16 patients of the T4 group (48.5%). After one year, all T3-T4 patients experienced some degree of compensatory hyperhidrosis, compared to only 14 patients in the T4 group (42.4%). In addition, compensatory hyperhidrosis was less severe in the T4 patients (p < 0.01). Quality of life was poor before surgery, and it improved in both groups at six months and one year of follow-up (p = 0.002). There were no cases of mortality, no significant postoperative complications, and no need for conversion to thoracotomy in either group. CONCLUSION Both techniques were effective for treating axillary hyperhidrosis, but the T4 group showed milder compensatory hyperhidrosis and greater patient satisfaction at the one-year follow-up.
INTRODUCTION
Axillary hyperhidrosis is an important disease that may cause serious emotional and work-related problems. Local treatment and psychotherapy show low effectiveness. Injections of botulinum toxin offer good but temporary results lasting fewer than six months, and excision/resection of the eccrine sweat glands is less effective and allows for a higher recurrence rate than sympathectomy. [1][2][3][4][5] Video-assisted thoracic sympathectomy (VATS) is a recognized procedure for the definitive treatment of palmar hyperhidrosis, but its efficacy in treating axillary hyperhidrosis remains controversial. No randomized, prospective studies of axillary hyperhidrosis treatment have been published that compare different levels of thermoablation of the sympathetic chain in VATS.
Therefore, we carried out a randomized, prospective study to compare the results of VATS at two resection levels, T3-T4 versus T4. Efficacy of axillary hyperhidrosis treatment, presence and severity of compensatory hyperhidrosis, and patient satisfaction were evaluated over one year after the surgery using an interview and a quality-of-life questionnaire.
METHODS
After randomization, 64 patients with pure axillary hyperhidrosis, ranging in age from 17 to 46 years, underwent VATS. All patients received information regarding the risks and the likelihood of compensatory hyperhidrosis. The criteria for inclusion in the study were a complaint of axillary hyperhidrosis and the intention to undergo surgery. The criteria for exclusion were prior thoracic surgery; diseases such as cardiac disease, pulmonary infections, neoplasia, or pleural or lung diseases that could increase surgical risk; or a body mass index (BMI) greater than 25. 6,7 All patients underwent surgery under general anesthesia with selective intubation and pulmonary ventilation. Two incisions were made in each hemithorax: the first at the fourth intercostal space on the anterior axillary line, and the second at the third intercostal space on the mid-axillary line. 8,9 After identification of each sympathetic chain, patients randomized to the T3-T4 group underwent sympathicotomy on the bodies of the third, fourth, and fifth ribs, followed by thermoablation of the segments isolated between them. Patients randomized to the T4 group underwent resection of the chain (sympathicotomy) at the fourth and fifth ribs, with thermoablation of the segment between them. After the sympathectomy, the lung was re-expanded under direct viewing while air was simultaneously aspirated from the pleural space using a small catheter (16 Fr). The same procedure was carried out on the contralateral chain. A chest drain was not routinely used. A chest x-ray was performed after the operation to assess lung expansion.
Patients were followed for one year after the intervention, with assessments at one, six, and 12 months after the index procedure; at 12 months, systematic reexaminations were performed on all patients. The observers recording the findings were blinded to patients' treatments. The following were assessed: 1. Presence or absence of axillary hyperhidrosis, reported by the patient and confirmed by the examiner.
2. Presence or absence of compensatory hyperhidrosis, along with its location and severity, as reported by the patient and confirmed by the examiner. The severity of the sudoresis was graded at one of three levels: mild, moderate, or severe. Patients who noticed no difference in the location or intensity of their body sweat were deemed unaffected by compensatory hyperhidrosis. Mild compensatory sweating was considered present when patients reported minor modifications in the location and severity of their perspiration, such as visible sweating, but did not express significant concern about it. Moderate compensatory hyperhidrosis was considered present when patients reported visible and embarrassing sweating or occasionally disabling situations caused by sweating.
Finally, severe compensatory hyperhidrosis was considered present when sweating was visible and embarrassing, interfered with social and professional activities, and required at least one change of clothes during the day; that is, sweating of the same intensity as the previous axillary hyperhidrosis, but at other primary locations.
3. The patients' satisfaction with the final outcome of the procedure (including both the treatment and any complications) was subjectively evaluated using a multiple-choice rating scale (four options): 1, deficient (dissatisfied); 2, fair; 3, very good; 4, excellent.
STATISTICAL ANALYSIS
For categorical variables, the χ² test or Fisher's exact test (depending on the expected cell counts) was used to assess associations between the type of surgery and the outcomes and complications. These tests were applied at each follow-up assessment to compare the surgery types with respect to the variables of interest (axillary hyperhidrosis, incidence and severity of compensatory hyperhidrosis, and patient satisfaction). Associations between patients' ages, degrees of satisfaction, and ganglion resection level (T3-T4 or T4) were investigated using the Mann-Whitney U test. Significance for all tests was set at 5% (p < 0.05).
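As a rough illustration of how such comparisons could be reproduced, the sketch below uses Python's scipy.stats; the contingency counts are taken from the reported six-month follow-up, while the satisfaction ratings are hypothetical placeholders rather than study data.

```python
# Illustrative re-analysis sketch (not the authors' code). Contingency counts
# come from the reported six-month data: 27/31 (T3-T4) vs. 16/33 (T4) with
# compensatory hyperhidrosis. Satisfaction scores below are hypothetical.
from scipy import stats

# 2x2 table: rows = surgery level, cols = CH present / CH absent
table = [[27, 31 - 27],   # T3-T4 group
         [16, 33 - 16]]   # T4 group

chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
odds_ratio, p_fisher = stats.fisher_exact(table)  # preferred with small cells
print(f"chi2 p = {p_chi2:.4f}, Fisher exact p = {p_fisher:.4f}")

# Mann-Whitney U test on ordinal satisfaction ratings (1-4); example data only
t3t4_scores = [2, 3, 3, 4, 2, 3, 1, 3]   # hypothetical ratings
t4_scores = [4, 3, 4, 4, 3, 4, 4, 3]     # hypothetical ratings
u, p_mw = stats.mannwhitneyu(t3t4_scores, t4_scores, alternative="two-sided")
print(f"Mann-Whitney U p = {p_mw:.4f}")
```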
RESULTS
The mean age and gender distributions in the two groups were similar (Table 1). The incidence and severity of compensatory hyperhidrosis are presented in Table 2. No recurrence of axillary hyperhidrosis was reported at the 12-month follow-up in either of the groups.
The incidence of compensatory hyperhidrosis was lower in the T4 group at one month, six months, and 12 months of follow-up. Compensatory hyperhidrosis was less severe in the T4 group, and this group showed no cases of severe compensatory hyperhidrosis by the final follow-up at 12 months. The incidence and severity of compensatory hyperhidrosis in patients who underwent T3-T4 resection remained constant over the 12 months of follow-up, whereas both the incidence and severity of compensatory hyperhidrosis decreased in the T4 group from six to 12 months, although this change did not reach statistical significance (p > 0.05). There was no difference in the location of compensatory hyperhidrosis between the two groups: the most affected regions were the abdomen, back, and legs. The groups also reported no difference in the situations that triggered the compensatory hyperhidrosis: the majority of patients in both groups (21 in the T3-T4 group, 12 in the T4 group) attributed it to heat and intense physical activity.
The reports of patient satisfaction are presented in Table 3. Patients of the T4 group reported higher satisfaction than those of the T3-T4 group (p < 0.05). It should be emphasized that, after one year, none of the patients in the T4 group were dissatisfied, but five patients in the T3-T4 group were dissatisfied.
DISCUSSION
Despite current local and systemic therapeutic modalities, axillary hyperhidrosis remains a frequent condition that affects a great number of patients, disturbing their social and professional lives. 2,3,6,10 Previous studies have shown that VATS is an effective treatment for axillary hyperhidrosis, with a success rate of 89%. 11,12 As a result of technical advances and a procedural change in which resection is carried out at a lower ganglion level (the fourth ganglion), the technique now has a success rate of 94%. 13 In the present study, we observed that both T3-T4 and T4 resections were effective in all cases at both one month and 12 months of follow-up. We attribute this success to the extreme care taken in identifying the patients most likely to benefit from surgical treatment: candidates were accepted only after receiving an adequate explanation of all the risks, including the possibility of compensatory hyperhidrosis, and still expressing a desire to undergo surgery. This approach meant that only the patients best suited to the procedure and its aftermath were admitted for treatment.
One problem found in several case series is recurrent axillary hyperhidrosis, which has been reported to range from 15% 15 to 65%. 16,17 In our study, we did not observe any recurrence in either group at the six- or 12-month follow-up. This lack of recurrence is probably due to the absence of technical failure among the operated patients. 18

The compensatory hyperhidrosis observed in this study was distributed over the body in the same way as described in the medical literature, i.e., in the abdomen, back, feet, and gluteal region. In most cases, it is tolerable and does not lead to social disturbance or occupational disability, since the patients have been informed about this possibility in advance. Patients are inconvenienced only when their symptoms are severe or when they do not receive adequate preoperative information. It is very important that patients always be warned about this possible complication before surgery, because of the irreversibility of the method and the likelihood of compensatory hyperhidrosis.
The keys to the good outcomes observed in this study are resection of the T4 ganglion and adequate preoperative patient information. 19 T5 resection is not necessary. 11,19 Resection of the T4 ganglion requires a complete operation on T4, involving sympathicotomy from the upper margin of the fourth rib to the lower margin of the fifth rib, followed by thermoablation of the chain. 20 Patient satisfaction at the six-month follow-up was greater in this group than in the T3-T4 group, 18,21,22 and the high satisfaction in the T4 group increased even further at the 12-month follow-up.
Compensatory hyperhidrosis is the most frequent complication of VATS, occurring in up to 89% of cases when an ample resection of the sympathetic chain is performed. 21-24 In our series, in which resections were carried out at lower levels, we found that 93.5% of the patients in the T3-T4 group and 57.6% in the T4 group experienced this complication after one year; 35.5% of the T3-T4 patients showed moderate or severe compensatory hyperhidrosis, whereas only one patient (12.5%) in the T4 group showed moderate compensatory hyperhidrosis. In addition, the T4 group showed no cases of severe compensatory hyperhidrosis. With respect to mild compensatory hyperhidrosis, we observed that between the six-month and one-year follow-ups of the T3-T4 group, there was an increase in mild cases (from 16 to 18) and a decrease in the more serious cases, although the changes were not statistically significant. In contrast, the T4 group showed a decrease in the number of mild cases (from 14 to 13), since many patients came to feel that they were free from these effects. We did not use any objective measurement of sudoresis because such methods produce data only at a specific point in time; there are no methods capable of measuring hyperhidrosis over an entire day.
Despite the presence of compensatory hyperhidrosis, all the patients in our series reported that the procedure had improved their quality of life. Satisfaction was high in both groups, with no statistical difference between the groups at either the one-month or six-month follow-up.
Long-term follow-up of these two groups may show whether these results persist. In the event of late recurrence of symptoms among patients in the T4 group, reoperation could be carried out to extend the sympathectomy to the T3 ganglion. 25-27
CONCLUSION
We conclude that resection of the T4 ganglion is preferable to resection of the T3 and T4 ganglia together. Despite their equal efficacy for reducing axillary hyperhidrosis, T4 resection leads to a lower rate of compensatory hyperhidrosis.
Compensatory hyperhidrosis in the T4 group tended to decrease over time, which was reflected in the statistically significant improvement in long-term personal satisfaction. 28
Microneedle-assisted dual delivery of PUMA gene and celastrol for synergistic therapy of rheumatoid arthritis through restoring synovial homeostasis
Abnormal proliferation of aggressive fibroblast-like synoviocytes (FLS) and persistent synovial inflammation inevitably accelerate the progression of rheumatoid arthritis (RA). Herein, a strategy of simultaneously promoting FLS apoptosis and inhibiting macrophage-mediated inflammation is proposed to restore synovial homeostasis for effective RA therapy. A hyaluronic acid-based dissolvable microneedle (MN) was fabricated for transdermal delivery of two human serum albumin (HSA)-based biomimetic nanocomplexes that regulate RA FLS and macrophages. Upon skin insertion, the dual nanocomplexes are released rapidly from the MN and accumulate in the RA joint microenvironment through both passive and HSA-mediated active targeting. Thioketal-crosslinked fluorinated polyethyleneimine 1.8 K (TKPF) was constructed to bind the plasmid encoding the pro-apoptotic gene PUMA and was coated with an HSA layer (TKPF/pPUMA@HSA, TPH). TPH nanocomplexes upregulate PUMA through RA FLS transfection to trigger efficient apoptosis. In addition, HSA nanocomplexes encapsulating the classic anti-inflammatory natural product celastrol (Cel@HSA, CH) inhibit macrophage inflammation by blocking NF-κB pathway activation. TPH/CH MN can deplete RA FLS, inhibit M1 macrophage activation, suppress synovial hyperplasia, and reduce bone and cartilage erosion in a collagen-induced arthritis (CIA) mouse model, demonstrating a promising strategy for efficient RA treatment.
Introduction
Rheumatoid arthritis (RA) is a complex autoimmune disease characterized by persistent synovial inflammation, progressive cartilage erosion and even complications in other organs [1,2]. The continuous progression of RA ultimately causes irreversible joint deformity and disability, posing a great healthcare and economic burden [3]. Fibroblast-like synoviocytes (FLS), as resident stromal cells, play an essential role in RA pathogenesis. During disease progression, the FLS lining layer of the synovium markedly expands from a thickness of 1-3 cells to 10-20 cells and acquires an aggressive phenotype (RA FLS). This phenotype correlates with disease duration, macrophage infiltration and the severity of cartilage and bone damage through the production of pathogenic mediators such as inflammatory cytokines, proangiogenic factors, and matrix-degrading enzymes [4,5]. Targeting FLS has been deemed an effective approach to restoring synovial homeostasis and reversing bone and cartilage destruction [6-9].
Emerging strategies to regulate RA FLS have attracted attention, including changing the metabolic profile [10], modulating signal transduction [11] and regulating surface markers [12] to decrease RA FLS invasiveness and disease severity. Considering that the uncontrolled proliferation and accumulation of aggressive RA FLS can lead to resistance to cell death signals under oxidative stress in the RA microenvironment, overcoming this protective signaling to induce effective cell death of RA FLS may represent a practical anti-arthritis treatment. For instance, the expression of the pro-apoptotic gene PUMA (p53 upregulated modulator of apoptosis) is low in RA FLS, which partially accounts for their resistance to apoptosis [13]. Delivering PUMA via viral vectors has been proven to be an effective apoptotic therapy for RA FLS [14,15], but the therapeutic efficacy of viral-vector-based gene therapy is restricted by carcinogenesis, immunogenicity and limited packaging capacity [16-18]. Our group has developed a reactive oxygen species (ROS)-responsive, polyethylenimine-based fluorinated polymer (TKPF) for the delivery of gene drugs, which has been used for cancer therapy [19,20]; it overcomes the limitations of viral vectors and achieves enhanced transfection efficiency with decreased cytotoxicity. Thus, developing non-viral gene vectors based on TKPF, with enhanced stability, controlled payload release and increased transfection capacity, is promising for RA gene therapy targeting RA FLS.
As FLS and macrophages are the two core tissue-resident cell types that form the basis of the joint microenvironment and maintain synovial homeostasis, the crosstalk between RA FLS and macrophages is significant in persistent synovial inflammation [21]. Pro-inflammatory M1 macrophages can stimulate FLS to transform into invasive phenotypes by secreting tumor necrosis factor (TNF) and interleukin-6 (IL-6) [22]. Hence, we hypothesized that inducing efficient apoptosis of RA FLS could be combined with anti-inflammatory drugs that regulate macrophages to enhance anti-arthritic efficacy. Various anti-inflammatory ingredients from traditional Chinese medicine, such as celastrol (Cel), berberine and sinomenine, have revealed great potential as alternative antirheumatic drugs [23,24]. Celastrol has exhibited good therapeutic efficacy and low side effects in RA treatment [25-28]. Cel can also suppress macrophage polarization toward the pro-inflammatory M1 phenotype by regulating the NF-κB pathway, thereby decreasing inflammatory cytokine secretion [27]. Accordingly, the simultaneous regulation of RA FLS by upregulating PUMA and inhibition of the inflammatory responses of macrophages by Cel may represent a promising therapeutic strategy to reduce synovial inflammation and alleviate bone erosion through restoring synovial homeostasis in advanced RA.
Transdermal delivery systems for RA therapy have received growing attention as a way to overcome the first-pass effect of the oral route and to increase patient compliance compared with systemic or intra-articular injection [29,30]. Polymeric microneedles (MN), with the advantages of painlessness, minimal invasiveness and self-administration, have inspired researchers to explore their potential applications in the management of chronic diseases including RA [31]. Moreover, multiple drugs or nanoformulations can be loaded into an MN patch, in separate zones or mixed together, offering greater convenience and user-friendliness for combination therapy than sequential drug injection [32-34]. However, the lack of active targeting ability results in low efficiency of drug delivery to inflamed joints, severely undermining the anti-arthritic effect of therapeutic MN. A recent study unveiled that secreted protein acidic and rich in cysteine (SPARC), which is highly expressed in the synovial fluid, significantly enhances the accumulation of human serum albumin (HSA) owing to its inherent high affinity for albumin [35]. Therefore, using HSA both as the coating layer of the TKPF-pDNA polyplex and as the delivery carrier of Cel could prolong circulation and enhance the RA-targeting ability of the two nanoformulations. To date, there have been no reports employing an MN patch to regulate RA FLS and macrophages for RA treatment.
Herein, we report a hyaluronic acid (HA)-based dissolvable microneedle loaded with dual nanomedicines that can induce RA FLS apoptosis and regulate macrophages to alleviate synovial inflammation and bone destruction. Specifically, TKPF was employed as an efficient gene carrier to bind the plasmid encoding the pro-apoptotic gene PUMA, followed by HSA coating to form a nanocomplex via electrostatic interaction (TKPF/pPUMA@HSA, TPH). An albumin nanocomplex was employed to encapsulate the model anti-inflammatory drug celastrol (Cel@HSA, CH). The dual nanocomplex-loaded HA MN (TPH/CH MN) was strong enough for skin insertion and rapid drug release. The released TPH and CH NCs could accumulate in the RA joint through passive targeting based on the ELVIS effect (extravasation through leaky vasculature and subsequent inflammatory cell-mediated sequestration) and active targeting due to the high affinity of albumin for SPARC. The disassembled TPH NCs released pPUMA in response to the high ROS level in the RA joint and upregulated PUMA to trigger RA FLS apoptosis, which facilitated the restoration of synovial homeostasis in RA. Meanwhile, Cel was released from CH NCs in the slightly acidic environment of macrophages to relieve inflammation for an enhanced anti-arthritic outcome. Taken together, the proposed MN-assisted delivery of dual nanomedicines, combining targeted FLS apoptosis with inflammation alleviation, is expected to significantly attenuate arthritis progression for effective RA treatment.
Preparation and characterization of TPH and CH NCs
TKPF was synthesized as our group previously reported [19,20]. To prepare the binary TKPF/pDNA NCs, pDNA (250 μg/mL) was mixed with TKPF (5 mg/mL) at a mass ratio of 10:1 (TKPF:pDNA) and incubated for 30 min. HSA dissolved in PBS (5 mg/mL) was added to the TKPF/pDNA NCs at HSA/TKPF weight ratios from 1:10 to 7:10 and incubated for 30 min to form TPH NCs. To evaluate nucleic acid condensation ability, TKPF/pPUMA NCs prepared at different TKPF:pPUMA weight ratios were run on a 1% agarose gel at 150 V for 15 min and then visualized with a gel-doc system (ChemiDoc MP, Bio-Rad, USA). CH NCs were fabricated as previously described, with some modification [36,37].
Briefly, 100 μL of Cel dissolved in DMSO (10 mg/mL) was added slowly, dropwise, to 2 mL of HSA dissolved in PBS (5 mg/mL). The mixture was stirred overnight and then transferred into dialysis bags (MWCO 3500) to remove DMSO, free Cel and excess HSA. DiR-loaded HSA nanocomplexes were prepared by a similar method. The particle size and zeta potential of TKPF/pPUMA, TPH and CH NCs were measured by DLS (Nano-ZS Zetasizer instrument, Malvern, UK). Additionally, their morphology was observed via transmission electron microscopy (TEM, HT-7800, Hitachi, Japan). The encapsulation efficiency and loading capacity of CH NCs were assessed by HPLC (Waters, Milford, MA) and calculated according to a previously reported method [38]. The UV-vis absorbance spectra of HSA, Cel and CH NCs were measured with a GENESYS™ 150 UV-Vis-NIR spectrometer (Thermo Fisher, USA). To evaluate in vitro drug release behavior, CH NCs were transferred into a dialysis bag in PBS containing 5% Tween-80 at pH 5.8 or 7.4. At predetermined times, 1 mL of the solution was collected and replaced with fresh buffer. The released Cel was quantified by HPLC.
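For readers unfamiliar with these conventions, the minimal Python sketch below illustrates the usual arithmetic behind encapsulation efficiency, loading capacity and sampling-corrected cumulative release; all numeric inputs are hypothetical, and the exact formulas of the cited method [38] may differ.

```python
# Hedged sketch of standard EE%/LC% arithmetic and the cumulative-release
# correction for sampling with buffer replacement; values are placeholders,
# not data from the paper.
def encapsulation_metrics(drug_fed_mg, free_drug_mg, carrier_mg):
    """EE% = encapsulated / fed drug; LC% = encapsulated / total complex mass."""
    encapsulated = drug_fed_mg - free_drug_mg
    ee = 100.0 * encapsulated / drug_fed_mg
    lc = 100.0 * encapsulated / (encapsulated + carrier_mg)
    return ee, lc

def cumulative_release(conc_mg_per_ml, v_total_ml=50.0, v_sample_ml=1.0, dose_mg=1.0):
    """Correct each time point for the drug removed in earlier 1-mL samples."""
    released, removed = [], 0.0
    for c in conc_mg_per_ml:            # HPLC concentrations at each time point
        mass = c * v_total_ml + removed  # drug in medium plus drug sampled out
        released.append(100.0 * mass / dose_mg)
        removed += c * v_sample_ml       # drug withdrawn with this sample
    return released

print(encapsulation_metrics(1.0, 0.194, 10.0))  # ~80.6% EE with these inputs
```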
Cellular uptake assay
FLS were seeded into 24-well plates and incubated overnight. The culture medium was then replaced, and 10 ng/mL of recombinant human TNF-α was added for 4 h of stimulation. YOYO-1-labeled plasmids were prepared according to the manufacturer's instructions and used to construct TKPF/pDNA and TPH NCs. The complexes in serum-free medium were added to the wells. After incubation for 0.5, 1, 2 or 4 h, the cells were visualized with a DMI8 inverted fluorescence microscope and harvested for analysis by flow cytometry (FCM, CytoFLEX, Beckman, USA). RAW264.7 cells were cultured overnight and stimulated with or without LPS (1 μg/mL) for 24 h. FITC-labeled CH NCs were then added and cultured for 1, 2 or 4 h, after which the cells were collected for FCM analysis.
Endosomal escape of TPH NCs
TPH NCs containing Cy5-labeled pDNA were prepared according to the above-mentioned method. FLS were seeded into 35-mm confocal dishes, cultured for 24 h and stimulated with TNF-α. The FLS were then incubated with TPH NCs for 1, 2, 4 or 8 h. LysoTracker Green and Hoechst 33342 were used to stain the lyso/endosomes and nuclei, respectively. After being washed and fixed with 4% paraformaldehyde for 15 min, the FLS were observed by confocal laser scanning microscopy (CLSM, Leica TCS SP8, Germany).
Transfection efficiency of TPH NCs in vitro
FLS were cultured in 12-well plates, incubated overnight and then stimulated with TNF-α prior to transfection. TKPF/pDNA and TPH NCs containing 1 μg of pEGFP (≈5 kb) were added to each well in serum-free RPMI 1640 medium for a 6-h incubation. The medium was then replaced with fresh complete medium and the cells were incubated for 24 h. EGFP expression was observed with a DMI8 inverted fluorescence microscope (Leica, Germany) and quantified by FCM analysis.
Cell proliferation assay
FLS were cultured in 96-well plates (5 × 10³ cells per well) and activated with TNF-α for 4 h. TPH NCs containing 0.1 μg pPUMA were added to each well in serum-free medium. After a 6-h incubation, the medium was replaced with complete medium and the FLS were cultured for an additional 24 h. Similarly, RAW264.7 cells were cultured overnight and treated with CH NCs at various concentrations for 24 h. The inhibition of cell proliferation by TPH NCs and the cytotoxicity of CH NCs were evaluated by MTT assay.
Live/dead staining
FLS were cultured overnight in 12-well plates at a density of 5 × 10⁵ cells/well and stimulated with TNF-α. Cells were then incubated with TPH NCs (1 μg pPUMA) for 6 h, followed by fresh medium for another 24 h of incubation. After staining with calcein-AM and propidium iodide (PI) for 15 min, the FLS were imaged with a fluorescence microscope.
Cell apoptosis analysis
FLS were cultured in 6-well plates (3.0 × 10⁵ cells/well) with TNF-α stimulation for 24 h. Cells were treated with TPH NCs (2 μg pPUMA) in fresh serum-free medium for 6 h and then incubated with fresh medium for 24 h. To evaluate the pro-apoptotic effect of TPH NCs in vitro, Annexin V-FITC/PI staining was performed and analyzed by FCM.
Western blotting
To evaluate the change in PUMA protein after TPH transfection, Western blot analysis was conducted. The cells were rinsed, lysed and collected via centrifugation at 10,000 × g for 10 min. The total protein content of each group was quantified by BCA assay. Subsequently, proteins were separated by 12% SDS-PAGE and transferred onto a PVDF membrane electrophoretically. After being blocked in TBST buffer containing 5% BSA, the membrane was incubated with primary antibody overnight at 4 °C and with secondary antibody for 1 h at room temperature. Finally, the membrane was visualized with Clarity Western ECL substrate and exposed in a gel-doc system (ChemiDoc MP, Bio-Rad, USA). In addition, RAW264.7 cells were stimulated with LPS and treated with CH NCs, and the changes in p65, p-p65, IκBα and p-IκBα were evaluated by Western blotting using the procedures described above.
Macrophage repolarization
RAW264.7 cells were seeded in 24-well plates and cultured overnight. LPS was added at a concentration of 1 μg/mL for a 24-h incubation, followed by the addition of CH NCs at a dose of 500 ng/mL for 12 h. After incubation, anti-CD16/32 antibody was added for 10 min to block Fc receptors. Subsequently, the cells were stained with FITC-conjugated anti-CD86 antibody and APC-conjugated anti-CD206 antibody for 30 min and analyzed by FCM.
Preparation and characterization of TPH/CH MN
The TPH/CH MN was fabricated by the reported micromolding method [39]. Briefly, the constructed TPH and CH NCs were mixed and added to an HA aqueous solution (200 mg/mL). The mixture was poured into the PDMS mold and centrifuged at 3000 rpm for 30 min to allow the TPH and CH NCs to deposit fully in the needle cavities.
The excess solution was removed, and 600 μL of blank HA solution was added to the mold for a further centrifugation at 3000 rpm for 30 min; the patch was then dried overnight at room temperature. The TPH/CH MN was obtained by detachment from the mold. HA MN encapsulating YOYO-1-labeled TPH NCs and DiR-labeled HSA NCs were fabricated by the same method. The morphology of the TPH/CH MN was characterized by SEM and fluorescence microscopy. The mechanical strength of the TPH/CH MN was assessed with a texture analyzer (TMS-PRO, FTC, VA, USA).
CIA mouse model
DBA/1J male mice (6-8 weeks old) were procured from Gempharmatech Co., Ltd (Jiangsu, China). The CIA model was established through double immunization, as previously reported [40,41]. An immunization-grade bovine type-II collagen solution (2 mg/mL) and complete Freund's adjuvant (4 mg/mL) were mixed with a homogenizer in an ice bath. For the first immunization, mice were injected intradermally at the end of the tail with an emulsion of the mixture. On Day 21, a booster immunization was performed by injecting an emulsion of type-II collagen and incomplete Freund's adjuvant. All animals were housed under pathogen-free conditions with a standard 12-h light/12-h dark cycle and free access to food and water. All animal experiments were conducted in accordance with the regulations approved by the Animal Care and Use Committee of Zunyi Medical University (Zhuhai campus, permit no. ZHSC-2-2023-043).
In vivo therapy protocol
The mice were randomly assigned to six groups (n = 6): normal, model (saline-treated), TPH/CH NCs (20 μg pPUMA, 100 μg Cel), TPH MN (20 μg pPUMA), CH MN (100 μg Cel) and TPH/CH MN (20 μg pPUMA, 100 μg Cel). Starting 28 days after the first immunization, TPH MN, CH MN and TPH/CH MN were applied topically to the back skin every three days for a total of seven applications. The right hind paw thickness of each mouse was measured every 3 days with a vernier caliper, and the summed clinical score of the four limbs was recorded according to the scoring criteria [41].
Micro-CT analysis
On day 49, the posterior limbs were imaged using a computed tomography scanner (nanoScan PET/CT 82s, Mediso, Hungary) under a 50-kV, 1-mA X-ray beam. Three-dimensional images were reconstructed from the micro-CT data set and processed with RadiAnt DICOM Viewer (Version 2022.1, Medixant, Poland).
In vivo serum cytokine evaluation
The mice were euthanized after micro-CT imaging on day 49 to collect blood samples. The samples were allowed to clot at 37 °C for 30 min and then centrifuged at 3000 × g for 15 min. The collected serum supernatant was assayed for TNF-α, IL-6 and IL-1β according to the instructions of the ELISA kits.
Histological analysis
The mice were euthanized at the end of the study to collect the hind ankle joints. The joints were fixed in 4% paraformaldehyde and decalcified in 15% (w/v) tetrasodium ethylenediaminetetraacetate solution for three weeks. After being embedded in paraffin, the decalcified joints were sectioned for H&E, safranin-O and toluidine blue (TB) staining. Immunohistochemistry was performed for PUMA to verify the in vivo transfection of TPH, and for CD68 and CD90 to investigate changes in the abundance of macrophages and RA FLS, respectively. The sectioned tissues were visualized under a DMI8 inverted microscope (Leica, Germany).
Biocompatibility analysis
The major organs of the mice were collected, fixed in 4% (w/v) paraformaldehyde and embedded in paraffin. Sections of 3-μm thickness were cut for H&E staining and imaged with an Olympus microscope (Tokyo, Japan).
Statistical analysis
All quantitative data are reported as mean ± standard deviation (SD). For statistical analysis, the two-tailed Student's t-test was applied for two-group comparisons, while one-way analysis of variance (ANOVA) was used for multiple-group comparisons. A significance level of p < 0.05 was considered statistically significant.
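The sketch below is a hedged illustration of the stated tests in Python; the group measurements are hypothetical placeholders (e.g., paw thickness in mm for three treatment groups), not data from the study.

```python
# Hedged sketch of a two-tailed t-test and one-way ANOVA with scipy.stats;
# all group values below are hypothetical.
from scipy import stats

saline = [3.9, 4.1, 4.0, 4.2, 3.8, 4.0]      # hypothetical paw thickness (mm)
tph_mn = [3.2, 3.4, 3.1, 3.3, 3.5, 3.2]
tph_ch_mn = [2.7, 2.9, 2.8, 2.6, 3.0, 2.8]

# Two-group comparison (two-tailed by default)
t, p_t = stats.ttest_ind(saline, tph_mn)
# Multiple-group comparison
f, p_anova = stats.f_oneway(saline, tph_mn, tph_ch_mn)
print(f"t-test p = {p_t:.4f}; one-way ANOVA p = {p_anova:.4f}")
```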
Fabrication and characterization of TPH and CH NCs
Scheme 1. Schematic illustration of (A) the fabrication procedures of the HA-based microneedle encapsulating the dual HSA-based nanocomplexes (TPH/CH MN) and (B) the RA therapeutic mechanism through inducing RA FLS apoptosis and regulating inflammatory macrophages.

The fabrication process of the TPH and CH nanocomplexes is illustrated in Scheme 1. For the construction of the TPH nanocomplex, the ROS-responsive polyethylenimine-based fluorinated polymer (TKPF) was first synthesized by crosslinking PEI 1.8 K with the thioketal linker, followed by modification with heptafluorobutyric anhydride to obtain the thioketal-crosslinked fluorinated polyethyleneimine, as reported in our previous work [19,20]. The binary nanocomplex was obtained via electrostatic interactions by mixing the cationic polymer TKPF and the negatively charged PUMA plasmid at a ratio of 10:1; it showed a particle size of 168 ± 4 nm with a polydispersity index (PDI) of 0.26 ± 0.02 and a positive zeta potential of +33.1 ± 1.2 mV (Fig. 1A). Negatively charged human serum albumin (HSA) was then coated onto the surface of the TKPF/pPUMA NCs through electrostatic attraction [42,43]. The changes in size distribution and zeta potential were investigated after mixing different amounts of HSA with TKPF/pPUMA at HSA:TKPF weight ratios from 1:10 to 7:10. The zeta potential of the TKPF/pPUMA@HSA NCs dropped gradually from positive to negative as HSA was added incrementally (Fig. S1). TPH NCs at a weight ratio of 5:10 exhibited a uniform size (204 nm, PDI 0.269) and a slightly positive charge (1.6 mV), and this was selected as the optimal ratio for further experiments (Fig. 1E). The obtained TPH NCs retained a spherical morphology under transmission electron microscopy (TEM) (Fig. 1B). TKPF exhibited excellent pDNA condensation ability in the gel retardation assay (Fig. 1F): the migration of pDNA was totally retarded in TKPF/pDNA at a mass ratio as low as 1:1 (TKPF:pDNA). Moreover, the coating of HSA onto the TKPF/pPUMA NCs did not result in plasmid leakage (Fig. 1F). In addition, TPH NCs stored at 4 °C retained a uniform and stable size for 1 week (Fig. S2). Cel was encapsulated within the HSA hydrophobic pocket via hydrophobic interactions to obtain CH NCs, according to the method reported in our previous study [37]. According to dynamic light scattering (DLS) data, CH NCs were well dispersed in phosphate-buffered saline (PBS) (PDI 0.22 ± 0.03), with an average hydrodynamic diameter of 135.4 ± 2.1 nm and a zeta potential of −27.1 ± 1.8 mV (Fig. 1C). The TEM image of CH NCs revealed typical spherical structures (Fig. 1D). The encapsulation efficiency and loading capacity of Cel in CH NCs were determined as 80.6% and 7.33%, respectively. The successful encapsulation of Cel in the HSA nanocomplex was further confirmed by UV-vis-NIR spectra (Fig. 1G), in which free Cel and CH NCs both showed strong absorbance at around 430 nm. The long-term stability of CH NCs was assessed by monitoring the hydrodynamic size in PBS for a week, with almost no change observed (Fig. S3). A locally decreased environmental pH is characteristic of many chronic inflammatory diseases such as atherosclerosis and RA [44,45]. The pH-responsive drug release behavior of CH NCs was therefore investigated. Only 36.3 ± 3.4% of Cel was released from CH NCs under neutral conditions, whereas 52.7 ± 1.5% was released in the acidic environment after 48 h, demonstrating that CH NCs readily release their payload in M1 macrophages (Fig. 1H).
Characterization of TPH/CH MN
Microneedles, as a minimally invasive transdermal delivery tool, can enhance drug penetration by generating micropores in the skin, offering convenience and comfort compared with intravenous or intra-articular injection for RA therapy. The dissolvable TPH/CH MN patch was fabricated from biocompatible hyaluronic acid (HA) via a micromolding approach [39]. The patch was 6 mm in diameter and contained 136 needle tips. Each MN tip was conical, with a base diameter of 400 μm and a height of 600 μm, as shown in the SEM image (Fig. 1I). To pierce the skin effectively for transdermal drug delivery, microneedle tips must possess sufficient mechanical strength. The force-displacement curve of the TPH/CH MN did not differ noticeably from that of the blank MN, indicating that encapsulation of the dual nanocomplexes did not significantly influence the mechanical strength of the MN (Fig. 1J). The fracture forces of the blank MN and the TPH/CH MN were measured as 0.30 and 0.26 N per needle, respectively, both of which surpass 0.1 N, the reported minimum skin insertion force [46]. As shown in Fig. 1K, dual fluorescence was observed in images of the MN patch containing YOYO-1-labeled TPH NCs and DiR-labeled CH NCs, indicating the uniform distribution of TPH and CH NCs in the MN tips. These results confirmed the successful fabrication of an MN patch encapsulating both TPH and CH NCs. The in vitro skin insertion capacity was evaluated by applying the MN to excised porcine skin. A Rhodamine B-loaded MN was pressed against the porcine skin for 10 min and then withdrawn; red fluorescent spots were observed by fluorescence microscopy (Fig. S4). To further visualize the penetration depth, a Rhodamine B-loaded MN was applied to mouse skin for 10 min and the red fluorescence signal was observed at different depths. The constructed HA-based MN pierced the skin to a depth of around 270 μm (Fig. S5), indicating that the fabricated MN could be successfully inserted into the skin.
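As a small illustration of the per-needle force check, the sketch below divides a texture-analyzer reading by the needle count and compares it against the ~0.1 N minimum insertion force cited in [46]; the total-force value is a hypothetical input, not measured data.

```python
# Hedged sketch: per-needle fracture force from a texture-analyzer reading.
NEEDLES_PER_PATCH = 136          # stated patch layout

def force_per_needle(total_force_n: float, n_needles: int = NEEDLES_PER_PATCH) -> float:
    """Average fracture force carried by each needle tip (N)."""
    return total_force_n / n_needles

f = force_per_needle(total_force_n=35.4)   # hypothetical analyzer reading
print(f"{f:.2f} N/needle, exceeds 0.1 N insertion threshold: {f > 0.1}")
```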
Cellular uptake and endosomal escape of TPH NCs in RA FLS
The cellular uptake efficiency of TPH NCs was investigated in TNF-α-activated FLS by flow cytometric analysis. TNF-α was used to trigger abnormal proliferation of FLS and to induce the invasive RA FLS phenotype [47,48]. As revealed by the uptake efficiency and mean fluorescence intensity (MFI) in Fig. 2A-D, YOYO-1-labeled TKPF/pDNA and TPH NCs both showed enhanced cellular internalization compared with PEI 25 K/pDNA at 2 h and 4 h, which can be attributed to the fluorination strategy endowing the cationic polymer with both hydrophobic and lipophobic features that improve cellular uptake [20,49,50]. Notably, TPH NCs, despite a much lower positive charge than TKPF/pDNA, showed cellular internalization (93.3 ± 0.7%) similar to that of TKPF/pDNA (96.4 ± 1.1%) and significantly higher than that of PEI 25 K/pDNA (62.7 ± 1.9%) in RA FLS (Fig. 2B). This can be attributed to the specific targeting of HSA to SPARC, which is overexpressed in activated macrophages and FLS [51]. After endocytosis, effective lyso/endosomal escape is a prerequisite for successful transfection [52]. The endosomal escape capability of TPH NCs was investigated in RA FLS by confocal laser scanning microscopy (CLSM, Fig. 2E). TPH NCs loaded with Cy5-labeled pDNA (red) mainly localized at the cell membrane and were not effectively endocytosed after 1 h of incubation. At 2 h, TPH NCs had been taken up and localized in the green-stained lysosomes. The internalized TPH NCs then gradually escaped from the endosomes at 4 h, and after 8 h of incubation the Cy5-labeled pDNA had accumulated in the nucleus. These results confirmed the effective lysosomal escape and nucleus-targeting capacity of TPH NCs.
Transfection efficiency of TPH NCs in RA FLS
Cytotoxicity is a crucial issue for gene delivery systems. We evaluated the cytotoxicity of TKPF in both NIH/3T3 cells and RA FLS using the MTT assay (Fig. S6). TKPF showed negligible cytotoxicity, which can be attributed to the ROS-sensitive thioketal linkages reducing the molecular weight [53]. The transfection efficiency of TPH NCs in RA FLS was compared with that of PEI 25 K polyplexes at a mass ratio of 2:1, the gold standard among non-viral vectors, using a plasmid encoding enhanced green fluorescent protein (EGFP) as the reporter gene. The transfection efficiency of TKPF/pDNA polyplexes (27.2%) was much higher than that of the control PEI 25 K/pDNA (6.0%) in TNF-α-treated RA FLS (Fig. 2F). HSA coating slightly enhanced the transfection capacity of TPH NCs, which could be attributed to caveolin-dependent endocytosis, through which HSA enters cells while bypassing fusion with lysosomes [54,55]. These comparisons demonstrated that both TKPF/pDNA and TPH NCs efficiently transfect RA FLS (Fig. 2G and H).
RA FLS apoptosis evaluation of TPH NCs
As demonstrated above, TPH can upregulate p53 upregulated modulator of apoptosis (PUMA) by transfection to induce apoptosis of RA FLS. We first investigated the anti-proliferative effect of TPH in RA FLS. As shown in Fig. 3A, TKPF/pPUMA reduced cell viability to 63.3 ± 2.3%, while TPH produced a stronger killing effect, reducing viability to 48.1 ± 3.6% (p < 0.001), which may be attributed to the enhanced transfection capacity of TPH NCs. The mechanism of the apoptotic effect of TPH was investigated by Western blotting, which showed a significant increase in PUMA expression in both the TKPF/pPUMA and TPH groups in comparison with the blank sample (Fig. 3B and C). Moreover, the apoptotic effect on RA FLS was investigated by Annexin V-FITC/PI staining and flow cytometry (Fig. 3D and E). As expected, RA FLS treated with TKPF/pPUMA showed a significant increase in apoptosis (39.0 ± 3.0%), and TPH induced an even higher apoptosis rate (46.4 ± 1.6%). Live/dead staining further confirmed the RA FLS killing effect, consistent with the cytotoxicity results (Fig. 3F). These results indicate the excellent ability of the TKPF vector to deliver and express the pro-apoptotic gene PUMA and thereby induce RA FLS apoptosis.
Cytotoxicity and cellular uptake of CH NCs in macrophages
Lipopolysaccharide (LPS)-stimulated RAW264.7 macrophages were used as the inflammatory cell model to evaluate the anti-inflammatory effect of CH NCs. The viability of RAW264.7 cells incubated with CH NCs was measured by MTT assay. As shown in Fig. 4A, no significant toxicity of CH NCs was observed at the investigated doses, ranging from 25 to 500 ng/mL. In addition, CH NCs showed no cytotoxicity to RA FLS at the investigated concentrations (Fig. S7). The cellular uptake efficiency of FITC-labeled CH NCs was investigated by flow cytometry (Fig. 4B). More efficient cellular uptake of CH NCs was observed in LPS-activated macrophages than in macrophages without LPS activation (Fig. 4C), which could be attributed to the increased demand of inflammatory macrophages for albumin [35]. After endocytosis, the acidic microenvironment, resulting from excessive lactate generation and anaerobic glycolysis in these highly metabolically active cells, is advantageous for pH-responsive Cel release to exert an anti-inflammatory effect.
Anti-inflammatory efficacy of CH NCs in LPS-stimulated macrophages
Cel has been reported to suppress inflammation by inhibiting the NF-κB signaling pathway [27,56]. The changes in the key proteins of this pathway (p65, p-p65, IκBα, p-IκBα) after CH NCs treatment were evaluated by Western blot analysis. Upon LPS stimulation, IκBα and p65 were significantly activated in macrophages. CH NCs markedly lowered the phosphorylation of p65 and IκBα, indicating that activation of the NF-κB pathway was significantly inhibited by CH NCs (Fig. 4D). ELISA results also revealed the anti-inflammatory effect of CH NCs, which decreased the levels of TNF-α, IL-6 and IL-1β that had been significantly elevated by LPS stimulation (Fig. S8). The effects of CH NCs on macrophage polarization were then investigated by flow cytometry. As shown in Fig. 4E and F, LPS stimulation significantly increased the proportion of macrophages expressing the pro-inflammatory M1-phenotype marker CD86 and decreased the proportion expressing the anti-inflammatory M2-phenotype marker CD206. In contrast, CH NCs treatment resulted in a notable reduction in CD86-labeled M1 macrophages and a significant increase in CD206-labeled M2 macrophages compared with the LPS group: CH NCs reduced the percentage of CD86-positive M1 macrophages from 33.5% to 9.7% while increasing the CD206-positive M2 proportion from 2.4% to 42.4% (Fig. 4G and H), demonstrating an efficient M1-to-M2 transition induced by CH NCs.
In vivo biodistribution of TPH/CH MN
DBA/1J mice were used to establish the collagen-induced arthritis (CIA) mouse model, following previously documented methods [47]. In vivo fluorescence imaging with TOTO-3-labeled TPH/CH MN was then performed to evaluate the biodistribution of the payloads. As shown in Fig. 5A, TOTO-3-labeled TPH NCs rapidly accumulated in the inflamed joints of CIA mice 3 h after MN application. Conversely, the fluorescence signals observed in the paws of non-arthritic mice were notably weaker than those in CIA mice at both 3 and 6 h. The major organs and paws from normal and CIA mice were collected 6 h after MN application for ex vivo fluorescence imaging. Quantification of the fluorescence intensity further confirmed the greater intraarticular accumulation in inflamed joints than in normal joints, and also revealed greater accumulation of TPH NCs in the paws than in the major organs (Fig. 5B and C). These results indicated that the payloads released from TPH/CH MN could efficiently accumulate in situ in the joints. Although higher fluorescence was observed in the liver of CIA mice than in that of normal mice, negligible tissue damage or inflammatory lesions were found (Fig. S9); the hepatic signal can be partially attributed to the rapid elimination of nanoparticles by the liver [57].
TPH/CH MN ameliorated arthritic progression in CIA mice
To assess the therapeutic effect of TPH/CH MN in the treatment of RA, the MN were applied topically in the CIA mouse model following the treatment regimen outlined in Fig. 6A. CIA mice were injected subcutaneously with saline or TPH/CH NCs, or treated topically with TPH MN, CH MN or TPH/CH MN, every three days for a total of seven treatments. No significant alterations in body weight were observed during the treatment period, suggesting a favorable safety profile for TPH/CH MN (Fig. 6C). H&E-stained sections of the major organs, including heart, liver, spleen, lung and kidney, were examined after the different treatments to further investigate potential toxicity. None of the major organs exhibited distinct histological changes compared with the saline group, indicating the biocompatibility of TPH/CH MN (Fig. S9). The severity of arthritis was reflected by paw thickness and clinical scores. Significant increases in hind paw and ankle thickness were observed in CIA mice compared with healthy mice (Fig. 6B and D). TPH MN and CH MN partially reduced paw swelling, by 49.2% and 40.5%, respectively. By comparison, TPH/CH NCs and TPH/CH MN significantly lowered paw swelling, by 67.9% and 74.6%, indicating an excellent therapeutic effect in reducing articular inflammation. The similar therapeutic efficacy of TPH/CH NCs and TPH/CH MN also demonstrated the effectiveness of the MN as a transdermal drug route. The same trend was observed in the clinical scoring data (Fig. 6E): the TPH MN and CH MN groups showed moderate reductions in clinical scores, while the TPH/CH MN group exhibited the lowest scores. Because bone erosion is a crucial feature of severe RA, micro-computed tomography (micro-CT) was employed to investigate the impact of TPH/CH MN on bone erosion in the inflamed ankle joints (Fig. 6F). Compared with normal mice, the articular surfaces of the toes and ankles of saline-treated CIA mice revealed rough bone surfaces and distinct bone erosion. The TPH MN and CH MN groups showed moderate protection against bone destruction in CIA mice. Notably, negligible bone erosion was observed in the combined-treatment groups (TPH/CH NCs and TPH/CH MN), implying that inducing RA FLS apoptosis together with relieving inflammation can effectively protect the bones from destruction and erosion.
TPH/CH MN improved the arthritic inflammation and cartilage damage by FLS depletion and inhibition of macrophages
Bone and cartilage damage are the main features of RA and can be aggravated and sustained by proliferating synovial tissue. Therefore, histological analysis of arthritic inflammation and cartilage destruction was performed on the ankle joints of CIA mice from the different groups to further confirm the therapeutic efficacy of TPH/CH MN. In contrast to healthy mice, CIA mice in the saline group demonstrated clear indications of synovial hyperplasia, inflammatory cell infiltration and pannus invasion, as revealed by H&E staining (Fig. 7A). The CH MN and TPH MN groups showed partial inhibition of both synovial hyperplasia and inflammatory cell infiltration. TPH/CH treatment produced more efficient joint and synovial recovery, with minimal pathologic features such as synovial hyperplasia, inflammatory cell infiltration and pannus invasion. In addition, safranin-O staining, which labels the glycosaminoglycans of cartilage, revealed evident cartilage damage in saline-treated CIA mice (Fig. 7B). CH MN could not effectively reduce cartilage damage, indicating the limited cartilage-protective effect of anti-inflammation by Cel alone. In contrast, the cartilage of mice treated with TPH MN and TPH/CH MN retained structure and composition similar to those of normal mice, suggesting cartilage preservation through the promotion of RA FLS depletion. Similar cartilage preservation was observed with toluidine blue (TB) staining (Fig. 7C). The serum levels of the pro-inflammatory cytokines TNF-α, IL-6 and IL-1β were assessed by enzyme-linked immunosorbent assay (ELISA) to evaluate the systemic response and therapeutic efficacy of TPH/CH MN (Fig. 7D-F). As expected, the serum concentrations of TNF-α, IL-6 and IL-1β were significantly increased in saline-treated CIA mice compared with normal mice. CH MN administration significantly decreased all three pro-inflammatory cytokine levels owing to the anti-inflammatory effect of Cel. TPH MN also partially lowered the pro-inflammatory cytokine levels, possibly because RA FLS depletion decreases the secretion of pro-inflammatory cytokines [58]. The TPH/CH MN group exhibited the lowest cytokine levels, which can be ascribed to the simultaneous regulation of the two RA-related cell types, RA FLS and macrophages.
To confirm the PUMA upregulation induced by TPH, immunohistochemical staining of PUMA was performed in the joints. The numbers of PUMA-positive cells in the ankle joints of mice treated with TPH MN and TPH/CH MN surpassed those in the saline and CH MN groups, confirming the in vivo transfection capacity of TPH (Fig. 7G).
Immunohistochemical staining of the macrophage-specific biomarker CD68 and the FLS-specific biomarker CD90 was carried out to investigate the abundance of macrophages and FLS in the inflamed joints, respectively [58,59]. Compared with the saline group, CH MN, TPH MN and TPH/CH MN all reduced the quantity of CD68-positive macrophages in the synovium (Fig. 7H). The reduction of inflammatory macrophages through induction of FLS apoptosis can be attributed to the fact that FLS aggravate synovial inflammation by producing cytokines that recruit and activate immune cells [60]. In addition, TPH MN and TPH/CH MN treatment remarkably reduced the number of FLS, whereas the CH MN group showed no obvious FLS decrease (Fig. 7I), suggesting an FLS depletion effect via PUMA upregulation. These results imply that TPH/CH MN effectively decreased the abundance of both inflammatory FLS and macrophages in CIA mice.
Conclusion
In conclusion, we report a dissolvable microneedle loaded with dual HSA-based nanocomplexes to realize both apoptosis induction in RA FLS and inflammatory inhibition of macrophages, based on gene-chemo combination therapy. The fabricated TPH/CH MN exhibited the following features: (1) MN-mediated transdermal delivery of the dual nanodrug delivery systems allowed subsequent joint accumulation based on the ELVIS effect and HSA-assisted biomimetic targeting; (2) the constructed thioketal-crosslinked fluorinated polyethyleneimine 1.8 K (TKPF) exhibited superior transfection efficiency of the PUMA plasmid in RA FLS; (3) the Cel-loaded HSA nanocomplex, with its pH-responsive release property, suppressed the LPS-stimulated inflammatory responses of RAW264.7 macrophages; and (4) simultaneously regulating RA FLS and macrophages to restore synovial homeostasis effectively attenuated CIA symptoms, reduced inflammatory infiltration, and relieved cartilage damage and bone erosion. Moreover, this proof-of-concept study provides a versatile and user-friendly platform for treating chronic or autoimmune diseases through multi-target therapy.
Fig. 1. Characterization of TPH NCs, CH NCs and TPH/CH MN. (A) DLS analysis of TKPF/pPUMA and TPH NCs. (B) Morphology of TPH NCs captured by TEM. Scale bar, 100 nm. (C) Particle size of CH NCs. (D) Morphology of CH NCs captured by TEM. Scale bar, 100 nm. (E) Size and PDI of TPH NCs with different ratios of HSA and TKPF. Data represent mean ± SD (n = 3 independent samples). (F) Agarose gel electrophoresis of TKPF/pPUMA at various TKPF/pPUMA weight ratios and of TPH NCs. (G) UV-vis absorbance spectra of Cel (0.1 mg/mL), HSA (0.5 mg/mL), and CH NCs. (H) In vitro release profile of Cel from CH NCs. Data represent mean ± SD (n = 3). (I) SEM image of TPH/CH MN. Scale bar, 100 μm. (J) Mechanical strength characterization of the blank HA MN and the TPH/CH-loaded HA MN patch. (K) Fluorescence images of MN containing YOYO-1-labeled TPH NCs and DiR-labeled HSA NCs. Scale bar, 200 μm.

Fig. 3. Evaluation of RA FLS apoptosis induced by TPH. (A) Cellular viability of RA FLS after treatment with PEI 25 K/pPUMA, TKPF/pPUMA and TPH NCs. (B, C) PUMA expression detected by Western blot assay. (D) Annexin V-FITC/PI staining of RA FLS treated with PEI 25 K/pPUMA, TKPF/pPUMA and TPH NCs, analyzed by flow cytometry, and (E) quantification of the apoptotic cell percentage in the different treatment groups. (F) Fluorescence images of RA FLS double stained with FDA and PI. Data are presented as mean ± SD (n = 3). *p < 0.05, **p < 0.01, ***p < 0.001.

Fig. 4. Cellular uptake and anti-inflammatory evaluation of CH NCs in LPS-induced RAW264.7 macrophages. (A) Cell viability of RAW264.7 cells treated with CH NCs at various concentrations of Cel. (B) Cellular uptake of FITC-labeled CH NCs in RAW264.7 cells at 4 h analyzed by flow cytometry. (C) Cellular uptake efficiency of FITC-labeled CH NCs detected by flow cytometry at different times. (D) Protein expression of p65, p-p65, IκBα and p-IκBα measured by Western blotting. (E) Percentages of CD86-positive and (F) CD206-positive macrophages after CH NCs treatment, evaluated by flow cytometry. (G) Quantitative analysis of M1 and (H) M2 polarization after CH NCs treatment. Data are presented as mean ± SD (n = 3). *p < 0.05, **p < 0.01, ***p < 0.001.

Fig. 5. In vivo biodistribution of TPH/CH MN applied in CIA mice. (A) IVIS fluorescence images of normal mice and CIA mice treated with TOTO-3-labeled TPH/CH MN at 3 and 6 h. (B) Fluorescence images of the major organs collected from normal and CIA mice 6 h after application of TOTO-3-labeled TPH/CH MN. (C) Quantitative fluorescence analysis of the major organs and paws. Data are shown as mean ± SD (n = 3). *p < 0.05, **p < 0.01, ***p < 0.001.

Fig. 6. Inhibitory effect of TPH/CH MN on arthritic progression in CIA mice. (A) Schematic illustration of the treatment of CIA mice with TPH/CH MN. (B) Representative hind paw images in the normal, saline, TPH/CH NCs, TPH MN, CH MN and TPH/CH MN groups at day 30 and day 48. (C) Body weight change over time in the different groups. (D) Paw thickness of mice in the normal, saline, TPH/CH NCs, TPH MN, CH MN and TPH/CH MN groups. (E) Clinical scores of mice at the end of the therapy. (F) Micro-CT analysis of the hind paws of mice after treatment. Data are shown as mean ± SD (n = 6; *p < 0.05, **p < 0.01, ***p < 0.001).
Blue-Green Infrastructure for Sustainable Urban Stormwater Management—Lessons from Six Municipality-Led Pilot Projects in Beijing and Copenhagen
Managing stormwater on urban surfaces with blue-green infrastructure (BGI) is being increasingly adopted as an alternative to conventional pipe-based stormwater management in cities. BGI combats water problems and provides multiple benefits for cities, including improved livability and enhanced biodiversity. The paper examines six municipality-led pilot projects from Beijing and Copenhagen, through a review of documents, site observations and interviews with project managers. Beijing's projects attempt to move away from a pipe-based approach but are still dominated by solutions that make limited use of BGI; they could benefit from better integration of multiple benefits with stormwater management. Copenhagen's projects combine stormwater management with amenity improvement, but lack focus on stormwater utilization. The reviewed municipality-led pilot projects are shown to play an important role in both testing new solutions and upscaling them in the process of developing more sustainable cities. Key lessons are extracted and a simple guideline synthesized. This guideline suggests necessary considerations for a holistic solution that combines stormwater management and urban space improvements. Key lessons for sustainable solutions include defining a clear water technique priority, targeting both small and big rain events, strengthening 'vertical design' and providing multiple benefits. An integrated stormwater management and landscape design process is a prerequisite to the meaningful implementation of these solutions. Research and documentation integrated with pilot projects will help upscale the practice at city scale.
Introduction
Cities nowadays face great challenges in the management of stormwater from frequent heavy rainfalls exacerbated by climate change, water stress and deterioration of the water environment, all of which impede efforts to improve living conditions. Having learned that pipe-based drainage systems alone are inadequate for these challenges, cities are searching for new ways to manage stormwater and to achieve multiple sustainability goals at the same time [1]. The urban landscape can contribute to these new solutions by drawing on a set of overlapping concepts and terms such as sustainable drainage system (SUDS), low impact development (LID), water sensitive urban design (WSUD), (blue) green (stormwater) infrastructure (BGI), and sponge city (SC) [2,3]. Techniques related to these concepts have been explored as niche practices, i.e., novel and still-unstable solutions developed and implemented by dedicated but often fringe actors in cities around the world. These practices are mainly driven by each city's own water stress [1].
The blue-green infrastructure (BGI) approach seeks to mitigate flooding and improve the quality of stormwater discharge by applying decentralized blue-green elements that mimic the natural hydrograph. These elements manage stormwater through processes of infiltration, evapotranspiration, retention, detention and slow transport, while providing multiple benefits to cities, such as conserving local water resources, improving livability and supporting biodiversity [2]. Despite the relatively well-known principles, knowledge of cities' practical experience with BGI for stormwater management (SWM) is lacking. This study has been motivated by a desire to learn practical lessons and to bridge the gap between research and practice.
Theoretical Background
Water techniques for BGI can be categorized into three types according to the hydrological processes involved [4,5]. (i) "Onsite control" by small-scale solutions, such as green roofs, raingardens and permeable pavement, all of which aim to retain as much stormwater locally as possible. The process is mainly retention, i.e., "absorbing" stormwater onsite through infiltration, evapotranspiration or reuse, generally without discharging runoff further downstream. "Onsite control" contributes positively to flood mitigation, water quality improvement and the local water balance [6]. (ii) "Process control" by using swales and ditches to transport stormwater slowly downstream. These processes may reduce floods by increasing the concentration time, but can also improve water quality and the local water balance through infiltration [7]. (iii) "Downstream control", or controlled discharge, by the use of larger-scale facilities such as dry basins, ponds and wetlands for temporary detention and slow discharge to recipients or downstream urban drainage systems. Downstream detention contributes to flood prevention and water quality improvement through sedimentation, but does not improve the local water balance.
To facilitate the processes of retention, detention or transportation, BGI systems need to be able to manage a certain volume of stormwater. This volume is directly related to the size of the effective impervious area (EIA) [8], i.e., the area that generates stormwater runoff to the BGI element, the BGI's hydrologic function, the earthwork required for landscape construction, and the BGI's potential benefits to cities. Storage volume is often related to a service level, i.e., the rainfall return period a system is dimensioned to handle. For example, with a service level of three years, the stormwater drainage system is designed to handle a three-year (3-year) rain event, which is the worst rain event that statistically occurs once every three years, that is, a 3-year return period. When managing stormwater volumes on the urban surface, as part of the urban landscape, these systems may provide multiple benefits to the city, such as socio-cultural benefits (recreation, aesthetics of urban landscape, playful urban space, public education), biodiversity and other ecological benefits, and improved economic performance. Therefore, the design of an optimal BGI-based SWM system needs to be integrated with landscape design. When targeting smaller stormwater storage volumes for 'daily' rain events that occur frequently (up to 0.2-year), water features are likely to be visible more often (and thus have good potential as landscape assets), the construction investments are relatively low, and the system contributes to managing a large fraction of the annual rainfall [9]. When targeting larger stormwater storage volumes for heavier rain events that occur more rarely (e.g., >1-year), water features are seldom visible or reach the system's full capacity, the construction investment is relatively high, and the system mainly functions as flood prevention (ibid.). To make a system sufficiently robust to handle rare rain events as well as more common events, BGI with double-profile functions are relevant both for on-site and downstream control. Visible water appears in the lower profile during small rain events, and during heavy rain events detention capacity is available in the higher profile. The higher profile, which is designed to accommodate rare periods of temporary flooding, can be integrated with such other urban functions as pedestrian paths, parking lots, streets and playgrounds.
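To make the dimensioning logic concrete, the following minimal sketch (with invented example numbers; none of these values come from the case projects) estimates the storage volume a BGI element must provide for a given design rain over its EIA:

```python
# Minimal sizing sketch (illustrative assumptions, not project data):
# required storage = runoff generated by the design rain over the
# effective impervious area (EIA), minus what infiltrates during the event.

def required_storage_m3(eia_m2: float, design_rain_mm: float,
                        runoff_coeff: float = 0.9,
                        infiltration_mm: float = 0.0) -> float:
    """Volume (m3) a retention/detention element must hold for one event."""
    runoff_mm = design_rain_mm * runoff_coeff - infiltration_mm
    return max(runoff_mm, 0.0) / 1000.0 * eia_m2  # mm depth over m2 -> m3

# Example: 5,000 m2 of EIA and a hypothetical 10-year design rain of 50 mm:
print(required_storage_m3(5_000, 50))   # 225.0 m3
```

Note that a system dimensioned for a T-year event is still exceeded at least once over n years with probability 1 − (1 − 1/T)^n, which is one reason the double-profile designs described above reserve extra capacity in the higher profile.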
Transition management theory is engaged with ways to facilitate and accelerate sustainable development. As a sub-component of transition management, niche practices incubate innovations and build internal momentum that challenges the cognitive routines in the professional community, thus opening the possibility for developing more sustainable, large-scale practices over time [10]. For niche innovations to lead to a wide breakthrough, their technical and financial performance, learning processes for improving system design, and the involvement of the most influential actors in relevant practices are crucial. Municipality-led pilot projects as niche practices may play important roles in the sustainability transition [11] of the urban SWM system. They provide opportunities to explore new approaches, technologies and products. Pilot projects concerning both SWM functions and multiple benefits to cities are real-life performance tests and provide lessons relevant both for improving the less successful practices and for upscaling the successful practices. To optimize the process of learning from pilot projects, project documentation and performance monitoring are important. Based on literature relevant to performance evaluation of SWM projects, e.g., [12][13][14], and the identified potential benefits of such approaches [15], eight major foci of BGI projects for sustainable urban SWM are summarized in Table 1.
Table 1. Major focuses of blue-green infrastructure projects for sustainable urban stormwater management. Based on e.g., [12][13][14][15].
Major focus: Principle
Flood/runoff control: Volume retention/detention, runoff reduction, peak flow reduction, size of effective impervious area (EIA) 1, size of blue-green infrastructure element
Stormwater utilization: Stormwater reuse for non-drinking water supply, infiltration and groundwater recharge
Aesthetics and amenity: Water visibility, playful water, aesthetics, form
Water-landscape design integration: Water dynamics in relation to landscape elements, vertical/dimensional design
Water quality: First flush separation and treatment, sedimentation, vegetation treatment, soil filtration, UV treatment, etc.
Biodiversity/ecological performance: Vegetated area, multi-species, native species, multi-layer, habitat for wildlife
Inter-sector/stakeholder collaboration: Collaboration between water engineers and landscape designers/planners; stakeholder involvement
Innovation & documentation: Research and technical/design innovation embedded in the project, monitoring before and after implementation, documentation of effects
1 Effective impervious area (EIA), i.e., the area that generates stormwater runoff to the BGI element.
Research Gap and Objective
Both Beijing and Copenhagen have started to explore the potential of BGI as a step towards sustainable urban SWM. In addition to integrating the BGI approach in their flood management and climate change adaptation plans, both cities have been implementing BGI pilot projects. This study is an extension of an earlier investigation on Beijing's and Copenhagen's climate resilient strategies and their linkages with sustainability [15], where details about the reasons for studying Beijing and Copenhagen, the general background of the two cities, and their major water management challenges, strategies and activities were provided. In summary, Beijing and Copenhagen were used for the study due to their front-runner status in their countries' search for resilient solutions to the condition of climate change, thus satisfying the specific funding frame of this research.
A gap exists between the technical aspects of SWM and the planning and design practices applied to achieve multiple benefits, as well as between the final technical solution and the processes intended to generate such a solution. Most studies focus on the hydraulic performance of a specific BGI element, e.g., [12][13][14]. Only a few studies (e.g., [16,17]) actively link SWM and multiple benefits. There are many guidelines and tools related to the application of BGI elements for SWM. However, a systematic approach to planning/designing such projects is lacking: What knowledge and considerations should be available during various stages of the project process, and what steps could lead to a holistic and sustainable project solution? Further, literature introducing BGI pilot projects in a holistic way is scant. A substantial collection of data from a diverse range of sources seems necessary to understand, compare and analyze these initiatives.
This paper aims to address these gaps by systematically presenting and critically reflecting on selected BGI pilot projects. The objective of this paper is to extract key lessons from earlier pilot projects from Beijing and Copenhagen, as stepping stones to indicate ways forward for future practices. The paper highlights how the pilots in Beijing and Copenhagen can inform planners and designers on the process of developing sustainable urban water systems. Thus, based on these new lessons and pre-existing knowledge, the paper provides a simple guideline that visualizes necessary considerations and vital steps towards a holistic solution of BGI for SWM projects. The paper, mainly targeted at urban planners and landscape architects involved in BGI for SWM projects, helps to bridge the gap between the technical side of urban water management (dominated by environmental and civil engineering practices) and the 'softer' aspects of landscape architecture and planning, which are relevant to the livability of cities. This will strengthen planners' and designers' capacity to engage in dialogue with engineers and other technical professionals, by making engineering knowledge readily available to them. Simultaneously, this paper provides engineers with arguments on how technical solutions to SWM can serve a city better, at a reasonable cost, when multiple benefits are incorporated.
Materials and Methods
The initial purpose of this study was to generate an overview of Beijing's and Copenhagen's pilot projects: their goals and strategies, applied SWM elements, documented effects, and perceived challenges. Lessons learned from these analyses were extracted with a view to improving the planning, design and management of BGI-based SWM projects.
Case Study Design
Six municipality-led pilot projects were studied: three from Beijing and three from Copenhagen (see Table 2). All projects have been implemented and continue to be in operation. Selected case projects fit the following criteria:
1. The project is among the early generation pilot projects in the city.
2. The project is driven, or partially driven, by city administrations.
3. The selected projects represent different types of projects, for example, projects in residential areas, public parks and available urban spaces.
Due to the limited number of implemented pilot projects, the selected pilot projects in the two cities are not directly comparable in terms of size, type and implementation time. However, the selected projects give an overview of the cities' major early approaches to the exploration of alternative SWM. Further, in line with Flyvbjerg [18], the limited number of cases enabled in-depth investigation.
Data Collection and Analyses
Data sources included project plans and documents, site observations and semi-structured interviews with key project managers. Project documents were retrieved from project owners and complemented with data publicly available on websites and in libraries. Each project site was visited at least twice by the authors. Interviews were selected as a method to complement the information provided in the written documents. One or two in-person interviews with key project managers from each project were followed by telephone and email communications for clarification. Based on the theoretical background (Section 1.1), the collected data was organized and analyzed according to the following framework:
1. Project goals and strategies
2. Design factors related to hydraulic function, including size of the project, its location within the catchment, priority of water techniques, designed service level and vertical design, i.e., design of various landscape elements and their spatial relations, including elevations of the technical elements (inlet, outlet, overflow) for the hydraulic functions for SWM
3. Designed BGI elements, forms and functions as related to SWM
4. The performance of the project after implementation, including impact and barriers
Through a reflexive cognitive process, lessons from the six pilot projects, combined with the existing knowledge (Section 1.1), were synthesized into a guideline towards a holistic solution for BGI SWM projects.
Results
Overviews of the six municipality-led SWM pilot projects in Beijing and Copenhagen are provided in Table 3. See also the Supplementary Material.
Characteristics of the Case Projects in Beijing
The three Beijing cases began many years before the Copenhagen cases, and the two in dense urban areas are dominated by alternative solutions that draw less on BGI. All three projects prioritize the retention SWM technique, which contributes to both flood control and water balance improvement. Engineering elements (such as underground water storage tanks and permeable pavement) combined with sunken green spaces are applied for on-site flood control (Table 3). Infiltration, stormwater cleansing, stormwater harvesting and groundwater recharge were applied to improve the water balance. All three projects achieve stormwater utilization rates of over 80%, i.e., 80% of annual runoff is captured and reused through infiltration and groundwater recharge, or collected in storage tanks (i 1 = interviewee 1). Collected stormwater in tanks is intended for non-potable use, including watering nearby green space, street cleaning, fire-fighting and car washing (i 1,2 ).
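As a hedged illustration of how such a utilization rate can be read (made-up volumes, not figures from the Beijing projects), the rate is simply an annual water-balance ratio:

```python
# Illustrative annual water balance (invented volumes, m3/year): the
# utilization rate counts infiltrated/recharged plus tank-harvested water
# against total annual runoff from the connected area.

annual_runoff = 10_000.0
infiltrated   = 6_500.0
harvested     = 1_800.0

utilization_rate = (infiltrated + harvested) / annual_runoff
print(f"{utilization_rate:.0%}")   # 83%
```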
Compared with the Copenhagen projects, Beijing's three case projects apply more engineering elements for SWM, and these have mainly technical functions with few added livability or ecological benefits. Only a few visible water elements were designed as part of the urban landscape, and even these are less articulated (or "designed") for recreational, aesthetic or educational purposes ( Table 3). The Olympic Park plan had considered the use of collected stormwater to supply a fountain, but this was either not implemented or is not visible (personal observation). The Gravel Pit project included a circular wet pond, showing some consideration of providing visible water but with little endeavor to enhance its aesthetic value (pers. obse.). Beijing's pilot projects treat stormwater through first flush separation, sedimentation and filtration through vegetated substrate soil or permeable pavement [19,20]. Biodiversity and ecological performance were considered to a limited extent by including native plants, sunken green space and a vegetated riverbank, and by using stormwater for watering vegetation (ibid.). Research, technical innovation and monitoring of technical performance were emphasized (ibid.). Monitoring was conducted during the initial years, and then stopped due to lack of budget and personnel resources (i 2 ). The documented performance included construction cost, pollutant reduction, annual stormwater utilization volume/rate, runoff co-efficient reduction, annual discharge reduction volume/rate and impact on groundwater level [19]. Beijing's case projects played an important role during the city's early stage of SWM practice (i 1,2 ). They locally adapted and demonstrated the feasibility of non-pipe based solutions for SWM projects targeting the city's water challenges and have been used as models for many other projects in Beijing and other Chinese cities (i 1,2 ). They also produced a rich set of experiences and technical data, which were used to develop local technical guidelines for SWM projects. Both pilot projects and technical guidelines have had a great impact on implementation of city-scale SWM projects in the past 15 years (i 1,2 ; pers. obse.). For example, water storage tanks, permeable pavement and sunken green spaces have been widely implemented in Beijing (ibid.).
Characteristics of the Copenhagen Case Projects
Copenhagen's case projects focus more on flood control than stormwater utilization. Landscape elements (raingarden, swale, vegetated or paved recreational area as detention basin) are major components of these relatively new SWM systems (pers. obse.), and these elements are often combined with engineering elements (water storage tanks, soakaways etc.) for flood control, still with minor consideration of stormwater utilization (i 4,6,7 ). Due to stringent considerations on water quality for recreation with human contact, the Lindevang Park project even dropped an early idea to reuse stormwater from roofs and roads that was collected in an underground basin to supply the fountain in the square (i 4 ). Collected water is slowly discharged to the sewer. Taasinge Square and Lindevang Park combined retention with detention, contributing to both water balance improvement and flood control, although the contribution to flood control was minor due to the limited size of the connected EIAs and their relative upstream locations within the catchments (Table 3). With mainly detention but also some consideration for reusing stormwater for watering vegetation (ibid.), Sct. Annae Square contributes mainly to flood control, with a minor contribution to water balance improvement.
In Copenhagen's case projects, the landscape elements were integrally designed both for SWM and to provide multiple benefits. Projects in Taasinge Square and Lindevang Park included water elements during small rain events, for the purposes of aesthetics, play and environmental education (Table 3). In Sct. Annae Square, early ideas for visible water elements in playgrounds and pedestrian areas were dropped, so the site's historical architectural features could be better preserved (i 4 ). Copenhagen's cases ensure that stormwater runoff into the environment is of acceptable quality, mainly by allowing runoff from roofs, non-motor-traffic and non-de-icing surfaces to be treated before infiltration and discharge into surface waters. Treatments often include bio-filtration with filter soil. UV treatment is sometimes applied, especially for stormwater to be reused for recreational purposes. Stormwater quality is not systematically monitored. Biodiversity and ecological performance were considered to a limited extent, by careful introduction of native plants, water- and drought-resistant plants, and fruit trees and bushes. Research, technical innovation and monitoring of technical performance have not been carried out (i 4,6,8 ); therefore little technical performance documentation exists, although the major elements and the whole project have been observed generally to work (i 4,5,6,8 ). Parameters considered for performance evaluation include area disconnected from sewers, infiltration rate of vegetated or permeable surfaces, appreciation and use of urban space by local citizens and businesses, and construction costs in relation to conventional engineering solutions (Table 3).
Copenhagen's case projects have been used to showcase integrated solutions that combine SWM with the provision of multiple benefits in urban spaces (pers. obse.). They continue to be used intensively for international communication and city branding, and contribute greatly to Copenhagen's high reputation for applying BGI solutions to cloudburst management, even though the city's Cloudburst Management Plan (2012) is mainly based on detention (pers. obse.). The fact that Copenhagen's case projects have little research and documentation makes it difficult to disseminate solutions, techniques and lessons learned to the city managers and practitioners for the purpose of upscaling.
Comparison of the Six Pilot Projects
Comparing the outcomes of the projects and the goals stated in the project documents and by interviewees, it is observed that not all project intentions have been implemented (Table 3). The six case projects apply very different SWM techniques, concerning on-site control (retention) versus controlled discharge (detention), EIA size beyond BGI elements, service level and types of selected retention-detention elements (Table 3). On Sct. Annae Square, an existing drainage pipe constrained the intended vertical design of a deeper sunken green space, which led to an adjustment of the dimensions of the sunken green space. Delineation of the EIA of a project seems to be affected by targeted water problems and by other SWM systems in or near the project area. When EIA outside of the BGI elements is smaller, a higher SWM service level can be achieved. Setting up a sustainable service level needs to consider all resulting benefits of an investment. For Taasinge Square, with an upstream location, designing raingardens for on-site retention of up to a 500-year rainfall may be over-dimensioned, considering the limited EIA they serve. A larger EIA could potentially be included if a lower service level is determined to be acceptable.
The landscape expression of Beijing's cases reveals less integration of SWM design and landscape design. SWM elements are less visible and have fewer functions during small rain events. This seems to relate to the prioritized goals of the city and the separated design processes of landscape and SWM system, each with different actors (Table 3). SWM intervention was led mainly by the water sector and designed by engineers, while landscape design was led mainly by landscape designers in a separate process. It seems that the engineers emphasized utility functions over aesthetics and social-cultural benefits, while the landscape designers' understandably limited technical competence on hydraulics may have prevented them from integrating SWM functions into the design of landscape forms and functions (pers. obse.). In Copenhagen's cases, landscape designers played a much larger role in devising plans for the integration of SWM systems into the urban landscape, and engineers provided relevant technical support (i 4-8 ).
Beijing's projects target the city's challenges related to water supply and flood control, and are well-aligned with the city's water management strategies and plans [15]. Combining research with the pilot projects made it possible to include lessons learned in technical guidelines [32] for upscaling the projects in the city (i 1 ). On the other hand, since these first-generation pilot projects included relatively few BGI elements, the city may need to take a more proactive effort in order to integrate multiple benefits with water management, probably by showing the way in a new generation of pilot projects.
Copenhagen's projects focus mainly on combining flood control with livability, and generally align with Copenhagen's climate resilience strategy. They showcase more BGI retention solutions than the city's Cloudburst Management Plan (2012) indicates, and provide values for upscaling towards a more sustainable direction. Taasinge Square has improved livability and biodiversity through citizen involvement and by integrating landscape design with SWM. Lindevang Park shows how an upstream park, with both on-site control with visible water elements for small rains and potential detention volume for 100-year rain events, can provide multiple benefits. Sct. Annae Square shows an SWM solution in a downstream, historically important urban setting, by targeting flood control of a large catchment area. Water utilization for local water balance played little role in the Copenhagen cases. If a green and sustainable city is the ambition, this issue should be addressed by future pilot projects. Unlike Beijing, Copenhagen had not devised technical guidelines that designers for the three case projects could refer to. Ironically, this may have enabled the designers to focus on the unique aspects of their sites, and thus to maximize multiple benefits from their projects.
Both cities have increased their investment in SWM and flood control, and both increasingly realize the socio-cultural benefits that BGI based solutions can contribute to a city. Therefore, more projects with integrated stormwater and landscape design are foreseen in the future. Unveiling potential methods and processes for achieving a good design for SWM projects is thus expected to benefit future practice.
Discussion
Key considerations for integrated urban SWM projects are discussed below.
A Simple Guideline for Planning and Design
Important considerations for reaching a suitable planning and design solution, integrating SWM and multiple potential benefits in urban space, are summarized in Figure 1, which is a key guideline for planners and designers embarking on a sustainable SWM journey.
Key Considerations and Priority of Water Techniques
Site-catchment relation (i.e., location and hydraulic relation), specific site conditions like terrain, construction and soil, and the design objectives targeting the city's water challenges and other (re)development needs can limit water technique selection and thus are important considerations for finding relevant project solutions. The ability to clearly prioritize water techniques concerning infiltration and groundwater recharge, evapotranspiration, reuse, detention and discharge is a prerequisite for the overall project solution. The SWM priority that best contributes to improving the urban water balance is: first, retention (cleansing water and infiltration, evapotranspiration, harvesting and reuse); second, detention (cleansing) before throttled discharge to receiving surface water bodies; third, discharge to sewers. A sustainable solution needs to target both frequent small rain events and rare events that generate large runoff volumes. On-site retention for small rains and detention-discharge for heavy rain events appear to be priorities for upstream and downstream locations respectively, although considerations for both small and extreme rains are relevant for all projects that seek to achieve multi-functional success. The right mix and match of options depends on the conditions of the specific site and catchment.
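The priority cascade can be rendered as a simple decision rule. The sketch below is schematic only; the boolean conditions are invented placeholders, since real projects weigh stormwater quality, soil, groundwater risk and recipient availability case by case:

```python
# Schematic of the SWM technique priority described above (invented
# condition names; not a design standard).

def swm_priority(water_clean_enough_to_infiltrate: bool,
                 soil_and_groundwater_permit_infiltration: bool,
                 surface_recipient_available: bool) -> str:
    if water_clean_enough_to_infiltrate and soil_and_groundwater_permit_infiltration:
        return "1st: retention (cleanse, infiltrate, evapotranspire, harvest/reuse)"
    if surface_recipient_available:
        return "2nd: detention and cleansing, then throttled discharge to surface water"
    return "3rd: discharge to sewers (last resort)"

# Example: polluted road runoff on low-permeability soil near a harbor basin.
print(swm_priority(False, False, True))
```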
Site Condition and Urban Context
Of the site conditions, stormwater quality, groundwater risk and soil conditions seem to be decisive for whether retention (infiltration, evapotranspiration, reuse) can be prioritized, in combination with the availability of unpaved surfaces, terrain conditions and existing site infrastructure (i 1-8 ; pers. obse.). In addition, local regulations on water quality influence water management priorities. Due to Copenhagen's stringent considerations and regulations on stormwater quality for infiltration and recreational use, different SWM priorities are applied to different stormwater sources, and stormwater reuse and infiltration are limited mainly to roof water management [33,34]. In Beijing, regulations associated with stormwater infiltration are less strict, and therefore infiltration is more commonly applied. However, the impact of stormwater infiltration on groundwater quality requires further examination. This difference calls for clearer standards, maybe internationally, for stormwater quality control and environmental impact.
Vertical Design and Landscape Design for Multiple Benefits
Vertical design plays an important role, especially for the selection and design of SWM elements. Since water flow is based on gravity, the placement of elements and their relations to each other influence how water can run through the designed system and the way it can be treated, detained, retained or reused. The placement of outlets and overflows in BGI elements marks the distinction between detention and retention elements. Vertical design is also an integrated part of landscape planning and design, and thus requires thorough consideration of site conditions and expected socio-cultural functions (aesthetic, recreational etc.). The optimal final planning and design solution seems to emerge through a process intertwined with selection and design of SWM elements, vertical/dimensional design and landscape design for multiple benefits. The planning and design process organizes SWM elements spatially, associates multiple benefits with each element, and adapts the elements into meaningful forms that strengthen the multiple benefits and multiple urban functions. These multiple urban functions often relate to a situation with little or no rain. An integrated SWM and landscape design process seems to be a prerequisite for an integrated solution with multiple benefits, which indicates an interesting area for future research and calls for co-design and interdisciplinary cooperation in the planning and design practice.
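As a toy illustration of the gravity constraint in vertical design (hypothetical element names and elevations, not from the case projects), a chain of BGI elements can be checked by comparing outlet and inlet levels:

```python
# Toy gravity-flow check for a chain of BGI elements (invented elevations
# in metres above datum). Water can only pass downstream if each element's
# outlet/overflow sits above the inlet of the next element.

chain = [
    ("raingarden",      {"inlet": 10.8, "outlet": 10.4}),
    ("swale",           {"inlet": 10.3, "outlet": 9.8}),
    ("detention_basin", {"inlet": 9.6,  "outlet": 9.0}),
]

for (up_name, up), (down_name, down) in zip(chain, chain[1:]):
    ok = up["outlet"] > down["inlet"]
    print(f"{up_name} -> {down_name}: {'OK' if ok else 'no gravity flow'}")
```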
Conclusions
This study has identified gaps among goals, performance and other potential considerations related to sustainable SWM of six municipality-led pilot projects in Beijing and Copenhagen. Hence, this study serves as a relevant source of knowledge for city administrations, consultancies and researchers engaged with SWM and BGI. The two cities' practices, each with their strengths and weaknesses, can serve as inspiration in the search for sustainable city solutions. Beijing's case projects served to test and locally adapt non-pipe-based solutions to SWM and provided inspiration for future projects in Beijing and throughout China. SWM techniques were dominated by engineering and drew less on BGI-based alternatives for both flood control and stormwater harvesting through detention and retention, calling for a more proactive effort to integrate multiple benefits with stormwater management in urban spaces. Copenhagen's case projects took an integrated approach to combine SWM techniques with amenity improvements, supporting Copenhagen's brand as a green city. Improving the local water balance played only a marginal role in the Copenhagen cases, calling for future action if a green and sustainable city is the ambition.
A simple guideline for the planning and design of sustainable BGI projects was developed and discussed. This guideline illustrates a range of technical and procedural indications for future BGI projects for SWM. Defining clear priorities among possible SWM techniques, targeting both small and big rain events, strengthening vertical design and providing multiple benefits through landscape design were identified as key steps to achieve a sound project solution. An integrated SWM and landscape design process is seen as a prerequisite for a sustainable solution with multiple benefits. Identifying theoretical and empirical knowledge that can help tackle these key steps, and understanding more precisely how integration between SWM and landscape design process can be accomplished would be interesting areas for future research. The number of cases included in the study was limited, partially because monitoring data and project documentation for pilot projects are generally lacking in both cities. Future investigation of a larger number of pilot projects may provide more information for further refining the findings from the current study. This calls for a future practice that combines research and documentation with pilot projects, thus facilitating empirical learning and guiding the upscaling of BGI practices in a more sustainable direction.
Author Contributions: L.L. conducted the investigation, including document review, site investigation and interviews, carried out formal analysis and conceptualization, and prepared the original draft and figure. O.F. contributed to structuring, reviewing and editing the article, as well as validation of the research methodology and the presented data and results of the Copenhagen cases. S.Z. contributed to selection and investigation of the case projects in Beijing, validated the presented data and results of those cases, and reviewed the article.
Recent Demographic History and Present Fine-Scale Structure in the Northwest Atlantic Leatherback (Dermochelys coriacea) Turtle Population
The leatherback turtle Dermochelys coriacea is the most widely distributed sea turtle species in the world. It exhibits complex life traits: female homing and migration, migrations of juveniles and males that remain poorly known, and a strong climatic influence on resources, breeding success and sex-ratio. It is consequently challenging to understand population dynamics. Leatherbacks are critically endangered, yet the group from the Northwest Atlantic is currently considered to be under lower risk than other populations while hosting some of the largest rookeries. Here, we investigated the genetic diversity and the demographic history of contrasted rookeries from this group, namely two large nesting populations in French Guiana, and a smaller one in the French West Indies. We used 10 microsatellite loci, of which four are newly isolated, and mitochondrial DNA sequences of the control region and cytochrome b. Both mitochondrial and nuclear markers revealed that the Northwest Atlantic stock of leatherbacks derives from a single ancestral origin, but showed current genetic structuration at the scale of nesting sites, with the maintenance of migrants amongst rookeries. Low nuclear genetic diversities are related to founder effects that followed substantial bottlenecks during the late Pleistocene/Holocene. Most probably in response to climatic oscillations, with a possible influence of early human hunting, female effective population sizes collapsed from 2 million to 200. Evidence of founder effects and high numbers of migrants make it possible to reconsider the population dynamics of the species, formerly considered as a metapopulation model: we propose a more relaxed island model, which we expect to be a key element in the recovery of populations currently being observed. Although these Northwest Atlantic rookeries should be considered as a single evolutionary unit, we stress that local conservation efforts remain necessary since each nesting site hosts part of the genetic diversity and species history.
Introduction
Natural populations are dynamic systems facing variations in time and space that are directly or indirectly related to environmental changes. Consequently, population genetics deals with non-equilibrium states, meaning that alongside long-term adaptive processes, other complex mechanisms have to be incorporated such as the balance of gene flows among populations, changes in the sizes of populations, population dispersals to gain new or depleted habitats, and movements between breeding and feeding areas. Among population dynamics models, the metapopulation concept has been extensively considered and refers to an assemblage of ephemeral interacting subpopulations (i.e. including emigration and immigration events) that persist over time in a dynamic balance of local declines and increases [1,2]. The extent of these interactions defines the strict metapopulation model, consisting of successive stages of extinction and colonization of local subpopulations, irrespective of the demography of other populations [3]. In contrast, the island model considers a total population divided into subgroups, each breeding randomly within itself, but with some migrants removed from the entire group [4,5]. In both cases, dispersions between populations result in gene flows that influence the genetic diversity of sources and sink populations [6,7].
Metapopulation theory also addresses demography and structure of subpopulations, and thus their extinction probability [8].
Lower migration rates lead to greater loss of heterozygosity and thus to lower effective population sizes [9,10]. Also, when a new population is established by a very small number of individuals from a larger population, founding events are a source of genetic drift, with populations of different ages showing different levels of structuration according to colonization time [11].
Demographic events and migrations also result in contrasted signatures of genetic diversity. A decrease in the effective population size results in an excess of gene diversity at neutral loci, because the rare alleles that were lost contributed little to the heterozygosity of the ancestral population [12]. In contrast, recent population expansion and founder effects result in a heterozygosity deficit [13]. With respect to migratory behavior, a trait that integrates behavioral, physiological and morphological characters as well as life histories [14], the spatial segregation of breeding and nesting sites may result in successive stages of mixing and isolation of genetic stocks. Migration makes the assessment of differentiation within sympatric and parapatric populations [15] and the investigation of demographic histories [13] more difficult to achieve.
A good understanding of the history, magnitude and drivers of past changes is necessary if we hope to adequately assess the current status of threatened species and populations and make future projections of their likelihood of extinction or recovery [16,17]. The leatherback turtle (Dermochelys coriacea, Vandelli, 1761) is a pelagic marine species widely distributed in tropical and subtropical waters and is currently classified as "critically endangered" with a constantly declining global population trend [18]. Today, the Atlantic Ocean hosts most of the world's populations, some of them showing stable and even positive trends in terms of nesting activity [19]. Most of the largest Atlantic rookeries are located in the north-eastern part of South America/West Indies and in western Central Africa [20], considered as part of the Regional Management Unit (RMU) of the northwest Atlantic, and southeast Atlantic RMU, respectively [21]. The NW Atlantic RMU has been classified as 'low risk' and is considered to face low threats [22]. The leatherback turtle's life cycle involves pluriannual migrations after the nesting periods [23,24] and female natal homing behavior [25,26], and this complexity makes the issues of population dynamics and status difficult to address. Nevertheless, our understanding of phylogeographic patterns, population dynamics and behavior in sea turtles has been greatly improved thanks to molecular markers [26][27][28][29][30][31][32]. Autosomal microsatellite variability has been shown to provide relevant estimates of both the timescale and strength of past demographic events [33,34] thus allowing the assessment of recent changes in population size and potential recovery [17,35]. However, few studies using microsatellite markers have been performed on leatherbacks [28,29]. No founder effect and/or bottlenecks were evidenced, and consequently a metapopulation model was suggested, with a rapid turnover of rookeries and settlements of new populations resulting from massive arrivals of a large number of migrants [29,36].
In this study, we aimed to investigate the recent demographic history and the current fine-scale structure of the NW Atlantic Ocean RMU using the most recent markers, including some recently published [37,38] and sensitive analytical methods [33,39,40]. We focused on three nesting rookeries that are very different in terms of population sizes and recent trends in nesting activities. Two of these rookeries are in French Guiana, namely (i) the historical major nesting site of Awala-Yalimapo, where thousands of nests have been recorded yearly for decades [20,41], and (ii) the recent nesting site of Cayenne [42] where nesting activity increased from 3,000 nests to 9,000 nests/year during the last decade. The study is completed by the small nesting sites of Guadeloupe and Martinique (French West Indies) where only a few dozen females are observed every year [43]. We tested individuals within these rookeries for a set of 10 microsatellite markers, and sequenced the control region and the mtDNA cytochrome b gene to consider: (i) The small-scale structure of these rookeries and the strength of migrations among the rookeries in order to achieve a precise evaluation of nest-site fidelity and geographic level of gene flow within the NW Atlantic RMU, (ii) The historical baselines of effective population sizes, in order to understand the possible extent of recent demographic changes and their significance for current and future population status.
Field Sampling and DNA Storing
Skin biopsies (taken with a 4 mm Biopsy Punch, Kruuse®, and conserved in 99% ethanol) or blood samples (taken with a heparinised syringe from the venous sinus in the hind flipper) were collected from nesting leatherbacks during oviposition between 1990 and 2010. Three sets of samples were considered, corresponding to the three following rookeries: (i) Awala-Yalimapo (AY), Western French Guiana, at the border with Suriname (n = 52); (ii) Cayenne (CAY), East French Guiana, 300 km east of AY (n = 95); and (iii) Martinique (n = 56) and Guadeloupe (n = 12) in the French West Indies (FWI), 200 km apart and 2,000 km northwest of French Guiana (Figure 1). Total DNA was extracted following the phenol/chloroform procedure [44].
Microsatellite standardization
We built two microsatellite-enriched genomic libraries [45]. One of the libraries was enriched for dinucleotide sequences using (CT)8 and (GT)8 biotinylated microsatellite probes and the other for tetranucleotide sequences using (GATA)4 and (GACA)4 biotinylated microsatellite probes. The selected fragments were amplified by PCR and then cloned into the pGEM-T vector (Promega®). Plasmids were introduced into XL-1 Blue cells, and transformed cells were cultivated on agar plates (incubation temperature 37 °C) containing 100 µg/ml of X-gal and ampicillin. About 500 clones containing inserts were sequenced with the ET Dye Terminator Cycle Sequencing Kit (Amersham Biosciences), following the manufacturer's recommendations, in an automated MegaBACE 1000 DNA analysis system. Repeated microsatellite motifs were found using the Gramene Project SSR tool [46]. Thirty-nine sequenced clones presented microsatellite motifs, from which 14 primer pairs were designed and synthesized. For those 14 microsatellites, PCR conditions were optimized after successive amplifications of five samples, and thereafter 32 samples were amplified to check for putative genotyping errors, polymorphism and the quality of the peaks discriminating alleles. This procedure allowed us to identify the four most informative markers, namely Dc003, Dc005, Dc008 and Dc013 (Table 1).
Microsatellite genotyping
Besides the four new markers developed in this study (see above), we also analyzed P186, Dc99 [47], Nigra32 [48], LB141 [37], Derm5 and Derm34 [38]. Part of the genotyping (Dc003, Dc005, Dc013, P186, LB141, Derm5 and Derm34) was performed on a Beckman Coulter automated sequencer, using primers pre-labeled with the D2, D3 or D4 dyes. Polymerase chain reaction (PCR) mixes were prepared in a 9 µl total reaction volume including 1 µl of genomic DNA (~10 ng), 0.5 U of Taq polymerase (BioLine®), 200 µM of deoxynucleoside triphosphates, 1X Tris-KCl buffer, 1.0-3.0 mM MgCl2 (BioLine®), and 0.5 µM (for P186 and Nigra 200) or 1.0 µM of each primer. Other loci (Dc008, Dc99 and Nigra32) were analyzed using the MegaBACE 1000, in which primers were synthesized with an M13 tail and fluorescent complementary sequences were added to the PCR reactions [49]. PCR conditions were the same, but in this case we used 0.3 U of Taq Platinum (Invitrogen®) and added 1.0 µM of the complementary M13 reverse primer and 0.1 µM of the forward primer, both labeled with FAM or HEX fluorophores. The amplification program for Dc008 consisted of 3 min at 94 °C, followed by 10 cycles of 45 s at 94 °C, 45 s at 66 °C and 90 s at 72 °C, then 25 cycles of 45 s at 94 °C, 45 s at 50 °C and 90 s at 72 °C, and a final extension step of 40 min at 72 °C. For LB141, Derm5 and Derm34 we followed the conditions previously used by the authors. For the other markers, the amplification program consisted of 3 min at 94 °C, followed by 30 cycles of 30 s at 94 °C, 30 s at the specific annealing temperature and 30 s at 72 °C, and a final extension step of 30 min at 72 °C.
Microsatellite data analysis
GIMLET software [53] was used to quantify genotyping errors for the 10 microsatellites by repeat-genotyping. We randomly selected approximately 25% of all samples (n = 47 individuals) and independently repeat-genotyped these four times for all loci. Across the four genotypings, averaged across loci and across samples, we detected low error values: 1.2% allelic dropout, 1.1% false alleles, and 0.4%, 0.4%, 0.5%, 0.4% and 0.2% of type 1, type 2, type 3, type 4 and type 5 errors, respectively.
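As a minimal sketch of how allelic dropout can be scored from repeat-genotyping (toy data; GIMLET's actual error taxonomy is richer than this), a dropout is counted when a replicate recovers only one of the two alleles of a consensus heterozygote:

```python
# Toy allelic-dropout count across replicate genotypings of one sample at
# one locus: a dropout is a replicate showing a strict subset (one allele)
# of the consensus heterozygous genotype.

consensus = {"A1", "A2"}                      # consensus heterozygote
replicates = [{"A1", "A2"}, {"A1"}, {"A1", "A2"}, {"A2"}]

dropouts = sum(1 for rep in replicates
               if len(consensus) == 2 and len(rep) == 1 and rep < consensus)
print(f"dropout rate: {dropouts / len(replicates):.2f}")   # 0.50
```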
We checked for occurrence of linkage disequilibrium among the 10 microsatellite loci with GENEPOP 1.2 [54] and verified any presence of null alleles with MICROCHECKER 2.2.3 [55] and INEst 1.0 [56], the latter also making it possible to adjust genotype frequencies using the PIM estimator [57]. The Markov chain method was used to assess Hardy-Weinberg equilibrium and observed heterozygote excess of microsatellites, using GENEPOP 1.2. Nucleotide diversity was calculated with FSTAT 2.9.3.2 [58]. ARLEQUIN 3.5 [59] was used to calculate nucleotide diversity, to evaluate the differentiation among populations (RST and FST) and to perform neutrality tests (Ewens-Watterson neutrality test and Chakraborty's amalgamation test). An asymmetric estimate of the migration rate between a subset of pairwise populations was calculated using MIGRATE 3.2.19 [60], with a Bayesian inference strategy and a single-step model. Initial runs were set estimating theta (θ = 4Neμ, with Ne the effective population size and μ the mutation rate) and Nm (number of migrants) with FST, allowing Nm to be asymmetric. Reruns were set using the parameter estimates found in the first run and lengthening the Markov Chain Monte Carlo. MIGRATE 3.2.19 allowed us to define not only emigration and immigration rates, but also their evolution and the evolution of theta and Nm through time.
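For readers unfamiliar with this parameterization, θ converts to effective population size as follows (the numbers are illustrative only, not estimates from this study, though the μ value falls within the prior range used for MSVAR below):

$$\theta = 4N_e\mu \;\Rightarrow\; N_e = \frac{\theta}{4\mu}; \qquad \text{e.g., } \theta = 1.0,\ \mu = 10^{-3}\ \text{per generation} \;\Rightarrow\; N_e = \frac{1.0}{4\times 10^{-3}} = 250.$$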
A Bayesian clustering approach implemented in STRUCTURE 2.3.1 [61] was used to determine whether any hidden population structure resulting from distinct ancestral stocks could falsely generate a signature of population collapse [62]. This method uses a Markov Chain Monte Carlo (MCMC) approach in order to group individuals into K populations based on their genotypes without any prior information. We tested K = 1 to K = 10, using the admixture population model, 1,000,000 iterations, 50,000 burn-in replicates and five independent replicates per K value. The best K value was defined using the log probability of the data Pr(X | K) for each value of K [63].
We also used a multivariate method to make assumptions regarding data structure. Unlike STRUCTURE, multivariate models do not assume that populations are in Hardy-Weinberg equilibrium. Accordingly, a Discriminant Analysis of Principal Components (DAPC, [39]) was performed with the package adegenet in R 2.13.0 [64] in order to identify and describe sequence clusters. The DAPC relies on data transformation using Principal Component Analysis (PCA) as a prior step to Discriminant Analysis (DA), which maximizes the separation between groups. The optimal number of clusters was predicted using the sequential K-means clustering method, and the Bayesian Information Criterion (BIC) was used to choose the best number of groups (K) from 1 to 10. The number of clusters was assessed using the function find.clusters. In all analyses, 40 principal components (PCs) were retained, corresponding to the number of principal components that explained 90% of the cumulative variance.
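A rough Python analogue of this workflow is sketched below. It is not the adegenet implementation used in the study: find.clusters uses k-means with BIC, whereas Gaussian mixtures are used here only because scikit-learn exposes BIC directly, and the genotype matrix is simulated (215 individuals and 10 loci mirror the sample sizes above, but the values are random):

```python
# Illustrative DAPC-style workflow: PCA for dimension reduction, cluster
# number chosen by BIC, then discriminant analysis on the inferred groups.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(215, 10)).astype(float)  # toy allele counts

# Retain enough PCs to explain ~90% of the cumulative variance, as in the study.
scores = PCA(n_components=0.90).fit_transform(genotypes)

# Choose K in 1..10 by the Bayesian Information Criterion.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(scores).bic(scores)
       for k in range(1, 11)}
best_k = min(bic, key=bic.get)
print("BIC-selected K:", best_k)

if best_k > 1:
    labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(scores)
    dapc = LinearDiscriminantAnalysis().fit(scores, labels)  # maximize between-group separation
    print("discriminant variance ratios:", dapc.explained_variance_ratio_)
```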
We used MSVAR1.3 [40] to analyze the demographic histories of each leatherback rookery, the effective ancestral and current population sizes and the time since collapse or expansion for each of them. A priori mutation rates of nuclear DNA ranging from 6×10⁻⁴ to 9.5×10⁻³, previously established in several marine turtle species [65], were set for pre-runs, and posterior values of mutation rates after convergence were used for final runs. An exponential model was used [34]. Convergence was checked in TRACER [66] to ensure that all parameters had an Effective Sample Size (ESS) of at least 100. Generation time for the leatherback ranges from 10 to 30 years [67][68][69]: demographic features were explored using an intermediate value of 16.1 years [69].
We also used the Extended Bayesian Skyline Plots (EBSP) [33] to estimate the population size through time. This method allows inference of the population demographic history by averaging over a nested set of microsatellite mutation models that incorporate length dependency, mutation bias and step size. We ran the analysis in BEAST v. 1.7.1 for 500,000,000 iterations, and parameters were sampled every 5,000 iterations. Convergence was checked in TRACER [66] with ESS > 100. Mutation rate and generation time were identical to those used for the MSVAR estimates. The range of the mutation rate was set as a uniform distribution, and the mutation model was set as Two-Step. We also modified the operators according to the EBSP tutorial (http://beast.bio.ed.ac.uk). A preliminary analysis was performed using the Coalescent prior, and a constant-population-size model was also run in BEAST in order to estimate the population size. We used these results to set the population size prior in the EBSP analysis, using a uniform prior and the 95% CI estimated with the constant population size. In order to compare models and check whether the EBSP results differed from a constant-size model, we calculated a Bayes Factor (i.e., the harmonic mean of the log likelihood [70]), and thus obtained support for one model over another, using both the EBSP and the constant population size prior.
Mitochondrial DNA data analysis
Both CR and Cyt-b gene sequences were analyzed for haplotype and nucleotide diversities with DNAsp 4.20.2 [71]. Tests for differentiation between populations (FST and an Exact Test of Differentiation) as well as neutrality tests (Tajima's selective neutrality test, Ewens-Watterson neutrality test, Chakraborty's amalgamation test and Fu's neutrality test [72]) were performed with ARLEQUIN [59]. We used BEAST 1.7.1 to generate Bayesian Skyline Plots (BSP) [73] for an assessment of historical changes in the effective population size (Ne) over time. We applied a strict molecular clock and a piecewise-constant Bayesian skyline tree prior. A mutation rate of 2% per site per million years was considered [36]. The most likely mutation model was estimated with MRMODELTEST [74]. Convergence was checked based on likelihood, as previously described.
Microsatellite data
Since leatherbacks from AY had been sampled over a long period, two preliminary approaches were implemented in order to control a putative bias resulting from genetic drift during this period: (i) differentiation among rookeries was calculated using RST and FST indexes between 2 periods: samples collected in 1990-2000 vs. those collected in 2001-2010, and no structuration was evidenced; (ii) STRUCTURE was used to investigate the number of ancestral stocks within this sample. It revealed that a single stock (K = 1) was the most probable solution. Consequently, all the samples from AY were considered as a single rookery.
All 10 microsatellite loci were polymorphic, and linkage disequilibria were not significant (p > 0.05) after Bonferroni correction. Regarding the four new microsatellite markers developed specifically for this study (Dc003, Dc005, Dc008 and Dc013), Dc008 presented the highest allelic richness (AR = 10.64, averaged among sample sets), while Dc005 presented the lowest (AR = 4.35). The number of alleles per locus ranged from 4 (Nigra32) to 29 (LB141) (Table 1). Gene diversity (Gd) of Dc008 ranged from 0.82 in AY to 0.84 in the CAY rookery, with similar diversities among sampling rookeries (Table 1). Considering all loci, gene diversities were comparable in the CAY and FWI rookeries, and slightly lower in AY (Table 2). Dc008 presented null alleles in the CAY and FWI rookeries; Derm34 and LB141 presented null alleles in the AY rookery.
The analysis of stocks using STRUCTURE indicated that the most probable number of populations (K value) was 1, therefore failing to recover any ancestral structure. The DAPC results were similar, and although the lowest value of BIC indicated a K = 6 (with a possible range from K = 3 to 7), individuals from all three sample sites were assigned in all six groups. Therefore DAPC also suggested a single ancestral stock.
When analyzing all ten loci with genotypes that were either original or adjusted with the PIM model [64], RST was only significant between CAY and AY (RST = 0.0289, p < 0.05). However, when excluding the three loci with null alleles, RST was significant between AY and FWI (RST = 0.0106, p < 0.05), and between CAY and FWI (RST = 0.0211, p < 0.05). FST provided a stronger structuration signal and was significant between the three rookeries with the 10-loci dataset, but not significant between CAY and AY with only the 7 loci (Table 3). According to AMOVA, more than 98% of the genetic variation was within populations, while less than 2% was between populations.
Observed and expected heterozygosities and inbreeding coefficients are shown in Table 2 for each locus and each population. None of the populations, whether with original or adjusted genotypes, was in Hardy-Weinberg equilibrium at any of the ten microsatellite loci, and all of them presented a heterozygote deficit (Table 4). However, all of the populations were in Hardy-Weinberg equilibrium when the loci with null alleles were excluded. According to the analysis of ten microsatellites, AY has the highest inbreeding coefficient (FIS = 0.080) and the highest gene diversity over loci (Gd = 0.732), while FWI has the lowest FIS (0.029) and CAY the lowest Gd over loci (0.677) (Table 4). When the three loci with null alleles are excluded, CAY presented the highest FIS (0.052) and FWI the lowest (0.001).
We found a high rate of gene flow among rookeries, with 13 to 33 migrants per generation (between FWI and AY, and between CAY and AY, respectively). In analyses which excluded the loci with null alleles, the number of migrants between AY and CAY was seen to increase to 80, whereas it remained in the same range (12) between AY and FWI. Whatever the set of data used, emigrants from CAY and FWI to AY were twice as numerous as immigrants from AY to other rookeries.
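As a rough cross-check, and not the coalescent approach actually used here (MIGRATE), Wright's island-model approximation links equilibrium differentiation to migrant numbers; plugging in the significant RST values reported above as a stand-in for FST gives estimates of the same order of magnitude as the MIGRATE results:

$$N_m \approx \frac{1 - F_{ST}}{4F_{ST}}: \qquad F_{ST} = 0.0106 \Rightarrow N_m \approx 23; \qquad F_{ST} = 0.0211 \Rightarrow N_m \approx 12.$$

The approximation assumes an equilibrium island model with symmetric migration, so it can only bracket, not reproduce, the asymmetric MIGRATE estimates.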
Sensitive Bayesian methods implemented in MSVAR showed dramatic declines in effective population sizes, with ancestral effective population sizes ranging from 120,000 (Awala-Yalimapo) to 1,600,000 (CAY) shrinking to current effective population sizes ranging from 70 (FWI) to 120 (CAY) (Figure 2). This corresponds to a decline of 99.99%, leaving a total effective population size of around 500-1,500 females for each rookery (Figure 3). These bottlenecks occurred at two periods, namely around 2,000 to 3,500 YA for AY and FWI, and earlier (10,000 YA) for CAY (Figure 3). MIGRATE revealed a slight increase of theta in all three rookeries 100-200 YA, suggesting low but increasing effective population sizes, which is congruent with the MSVAR results (Figure 3). Unlike the above-mentioned tests, the Extended Bayesian Skyline Plot (EBSP) graph shows a flat line for all three rookeries through time, with a fast recent increase less than 20 generations ago (Figure 3). Yet, when the Bayes Factor (BF) was calculated for the EBSP and assuming a constant population size
Mitochondrial DNA data
Regarding the CR (711 bp), a total of five haplotypes were evidenced, all of which were present in AY, with Dc_A5 and Dc_C3 being newly reported and exclusive to this rookery. All three rookeries shared the other three haplotypes, revealing the presence of Dc_C2 for the first time (Figure 1, Table 5). The structuration coefficient (FST) was low but significant between CAY and FWI (FST = 0.0955, p = 0.045) and between AY and FWI (FST = 0.0995, p = 0.037) (Table 6). An exact test of differentiation showed significant differences between AY and CAY (p < 0.05), and AY and FWI (p < 0.05), but not between CAY and FWI (p > 0.05). AY presented the largest gene diversity (Gd = 0.794), showing all five CR haplotypes. FWI was the rookery displaying the lowest gene diversity (Gd = 0.352) (Table 7). None of the rookeries showed any deviation from neutrality according to Tajima's selective neutrality, Ewens-Watterson neutrality, Chakraborty's amalgamation and Fu's neutrality tests. Full Cyt-b sequences (1,111 bp) showed very low variability, with only two haplotypes differentiated by one polymorphic site, both present in the three rookeries. There is no evidence of population structure and, in this case, gene diversities were low (Gd ranging from 0.340 to 0.492). The Bayesian Skyline Plot was inferred with the CR only, since the Cyt-b showed extremely low levels of variability. The most probable substitution model was HKY with invariant sites; the BSP failed to show any significant change in population size over the last 12 MYA, but this result should be interpreted with caution given the low variability observed in this gene.
Discussion
Leatherback turtles exhibit complex life traits, including female homing and migration, migration patterns of juveniles that remain little known to date, and a strong influence of climate on resources, breeding success and sex-ratio. Based on a comprehensive integrated approach combining microsatellite and mitochondrial DNA, our study provides new insights into the population dynamics of leatherbacks in the Northwest Atlantic, considered one of the world's largest populations [20], with significant recovery potential [75]. Our genetic data are expected to contribute to a better understanding of their history and current dynamics, and ultimately to play a part in their conservation.
Methodological issues
This work puts forward the complexity of analytic choices among concurrent approaches. We used three methods based on Bayesian inference, namely MSVAR, BEAST and MIGRATE, to explore recent demographic history and changes in the evolution of effective population size in distant rookeries with contrasting numbers of nesting females. MSVAR has been shown to be a relevant tool to detect expansions and declines in different species [34], including sea turtles [17]. Bottlenecks of variable extent and dates were detected by MSVAR, but not by the EBSP method. However, to our knowledge this is the first time the EBSP method has been used with data from natural populations, which precludes further analysis of the comparative sensitivity of these methods. Interestingly, all three approaches identified low and congruent values of current effective population sizes, and the very recent expansion signal detected by MIGRATE is indicative of populations recovering after bottlenecks.
One other key point in our study was a high estimated occurrence of null alleles in our dataset. Null alleles result in lower heterozygosity and consequently impact the structuration signal among populations, overestimating the distances among clades [76]. The recent adaptation of the PIM model [57] makes it possible to assess inbreeding coefficients and allele frequencies with a high level of confidence [64]. In our study however, the use of our full dataset, including loci without null alleles and loci with corrected frequencies, resulted in lower structuration among rookeries than when only loci without null alleles were used. Consequently, as observed in other endangered species [77] and confirmed by the low rate of genotyping errors, we conclude that a true homozygote excess has resulted from low population sizes, inbreeding and genetic drift [78], rather than a high occurrence of null alleles.
Fine scale population structure and genetic diversities
The concurrent use of different methods to analyze sequences of the control region and autosomal microsatellite variability in this study has revealed that the Northwest Atlantic stock of leatherback turtles derives from a single ancestral origin, but shows a current genetic structure at a small geographic scale that is related to the distribution of nesting rookeries.
The sequencing of the entire Cyt-b revealed evidence of only two haplotypes, probably the same as those previously described and based on shorter sequences (876 bp) [79]. But despite those longer sequences, a signal of limited structure was evidenced, reinforcing the idea of a wide North Atlantic stock [36]. Analysis of the control region, a more variable gene, revealed the signature of some structure between French Guiana and French West Indies, which contrasts with the previous study showing the presence of only one 496 bp haplotype in the Guianas and three haplotypes in the West Indies [36]. These differences could be explained by the longer sequences used in this study, hence improving the resolution of mtDNA for comprehensive phylogeographic studies [32].
Among our five CR haplotypes, only two have been previously described [32,36]. Our large sample set also enabled a significant increase in the diversity indexes of the Northwest Atlantic leatherback populations, contrasting with the first assessments made [36], and resulting in the highest diversities reported in the species along with Indo-Pacific nesting populations [32]. The highly sensitive microsatellite markers revealed low, small-scale structuration that was also observed between the Awala-Yalimapo and Cayenne rookeries despite the short distance between these two sites (<300 km). This pattern could seem intriguing, considering the very long distances covered and the behavioral plasticity of the leatherback during its pluriannual migrations [20,23,75,80,81], but it supports the argument for fidelity to nesting sites [25,26].
Late Pleistocene and Holocene demographic changes
Microsatellite markers revealed a low genetic diversity compared to other marine turtle species [82,83]; this is probably related to their recent demographic histories. Our results indicate that the North Atlantic population of leatherbacks experienced bottlenecks in the Late Pleistocene and Holocene, with two major events, around 12,000 YA and from 3,500 to 2,000 YA. Ancestral effective population sizes collapsed from 120,000-1,500,000 females to present estimates of 70 (AY) to 250 (CAY) females for each rookery. The population declines we found for the North Atlantic leatherback were of similar magnitude to those reported in the North Atlantic olive ridley turtle (Lepidochelys olivacea) [17], the green (Chelonia mydas) and hawksbill (Eretmochelys imbricata) turtles in the wider Caribbean region [84], as well as marine mammals [35,85,86]. Most of these declines are assumed to have occurred in the Holocene and have to be considered a widespread pattern in the large vertebrate populations of the North Atlantic.
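The magnitude of these declines is straightforward arithmetic on the MSVAR estimates; a one-line check with the CAY values quoted earlier:

```python
# Order-of-magnitude check of the reported ~99.99% decline (CAY values from the text).
ancestral_ne, current_ne = 1_600_000, 120
print(f"decline: {1 - current_ne / ancestral_ne:.4%}")  # -> decline: 99.9925%
```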
Following the idea of recent megafauna extinction and the controversial "blitzkrieg" hypothesis [87], collapses in leatherback populations could be attributed to human interactions such as historical egg poaching, selective harvesting and hunting [19,35,84]. The collapses may also be the result of previous climate oscillations during the Holocene [88][89][90]. Fine-scale differences in the use of feeding areas, and/or distinct behavioral patterns [75], may explain why the rookeries were not affected concomitantly. Environmental conditions may impact marine turtles either directly, by harming females and hatchlings and affecting the temperature-dependent sex-ratio [91], or indirectly, by affecting nesting beach quality and availability [92], the ability of oceanic-driven hatchlings to home to their birth site [93], and trophic conditions in foraging areas [94,95].
Population dynamics models
The recovery of populations, suggested both by recent increases of effective population sizes and by positive trends of nesting activities [20], must be interpreted in the light of population dynamics models [8].
Although the population dynamics of leatherbacks has been extensively discussed on the basis of capture/mark/recapture data [review in 20], little attention has been paid to this question in relation to high resolution genetic data [36]. A metapopulation model has been accepted for the Atlantic [29] and western Pacific populations [96], considering that settlements of new populations would result from massive arrivals of a large number of migrants. Previous results support this idea, illustrating the absence of signatures for founder effects and/or bottlenecks [29]. A different approach to these results is now possible thanks to the use of new markers and more powerful methods of analysis to identify these signatures.
Metapopulation functioning implies that some groups are separated by habitat types that are not relevant for feeding and/or breeding activities [2]. In the case of the leatherback, it seems that such patterns are driven by nesting activity, due to the philopatry of nesting females [25], rather than by feeding areas. However, structuration index values remain low despite significant small-scale structure, and high numbers of migrants are observed.
Thus, leatherbacks may be driven by an island model rather than a strict metapopulation model that would imply successive cycles of extinction and recolonization [4]: in response to ecological opportunities, deme sizes would locally increase and decrease while maintaining gene flow among demes. Emigrant and immigrant rates provide further information on the dynamics of the Northwest Atlantic leatherback turtle population. The CAY rookery, despite lower diversity, displays a higher number of emigrants than immigrants arriving from the two other rookeries. Higher population sizes, resulting from the recent expansions, may favor dispersal of breeders. All the methods used showed that current effective population sizes in the three rookeries were rather low, and no relationship with nuclear and mitochondrial genetic diversities was found. It can be suggested that the high number of migrants is associated with males rather than females, but this cannot be confirmed without further studies of male-mediated gene flow and its contribution to population dynamics and diversity.
Although the metapopulation theory implicitly refers to non-migrating species, this model has also been explored in migrating species, notably in birds [8]. The leatherback thus represents an exciting new model to investigate the impact such behavioral traits could have on the genetic structure of populations.
Conservation issues
Assessments of genetic diversity based largely on neutral variation provide essential information about population history and demography [97]. The Regional Management Units of the leatherback, up to and including the suggested geographic limits of the populations, are mainly managed using nesting population evaluations, information gained from pit-tags, satellite tracking and previous genetic assessment of structure using markers with low resolution [21]. Here we highlight fine-scale structure, and the importance of every single nesting rookery that hosts its own richness despite the dispersal of animals during their transoceanic migrations [98]. Efficient conservation programs should then focus not only on shared areas used during long-distance migration [81,99], but also on each nesting rookery harboring a specific nuclear genetic signature.
Maintaining a high level of genetic diversity is assumed to be essential for the conservation of viable populations [100]. However, some species with historically low genetic diversity, no doubt due to cycles of bottlenecks and expansions, are not necessarily endangered [101]. Thus, as soon as an island model is assumed, the maintenance of a high number of migrants among rookeries could ensure the future of populations [102,103], despite the low nuclear diversity and low effective population sizes. In the French West Indies and French Guiana, nesting activity showed clear positive trends, as also reported in the wider Caribbean [104]. To some extent, this trend can be explained by ongoing conservation efforts [105], the biological and ecological characteristics of the species [75], and island population dynamics that enhance the ability of the leatherback species to recover from population oscillations related to changing environmental conditions.
The Influence of Diet and Sex on the Gut Microbiota of Lean and Obese JCR:LA-cp Rats
There is an increased interest in the gut microbiota as it relates to health and obesity. The impact of diet and sex on the gut microbiota in conjunction with obesity also demands extensive systemic investigation. Thus, the influence of sex, diet, and flaxseed supplementation on the gut microbiota was examined in the JCR:LA-cp rat model of genetic obesity. Male and female obese rats were randomized into four groups (n = 8) to receive, for 12 weeks, either (a) control diet (Con), (b) control diet supplemented with 10% ground flaxseed (CFlax), (c) a high-fat, high-sucrose (HFHS) diet, or (d) HFHS supplemented with 10% ground flaxseed (HFlax). Male and female JCR:LA-cp lean rats served as genetic controls and received similar dietary interventions. Illumina MiSeq sequencing revealed a richer microbiota in rats fed control diets rather than HFHS diets. Obese female rats had lower alpha-diversity than lean females; however, both sexes of obese and lean JCR rats differed significantly in β-diversity, as their gut microbiota was composed of different abundances of bacterial types. The feeding of an HFHS diet affected the diversity by increasing the phylum Bacteroidetes and reducing bacterial species from the phylum Firmicutes. Fecal short-chain fatty acids such as acetate, propionate, and butyrate-producing bacterial species were correspondingly impacted by the HFHS diet. Flax supplementation improved the gut microbiota by decreasing the abundance of Blautia and Eubacterium dolichum. Collectively, our data show that an HFHS diet results in gut microbiota dysbiosis in a sex-dependent manner. Flaxseed supplementation to the diet had a significant impact on gut microbiota diversity under both flax control and HFHS dietary conditions.
Introduction
Studies examining the association between human health and the gut microbiota have received significant interest in the last decade [1]. The link between gut microbiota dysbiosis and disease development remains unclear, and it is still unknown whether adverse metabolic changes precede or follow the alterations in the composition of gut microbiota [2]. The gut microbiota plays a crucial role in various functions of the digestive tract such as: (a) maintaining the intestinal epithelial integrity, and thus protecting against pathogenic bacteria [3], and (b) metabolizing dietary fiber and helping in the absorption of short-chain fatty acids.

Obese rats were randomized to receive (a) a regular chow control diet (Con), (b) a 10% ground flaxseed supplemented regular chow diet (CFlax), (c) HFHS, or (d) a 10% ground flaxseed supplemented HFHS diet (HFlax), for 12 weeks. Age-matched male and female lean (cp/?) JCR rats served as control animals and were given similar diets for 12 weeks.
The regular chow was Prolab® RMH 3000 rodent chow, and the HFHS diet was an AIN-93G chow with 35% fat (lard) and 36% carbohydrate (mostly sucrose) (TestDiet, Richmond, IN, USA). The ground flaxseed was BakePur milled flaxseed obtained from Pizzey Ingredients, Russell, Manitoba, Canada. Throughout the study duration, animals had free access to water and food.
Biological Sample Collection
After 12 weeks of feeding, the 24-week-old animals were fasted overnight (16 h) and then anesthetized with isoflurane (5%) the following morning. The depth of anesthesia was assessed by the pedal withdrawal reflex. After anesthesia, a blood sample was collected from the inferior vena cava by opening the thoracic cavity, and the heart was immediately excised. One or two fecal pellets were removed from the distal colon of each animal. Samples were snap frozen in liquid nitrogen and stored at −80 °C until further analysis.
Quality Control
The possibility of contamination was investigated by co-sequencing DNA amplified from feces samples and from 7 each of template-free controls and extraction kit reagents processed in the same manner as the specimens. Operational taxonomic units were considered putative contaminants (and were removed) if their mean abundance in controls reached or exceeded 25% of their mean abundance in specimens.
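A minimal sketch of this removal rule, assuming an OTU-by-sample count table (the table, function name, and default threshold are illustrative, not the authors' pipeline code):

```python
import pandas as pd

def drop_putative_contaminants(specimens: pd.DataFrame,
                               controls: pd.DataFrame,
                               ratio: float = 0.25) -> pd.DataFrame:
    # Remove OTUs whose mean abundance in negative controls reaches or
    # exceeds `ratio` (25% per the rule above) of their mean abundance in
    # specimens. Rows = OTUs, columns = samples.
    spec_mean = specimens.mean(axis=1)
    ctrl_mean = controls.mean(axis=1)
    return specimens.loc[ctrl_mean < ratio * spec_mean]

# Toy example: otu2 is flagged as a contaminant and dropped.
specimens = pd.DataFrame({"s1": [100, 5], "s2": [80, 3]}, index=["otu1", "otu2"])
controls = pd.DataFrame({"c1": [2, 4]}, index=["otu1", "otu2"])
print(drop_putative_contaminants(specimens, controls).index.tolist())  # ['otu1']
```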
Short Chain Fatty Acid Measurement
SCFAs were extracted from the feces samples and derivatized as previously described [38]. Extracted SCFA supernatants were stored in 2-mL GC vials with glass inserts. SCFAs were detected by gas chromatography (Thermo Trace 1310, Thermo Fisher Scientific, Waltham, MA, USA) using a Thermo TG-WAXMS A GC column (30 m, 0.32 mm, 0.25 µm) coupled to a flame ionization detector (Thermo). The following settings were used for detection: the flame ionization detector temperature was kept at 240 °C, hydrogen at 35.0 mL/min, air at 350.0 mL/min, makeup gas (nitrogen) at 40.0 mL/min, and inlet carrier pressure at 225 kPa. Column flow was set at 6.00 mL/min with a purge flow of 5.00 mL/min and a split flow of 12.0 mL/min at a temperature of 200 °C.
Sequence Metrics and Taxonomic Composition
The microbiota was analyzed by sequencing the 16Sv4 rRNA amplicons generated from the fecal pellets of 24-week-old JCR:LA-cp rats. MiSeq-generated, high-quality filtered reads were clustered into 122,196 operational taxonomic units (OTUs) at a similarity cutoff of 97%. An average of 44,497 quality-filtered reads was generated per sample (Table S1 and Figure S2). High-quality reads were classified using Greengenes v. 13_8 as the reference database. OTUs were aggregated into each taxonomic rank.
Statistical Analysis
Alpha diversity was estimated with the Shannon index on OTU abundance tables after filtering out contaminants and rarefaction using a minimum total count of 19,443. The significance of diversity differences was tested with a three-way ANOVA (Y = α + β1·Sex + β2·Diet + β3·Genotype + β12·Sex × Diet + β13·Sex × Genotype + β23·Diet × Genotype + β123·Sex × Diet × Genotype). Tukey's post-hoc test determined significant pairwise differences. To estimate β-diversity across samples, we excluded OTUs occurring with a count of less than 3 in at least 10% of the samples and then computed Bray-Curtis indices. We visualized β-diversity, emphasizing differences across samples, using Principal Coordinate Analysis (PCoA) ordination. Variation in community structure was assessed with permutational multivariate analyses of variance (PERMANOVA) with treatment group as the main fixed factor and using 9999 permutations for significance testing. All analyses were conducted in the R environment. Pairwise contrasts were tabulated and the FDR method used to correct p-values for multiple comparisons. The DESeq2 package was used to identify differentially abundant taxa among the Sex, Diet and Genotype variables. The full model for the likelihood ratio test was ~Sex × Diet × Genotype, and the reduced model was ~1.
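For illustration, the two core quantities of this pipeline, the Shannon index and the Bray-Curtis dissimilarity, can be computed as in the minimal Python sketch below (toy OTU counts; the actual analysis used rarefied tables and R packages):

```python
import numpy as np

def shannon_index(counts):
    # Shannon diversity H' = -sum(p * ln(p)) over nonzero OTU proportions.
    counts = np.asarray(counts, float)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def bray_curtis(x, y):
    # Bray-Curtis dissimilarity between two count vectors.
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.abs(x - y).sum() / (x + y).sum()

a, b = [10, 0, 5, 20], [8, 2, 0, 25]  # toy OTU counts for two samples
print(round(shannon_index(a), 3), round(bray_curtis(a, b), 3))  # 0.956 0.2
```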
Diet, Sex, and Genotype Alter the Composition and Diversity of Gut Microbial Ecology
The alpha-diversity (Shannon index) was calculated for each sample (Figure 1). This diversity is a measure of richness (number of OTUs) and evenness (even distribution of OTUs) in a sample. A three-way ANOVA revealed differences in the Shannon diversity index with a significant main effect of diet (p < 0.001). Consumption of an HFHS diet resulted in decreased microbial diversity. Animals on the HFHS diet had a significantly lower Shannon index compared to animals on the Con (p < 0.0001) or CFlax diet (p < 0.001). No significant difference in the diversity between the CFlax and HFlax diet groups (p = 0.087) was noted. There was a significant difference in Shannon diversity between sexes (p < 0.016). Obese female JCR:LA-cp rats had lower values for the Shannon diversity index compared to lean females (p < 0.015), but there was no significant difference in the diversity between obese and lean males (p = 0.495).
We summarized OTU abundances into Bray-Curtis dissimilarities and performed a principal coordinate analysis (PCoA) ordination. The PCoA ordination plot assesses whether distinct clusters of the relative abundance of gut microbiota are produced as an impact of diet, genotype, or sex. This generates a graphical representation of microbiota composition dissimilarity among samples (β-diversity). A clear cluster of gut microbiota from obese male and female animals on the HFHS and HFlax diets was observed, separated from the Con and CFlax diets (Figure 2). Accordingly, a PERMANOVA determined a significant difference in β-diversity among genotype, diet, and sex. Obese and lean animals had an adonis R2 = 0.035, p < 0.0001, and there was a significant difference among diets with an adonis R2 = 0.1624, p < 0.0001. The PCoA plot for β-diversity also shows a separation between males and females (Figure 2) with an adonis R2 = 0.0705, p < 0.0001. There were also significant differences in β-diversity among sex and diet (R2 = 0.0314, p < 0.0077), sex and genotype (R2 = 0.0336, p < 0.0001), and diet and genotype (R2 = 0.0305, p < 0.0001).
The four most abundant phyla observed in all the fecal samples were Actinobacteria, Bacteroidetes, Firmicutes, and Verrucomicrobia (Figures S3-S7). Taxonomic analysis of the relative abundance of the gut microbiota revealed variances among the treatments at the phylum level (Figure 3A). The DESeq2 package was used to identify differentially abundant taxa among diet, sex, and genotype variables. The phylum Bacteroidetes had two different unclassified species from the family Muribaculaceae (previously known as S24-7) (OTU000025, P.adj = 5.3e-113 and OTU000010, P.adj = 3.6e-39) that were significantly lower in rats fed HFHS and HFlax diets compared to Con and CFlax diets (Figure 3B,C). The relative abundance of gut microbiota also varied at the genus level (Figure 4A). There were 13 bacterial genera from the phylum Firmicutes that were significantly differentially abundant. The abundance of many of these bacteria was affected by the HFHS diet compared to the Con diet. A Lactobacillus species (P.adj = 5.7e-48) had a very low abundance in HFHS- and HFlax-fed rats compared to the Con- and CFlax-fed rats (Figure 4B). Three bacterial species from the genus Ruminococcus were differentially abundant. Ruminococcus gnavus (P.adj = 5.5e-40) and an unclassified Ruminococcus (P.adj = 1.7e-33) had a lower abundance in the HFHS group. Conversely, Ruminococcus flavefaciens (P.adj = 7.2e-36) was slightly elevated in the HFHS group (Figure 4C-E).

A higher abundance of Clostridium cocleatum (P.adj = 6.8e-42) was observed in rats fed the HFHS diet compared to the Con diet (Figure 4F), while a lower abundance of an Oscillospira species (P.adj = 2.9e-46) was observed in the HFHS group (Figure 4G). A higher abundance of an unclassified species from the family Lachnospiraceae was observed in rats fed the HFHS diet (Figure S10).

Flaxseed supplementation in the HFlax diet group also differentially affected the abundance of bacterial species. A Dorea species (P.adj = 4e-128) was elevated only in rats fed HFlax, but not in HFHS-fed rats (Figure 4H). A Blautia species (P.adj = 1.9e-49) was significantly elevated in the HFHS diet compared to the Con diet. Flax supplementation significantly lowered this Blautia species compared to the HFHS group (Figure 4I). An Allobaculum species (P.adj = 5.2e-41) was elevated in lean and obese males fed the CFlax diet (Figure 4J). Eubacterium dolichum (P.adj = 1.3e-51) was elevated in the HFHS diet compared to the Con group. This was significantly reduced in obese rats fed the HFlax diet (Figure 4K). Subdoligranulum variabile (P.adj = 6.2e-36) and an SMB53 species (P.adj = 6.2e-48) were significantly high in the obese animals on the HFHS diet, and flax supplementation in the HFlax diet reduced the abundance of these bacteria (Figures S8 and S9).
Diet, Sex, and Genotype Impact the Gut SCFA Composition
The gut microbiota impacts host physiology by fermenting dietary fiber and producing SCFAs. Thus, we investigated the effect of genotype, sex, and diet on the SCFA content in the fecal pellets of JCR:LA-cp rats. The fecal SCFAs detected by GC-FID were acetate, propionate, isobutyrate, butyrate, isovalerate, valerate, and hexanoate. For acetic acid, there was a significant effect (p < 0.008) of genotype, as obese animals had lower values compared to their lean counterparts (Figure 5A). Diet (p < 0.004) and sex (p < 0.002) as factors also had a significant effect on the acetic acid levels. JCR:LA-cp rats of both genotypes on the HFHS diet had lower values compared to the control and CFlax diets. Further, male animals had lower levels compared to female animals in all diet groups. A similar effect of diet (p < 0.004) was observed for propionic acid, as rats of both genotypes on the HFHS diet had lower values compared to the animals on control and CFlax diets (Figure 5B). However, no significant differences were observed for genotype and sex.

A significant effect of sex (p < 0.002) was observed for the butyric acid values, as females had higher values than males, and there was also a significant effect of diet (p < 0.000), as animals on the HFHS and HFlax diets had lower values compared to the control and CFlax diets (Figure 6A). No changes were reported for genotype. Although no changes were noted for the effect of diet and sex on the isovaleric acid values, genotype had a significant effect (p < 0.009), as obese animals had higher values compared to their lean counterparts (Figure 6B). Diet had a significant effect (p < 0.013) on hexanoic acid levels, as animals on the HFHS diet had lower values compared to the animals on the CFlax diet (Figure 7). No effect of genotype, diet, or sex was observed on isobutyric acid and valeric acid levels (Figure S11A,B).
Discussion
In the present study, the impact of diet, sex, and genotype on the gut microbiota of JCR:LA-cp rats was examined. The results demonstrate that sex alters the microbial composition of the gut and that an HFHS diet induces significant differences in the gut microbiota diversity and SCFA profile of both male and female animals. Flaxseed supplementation improved the taxonomic abundance in both sexes of obese animals.
The richness and diversity of the gut microbiota is a factor in determining health and obesity. Greater bacterial richness and diversity is typically associated with better health [4]. The present findings demonstrate that male and female JCR:LA-cp rats have a different degree of alpha-diversity (evenness and richness) of gut microbiota. The overall β-diversity of bacterial types in the gut differed between sexes as well. The bacterial population was mostly from the phyla Firmicutes, Bacteroidetes and Verrucomicrobia, with Actinobacteria and Proteobacteria also present in lower proportions. Although obese male rats had a similar degree of alpha-diversity as lean males, obese females had substantially lower alpha-diversity than lean females for all diets. This indicates that genetic obesity alters the composition of the gut microbiota in female JCR:LA-cp rats, as obese females have less bacterial richness and diversity than the lean females.
Diet greatly affected the diversity of the gut microbiota, as the HFHS diet showed lower diversity than the control diets. Diet can modify the gut microbiota of humans [17] and rodents [39] very rapidly. Here, the rats were fed HFHS for a longer period of time (12 weeks); thus, the alpha-diversity of the gut microbiota of rats fed the HFHS diet was dramatically altered, and the β-diversity plot showed a marked difference between HFHS and control diets. An HFHS diet is often associated with a decrease in bacteria from the phylum Bacteroidetes and an increase in bacteria from the phylum Firmicutes [16]. In our study, there were two species belonging to the family Muribaculaceae (previously known as S24-7) from the phylum Bacteroidetes that were significantly lower in animals fed the HFHS diet. Many bacteria from the phylum Firmicutes increased in abundance in rats fed HFHS, including Ruminococcus gnavus, which is linked to Crohn's disease [40], and Clostridium cocleatum, which is positively correlated with LPS, common in patients with chronic liver disease [41]. These animals demonstrated some evidence of liver disease (unpublished observations). Conversely, several genera from the phylum Firmicutes were reduced in abundance due to the HFHS diet. At the genus level, Oscillospira showed a reduction in abundance in rats fed the HFHS diet regardless of genotype. Oscillospira may be associated with a steady and healthy gut microbiota [42]. A decrease in a Lactobacillus species was also observed, an SCFA-producing genus that is generally considered to be part of a healthy microflora [43]. Overall, the HFHS diet reduced the abundance of several bacterial genera deemed beneficial and increased the quantity of bacteria associated with certain diseases.
The HFHS diet-induced dysbiosis of the gut microbiota was not completely ameliorated by the addition of flaxseed to the diet, yet there were several significant improvements in the abundance of certain bacterial species of interest. The HFHS diet caused a large increase of a Blautia species compared to the control diets in both males and females regardless of genotype. However, when flaxseed was added to the diet, the amount of Blautia returned to control levels. This is an interesting finding, as the genus Blautia has been positively associated with visceral fat (VF) accumulation in humans [44]. Dietary intake of flaxseed could possibly lessen the quantity of Blautia in the intestinal tract, which may result in a decrease of VF. There was also an increase in the abundance of the bacterium Eubacterium dolichum when rats were fed the HFHS diet. E. dolichum has similarly been associated with VF via an unhealthy diet [45]. The addition of flax lowered the abundance of E. dolichum for both males and females, but only in the obese genotype. VF accumulation is a major factor in metabolic and cardiovascular diseases [46]. It is encouraging that flax supplementation can reduce the abundance of bacteria related to VF. The genus Allobaculum has previously been shown to be less abundant on a high-fat diet compared to a low-fat diet [47]. The present results demonstrated that the abundance of Allobaculum was slightly lower in the HFHS diets, but it was dramatically increased with the CFlax diet. Allobaculum, which has been positively correlated with butyrate production [48], may be beneficial for host physiology and is associated with energy homeostasis [49]. The abundance of a Dorea species was also increased in lean and obese rats fed an HFHS diet supplemented with flax. Although found in a normal healthy gut microbiota, abundant Dorea and Blautia are found in alcohol-dependent subjects with high intestinal permeability [50]. Even though the abundance of Dorea was increased with flax, it was only increased with the HFHS diet. This suggests that some of the potential beneficial aspects of flax consumption may be diminished when taken in conjunction with an unhealthy diet. This demonstrates the complex challenge researchers face when trying to determine which prebiotics regulate the quantities of which bacteria and what abundances of these bacteria are important for human health. We conclude that flax has an effect on the JCR:LA-cp rat gut microbiota at the genus level, decreasing the numbers of potentially unhealthy bacteria, and potentially improving the health of the gut.
The association between obesity, diet, and the SCFAs produced by gut microbiota is not yet fully understood. Intestinal bacteria are known to produce SCFAs, including acetate, propionate, and butyrate, as crucial end-products of fermenting dietary fibers [51]. Up to 200 kcal/day of human energy can be attributed to these SCFAs [52]. SCFAs accumulate in adipocytes and contribute to lipogenesis [53]. The SCFAs exert their biological effects by interacting with G-protein coupled receptors (GPR41 or GPR43, also known as free fatty acid receptors 3 and 2, respectively) [54]. Obesity and carbohydrate-rich diets attenuate the binding of SCFAs to GPRs, consequently leading to impaired intestinal energy harvest and hepatic lipogenesis [55][56][57]. Alterations in SCFA concentrations may be related to gut dysbiosis, gut permeability, obesity, and other cardiovascular risk factors [58]. However, changes in SCFA amounts are typically related to alterations in the gut bacterial community due to diet [59].
Our data revealed altered levels of the main SCFAs such as acetate, propionate, and butyrate in the JCR:LA-cp rat strain. Acetate is the major SCFA found in the gut. Pathways for acetate production are broadly distributed among bacteria [60]. Murphy et al. investigated the relationship between obesity, diet, and time on the gut microbiota. Fecal acetate levels were shown to be higher in ob/ob (leptin-deficient) and high-fat-fed mice at ages 7 and 11 weeks compared to their lean counterparts. Conversely, the levels dropped in 15-week-old animals [61]. This indicates that acetate levels decrease progressively over time. Our data align with this previous observation, as animals had lower fecal acetate levels when analyzed at 24 weeks of age. These alterations could be due either to gut dysbiosis, or to an increase in acetate uptake or absorption in response to genetic obesity or high-fat diet consumption [61]. Acetate activates the tricarboxylic acid cycle and changes the expression profile of hypothalamic neuropeptides that suppress appetite [62]. The decreased acetate levels thus support the hyperphagic behavior of obese animals in this study. Propionate administration to obese patients reduced excess adiposity and overall weight gain by enhancing the secretion of glucagon-like peptide-1 and the gut hormone peptide YY [63]. The decreased levels of propionate in the HFHS group observed in the present study may have impacted body weight. However, the association between the increased body weight and the lowered fecal propionate levels in the JCR:LA-cp rats that received an HFHS diet needs valid assessment in further studies. Butyrate is generally considered to be beneficial to human health, including for maintenance of the colonic epithelium [60]. In an animal study, butyrate was reported to prevent the translocation of lipopolysaccharide (LPS), a potent inflammatory molecule originating in the cell membrane of gram-negative bacteria. Through its adverse inflammatory effect, LPS can cause metabolic endotoxemia, insulin resistance, and obesity [64]. The decreased fecal levels of butyrate observed in the HFHS diet groups in our study could thus explain the metabolic abnormalities observed in these animals.
Bacteroidetes usually contribute to acetate and propionate production, whereas Firmicutes mainly produce butyrate [65]. The abundance of Muribaculaceae from the phylum Bacteroidetes has been strongly associated with propionate levels [66]. Members of Ruminococcus, from the phylum Firmicutes, have been related to increased butyrate concentrations [66]. Consistent with these earlier observations, our study also found corresponding changes in SCFA with altered abundances of Bacteroidetes and Firmicutes.
It was previously unclear whether SCFAs contribute to obesity or merely reflect the gut dysbiosis due to obesity. Our finding in the JCR:LA-cp rat strain of genetic obesity has unraveled this novel and interesting association. There were no differences in the fecal levels of propionate and butyrate between obese and lean genotypes. However, both genotypes showed altered levels of the main SCFAs such as acetate, propionate, and butyrate when administered an HFHS diet compared to the Con diet. Unless maintained on an HFHS diet, the obese genotype had similar levels of SCFAs as their lean counterparts. Therefore, established genetic obesity does not impact SCFAs. Instead, a western diet known to contribute to excess adiposity alters SCFAs, which may subsequently affect energy homeostasis and cause weight gain. In summary, the dysbiosis of the gut microbiota caused by an HFHS diet is reflected in the SCFA profile. This study demonstrates that the gut microbiota is modified by sex and genotype. We also show that an unhealthy diet leads to a dysbiosis of the gut microbiota and a reduction of SCFAs, demonstrating that the microbial composition of the gut is very dynamic. Supplementing a healthy diet with prebiotics, such as flaxseed, can establish and enhance a healthy microbial community in the gut, which in turn can lead to the production of beneficial SCFAs.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/microorganisms9051037/s1, Figure S1: Analytical flowchart describing data processing and analysis, Figure S2: Box-and-whisker plot illustrating the total number of quality-filtered reads per sample, Figure S3: Mean and standard error of the relative abundances of the 5 most abundant genus-level taxa within the 4 most abundant phyla, Figure S4: Taxonomic composition at the class level, Figure S5: Taxonomic composition at the order level, Figure S6: Taxonomic composition at the family level, Figure S7: Taxonomic composition at the species level, Figure S8: Differential abundance of Subdoligranulum variabile as a function of dietary interventions in lean and obese, male and female, JCR:LA-cp rats fed different diets, Figure S9: Differential abundance of an SMB53 species as a function of dietary interventions in lean and obese, male and female, JCR:LA-cp rats fed different diets, Figure S10: Differential abundance of an unclassified species from the family Lachnospiraceae as a function of dietary interventions in lean and obese, male and female, JCR:LA-cp rats fed different diets, Table S1: Summary of quality-filtered reads per sample.
A convergence result on random products of mappings in metric trees
* Correspondence: salmezel@kau.edu.sa Department of Mathematics, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia. Full list of author information is available at the end of the article. Abstract: Let X be a metric space and {T1, ..., TN} be a finite family of mappings defined on D ⊂ X. Let r : ℕ → {1, ..., N} be a map that assumes every value infinitely often. The purpose of this article is to establish the convergence of the sequence (xn) defined by x0 ∈ D, and xn+1 = Tr(n)(xn), for all n ≥ 0.
Introduction
Many problems in mathematics [1] and the physical sciences [2][3][4] use a technique known as the search for a common fixed point. Indeed, let X be a metric space and suppose T1, ..., TN are pairwise distinct self-mappings of some nonempty and closed subset D of X. Suppose further that the fixed point set Fix(Ti) = {x ∈ D : Ti(x) = x} of each mapping Ti is nonempty and that C = Fix(T1) ∩ ··· ∩ Fix(TN) ≠ ∅. The aim is to find a common fixed point of these mappings. One frequently employed approach is the following.
Let r be a random mapping for {1, ..., N}, i.e., a surjective mapping from ℕ onto {1, ..., N} that takes each value in {1, ..., N} infinitely often. Then generate a random sequence (xn) by x0 ∈ D arbitrary, and xn+1 = Tr(n)(xn), for all n ≥ 0, and hope that this sequence converges to a point in C. We call this a random or unrestricted product (resp. iteration). For products generated by using a control sequence, there are many results: for instance, cyclic control arises when r(n) = n + 1 mod N (see, for example, [5]).
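A minimal sketch of the two control schemes, with toy nonexpansive self-maps of the real line sharing the common fixed point 1 (the maps are invented for illustration):

```python
import random

def random_product(mappings, x0, steps=1000, seed=0):
    # Unrestricted (random) product: x_{n+1} = T_{r(n)}(x_n), with r(n)
    # drawn uniformly at random, so each map is applied infinitely often
    # with probability one as the number of steps grows.
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x = rng.choice(mappings)(x)
    return x

def cyclic_product(mappings, x0, steps=1000):
    # Cyclic control: r(n) = n mod N.
    x = x0
    for n in range(steps):
        x = mappings[n % len(mappings)](x)
    return x

T1 = lambda x: (x + 1) / 2           # nonexpansive, Fix(T1) = {1}
T2 = lambda x: max(1.0, x - 0.5)     # nonexpansive, Fix(T2) = {1}
print(random_product([T1, T2], x0=10.0))  # -> 1.0
print(cyclic_product([T1, T2], x0=10.0))  # -> 1.0
```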
In general, this random product fails to have good convergence behavior. The first positive results were obtained in the case where D = X is a Hilbert space and each mapping Ti is the projection onto some nonempty, closed and convex subset Ci of X; hence Fix(Ti) = Ci, i = 1, ..., N. The problem of finding a common fixed point is then the well-known convex feasibility problem (see, for example, [5]). Combettes' article [6] proposed several interesting applications of this problem. Some of the early known results in this case are the following.
(1) Amemiya and Ando [7]: If each set Ci is a closed subspace, then the random product converges weakly to the projection onto C.

(2) Bruck [8]: If some set Ci is compact, then the random product converges in norm to a point in C. If N = 3 and each set Ci is symmetric, then the random product converges weakly to a point in C (see also [[9], Theorem 2]).
The authors in [14] were successful in their extension of Amemiya and Ando's [7] results from Hilbert spaces to Banach spaces. In this study, we investigate Amemiya and Ando's result and the results of [14] in metric spaces. Such an extension is the first attempt so far.

The main difficulty faced in such extensions is the heavy use of the linearity structure of either the Hilbert space in [7] or the Banach spaces in [14]. Indeed, when one tries to extend concepts from linear functional analysis, one has to look deep into the supporting basic ideas and the intrinsic interrelations that exist between them. Most of the main theorems in nonlinear functional analysis were done in the framework of linear Banach spaces, so it is interesting to investigate the extension of these fundamental results in nonlinear structures like metric spaces. An example of this line of research is Kirk's fixed point theorem [15]. Many researchers have tried to extend it, but the best approach is the one given by Penot [16]. The impact of this approach went beyond what it was initially intended to do. This research follows the same motivations. In particular, we investigate the concept of weak convergence in metric spaces, which is central, for instance, in [14]. We consider the case of metric trees to illustrate some of these ideas.
Basic definitions and results
Metric trees were first introduced by Tits [17] in 1977. A metric tree is a metric space (M, d) such that for every x, y in M there is a unique arc between x and y, and this arc is isometric to an interval in ℝ. For example, a connected graph without loops is a metric tree. One basic property of metric trees is their one-dimensionality. Again in the late seventies, while studying t-RNA molecules of the E. coli bacterium, Eigen raised several questions which led Dress [18,19] to construct metric trees (named T-theory). Metric trees also arise naturally in the study of group isometries of hyperbolic spaces. For metric properties of trees we refer to [20].
Since a metric tree is a space in which there is only one path between two points x and y, if z is a point between x and y, by which we mean d(x, z) + d(z, y) = d(x, y), then z is actually on the path between x and y. This motivates the concept of a metric interval: for x, y ∈ M, set [x, y] = {z ∈ M : d(x, z) + d(z, y) = d(x, y)}. Metric trees are very special. They enjoy properties which are shared by l∞ and Hilbert space. In particular, Kirk [21] showed that complete metric trees are hyperconvex. Since the weak topology has an intimate relationship with convexity, let us define convex subsets in this setting.

Definition 2.3. Let M be a metric tree and C ⊂ M. We say that C is convex if [x, y] ⊂ C for all x, y ∈ C.

Clearly a metric tree M and the empty set ∅ are convex. Also any closed ball B(a, r) = {z ∈ M : d(a, z) ≤ r} in a metric tree is convex. Let C(M) denote the collection of all closed and convex subsets of M. Note that C(M) is stable under intersection, i.e. the intersection of any family of convex subsets of M is convex. We need the following result of Baillon [22] in order to prove our first fact about C(M).
Theorem 2.1 (Baillon [22]). Let {Hβ}β∈Γ be a decreasing family of nonempty bounded hyperconvex subsets of a metric space M. Then ⋂β∈Γ Hβ is nonempty and hyperconvex.

Since convex subsets of a metric tree are themselves metric trees, they are hyperconvex by [21]. Combining this with Baillon's result, we get the following theorem.
Theorem 2.2. Let M be a bounded complete metric tree and let {Cβ}β∈Γ be a family of nonempty, closed and convex subsets of M such that any finite subfamily has nonempty intersection. Then ⋂β∈Γ Cβ ≠ ∅.

This is known as compactness of C(M) according to Penot's formulation [16]. Note the slight difference between the statements of the two theorems: indeed, the intersection of two convex sets is convex, while the intersection of two hyperconvex sets may not be hyperconvex.
Next we discuss nearest point projections in metric trees. Let C be a nonempty, closed and convex subset of a complete metric tree M. For any x ∈ M, denote dist(x, C) = inf{d(x, c) : c ∈ C}. In a Hilbert space, the metric projections onto closed and convex subsets are nonexpansive. In uniformly convex spaces, the metric projections are uniformly Lipschitzian; in fact, they are nonexpansive if and only if the space is Hilbert. In what follows we will show that the metric projections in metric trees are nonexpansive. This result is not true in hyperconvex metric spaces.
Lemma 2.1 [23,24]. If C is a nonempty, closed and convex subset of a complete metric tree M, then for any x ∈ M there exists a unique cx ∈ C such that dist(x, C) = d(x, cx), which means that the nearest point projection PC, defined by PC(x) = cx, is single valued. Moreover, if c ∈ C we have cx ∈ [x, c], that is, d(x, c) = d(x, cx) + d(cx, c); consequently d(PC(x), PC(y)) ≤ d(x, y) for any x, y ∈ M. In particular, PC is nonexpansive.

Next we prove another property of the mapping PC.

Proposition 2.1. If C is a nonempty, closed and convex subset of a complete metric tree M, then for any x ∈ M we have PC(y) = PC(x) for every y ∈ [x, PC(x)]. In other words, PC is a sunny nonexpansive mapping [25,26].
Proof. Let y ∈ [x, PC(x)] and let w be such that [y, PC(y)] ∩ [y, PC(x)] = [y, w]. Since C is convex, we get that w ∈ C. Also the definition of w implies d(y, w) + d(w, PC(y)) = d(y, PC(y)). The properties of PC will force PC(y) = w, which will imply PC(y) ∈ [y, PC(x)] ⊂ [x, PC(x)]. In particular we get d(x, PC(y)) ≤ d(x, PC(x)), which implies PC(y) = PC(x).
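Since the real line is the simplest metric tree and closed intervals are closed convex subsets, both the nonexpansiveness and the sunny property of PC can be checked numerically; a toy Python sketch:

```python
def project(a, b, x):
    # Nearest-point projection of x onto the closed interval [a, b]; on the
    # real line, viewed as a metric tree, this is exactly P_C for C = [a, b].
    return min(max(x, a), b)

# Nonexpansive: |P(x) - P(y)| <= |x - y|.
x, y = -3.0, 7.5
print(abs(project(0, 2, x) - project(0, 2, y)) <= abs(x - y))  # True

# Sunny: any point of the segment [x, P(x)] projects to the same point.
px = project(0, 2, x)            # P(x) = 0
mid = (x + px) / 2               # lies on [x, P(x)]
print(project(0, 2, mid) == px)  # True
```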
Amemiya and Ando's theorem in metric trees
In 1965 Amemiya and Ando [7] proved the following astonishing result.

Theorem 3.1 (Amemiya and Ando [7]). Let C1, ..., CN be closed subspaces of a Hilbert space H and let r be a random mapping for {1, ..., N}. Then the random sequence (xn) defined by x0 ∈ H, and xn+1 = PCr(n+1)(xn), for all n ≥ 0, converges weakly in H.

Today, 46 years later, it is still not known whether (xn) converges strongly, even for N = 3. There is doubt expressed in the literature as to whether this sequence does converge strongly (cf. [[27], Example 4] for an interesting example of possible relevance). In general, strong convergence may be obtained when some kind of compactness is assumed. Next, we show that in the case of metric trees, we have strong convergence without any compactness assumption. Amemiya and Ando's theorem was preceded by von Neumann [28] for alternating products of two projections (with strong convergence as the conclusion).

Theorem 3.2. Let M be a complete metric tree and let C1, ..., CN be nonempty, closed and convex subsets of M such that C = C1 ∩ ··· ∩ CN ≠ ∅. Let r be a random mapping for {1, ..., N}. Then the random sequence (xn) defined by x0 ∈ M, and xn+1 = PCr(n+1)(xn), for all n ≥ 0, converges strongly in M. Moreover we have lim n→∞ xn ∈ ⋂1≤i≤N Fix(PCi) = C.

Proof. Let c ∈ C. Using Lemma 2.1 we have xn+1 ∈ [xn, c], i.e. d(xn, c) = d(xn, xn+1) + d(xn+1, c), for any n ≥ 0. In particular we have d(xn, c) = d(xn, xn+1) + ··· + d(xn+h−1, xn+h) + d(xn+h, c), for any n ≥ 0 and h ≥ 0. Since PCr(n)(c) = c, we get d(xn+1, c) ≤ d(xn, c), for any n ≥ 0.
In other words, the sequence (d(xn, c)) is decreasing, hence convergent. If we let n, h → ∞ in the identity above, we conclude that (xn) is a Cauchy sequence; since M is complete, it converges to some c0 ∈ M. Fix i ∈ {1, ..., N} and set c = PCi(c0). Since r takes the value i infinitely often, there is a subsequence along which xn+1 = PCi(xn); letting n → ∞ and using the continuity of PCi, the definition of c0 implies PCi(c0) = c0, which implies d(c, c0) = 0, or c = c0. Hence c0 ∈ Ci for every i, i.e. the limit belongs to C.

Remark 3.1. In [14] the authors made heavy use of the property that in smooth reflexive Banach spaces X, if E is a closed subspace of X, then there is at most one nonexpansive retraction of X onto E [26]. In the case of metric trees, we have a similar result. Indeed, let C be a nonempty, closed and convex subset of a metric tree M. Then PC is a sunny nonexpansive retraction of M onto C. Let Q : M → C be another sunny retraction. Let x ∈ M. There exists w ∈ M such that [x, PC(x)] ∩ [x, Q(x)] = [x, w]. Since C is convex, then w ∈ C. Also since d(x, w) + d(w, PC(x)) = d(x, PC(x)), the definition of PC(x) will force PC(x) = w. Hence PC(x) ∈ [x, Q(x)]. Since Q is sunny, we must have Q(PC(x)) = Q(x), which implies PC(x) = Q(x). In other words, PC is the only sunny retraction from M onto C.
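The conclusion of Theorem 3.2 can be illustrated on the real line, a degenerate metric tree, with projections onto three overlapping intervals (a toy numerical check, not part of the proof):

```python
import random

def project(interval, x):
    # Nearest-point projection onto a closed interval of the real line.
    a, b = interval
    return min(max(x, a), b)

intervals = [(0.0, 5.0), (2.0, 8.0), (3.0, 10.0)]  # intersection C = [3, 5]
rng = random.Random(42)
x = 40.0
for _ in range(200):                 # random product of the three projections
    x = project(rng.choice(intervals), x)
print(x)                             # -> 5.0, a point of C = [3, 5]
```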
In the next section we investigate the behavior of the random product of mappings other than the nearest point projections.
Random product of mappings in metric trees
As the authors did in [14], we draw inspiration from Amemiya and Ando's work in Hilbert spaces to extend it to other underlying spaces. In particular, the authors in [14] introduced the concepts of the (W) and (S) properties. Since the (W) property is strongly linked to the weak topology, we are not able to extend such a property to metric trees.
Indeed let c_0 ∈ M be a common fixed point of T_1, ..., T_N and let c be the limit of the random sequence (x_n). Let us only prove that T_N(c) = c. Since each mapping is nonexpansive, we get that the sequence (d(x_n, c_0)) is decreasing. Since T_N satisfies the property (S), we get T_N(c) = c. Similarly one will show that T_i(c) = c, for i = 1, ..., N.
Another property, discovered by Caristi [29] (see also [30]) and extensively used to obtain some beautiful results extending the Banach contraction principle, is captured by the following definition: given l : D → [0, ∞), we will say that a mapping T : D → D satisfies the (C)-l property if d(x, T(x)) ≤ l(x) − l(T(x)), for any x ∈ D.
Example 4.1. Take M = [0, 2] with its usual distance and define T : [0, 2] → [0, 2] by T(x) = x if x ∈ [0, 1] and T(x) = 2 − x if x ∈ [1, 2]. Note that T is a nonexpansive retraction and Fix(T) = [0, 1]. In particular (T^n(x)) is convergent and its limit is T(x). But the nearest point projection on [0, 1] is the map P(x) = min(x, 1), which is different from T. Moreover one can easily show that d(x, T(x)) ≤ l(x) − l(T(x)) for any x ∈ [0, 2], where l(t) = t. Therefore T satisfies the (C)-l property.
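Assuming the reconstruction of T above, the (C)-l inequality and the retraction property can be verified mechanically; this check is ours, not part of the original example.

```python
def T(x):
    """The retraction of Example 4.1 (as reconstructed above)."""
    return x if x <= 1.0 else 2.0 - x

def l(t):
    """The function l(t) = t from the example."""
    return t

for k in range(201):
    x = 2.0 * k / 200
    # (C)-l property: d(x, T(x)) <= l(x) - l(T(x)).
    assert abs(x - T(x)) <= l(x) - l(T(x)) + 1e-12
    # T is a retraction onto [0, 1]: it fixes its image pointwise.
    assert T(T(x)) == T(x)
```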
In the next result, we show how Theorem 3.2 extends to the family of mappings satisfying the (C)-l property.
Theorem 4.1. Let M be a complete metric space. Let T_1, ..., T_N be a finite family of pairwise distinct self-mappings of some nonempty and closed subset D of M. Suppose further that each map T_i, i = 1, ..., N, is continuous and satisfies the (C)-l property, with the same function l. Let r be a random mapping for {1, ..., N}, i.e., a surjective mapping from ℕ onto {1, ..., N} that takes each value in {1, ..., N} infinitely often. Then the random sequence (x_n) defined by x_0 ∈ D arbitrary, and x_{n+1} = T_{r(n)}(x_n), for all n ≥ 0, is convergent. Its limit is a common fixed point of the mappings T_1, ..., T_N. Proof. Our assumptions on the mappings T_i imply d(x_n, x_{n+1}) ≤ l(x_n) − l(x_{n+1}) for any n ≥ 0. In particular we have d(x_n, x_{n+h}) ≤ l(x_n) − l(x_{n+h}) for any n ≥ 0 and h ≥ 0. On the other hand, we have l(x_{n+1}) ≤ l(x_n), for any n ≥ 0. Therefore the positive sequence (l(x_n)) is convergent. Clearly this will imply that the sequence (x_n) is Cauchy. Since M is complete, there exists c ∈ M such that lim_{n→∞} x_n = c, and c ∈ D since D is closed. For any i ∈ {1, ..., N}, there exists a subsequence (x_{j(n)}) such that x_{j(n)+1} = T_i(x_{j(n)}). Since T_i is continuous, letting n → ∞ gives T_i(c) = c, for any i = 1, ..., N.
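As a small illustration of Theorem 4.1 (ours, with hypothetical choices of the mappings): on D = M = [0, 2], any continuous self-map with T(x) ≤ x satisfies the (C)-l property with l(t) = t, since then d(x, T(x)) = x − T(x) = l(x) − l(T(x)). Taking T_1 to be the retraction of Example 4.1 and T_2 a second such map, a random product settles on a common fixed point.

```python
import random

def T1(x):
    """The retraction of Example 4.1."""
    return x if x <= 1.0 else 2.0 - x

def T2(x):
    """A second (C)-l mapping (our choice): the retraction onto [0, 0.5]."""
    return min(x, 0.5)

random.seed(1)
x = 1.8
for _ in range(1000):
    x = random.choice((T1, T2))(x)    # each map recurs infinitely often a.s.

# The limit is a common fixed point of T1 and T2, i.e., a point of [0, 0.5].
assert T1(x) == x and T2(x) == x
```

Depending on which map acts first, the limit from x = 1.8 is (up to rounding) 0.2 or 0.5, so the limiting retraction depends on the random order and, as the following remark stresses, need not coincide with the nearest point projection onto the common fixed point set.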
Note that the limit defines a retraction onto the common fixed point set of the mappings T_1, ..., T_N. But this retraction may not be equal to the nearest point projection, even in the case of a metric tree, as Example 4.1 shows.
The next result investigates the extension of some of the results discovered in [14]. Before we do this, we need to discuss the weak topology in the nonlinear setting of metric spaces. Indeed, let (x_n) be a bounded sequence in the metric tree M. Define the real-valued function ϕ_U(x) = lim_U d(x_n, x), for x ∈ M, where U is a nontrivial ultrafilter [31]. We have the following theorem, which will play a central role in our work.

Theorem 4.2. Let M be a complete metric tree and let (x_n) be a bounded sequence in M. Then there exists a unique point z_U ∈ M such that ϕ_U(z_U) = r, where r = inf{ϕ_U(x) : x ∈ M}.

Proof. For any ε > 0, consider the set C_ε = {x ∈ M : ϕ_U(x) ≤ r + ε}. Using the properties of metric trees, we know that C_ε is a nonempty, bounded and convex subset of M. Since ϕ_U is continuous, C_ε is also closed. Using the compactness of C(M), we get ∩_{ε>0} C_ε ≠ ∅, so ϕ_U attains its infimum r. Now, we will show that this intersection is reduced to one point. Indeed, let us fix z_U ∈ M such that ϕ_U(z_U) = r. Let x be any point in M. Using the properties of metric trees, for any n ≥ 1 there exists w_n ∈ M such that [x_n, x] ∩ [x_n, z_U] = [x_n, w_n]. The minimality of ϕ_U(z_U) will imply lim_U d(w_n, z_U) = 0. Also since d(x_n, w_n) + d(w_n, x) = d(x_n, x) for any n ≥ 1, then we have ϕ_U(x) = r + d(z_U, x), for any x ∈ M. This latest identity, also known as the Uniform Opial condition, will easily show that z_U is unique.

Definition 4.3. Let M be a metric tree and (x_n) be a bounded sequence in M. For any nontrivial ultrafilter U, the unique point z_U found in Theorem 4.2 is called the weak-limit of (x_n) along U. We will say that (x_n) is weakly convergent if and only if z_U = z_V, for any nontrivial ultrafilters U and V.
It is because of the absence of a dual space that we used Opial behavior to try to catch the weak-limit of a bounded sequence. In the next result we show some close similarities between the classical weak-limit point in Banach spaces and the one introduced above.
Proposition 4.1. Let M be a complete metric tree, let (x_n) be a bounded sequence in M and let U be a nontrivial ultrafilter. Then z_U ∈ Ω((x_n)) = ∩_{n≥1} conv((x_i)_{i≥n}).

Proof. Fix n ≥ 1. Set P_n the nearest point projection on conv((x_i)_{i≥n}). Since U is nontrivial, then ϕ_U(P_n(z_U)) = lim_U d(x_i, P_n(z_U)) = lim_U d(P_n(x_i), P_n(z_U)) ≤ lim_U d(x_i, z_U) = ϕ_U(z_U), where we used the nonexpansiveness of P_n. This obviously implies ϕ_U(P_n(z_U)) = ϕ_U(z_U), which implies P_n(z_U) = z_U, or z_U ∈ conv((x_i)_{i≥n}), for any n ≥ 1. So z_U ∈ Ω((x_n)). Next we discuss the behavior of mappings which satisfy the property (S). Theorem 4.3. Let M be a complete metric tree. Let T : C → C be a nonexpansive mapping which satisfies the property (S), where C is a nonempty, bounded, closed, and convex subset of M. Then for any x ∈ C, the sequence (T^n(x)) converges weakly to a fixed point of T.
Proof. Let U and V be any nontrivial ultrafilters. Let z_U and z_V be the minimum points of ϕ_U(z) = lim_U d(T^n(x), z) and ϕ_V(z) = lim_V d(T^n(x), z), respectively. Proposition 4.1 implies that z_U and z_V are in Ω((T^n(x))) ⊂ C. Next we will prove that z_U and z_V are fixed points of T. It is enough to prove that T(z_U) = z_U. Since C is hyperconvex and bounded, we know that T has a nonempty fixed point set (see [32,33]). Let c ∈ Fix(T). The sequence (d(T^n(x), c)) is a decreasing sequence of positive numbers. Since T satisfies the property (S), we deduce that lim_{n→∞} d(T^n(x), T^{n+1}(x)) = 0. Hence ϕ_U(T(z)) = lim_U d(T^n(x), T(z)) ≤ lim_U (d(T^n(x), T^{n+1}(x)) + d(T^{n+1}(x), T(z))) ≤ lim_U d(T^n(x), z), which implies ϕ_U(T(z)) ≤ ϕ_U(z), for any z ∈ C. The properties of z_U will force the identity T(z_U) = z_U, i.e., z_U ∈ Fix(T). Note that the sequence (d(T^n(x), z_U)) is decreasing, which implies ϕ_V(z_U) = lim_{n→∞} d(T^n(x), z_U) = ϕ_U(z_U). Combining this with the Uniform Opial identity applied to both U and V, we get d(z_U, z_V) = 0, i.e., z_U = z_V, so (T^n(x)) converges weakly to the fixed point z_U.
|
2019-06-18T15:31:19.138Z
|
2012-01-01T00:00:00.000
|
{
"year": 2012,
"sha1": "478d79d7a0fd4fde41c5926b7e9cee547e0d9ee7",
"oa_license": "CCBY",
"oa_url": "https://fixedpointtheoryandapplications.springeropen.com/track/pdf/10.1186/1687-1812-2012-57",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f0fbbc2218dcb3f858b6e59f2c4f6e8e77d3b4a4",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
}
|
237718149
|
pes2o/s2orc
|
v3-fos-license
|
Streaming ambivalence: Livestreaming and indie game development
Commercial game makers at all scales of production have increasingly come to incorporate livestreaming into every stage of the game development cycle. Mainstream hits like Fortnite and League of Legends owe their ongoing success in no small part to their massive uptake by streamers, and triple-A releases from major publishers can reliably expect significant attention on streaming platforms. But what about smaller, lower budget games? For independent game developers, the costs and benefits of streaming are less clear. Based on interviews with small commercial indie developers in Toronto and Montréal, this article critically examines different discourses around streaming and commercial indie games, focusing on developer perceptions of the benefits and risks of streaming and its impacts on indie game-making practices, including production, promotion, and community-building. Contrary to persistent popular myths about streaming as the key to ‘discoverability’, commercial indie game development remains a precarious form of cultural work, and indie games collectively attract only a tiny fraction of the overall audience on streaming platforms. There is a high level of uncertainty about the factors that lead to a given game’s success, leaving many indie developers ambivalent about leveraging influencer attention even as they commit significant time and energy trying to do so.
Introduction
Without question, livestreaming is changing the industry and culture of digital games. Twitch in particular has been built up as the platform for users to broadcast themselves performing play and vie for the elusive social and economic rewards of online celebrity. Viewers consume billions of hours on a monthly basis, interacting with streamers and fellow spectators via live chat in ways that spill onto social media and ripple outward to shape popular tastes, modes of communication, cultural attitudes, and dominant play styles in game culture (Taylor, 2018). Twitch and competing platforms like YouTube (and their parent corporations Amazon and Google) extract massive profits from all this engagement via advertising, sponsorship deals, and various fees, guiding user attention to specific channels via front page ranking and recommendation algorithms (Partin, 2019). Commercial game makers at all scales of production have increasingly come to incorporate streaming into every stage of the game development cycle. Mainstream hits like Fortnite and League of Legends owe their ongoing status as bona fide pop cultural phenomena in no small part to their massive uptake by celebrity and amateur streamers alike, and triple-A releases from major publishers can reliably expect significant attention on streaming platforms, in some cases achieved by paying streamers directly to play (Lanier, 2019). But what about smaller, lower budget games? For independent game developers, the costs and benefits of streaming are less clear.
Indie developers are acutely aware of the centrality of streaming in the contemporary game industry ecosystem, but they lack the resources, brand recognition, and dedicated marketing teams of big-budget giants. There is a persistent popular myth that streaming and related forms of online content creation are a golden key to indie game 'discoverability' and ultimately sales, and that Twitch streamers, YouTubers, and other game-based content creators and influencers are the new gatekeepers of indie success (Phillips, 2018; Takahashi, 2016). However, commercial indie game development remains an extremely precarious form of cultural work. A great diversity of game and non-game content is broadcast, but popular blockbusters continue to dominate streaming platforms, attracting the highest profile celebrity-influencers and their legions of fans, as well as countless smaller streamers. With the rare exception of breakout indie hits like Among Us (Fenlon, 2020), indie games collectively make up only a tiny fraction of the overall audience. There remains a high level of uncertainty about the factors that lead to a given game's success, leaving many indie developers ambivalent about leveraging influencer attention for sales even as they commit significant time and energy trying to do so. Are streamers the golden key to success, a necessary cost of doing business as an indie, or platform capitalist snake oil? This article critically examines different discourses around streaming and commercial indie games, beginning with an overview of popular success stories, then focusing on developer perceptions of the benefits and risks of streaming and its impacts on indie game-making practices, including production, promotion, and community-building.
The body of academic research on Twitch and game streaming continues to grow, and scholars have investigated streaming services as platforms, the experiences of streamers marginalized on the basis of race, gender, sexuality, and mental health, the diverse forms of visible and invisible labor involved in streaming, cultures of game spectatorship, the possibilities of streaming for game development education, and the intersection of streaming and competitive esports (Consalvo and Phelps, 2021; Gray, 2017; Johnson and Woodcock, 2019a, 2019b, 2019c; Ruberg, 2020; Ruberg and Lark, 2020; Ruberg et al., 2019; Taylor, 2018; Walker, 2014). These insights directly inform our approach here, and we hope to expand and nuance this body of work by directing attention to the experiences of game developers with streamers and streaming platforms, extending the project of indie game studies and game production studies (Ruffino, 2021; Sotamaa and Švelch, 2021). We likewise build on critical work on the political economy of digital platforms, online influencers, content creators, and micro-celebrity and media and cultural industries research more broadly (Abidin, 2018a; Bishop, 2020; Duffy, 2017; Duffy and Hund, 2015; Duguay, 2019; Nieborg and Poell, 2018). Ultimately we argue that, contrary to popular success stories, the impacts of streaming for indie game developers are complex and uncertain, and their ambivalence is characteristic of contemporary platformized cultural work.
Our findings are based on semi-structured interviews with 12 indie game developers based in Toronto and Montréal, Canada (see Table 1 below), selected using a combination of purposive sampling leveraging past connections and snowball sampling. Canada is the third largest producer of digital games internationally, and both cities are significant hubs, encompassing game-making activity from AAA to DIY. Almost 90% of Canadian studios, including all of our interviewees, fall into the category of 'small' or 'micro' operations with fewer than 25 employees (Nordicity, 2019). Our focus here is on commercial indie game developers who primarily make original, creator-owned games, usually distributed digitally, in a variety of production contexts.
'Developer' here includes all kinds of game workers, not limited to studio leadership or traditional 'creative' roles, but also frequently overlooked roles in commercial game-making like marketing and community management (Perks, 2020). In some cases, due to the shifting nature of indie cultural work, developers are responsible for multiple areas, while others are in more dedicated roles. All participants are embedded to varying degrees in local and translocal indie scenes and most are personally acquainted via community organizations, coworking spaces, and social events, as well as larger global networks of indie developers (Parker and Jenson, 2017). In addition to individual experience, these interconnected communities of practice inform developer understanding of streaming through informal knowledge-sharing and formal initiatives, such as events for developers to meet local streamers organized at coworking hubs. Interviews took place in single sessions in 2018 and 2019, usually in studio offices or coworking spaces, and participants were asked open-ended questions about their experiences with game livestreaming, how they interact with streamers, the impacts of streaming on various aspects of development, differences between streamers and other kinds of intermediaries like journalists, and the role of streaming platforms themselves. Interview data was transcribed then collaboratively coded and analyzed according to emergent themes, allowing us to synthesize on the ground stories, perspectives, and attitudes. Participants were given the opportunity to review the article and quotations before publication, and all names have been anonymized. This research is part of the larger Indie Interfaces project, and in addition to these interviews, our findings are informed by extensive interviews and ethnographic work conducted with indie game developers and cultural intermediaries between 2015 and 2019, during which time the potential importance of streaming for indies became increasingly apparent.
Streaming success stories and cultural intermediation
To set the stage for the present research, it is important to consider the wider industry context and popular narratives around indie games and streaming. In the wake of digital distribution, cheap bundling of games, and increased interest in smaller games, commercial indie games are now widely understood to be an oversaturated market, making it difficult to stand out (Keogh, 2018). In light of these concerns (whether or not they are accurate), streaming appears to be an 'implicit low-intensity marketing' solution to the problem of discoverability (Kerr, 2016: 135). Popular streamers command the attention of hundreds, sometimes thousands of eyeballs, and if they are playing your game, then there is a presumed opportunity to convert them to customers and fans. Journalist Jason Schreier underscores the role of streamers and YouTubers in the success of two breakout indie hits, Stardew Valley and Shovel Knight. In his account, 'early streams and videos generated more buzz for Stardew Valley than any press outlet' (2017: 77), and 'when huge YouTube channels like the Game Grumps later played through the [Shovel Knight] demo, they reached hundreds of thousands of people' (2017: 180). These and other success stories about indie developers making it big thanks to positive attention from streamers and YouTubers circulate widely and inform game development practices. Like other indie success stories, these narratives tend to assume a linear path in which the passionate labor and creative vision of obscure independent creators, along with a little luck, translates into well-earned fame (Ruffino, 2013). The developers we spoke to frequently mentioned these and other examples, and a handful have found traction with streamers for their own games.
In many ways, game streamers resemble cultural intermediaries, those actors in a cultural field that connect cultural works to consumers (Matthews and Smith Maguire, 2014). Intermediaries such as community organizers, festival and showcase curators, critics, coworking space coordinators, and other behind-the-scenes actors are the connecting tissue that constitutes indie game culture as such (Perks et al., 2019). Aphra Kerr calls game streamers and online content creators 'new cultural intermediaries who are taking the place of specialist game magazines and written game reviews. These players are generating advertising, sponsorship revenue and driving sales of games. They assist in the circulation, marketing and commodification of gameplay' (2016: 137). Mark R. Johnson and Jamie Woodcock go so far as to argue that streamers are making professional reviewers obsolete (2019). Carolyn gestures to this as she tries to find the right word to describe what exactly streamers do for indie developers, suggesting 'servers', 'advertisers', and 'sales people' as possibilities, while Holly thinks of streamers as 'tastemakers' that draw attention to new games.
Our research suggests these accounts of influencers' influence may be exaggerated. Certainly, game streamers can act as tastemakers in that they -at least sometimes -are able to expose consumers to previously unknown cultural products. But Kerr goes on to note that the paratextual content created by streamers 'exists in an uneasy relationship' to the game makers whose work they build their streaming careers on (Kerr, 2016: 137). This uneasy relationship is further complicated by the platforms, which are themselves powerful intermediaries. For this reason, T.L. Taylor challenges reductive accounts of streaming as merely promotional, a framing that glosses over the more complex cultural-economic interdependences involved and the creative/cultural labor of streamers themselves (2018: 50-51). There is an important difference between 'downstream' intermediation of putting games in front of potential players associated with advertising and tastemaking, and 'upstream' intermediation between developers and powerful industry actors like publishers, platform-holders, and investors. This is further muddled by forms of 'cross-stream' intermediation between developers and journalists, curators, and community organizers whose 'relational labour' and networks of mutual support are far from obsolete and remain key to indie game development even if they do not directly engage consumers (Baym, 2015; Whitson et al., 2018). As we will show, streaming is not a simple or linear process of promoting cultural products to consumers, and in fact performs a wide variety of functions for a diverse range of actors to 'transform private play into public entertainment' (Taylor, 2018: 22), and indie game developers do not necessarily have much agency in this process.
Meritocratic success stories risk misrepresenting the work and complexities involved in both streaming and indie game development. In reality, only a small upper crust of indie games catch the attention of streamers and influencers in the first place, and the process by which they do so is anything but straightforward. These stories also ignore the 'survivor bias' of early adopters of new game production and distribution techniques; what begins as an exciting new 'blue ocean' quickly becomes a hyper-competitive 'red ocean' as other developers attempt to emulate the success stories (Mi, 2015) -indeed, breakout games like Stardew Valley and Among Us occupy significant platform real estate, making it that much more difficult for newcomers to capture attention. Melvin alludes to this, saying part of the challenge for developers is keeping abreast of new avenues for promotion and distribution, without falling into the trap of replicating strategies that no longer work. No doubt hard work, good ideas, and sheer luck play a role, but our research participants -including those that have found popularity with streamers -point to a more complex and ambivalent assemblage of actors, factors, and attitudes at play, suggesting that success stories are not the whole story.
Streaming and indie game production
Unsurprisingly, the rise of streaming has influenced not only promotional strategies, but all aspects of game development, including the design process. Tom argues that 'streaming games has changed the landscape of what kind of games are practical to build', or at least what is commercially marketable. In the current moment, all game developers are compelled to keep the dynamics of streaming platforms in mind as they conceptualize, execute, iterate, and launch projects and support them post-release -even if they ultimately choose to ignore them.
Watching others play. The most subtle but important way that streaming shapes game design is that developers are able to covertly watch their games being played online. Watching streams and gameplay videos becomes an extension of playtesting for developers, which is particularly valuable for in-development games with public 'early access' releases, or completed games that may be continually patched, updated, and developed for months or years after release based on player reception, data analytics, platform changes, and other factors (Nieborg and Poell, 2018). This offers certain advantages compared to conventional private playtesting. Hugh compares it to watching 'actual people' playing at in-person exhibitions, but better. He is especially drawn to smaller streamers with low viewer counts, who he says are more likely to 'play the game in a very similar environment to how they play the game if they were just playing without streaming it'. For Christopher, this removes the artifice of playtesting in the studio or at shows, because the players are playing without direct 'coaching' and scrutiny from the developers, resulting in something close to the 'the real experience of a first time player'. This lack of scrutiny leads to less filtered, more actionable feedback according to Tessa, because streamers 'don't feel like [they] owe any amount of patience to the game to make you understand, which can come across as pretty harsh [ . . . ] but at the same time, it's fair'. Melvin remembers how watching streamers struggle with certain features of his game (which was not originally designed with streaming in mind) was revelatory, and helped identify key usability problems, bugs, and other issues to be fixed that were missed in regular playtesting. However, this also creates a new challenge for developers. As Hugh notes, if the game is too buggy or broken, streamers may bounce off of it, or viewers may decide, 'well, there's 7,000 other games released this year. I'm not buying this one'. If the developers aren't able to make the necessary fixes promptly in response to issues flagged by streamers, says Christopher, 'We've lost these players or these viewers', the opposite of the desired effect. This illustrates the risk of unofficial playtesting in front of a live online audience compared to more controlled environments, as well as the 'always-on' grind necessitated by the shift to ongoing 'games-as-a-service' style development (Dubois and Weststar, 2021).
Designing for 'streamability'. Different genres, styles, and features are considered more or less amenable to the performance of play, and most developers we spoke to considered 'streamability' and 'watchability' in the design of their projects from the beginning in hopes of increasing their platform 'discoverability' (Della Rocca, 2020; McKelvey and Hunt, 2019). Action-oriented, competitive, and silly games, multiplayer 'live' games that are updated frequently, and horror games are singled out as good content for streamers because of their unpredictability and potential for humorous or entertaining commentary, their encouragement of audience 'back seat' play, and their capacity for long-term play. By contrast, single-player narrative games, especially those with fairly linear stories, are seen as less amenable for streaming. This emerging discourse of streamability and discoverability contributes to a kind of normative standardization of which types of indie games and developers are considered commercially feasible, and which are not.
Many developers told us they take time to closely analyze the most popular games on Twitch and other platforms to determine what makes them so streamable, and whether those qualities are marketable to a wider audience beyond content creators. Hugh thinks the visual and user interface levels are crucial, to make the game legible and entertaining for audiences as well as players. His studio's competitive multiplayer game was not made exclusively for streaming, but it was designed to work well as a competitive esport with online spectators. Its presentation is influenced by professional sporting events, 'So we looked at both those types of, how those things are presented on TV and tried to copy certain things'. Hugh notes that designers may prefer simplicity and minimalism, but from a 'spectator design' standpoint it is important to have additional information visible on screen, such as timers and energy meters, to engage commentators and the audience in the action. In Helena's experience some features streamers look for are relatively simple to implement, such as timers to foster speedrunning, but other features believed to enhance 'streamability', such as networked multiplayer, nonlinear structure, procedural generation to increase replayability, and customization, are more substantial undertakings for developers.
From Christopher's perspective, every aspect of a game's design is key to its appeal to streamers and viewers, and he put a lot of thought into making his multiplayer game 'perfect for Twitch'. Having small teams, for example, allowed for legible communication and interaction between players without overwhelming the streamer or viewers (Christopher notes with pride that his team came to the same conclusion as popular AAA titles on the ideal team size). He also determined, based on observing streamers and the affordances of Twitch as a platform, that 'games with some downtimes, as long as they're not too long, is great because they have time to engage with their community and talk with people and read the chat'. This is somewhat counterintuitive, since Christopher's design philosophy and past experience suggest players want a fast-paced game with as little downtime as possible. Tom also touches on this contrast: 'the streamer demands a certain flow for it to fit inside of a stream. If I'm making a super high stressed action game, that doesn't work for the streamer as well as it does for the individual player'. That 'slow time' allows streamers to more actively engage with their audiences, an essential part of their performance.
Developers are also keenly aware that if streamers are not hooked by a game's pacing and flow early on, they may not stick with it for long. Curtis feels in retrospect that his most recent game was not structured well for streamers: 'the big mistake that I didn't know I was making until I saw it being streamed, which is that I was really trying to get a good difficulty curve from the game, which means sort of introducing things at a steady pace, but not necessarily showing our hand entirely early on. [ . . . ] it's only once you get into the second [world] that you start seeing the things that are important, that are not important, that are surprising and that make you sort of realize, "Oh, this game's a lot deeper than I expected." But that first world ends up being a really natural stopping spot. So, what I've seen is a whole bunch of people who've done a single stream of the game where they play for around half an hour to an hour, finish up the first line and then never come back to the game on stream because they feel this shows what the game is about'.
This poses a dilemma, however, because Curtis feels the game as released is better from a design perspective, even if a more front-loaded structure would be more appealing for streamers and promotional purposes. As Christopher's example of incorporating downtime also indicates, developers' instincts about what works for ordinary players must be balanced against what they think will work for content creators, directly informing the design process.
Platform programmability and integrations. Twitch's 'programmability' as a platform (Helmond, 2015) extends to game developers, who can use Twitch's API (application programming interface) to easily integrate platform functionality directly into games -a more explicit way of enhancing streamability. The developers we spoke with are ambivalent toward these integrations. Melvin's team added minor Twitch integrations that allow viewers to vote on in-game elements, which he says was a post-release decision once the game was already gaining popularity with streamers: 'It was just a cool idea and there was a plugin that worked for it, so we used it'. Helena sees integrations as an iffy proposition that not all streamers actually like, especially if they are 'obtrusive' and allow viewers to directly intervene in the game, so her studio has stuck to 'passive' features like using viewer usernames for in-game characters. These kinds of features are fun add-ons rather than core to the game's design. By contrast, Lauren has more experience with integrations and sees them as a substantial way to make genres perceived to be less streamable, such as single player narrative games, work well on stream. She explains that developers can tap into 'that desire that streamers have to connect' by developing features that allow streamers and viewers to engage directly through the game. One example is incorporating Twitch 'drops', free in-game items awarded to viewers if the streamer hits certain goals, which Lauren says incentivizes streamers to play the game, while simultaneously incentivizing viewers to become players so they can use the free items. But she cautions that it can't be a tacked-on thing, adding 'you have to actually think about it, I think developers are thinking about it more and more and are actually doing something that makes sense with their game, or just don't do it'. Other developers are dubious of the value of integrations, especially for small teams on modest budgets, and Stuart notes that because they do not work on mobile devices, many viewers will not even be able to use them. Here we see a central, recurring tension between dedicating time, energy, and budget to make streaming an 'integral feature' of the game, versus focusing on other things.
Several developers talked about plans to build future projects around streaming from the ground up. Holly is hoping to take advantage of the excitement around virtual reality (VR) systems, explaining a concept where 'the streamer could play it in VR, but the audience could participate in the game itself using the new integration tools', by voting on what happens in the game, with those interactions incorporated into the VR user interface so the streamer is not 'cut off' from the audience. An important factor for Holly is that these integrations are monetizable via viewer donation, with developers getting 20% of the revenue alongside the platform and the streamer, as opposed being left out of the deal as they are in other forms of streaming monetization. She sees this as a pathbreaking idea, since most VR games are not optimized or monetized for streaming, and hopes that the audience-interactive elements will also increase replayability. Curtis has also done experiments with what he calls 'stream first' games that are 'made to be played over Twitch'. With some cultural agency funding, he prototyped 'a game that was played between the audience and the person streaming' using the Twitch chat, rather than the official API, and thought it was promising. However, he's hesitant to turn it into a larger scale project due to the 'serious money' required and the lack of well-designed, successful examples of similar games, which he attributes to the fact that some audiences simply want to watch rather than become active participants in the game. Nevertheless, like most developers he has streaming front of mind as he conceptualizes new projects: 'I'm going to just basically sit down and look at the state of the industry and try to figure out what my plans are next, because it keeps changing'. Indies are navigating a constantly shifting environment, and the language of risk permeates their comments.
For Tom, the greatest risk lies in ignoring streamers: 'A lot of developers would make a game without considering necessarily whether they're making it for streaming audiences', waiting until the game is ready to release before contacting streamers with a 'hope this works out' approach rather than intentionality. Curtis finds this process 'super annoying', since he feels it devalues games designed to be self-contained experiences in favor of 'endless amounts of content' and games-as-a-service models. This is exacerbated by what he calls the inscrutable 'black box of discoverability' on different platforms, leaving developers mystified about how to find an audience. This skepticism is warranted, according to other developers. Hugh lists off the many ways incorporating streaming-related features can impact a project: 'additional cost, additional programming time, additional quick fixing, additional [quality assurance]. So you have to be really sure that there's value in what you're doing before you commit to spend that money in development'. Christopher is fairly certain there is no value in streaming for his team's next game, so he's 'not going to invest effort and money too much on streaming because these kinds of games almost [never] stream or barely'. Strategic decisions about costs and benefits, imagined audiences, and design ethos, all inflected by platform logics, are now central to commercial indie game development. These strategies are undertaken on the chance -however slim -that streaming can lead to commercial success or notoriety for indie developers.
Streaming and indie game promotion
Although the experiences and specific attitudes of our interviewees vary, in the broadest terms indie developers see streaming as a means of promoting their games, alongside marketing, press, social media, public exhibitions, and other forms of promotion. According to developers, the potential value of streaming is highly dependent on the genre of game, and moreover there are many different kinds of streamers, each with different styles of performance and genre preferences, from competitive streamers who often play one game exclusively, to 'variety streamers' who rotate games and genres regularly, to 'niche' streamers who focus narrowly on a particular genre or subgenre. When the genre of game aligns with the streamers' particular tastes or play style, Helena says, streamers become 'very good hype people. If you have a game and you want people to get excited about it and you want to get it to as many people as possible, I feel like streamers are just the connectors'. Developers' ground-up theories of streaming resonate with Austin Walker's argument that the affordances of Twitch as a platform encourage a 'promotional stance' (Walker, 2014: 440). What exactly is being promoted -the game, the developer, the streamer, the platform, or some combination thereof -is not always apparent, however, which complicates notions of symbiosis between developers and streamers (Taylor, 2018: 126).
Some developers see a fairly direct connection between promotion, streaming, and sales. Melvin and Tessa's accounts of the success of their 'highly streamable' competitive party game exemplify the idea of streamers as a form of promotion. Although they did see some spikes in their sales that directly correlated with popular streamers playing the game not long after its release, they place greater emphasis on the fact that they have maintained sales at an unusually steady level for upwards of 3 years, a 'long tail' of players discovering the game thanks in part to ongoing streaming and gameplay videos. 'A lot of them are small, but still people are making content', which for Melvin and Tessa speaks to the value of fostering paratextual practices as a 'primary strategy' for ongoing post-release promotion that they have pursued 'pretty aggressively' as they have pushed new content for the game by directly soliciting hundreds of individual streamers. Tessa puts it succinctly: streamers are 'amplifiers' and 'arguably the most effective way that we could possibly have out there to get people's attention and grow our audience'. Tom ascribes the modest popularity of his own humorous multiplayer game to its 'replayability', which he believes encouraged streamers, notably those who played in groups, to 'keep coming back to it', correlating with increased sales. Stuart had a similar experience when a high-profile YouTube content creator discovered one of his games several years after release, which he says 'spiked my sales and then the sales reset, but not to launch, maybe to the year before. It basically, bumped it back a year in terms of the sales, in terms of those numbers. That's huge'. Stuart directly attributes this 'reset' of his game's long tail to this YouTuber, and he and Christopher both note that the permanent archive of recorded gameplay videos on YouTube may be an even greater asset than livestreamed content since they have more longevity. Several other interviewees drew similar correlations between streaming and long-term success, with the goal of becoming a 'forever game' updated over a long period of time for a dedicated audience, as Tom puts it.
Helena compares the role of streaming in promotion to celebrity and influencer marketing in other fields: 'It's why some perfume company would pay a model or a celebrity to take a picture with a perfume bottle. It's like, we want the streamer to play the game because we know that will make the game seem fun to their audience'.
While certainly this is true in the case of big-name celebrity streamers, smaller or niche streamers can also have a positive impact. Stuart says that his team is deliberately marketing their game to a particular genre niche: 'our niche streamers are magnitudes smaller. But they are a way more targeted market. I feel like the conversion rate on views to sales would be way higher, like 10 times higher' because they 'cater directly to our audience'. In other words, quality is as important as quantity in promotion. Although he is not as convinced of direct sales boosts or measurable return on investment, Hugh contends that 'we can definitely see that in some cases, our game brought audiences to a Twitch streamer's channel. And in other cases, the Twitch streamers channel's audience brought viewers for us for the game'. This leads him to contend that having a game streamed in sufficient numbers can improve discoverability on digital distribution platforms thanks to increased searches and wishlisting. In the same vein, Christopher sees streaming as a useful way to gradually build a player base for in-development games still in beta testing or early access.
Uncertain results, ambivalence, and dismissal
In spite of the opportunities most of our interviewees see in streaming, the strongest theme in our conversations is ambivalence. Indies recognize the inevitability of streaming as a factor in contemporary game development, but frequently express uncertainty about how impactful, reliable, and measurable it really is, and whether actively pursuing it is worth the significant time and effort involved. Although as noted above some developers anecdotally attribute sales or engagement spikes to attention from specific streamers, in many other cases developers report that being featured by streamers with large followings produced no measurable results (Tran, 2020). Past success is no guarantee, either. When Tom made a new and improved 3D version of a previous game that had gained traction with many streamers, he found that they only played it briefly and moved on, and he isn't sure why it didn't resonate. Hugh characterizes indie game marketing as a process of 'just testing assumptions constantly', with no concrete rules or best practices to follow: 'One week this type of content works, the next week this type of content works. You can't plan for that. So I try a bunch of different things'. Helena likewise finds that there's no formula, which makes it hard to track, lamenting that 'the problem with streaming is that sometimes you can't really judge if it's working well'. This leads her to question whether exposure in and of itself is truly beneficial for her studio, contra popular narratives of streaming success.
In spite of his game's popularity with streamers, Melvin also remains ambivalent. At one point, Melvin and Tessa's studio invested money in the Twitch 'Bounty Board' system, which allows developers to make a pot of money available for streamers to claim in exchange for featuring their games. This led to more streamers playing the game, but didn't have any obvious effect on sales or engagement. 'What does that mean?' Melvin wonders, frustrated: 'Does that mean that it didn't have any effect? Does it mean that the effect is going to be felt over the next 12 months as just like a long tail addition to the general visibility of the game? We don't know'.
Ultimately he concludes that Twitch is 'trying to own the channel of communication between the developers and the streamer', echoing Will Partin's work on how platforms 'capture' previously off-platform monetization strategies (2020). Several other developers, including Tom, Helena, and Lauren, likewise question the usefulness of paying streamers directly, at least for indies working with small budgets. Christopher's studio used the Bounty system early on at Twitch's urging and found that while it did get streamers to play the game, the return on investment in terms of sales was negligible, suggesting that, as Lauren puts it, the feature is 'not attuned to indie reality'. Curtis links the pervasive uncertainty around streaming to the rapid pace of change in the game industry, and the dominance and inscrutability of platform algorithms: 'Do articles make a difference? Do streams make a difference? Is there anything other than being on the front page of Steam, make a difference? And then no one knows how stuff gets onto the front page of Steam'. Lacking answers to these questions, Curtis concludes that all developers can do is find an intersecting point in the 'Venn diagram' of 'games you want to make, games you can make, and games that have an audience' and hope for the best. Other developers go so far as to chalk success with streamers up to sheer luck. Tom and Stuart both describe it as a 'fluke', with a high degree of uncertainty and unpredictability in terms of impact. Although streaming platforms, digital storefronts, and third-party analytics services offer developers a plethora of data about their games and players, these layers of quantification only seem to further mystify the process (Egliston, 2021).
All of this raises questions about much of the advice that circulates about streaming for indies. Charlie critiques the popular idea that if you 'find the right streamer with the right audience [ . . . ] it's guaranteed to make all your financial dreams come true as an indie developer' as a potentially dangerous misconception. Carolyn likewise observes that 'people think it is an easier thing than it is' and worries that naivety or overconfidence will lead developers to overemphasize streaming to the detriment of other important factors. The concerns discussed above about how amenable different genres are for different kinds of streaming play into this as well. As Lauren puts it, 'Considering streaming as just the one thing is kind of saying, all games are the same, all games have the same process [ . . . ] can we realistically compare a three person VR studio to an 18 person mobile game studio? No, we can't'.
Lauren and Carolyn both caution that this makes it difficult to evaluate different indie experiences, since what works well for one game may not work at all for another.
For some indie developers, ambivalence leans toward a wholly negative view of streaming as too risky or even harmful to their games. This reflects a small but significant countercurrent to the generally celebratory discourse around game-based content creators, exemplified by Numinous Games' charge that YouTube Let's Play videos hurt sales of their narrative game That Dragon, Cancer (Green, 2016). Of all our participants, Holly's perspective is the most negative and closely aligns with her experience: 'more people have played the game for free than have bought it and I find that statistic depressing. [ . . . ] it all comes back to the nature of our game. Our game is a narrative game that plays like a movie. Once you have seen our game, you don't really have a reason to play it. And this is the inherent problem with the streaming culture and the game we made. The game we made, it streams well. People enjoy watching it and watching someone play it and it's a cool experience, but they have no reason to buy it afterwards'.
The issue was exacerbated by the fact that Holly's team gave away numerous free promotional copies of the game to streamers, further reducing their overall sales. Anticipating these problems during development, her team considered asking streamers to only play half the game. They decided against it because they didn't want to sour relationships by coming off as overly controlling, but their fears were borne out.
Another factor that contributed to Holly's negative experience was her game's serious, dark themes. She was worried that streamers -especially those who usually stream more mainstream games -would not take it seriously: 'It feels dangerous to put it into the hands of someone who's more likely to make fun of it than to appreciate it. [ . . . ] I know there's in theory no such thing as bad publicity, but since it's such a specific and somewhat sensitive game, I just didn't feel like we should be courting that kind of attention'.
This again echoes Numinous Games' concern that That Dragon, Cancer's deeply personal story of loss would be devalued by content creators, and also resonates with the experiences of queer game developers like Robert Yang, whose games about gay sex and masculinity are frequent targets of gameplay reaction videos and streams that use them as fodder for exaggerated, often profane mockery, and have also been censored by Twitch (k, 2018; Yang, 2016). Helena, whose games often feature characters of diverse gender and sexual identity, is cautious about how sexist, homophobic, or racist 'broey men' streamers will present those aspects to their audiences. Increased visibility on the internet is not necessarily a positive thing, especially for people marginalized on the basis of identity, and game culture in particular is notoriously hostile (Gray, 2014; Nakamura, 2008).
All of this has left Holly fatigued by the overemphasis on streamers in indie game promotion, at least for narrative games: 'frankly, I'm just disillusioned, I'm like why? Why would I do that? Cool, they'll play it and no one will buy it'. Several other developers share this skepticism, with Tom even suggesting that a perceived decline in story-oriented games could be related to the rise of streaming, further evidence of a normative effect on game development.
What exactly are streamers promoting?
A key factor in all of the different attitudes and perceptions discussed above is the knowledge that streamers are cultural producers in their own right. They may in some cases directly or indirectly promote indie games, but as noted above, cultural intermediation is not their primary function (Taylor, 2018: 51). This sets streamers apart from other actors in the space, such as journalists or festival curators, and developers are acutely aware of this fact. Tom observes that streamers cultivate 'parasocial relationships' that give their audiences a sense of a 'personable and amicable' social interaction when in fact it is largely unidirectional -concepts that align closely with critical research on other kinds of influencers (Abidin, 2015). Helena is also cautious about parasociality and worries about 'the amount of trust that they get from their community, how easily influenced the community can be and rabid fans and the ways they can take advantage of that'. She points to controversies like Counter-Strike: Global Offensive YouTubers hawking gambling schemes as one example (Frank, 2017), and more recently there has been a slew of sexual harassment and assault charges against popular streamers (Grayson, 2020). If a streamer recommends a game, that recommendation may hold additional weight thanks to their parasocial relationships (as in all influencer marketing), but streamers are less intermediating and more remediating the games they play -the stream stands as a distinct cultural product (Consalvo, 2017).
For some developers, this state of affairs feels unfair or even exploitative. Holly's negative experience with her game has led her to personally view streamers as profiting off of indies: 'If they have a large enough audience, they are literally getting money from the audience to be playing a game and or from I guess other ads on Twitch. [ . . . ] They're making money off of it'. On the other hand, Hugh understands why some developers feel this way but is critical of the impulse: 'There is a particular angle that says streamers are parasites, they are producing content off the back of the work that we're doing. [ . . . ] the reality is that that's just not how the world works anymore'. For Hugh, developers need to take streaming as a given of the contemporary industry and make the best of it, rather than treating streamers as competition. Similarly, Lauren argues that streamers and developers alike should approach streaming from a place of collaboration.
Collaboration, connection, and community-building
It is in this potential for platform-mediated collaboration, connection, and community-building that developers see the most direct value in streaming. While the influence of streaming on direct or indirect sales is difficult to pin down, many interviewees point to other, less quantifiable but equally important factors at play, such as community-building and fostering audience engagement. What allows for long tail success like Melvin and Tessa's is a critical mass of people invested in the game. Cultivating a loyal, participatory community of fan-consumers who feel a personal connection to the creator's work is understood to be essential for contemporary independent cultural production, and social media engagement is a key means of doing so (Baym, 2015; Kribs, 2017). In Carolyn's experience, having your games featured on Twitch streams produces engagement 'in a way that is very organic and/or authentic', and so it should be seen as a community tool that ripples outward onto other social media platforms, regardless of sales. That sense of intimacy and authenticity is actively constructed and presents streamers as 'real' players actually playing and reacting to the game, often through 'calibrated amateurism' and other performative techniques, reinforced by the technical and social affordances of platforms (Abidin, 2018b; Cunningham and Craig, 2017; Ruberg and Lark, 2020). As Hugh argues, having an engaged community even if 'they're not all consumers or they're not all potential purchasers of your product' is useful in and of itself, giving developers more to work with as they relationally cultivate an audience, promote their games, and develop new projects.
Tessa, for example, uses streaming as raw material for producing social media posts for her studio: 'for me it really serves the purpose of creating content that I can use to make the promotion of what's coming up'. She collects clips of interesting or funny moments from streams, as well as memes, press, and other materials and reworks them into compelling content to share via other channels, a strategy other community managers also employ to generate engagement and build brand recognition. Tessa explicitly ties this to credibility and authenticity, a way of incorporating streamers and viewers into the studio's community, and she says streamers appreciate this mutually beneficial acknowledgment. The community-building function extends also to shaping that community. Charlie argues that streamed and recorded play not only helps new players grasp the basics of a game, but additionally model normative ways of playing and enjoying it, contributing to emergent community standards more effectively than official developer-produced content or journalistic coverage. Although it was not a major theme in our interviews, some indie developers livestream their own game development work for similar pedagogical reasons (Consalvo and Phelps, 2021). Rather than seeing streamers as a way of outsourcing promotion, developers are compelled to adopt the same parasocial strategies of self-promotion and relational community maintenance as the streamers themselves, much like other independent cultural producers in the digital age (Kribs, 2017) -provided they have the time and resources to spend.
Conclusion
Game developer perspectives on streaming illustrate just how mutable and precarious commercial indie game development continues to be, in spite of the proliferation of streaming-related success stories. The small Canadian developers we spoke with feel the influence of streaming on all aspects of their work and approach its potential risks and benefits ambivalently as they pursue the elusive goal of creative and economic sustainability.
In the production process, streaming offers an opportunity for more organic playtesting and tweaking games in response to player experience, but this requires active, ongoing development work. Streaming also has a normative effect on design practices, as developers attempt to conceptualize games that appeal to streamers and viewers, though this may clash with their own design sensibilities. Programmable tools that integrate aspects of the streaming platform directly into games may enhance streamability, but they are often prohibitively costly or labor-intensive for smaller developers. Beyond production, streaming is understood to serve a promotional function, and some developers attribute sales bumps and long-term interest in their games to uptake by streamers. However, the majority of our participants express uncertainty about the value of streaming as a promotional tool, pointing to inconsistent results and frustratingly opaque platforms. For certain kinds of games, the impact of streaming is seen as largely negative, benefitting streamers and the platform more than developers, which has implications for what developers consider commercially feasible. Where developers seem to find streaming more consistently useful is in the less explicitly promotional but no less important community-building aspects of cultural production. Streaming thus becomes one of many venues where developers themselves are compelled to adopt the performative, relational techniques of streamers and other online influencers to cultivate a following for their work.
Our findings complicate the optimistic narratives and advice that characterize much of the discourse on streaming and indie games, in which platforms are paradoxically positioned as both the cause of and solution to the problem of discoverability. In fact, the 'nested precarities' of the competitive market for indie games, the rapidly changing game industry, and the ambiguous cultural and economic logics of different platforms (Duffy et al., forthcoming) are embodied in game developers as profound ambivalence (Chia, 2021). The experiences of indie game developers with livestreaming are thus consistent with the more general precarity and ambivalence of cultural work in the era of platform capitalism (de Peuter et al., 2017; Glatt and Banet-Weiser, 2021; Lehto, 2021; Siciliano, 2021). With a whole ecology of platforms and content creation practices shaping game production, promotion, monetization, and community management in the present moment, there is much to learn by centring the empirical experiences of ordinary game developers navigating this environment.
Increased plasma concentration of vascular endothelial growth factor in patients with atopic dermatitis and its relation to disease severity and platelet activation
Overproduction of vascular endothelial growth factor (VEGF) in atopic dermatitis (AD) lesions has previously been observed. It is also known that platelets are an important source of VEGF and platelet factor 4 (PF-4), a potential marker of AD severity. To evaluate concentrations of VEGF and its soluble receptors (sVEGF-R1 and sVEGF-R2) in the plasma of AD patients and to examine its possible correlation with disease severity and plasma concentrations of PF-4, a platelet activation marker. Plasma concentrations of VEGF and its receptors and levels of PF-4 were measured by an immunoenzymatic assay in 51 AD patients and in 35 healthy non-atopic controls. The severity of the disease was evaluated using the eczema area and severity index. AD patients showed significantly increased VEGF and PF-4 plasma concentrations as compared with the controls. Plasma concentrations of sVEGF-R1 and sVEGF-R2 did not differ between the groups. There were no remarkable correlations between plasma VEGF concentration and disease severity or between VEGF and PF-4 concentration. This study shows that plasma concentration of VEGF may be increased in patients suffering from AD. It seems that plasma VEGF concentration is not a useful marker of disease severity and, apart from platelets, other cells might also release the cytokine.
Introduction
Atopic dermatitis (AD) is a chronic inflammatory skin disease which results from the interaction of skin barrier defects, Th1/Th2 cell dysregulation, and environmental factors. Histologically, it is characterized by dilated vessels and perivascular edema leading to erythema and edema [1]. Interestingly, based on the mouse model of AD, it has been observed that angiogenesis is the major pathologic feature of the disease [2]. It has also been suggested that mast cells in AD may stimulate neoangiogenesis via the release of proangiogenic factors [3]. It is known that the key role in vascular permeability, vasodilation and angiogenesis is played by vascular endothelial growth factor (VEGF) [4]. It may also stimulate inflammatory cell recruitment, enhance antigen sensitization and appear crucial for adaptive T(H)2 inflammation [5]. There are few data available on the role of VEGF in AD. Zhang et al. [6] demonstrated increased production of VEGF in AD lesions. Zablotna et al. [7] suggested an association between the -1154 VEGF gene polymorphism and AD. Therefore, the objective of our study was to evaluate concentrations of VEGF and its soluble receptors (sVEGF-R1 and sVEGF-R2) in plasma of patients with AD and to examine their possible correlation with disease severity. Because platelets are important sources of VEGF, we assessed the relationship between plasma concentrations of platelet factor 4 (PF-4), a platelet activation marker, and this cytokine.
Patients
Fifty-one patients who fulfilled the AD criteria as defined by Rajka and Langeland [8] were enrolled into the study. Their clinical and laboratory baseline characteristics are shown in Table 1. The patients were examined during the active period of the disease. Disease severity was assessed according to the eczema area and severity index (EASI) scoring system [9]. The majority of the patients (33) also suffered from persistent allergic rhinitis without any asthma symptoms. The remaining patients suffered from AD without any other atopic diseases such as asthma, rhinitis or conjunctivitis. All the patients were sensitized to house dust mite (HDM) allergens, showing positive skin tests to HDM (Dermatophagoides pteronyssinus and/or Dermatophagoides farinae) extracts and positive serology (specific IgE of class 2 or higher). They also showed positive skin prick tests to other inhalant and food allergens which, however, were not clinically significant.
The patients were not treated with any antihistamines, topical steroids or calcineurin inhibitors for at least 1 week before enrolment into the study (only emollients were applied). They were free of any systemic steroids during the preceding 8 weeks.
The patients were compared with 35 healthy non-atopic subjects (20 males, 15 females) aged 18-38 years (median 21 years). None of the subjects had any other concomitant dermatological or medical disorders.
All the subjects gave written consent and the study was approved by the University Committee of Ethics.
Blood samples and analytical methods
Because platelets are a potential source of PF-4 and VEGF, we measured VEGF concentration in platelet-poor plasma (PPP). Blood was obtained in the morning (07:00 to 08:00, in the fasting state) after a 25-min rest, at slight or no stasis, from the antecubital vein into CTAD tubes containing four anticoagulants: sodium citrate, theophylline, adenosine and dipyridamole (Vacutainers®, Becton-Dickinson) to obtain maximal stabilization of platelets, then placed into an ice/water bath. The tubes were then centrifuged at 3,000×g for 15 min at 4°C. Following the first centrifugation cycle, three-quarters of the top plasma was removed with a plastic transfer pipet. This plasma was centrifuged again at 3,000×g for 15 min to remove the residual platelets. The plasma obtained was stored at -70°C until assayed for VEGF and PF-4.
Measurement of PF-4 concentration in PPP was performed to assess the degree of platelet activation in vivo.
sVEGF-R1 and sVEGF-R2 concentrations were measured in plasma collected using EDTA as an anticoagulant.
VEGF analysis
VEGF plasma concentrations were determined using the Quantikine Human VEGF enzyme-linked immunosorbent assay (ELISA) (R&D Systems Inc., Minneapolis, MN, USA), which recognizes the soluble isoforms (VEGF121 and VEGF165). The detection limit was 9.0 pg/ml; values <9 pg/ml were equalized to zero.
sVEGF-R1 and sVEGF-R2 analyses
The receptor plasma concentrations were assayed by specific commercially available ELISA kits (Quantikine; R&D Systems Inc.) in accordance with the manufacturer's instructions. The sensitivities of the assays for sVEGF-R1 and sVEGF-R2 were 3.0 and 5.0 pg/ml, respectively.
PF-4 analysis
The PF-4 concentration was measured in the PPP by ELISA using the commercial Asserachrom® kit (Diagnostica Stago, France). The detection limit was 0.25 IU/ml.
Skin prick tests
Allergic status was evaluated using a panel of common inhalant and the main food allergens (Allergopharma, Reinbeck, Germany). The skin wheal-flare reaction was read after 15 min and considered positive if the wheal diameter was at least 3 mm larger than that formed by the control substance.
Other laboratory investigations
The serum levels of total immunoglobulin E (IgE) and specific IgE to D. farinae and D. pteronyssinus were measured by ELISA using a commercial kit (Allergopharma) according to the manufacturer's instructions. The blood platelet and eosinophil counts were determined using an automatic hematology analyzer.
Statistical analysis
Data are presented as medians and ranges. All the statistical evaluations were performed by the Mann-Whitney U test. The correlations between parameters were measured with the Spearman rank test. The results were considered significant when P < 0.05.
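For readers wishing to reproduce this style of analysis, a minimal sketch in Python/SciPy is given below. The group sizes match the study, but the values are simulated placeholders, not the study's measurements.

```python
# Illustrative sketch of the statistical tests described above (Python/SciPy).
# The data arrays are simulated placeholders, not the study's measurements.
import numpy as np
from scipy.stats import mannwhitneyu, spearmanr

rng = np.random.default_rng(0)
vegf_ad = rng.lognormal(mean=3.4, sigma=0.5, size=51)    # hypothetical AD group (pg/ml)
vegf_ctrl = rng.lognormal(mean=2.8, sigma=0.5, size=35)  # hypothetical controls (pg/ml)
pf4_ad = rng.lognormal(mean=1.7, sigma=0.4, size=51)     # hypothetical PF-4 values (IU/ml)

# Group comparison: two-sided Mann-Whitney U test
u_stat, p_group = mannwhitneyu(vegf_ad, vegf_ctrl, alternative="two-sided")

# Correlation between parameters: Spearman rank test
rho, p_corr = spearmanr(vegf_ad, pf4_ad)

print(f"Mann-Whitney U = {u_stat:.1f}, P = {p_group:.4f}")
print(f"Spearman rho = {rho:.2f}, P = {p_corr:.3f}")
print("significant" if p_group < 0.05 else "not significant")
```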
Results
VEGF plasma concentration was significantly higher in AD patients than in the healthy controls (31.2 and 17.2 pg/ml, respectively; P = 0.0007; Fig. 1). Plasma concentrations of sVEGF-R1 and sVEGF-R2 did not differ significantly between AD patients and healthy subjects (36.2 vs 35.8 and 8,145 vs 7,470 pg/ml, respectively). There were no significant differences in plasma concentrations of VEGF, sVEGF-R1 and sVEGF-R2 between AD patients with and without persistent allergic rhinitis. Plasma concentration of PF-4 was significantly increased in AD patients as compared with the controls (5.5 and 3.2 IU/ml, respectively, P = 0.0005; Fig. 2). No significant correlation was found between VEGF and PF-4 (r = 0.26, P = 0.6). There were no correlations between VEGF and sVEGF-R1 or sVEGF-R2 or between sVEGF-R1 and sVEGF-R2. In addition, no significant correlation was found between VEGF concentration and the counts of platelets and eosinophils (data not shown). Neither did we observe any significant correlation between plasma VEGF and serum concentration of total IgE and specific IgE anti-HDM (data not shown). Plasma VEGF concentration did not correlate significantly with EASI.
Discussion
These findings demonstrate for the first time that patients with AD may show significantly increased plasma concentration of VEGF. Because platelets are important sources of VEGF in the circulation [10,11], we performed the analysis in PPP. At present, the major sources and the role of increased plasma concentration of VEGF in AD patients are unknown and remain speculative.
Different cells involved in the pathogenesis of AD are able to synthesize and release VEGF.
Overproduction of VEGF in AD lesional keratinocytes has been demonstrated. The amount of VEGF produced in the lesions of AD was approximately 25 times higher than in normal stratum corneum; however, the mechanism is unclear [6]. It is therefore possible that keratinocytes in AD might release greater amounts of VEGF, which in turn could contribute to the subsequent increase in plasma concentration. Another important storage site is formed by the platelets, which release VEGF upon activation in vivo. It has been reported that platelet activation measured by plasma concentrations of PF-4 and beta-thromboglobulin is increased in AD patients [12][13][14][15], but not in other manifestations of atopic diathesis [16,17]. In addition, it has been suggested that chemokines are markers of AD severity [13]. Our study indicated no significant correlation between plasma concentrations of VEGF and PF-4, suggesting that platelets are not likely to be the sole source of VEGF in AD. It has been demonstrated that activated eosinophils may be an important source of the vascular permeability factor, which may contribute to tissue edema at the sites of allergic inflammation [18]. In addition, sVEGF-R1 is expressed by eosinophils, whose activation with VEGF stimulates directed migration and activation of eosinophils. Thus, VEGF may play an important role in the modulation of eosinophilic inflammation [19]. In our study, there was no significant correlation between plasma concentration of VEGF and eosinophil counts, suggesting that eosinophils could not be the sole source of VEGF in AD.
Other possible sources of VEGF include mast cells. Mast cells can secrete VEGF and such secretion is enhanced via upregulation of IgE receptor on mast cells [20]. Interestingly, it has been suggested that transfer of IgE from the circulating blood to extravascular tissue via endothelial cells may depend on the concentration of VEGF secreted from mast cells [21]. We did not observe any significant correlation between plasma VEGF and serum concentration of total IgE and specific IgE anti-HDM. Furthermore, other cells involved in immune inflammatory processes in AD can release VEGF. Different cell sources probably contribute to the subsequent increase in plasma concentration of VEGF.
Our results show that AD severity according to EASI does not correlate with plasma VEGF concentration. This might suggest the lack of an important link between the degree of skin inflammation and VEGF release. VEGF plasma concentration may not be a useful marker of disease severity.
It has been suggested that VEGF may play a role in the pathogenesis of AD and may regulate the development of AD lesions, possibly contributing to the persisting erythema and edema through prolonged capillary dilatation and hyperpermeability [6]. Apart from AD, increased expression of VEGF has been observed in patients suffering from other inflammatory skin diseases associated with enhanced vascularity and vascular hyperpermeability, including bullous pemphigoid, dermatitis herpetiformis and erythema multiforme [22]. Overexpression of VEGF and its receptors has been observed in delayed hypersensitivity skin reactions [23].
The significance of the increased concentration of circulating VEGF in AD is unclear. Whether circulating VEGF contributes directly or indirectly to AD pathogenesis or is merely a secondary phenomenon needs to be determined. Because VEGF is a multifunctional cytokine secreted by a variety of cells and is overexpressed in AD, it leads to the hypothesis that circulating VEGF may be involved in AD and provides a link between vascular permeability and leukocyte recruitment as well as activation at sites of the inflammation.
There was no correlation between plasma concentrations of VEGF and its soluble receptors (sVEGF-R1 and sVEGF-R2) in AD patients.
Considering that sVEGF-R1 is a negative regulator of VEGF availability (by sequestering the ligand and by forming inactive heterodimers with membrane-bound VEGF receptors), one could expect some changes in the plasma sVEGF-R1 concentration shown by AD patients. The lack of an increased plasma sVEGF-R1 concentration observed in this study may suggest a paradoxical response of sVEGF-R1, promoting VEGF activity. Such a phenomenon could hypothetically account for the disturbance of mechanisms responsible for VEGF activity in the VEGF/sVEGF-R1 system in AD patients. Our results are not sufficient, however, to draw any ultimate conclusions, particularly because data illustrating a correlation between the concentration of VEGF and its receptors are scarce, if not divergent.
On the other hand, the function of sVEGF-R2 is less well recognized, although in vitro it appears to be a weak antagonist of VEGF [24].
Conclusions
This study appears to provide the first reported evidence of an increased concentration of VEGF in PPP in atopic dermatitis; however, its role remains uncertain and further investigations should be undertaken for better recognition of its function. The involvement of VEGF in the inflammatory reaction of AD might be supported by evidence of a variety of biological effects exerted by VEGF on cells and processes that play a major role in AD. Different cell sources probably contribute to the subsequent increase in plasma concentration of VEGF. The major source of circulating VEGF in AD is still unknown. It seems that plasma VEGF concentration is not a useful marker of disease severity and, apart from platelets, other cells might also release the cytokine.
Indoxyl Sulfate as a Mediator Involved in Dysregulation of Pulmonary Aquaporin-5 in Acute Lung Injury Caused by Acute Kidney Injury
The high mortality of acute kidney injury (AKI) is associated with acute lung injury (ALI), a typical complication of AKI. Although it is suggested that dysregulation of lung salt and water channels following AKI plays a pivotal role in ALI, the mechanism of this dysregulation has not been elucidated. Here, we examined the involvement of a typical oxidative stress-inducing uremic toxin, indoxyl sulfate (IS), in the dysregulation of the pulmonary predominant water channel, aquaporin 5 (AQP-5), in bilateral nephrectomy (BNx)-induced AKI model rats. BNx evoked AKI with increases in serum creatinine (SCr), blood urea nitrogen (BUN) and serum IS levels, and the lungs exhibited thickening of interstitial tissue. Administration of AST-120, clinically-used oral spherical adsorptive carbon beads, resulted in a significant decrease in serum IS level and in interstitial thickening, accompanied by decreases in IS accumulation in various tissues, especially the lung. Interestingly, a significant decrease in pulmonary AQP-5 expression was observed in BNx rats. Moreover, the BNx-induced decrease in pulmonary AQP-5 protein expression was markedly restored by oral administration of AST-120. These results suggest that BNx-induced AKI causes dysregulation of pulmonary AQP-5 expression, in which IS could play a toxico-physiological role as a mediator involved in renopulmonary crosstalk.
Introduction
Acute kidney injury (AKI), a syndrome recognized as the sudden deterioration of renal function from several hours to a few days, causes derangement of homeostatic maintenance of the body's fluids and electrolytes [1]. AKI is characterized by increased levels of serum creatinine (SCr) and oliguria caused by functional or structural disturbances of the kidney, including abnormalities in blood, urine or tissues present for less than three months. Despite advances in understanding the pathophysiology, improvements in dialysis and supportive care, the mortality of AKI remains considerably high (ranging from 40% to 60%) [2]. The high mortality of AKI is associated with acute lung injury (ALI) or acute respiratory distress syndrome, which are typical complications of AKI [2]. Although it is well documented that lung injury is often associated with AKI and that lung dysfunction is highly correlated with death in patients with AKI [3], the mechanism underlying renopulmonary crosstalk has not been fully elucidated.
Several studies have suggested that dysregulation of lung salt and water channels following AKI plays a pivotal role in ALI [4,5]. Of various channels, it is known that aquaporin 5 (AQP-5) is a pulmonary predominant water channel and responsible for the majority of water transport across the apical membrane of type I alveolar epithelial cells [6,7]. A previous report has also shown that ischemic acute renal failure leads to downregulation of AQP-5 and the pulmonary epithelial sodium channel, Na+/K+-ATPase, and may modulate lung dysfunction and susceptibility to lung injury [8]. However, little is known about the mechanism of dysregulation of AQP-5 in the pathogenesis of ALI caused by AKI.
Uremic toxins, characterized as compounds retained as solutes in the serum that contribute to uremic syndrome, trigger a complex and variable symptomatology [9]. Indoxyl sulfate (IS), a putative low-molecular weight uremic toxin, is excreted in the urine under normal kidney function, but is retained in the blood circulation and various tissues during renal dysfunction in AKI and chronic kidney disease [10]. It is well documented that IS is exclusively generated in the liver through a metabolic process by several hepatic metabolizing enzymes, such as sulfotransferase (SULT) 1A1 [11,12]. IS in the blood circulation is efficiently taken up by renal proximal tubular cells via basolateral membrane-localized organic anion transporters, OAT1/SLC22A6 and OAT3/SLC22A8, and excreted into the urine via unidentified apical membrane-localized transporters [13]. Our previous studies showed that the increase in IS levels could be involved in the mechanism of the downregulation of renal organic ion transporters and central nervous system toxicities in cisplatin-induced AKI model rats [14,15]. It has also been shown that inhibition of IS production elicited a nephropreventive effect in the ischemic AKI model [16]. Moreover, various previous studies suggest that serum and tissue IS accumulation play crucial roles in the pathogenesis of AKI [17,18].
In this study, we developed the bilateral nephrectomy (BNx)-induced AKI rat model to examine the involvement of a typical oxidative stress-inducing uremic toxin, IS, in the dysregulation of the pulmonary predominant water channel, AQP-5, and elucidate the toxico-physiological role of IS as a mediator involved in renopulmonary crosstalk in the pathogenesis of ALI.
SCr, BUN and Serum Accumulations of IS
To determine the involvement of IS in the pathogenesis of ALI, we first sought to develop the bilateral nephrectomy (BNx)-induced AKI rat model. As shown in Figure 1, BNx evoked AKI with increases in SCr, blood urea nitrogen (BUN) and serum accumulation of IS at 48 h. In addition, oral administration of AST-120, clinically-used oral spherical adsorptive carbon beads for reducing the accumulation of uremic toxins, resulted in a significant decrease in serum IS level (Figure 1C).
Organ Accumulation of IS and Histological Changes of Lung Tissue
Because BNx caused AKI with a significant increase in serum IS level, we next attempted to determine whether IS was accumulated in various kinds of tissue by using the BNx rat model. As shown in Figure 2, BNx significantly increased IS accumulation in various organs, especially lung tissue. Consistent with the result showing the significant decrease in serum IS level (Figure 1C), administration of AST-120 resulted in a significant decrease in IS accumulation in lung tissue (Figure 2). Moreover, in association with the marked decrease in IS accumulation in lung tissue, BNx-induced thickening of interstitial tissue in the lung was obviously suppressed by oral administration of AST-120 (Figure 3), suggesting that IS accumulation in lung tissue may play important roles in the pathogenesis of ALI.
AQP-5 and Na+/K+-ATPase Protein Expressions of the Lung
It has been reported that dysregulation of lung salt and water channels following AKI plays a pivotal role in ALI [4,5]. To determine the involvement of IS in dysregulation of pulmonary predominant water channels in ALI, we next examined AQP-5 and Na+/K+-ATPase protein expressions of lung in BNx rats. Western blot analysis showed the significant decrease in AQP-5 expression of lung in BNx rats (Figure 4A,B). Interestingly, the BNx-induced significant decrease in AQP-5 expression was obviously restored by oral administration of AST-120 (Figure 4A,B). By contrast, no significant change of Na+/K+-ATPase protein expression was observed. Moreover, immunohistochemical analysis confirmed that the BNx-induced decrease in AQP-5 protein expression in lung tissue was also restored by oral administration of AST-120 (Figure 4C,D), suggesting that IS accumulation in lung tissue may play crucial roles in the pathogenesis of ALI through dysregulation of pulmonary AQP-5 expression.
Discussion
Despite advances in understanding the pathophysiology, improvements in dialysis and supportive care, the mortality of AKI remains considerably high [2]. Although the high mortality of AKI is associated with ALI, which is a typical complication of AKI, the molecular pathogenesis of ALI has yet to be determined. In the present study, we showed that BNx-induced AKI caused IS accumulation in the lung tissue, which in turn may lead to ALI progression via dysregulation of pulmonary AQP-5 expression.
One of the interesting findings in this study is that IS may play a toxico-physiological role as a mediator involved in renopulmonary crosstalk. It has been reported that IS concentrations in lung were markedly elevated in both 5/6 nephrectomized rats and cisplatin-induced AKI rats [19,20]. Consistent with those previous reports, our results also showed that, in association with the marked increase in serum IS concentration, BNx significantly increased IS accumulation in various organs, especially lung tissue (Figure 2). It is to be noted that a significant increase in serum accumulation of IS was already observed at 4 h (Figure S1A). Moreover, IS accumulation in lung already showed a tendency to increase at 4 h (Figure S1B). It is known that AST-120 can reduce the accumulation of uremic toxins, such as IS, by restriction of protein intake in the intestine [21,22]. Because oral administration of AST-120 indeed restored the BNx-induced thickening of interstitial tissue following IS accumulation in the lung (Figure 3), ALI progression may be associated with the increase in serum IS concentration caused by renal injury through renopulmonary crosstalk. It should be noted that SCr and BUN were not restored by AST-120 treatment (Figure 1A,B), since there was likely no renoprotective effect caused by decreasing serum IS concentration due to bilateral nephrectomy.
A previous report has shown that ischemic acute renal failure leads to downregulation of the pulmonary epithelial sodium channel, Na+/K+-ATPase and AQP-5 and may modulate lung dysfunction and susceptibility to lung injury [8]. Those findings suggest that dysregulation of lung salt and water channels following AKI may play a pivotal role in ALI [3][4][5]. Our results showed that IS accumulation triggered the dysregulation of AQP-5 protein expression in lung tissue (Figure 4). Moreover, both Western blot and immunohistochemical analyses showed that the BNx-induced significant decrease in pulmonary AQP-5 expression was obviously restored by oral administration of AST-120 (Figure 4), suggesting that IS accumulation in lung tissue may play pivotal roles in the pathogenesis of ALI through dysregulation of pulmonary AQP-5 expressed in alveolar epithelial cells. Meanwhile, no significant change of Na+/K+-ATPase protein expression was observed in BNx rats. Since it has been reported that Na+/K+-ATPase protein expression was significantly changed by ischemic acute renal failure [8], regulation of Na+/K+-ATPase protein expression is likely mediated by ischemia-related factors, such as inflammatory mediators, rather than uremic toxins. Several studies have shown that IS accumulation is associated with not only dysregulation of lung water channels, but also various risk factors, such as reactive oxygen species, transforming growth factor-β1, tissue inhibitor of metalloproteinase-1, intracellular adhesion molecule-1 and plasminogen activator inhibitor-1 [23][24][25][26][27]. In addition, it has been reported that IS upregulates monocyte chemotactic protein-1 expression through the production of reactive oxygen species and activation of the MAPK and JNK pathway [28]. Because it is also documented that p38 MAPK and JNK activation downregulate AQP-5 expression in alveolar epithelial cells [29], IS accumulation may cause dysregulation of pulmonary AQP-5 expression by activating the p38 MAPK and JNK pathway, which in turn leads to thickening of interstitial tissue in the lung. Moreover, the alveolar epithelium expresses not only AQP-5, but also AQP-3 and AQP-4 [30]. Therefore, future studies will focus on determining the molecular mechanism of IS-induced dysregulation of AQP-5 and also further exploring the other factors involved in IS-induced ALI.
It is documented that IL-6 contributes to AKI-mediated lung injury, potentially via effects on lung production of chemokines [31]. In addition to the fact that IL-6 is significantly elevated in patients with AKI, circulating IL-6 levels could be used as a prognostic marker in patients with ALI [31]. To further determine the involvement of IL-6 in IS accumulation and dysregulation of pulmonary AQP-5, we measured serum IL-6 concentration in BNx rats at 4 h, the time point showing the highest serum IL-6 levels in a previous study [32]. As shown in Figure S2A, BNx significantly increased serum accumulation of IS, and AST-120 treatment decreased serum IS levels even at 4 h. However, the BNx-induced serum IL-6 elevation was not suppressed by AST-120 treatment (Figure S2B), suggesting that IL-6 may be involved in the pathogenesis of ALI independently of IS. Based on previous studies, IL-6 may cause ALI progression through histological damage or neutrophil infiltration [27], rather than dysregulation of pulmonary AQP-5.
In conclusion, our results suggest that IS accumulation in lung tissue may play crucial roles in the pathogenesis of ALI in BNx rats. BNx-induced AKI caused dysregulation of pulmonary AQP-5 expression, in which IS could play a toxico-physiological role as a mediator involved in renopulmonary crosstalk. This finding may bring new insights into the understanding of ALI pathogenesis and may provide useful basic information for establishing a new therapeutic strategy to prevent the high mortality of AKI associated with ALI.
Chemicals
IS was obtained from Sigma-Aldrich Co. (St. Louis, MO, USA). AST-120 was kindly provided by Daiichi Sankyo Co., Ltd. (Tokyo, Japan). Carboxymethyl cellulose (CMC) and methanol were obtained from Wako Pure Chemical Industries, Ltd. (Osaka, Japan). All chemicals used in this study were of analytical grade and commercially available.
Animal Experiments
All procedures for animal experiments were approved by the Kumamoto University ethical committee concerning animal experiments (Identification code: A 27-045, Approval date: 01/04/2015) and animals were treated in accordance with the Guidelines of the United States National Institutes of Health regarding the care and use of animals for experimental procedures and the Guidelines of Kumamoto University for the care and use of laboratory animals. Male Sprague-Dawley (SD) rats at 6 weeks of age were housed in a standard animal maintenance facility at a constant temperature (22 ± 2 °C) and humidity (50%-70%) and a 12/12-h light/dark cycle for about a week before the day of the experiment, with food and water available ad libitum. Rats were anesthetized using sodium pentobarbital (50 mg/kg intraperitoneally) and placed on a heating plate (39 °C) to maintain a constant temperature. All surgery was conducted under anesthesia with pentobarbital, and all efforts were made to minimize animal suffering. The kidneys of male SD rats at 6 weeks of age were exposed via midline abdominal incisions. In the bilateral nephrectomy model, both renal pedicles were tied off with a suture and then cut distal to the suture. The ureters were pinched off with forceps, and the kidneys were removed as previously reported [33]. Sham animals (control) underwent anesthesia, laparotomy and renal pedicle dissection only. Rats were divided into three groups as follows: sham-operated rats (control rats), CMC-administered rats with BNx and AST-120-administered rats with BNx. AST-120 (2.5 g/kg) was orally administered to rats 24 and 1 h before and 24 h after BNx. Blood was collected 4 or 48 h after BNx from the abdominal aorta and centrifuged at 3000× g for 10 min to obtain the serum sample. Methanol (100 µL) was added to serum (50 µL), and the mixture was centrifuged at 13,000 rpm for 10 min at 4 °C. The obtained supernatant (50 µL) was diluted with HPLC mobile phase solution (300 µL) and centrifuged at 13,000 rpm for 5 min at 4 °C. The supernatant was used for HPLC determination of IS concentration. Lung, liver, heart and intestine were harvested 48 h after BNx and homogenized in phosphate-buffered saline (pH 7.4) using a Polytron PT3000 (Kinematica AG, Lucerne, Switzerland). After centrifugation at 3000 rpm for 10 min at 4 °C, the obtained supernatant was used for the HPLC assay of IS concentration. Lung samples were fixed in 10% buffered formaldehyde and embedded in paraffin for H&E staining and immunohistochemistry. Levels of SCr (enzymatic method) and BUN (uricase ultraviolet (UV) method) were then measured.
High-Performance Liquid Chromatography Determination of IS Concentration
HPLC was performed according to a previous report with some modifications [14,34]. The HPLC system consisted of a Shimadzu LC-10ADVP pump and a Shimadzu RF-10AXL fluorescence spectrophotometer. A LiChrospher® 100 RP-18 column (Merck KGaA, Darmstadt, Germany) was used as the stationary phase, and the mobile phase consisted of acetate buffer (0.2 M, pH 4.5). The flow rate was 1.0 mL/min at a column temperature of 40 °C. The presence of IS in the eluate was monitored by means of a fluorescence detector (excitation 280 nm, emission 375 nm).
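As an illustration of the quantification step, the sketch below back-calculates IS concentrations from peak areas via a hypothetical linear standard curve. Only the dilution factors follow the sample preparation described above; the calibration points and sample peak area are invented for illustration.

```python
# Hypothetical sketch of back-calculating IS concentrations from HPLC peak
# areas via a linear standard curve. The calibration points and the sample
# peak area are invented; only the dilution steps (50 uL serum + 100 uL
# methanol, then 50 uL supernatant + 300 uL mobile phase, i.e. roughly
# 3 x 7 = 21-fold) follow the protocol described in the text.
import numpy as np

std_conc = np.array([5.0, 10.0, 25.0, 50.0, 100.0])       # IS standards (ug/mL), assumed
std_area = np.array([1.2e4, 2.4e4, 6.0e4, 1.2e5, 2.4e5])  # fluorescence peak areas, assumed

slope, intercept = np.polyfit(std_conc, std_area, 1)       # least-squares calibration line

def area_to_serum_conc(peak_area: float, dilution: float = 21.0) -> float:
    """Concentration in the injected sample, corrected back to serum."""
    return (peak_area - intercept) / slope * dilution

print(f"serum IS ~ {area_to_serum_conc(3.0e4):.0f} ug/mL")
```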
Western Blot Analysis
Western blot was performed according to a previous report with some modifications [16,19,35]. Kidneys were homogenized in an ice-cold homogenization buffer consisting of 230 mM sucrose, 5 mM Tris (hydroxymethyl) aminomethane hydrochloride (Tris-HCl) (pH 7.5), 2 mM ethylenediaminetetraacetic acid, 0.1 mM phenylmethanesulfonyl fluoride, 1 µg/mL leupeptin and 1 µg/mL pepstatin A. After measuring protein content using a bicinchoninic acid (BCA) protein assay reagent (Thermo Fisher Scientific, Waltham, MA, USA), each sample was mixed in loading buffer (2 w/v % sodium dodecyl sulfate (SDS), 125 mM Tris-HCl pH 7.2, 20 v/v % glycerol and 5 v/v % 2-mercaptoethanol) and heated at 95 °C for 2 min. The samples were subjected to sodium dodecyl sulfate-polyacrylamide gel electrophoresis using a 7.5% gel and transferred onto a polyvinylidene difluoride membrane (Immobilon-P; EMD Millipore, Billerica, MA, USA) by semi-dry electroblotting. The membrane was blocked for 1 h at room temperature with 2 v/v % ECL Advance Blocking Agent (GE Healthcare UK Ltd., Little Chalfont, UK) in 50 mM Tris-buffered saline (pH 7.6) containing 0.3 v/v % Tween 20, and then incubated for 1 h at room temperature with a primary antibody specific for rAQP-5 (Alpha Diagnostic, San Antonio, TX, USA) or Na+/K+-ATPase (Upstate Biotechnology, Inc., Lake Placid, NY, USA). The blots were then washed with Tris-buffered saline containing Tween 20 before incubation with the secondary antibody (horseradish peroxidase-labeled anti-rabbit immunoglobulin F(ab)2 or horseradish peroxidase-linked anti-mouse immunoglobulin F(ab)2) (GE Healthcare Ltd., Chicago, IL, USA) for 1 h at room temperature. Immunoblots were visualized with an ECL system (ECL Advance Western Blotting Detection Kit; GE Healthcare Ltd., Chicago, IL, USA).
Histochemical Staining
Histochemical staining was performed according to a previously-described report with some modifications [16,36,37]. Paraffin-embedded specimens were cut into 6-µm sections and mounted on glass slides. After deparaffinization of the sections, rehydration and pretreatment with a microwave for 2 × 10 min in citrate buffer (pH 6.0), washing in phosphate-buffered saline (PBS, pH 7.4) was followed by the blocking of endogenous peroxidase with 0.3% H2O2 for 15 min and blocking in Blocking One Histo (Nacalai Tesque, Kyoto, Japan) for 15 min at room temperature. The primary antibody specific for rAQP-5 (Alpha Diagnostic, San Antonio, TX, USA) was incubated overnight at 4 °C, followed by washings in PBS. The secondary antibody (horseradish peroxidase-labeled anti-rabbit immunoglobulin F(ab)2) was incubated for 1 h at room temperature, followed by washings in PBS. DAB solution (Dako, Tokyo, Japan) was then added for coloration for 15 min at room temperature, followed by counterstaining with hematoxylin. For histology, the sections were deparaffinized and stained with hematoxylin-eosin (H&E). Pathological changes of lung tissue in BNx rats were assessed by hemorrhage, a hallmark of ALI associated with increased lung vascular permeability [38]. Quantitative analysis of immunohistochemical images and pathological changes of lung tissue was performed using WinRoof V7.4 (MITANI Corporation, Tokyo, Japan), which performed automated particle analysis in a measured area as described previously [39].
Statistical Analysis
Data were analyzed statistically by analysis of variance, followed by Scheffé's multiple comparison test. A p-value of <0.05 was considered statistically significant. All data are represented as the mean ± standard deviation (SD).
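A minimal sketch of this analysis pipeline is shown below. The group values are invented placeholders, and the Scheffé comparison is implemented by hand, since SciPy provides only the overall one-way ANOVA.

```python
# Minimal sketch of the analysis described above: one-way ANOVA followed by
# Scheffé's multiple comparison test. Group values are invented placeholders.
import numpy as np
from scipy.stats import f_oneway, f as f_dist

groups = {
    "sham":          np.array([3.1, 2.8, 3.4]),   # hypothetical measurements
    "BNx":           np.array([7.9, 8.6, 8.2]),
    "BNx + AST-120": np.array([5.0, 4.6, 5.3]),
}
data = list(groups.values())
k = len(data)
n_total = sum(len(g) for g in data)

# Overall one-way ANOVA
f_stat, p_anova = f_oneway(*data)
print(f"ANOVA: F = {f_stat:.2f}, P = {p_anova:.4f}")

# Within-group mean square (MSE) needed for Scheffé's test
ss_within = sum(((g - g.mean()) ** 2).sum() for g in data)
mse = ss_within / (n_total - k)
f_crit = f_dist.ppf(0.95, k - 1, n_total - k)

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        gi, gj = data[i], data[j]
        # Scheffé statistic: significant if it exceeds (k-1) * F_crit
        stat = (gi.mean() - gj.mean()) ** 2 / (mse * (1 / len(gi) + 1 / len(gj)))
        sig = stat > (k - 1) * f_crit
        print(f"{names[i]} vs {names[j]}: {'p < 0.05' if sig else 'n.s.'}")
```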
Figure 3.
Figure 3. Histological changes of lung tissue in BNx rats. (A) H&E-stained sections of the lung tissue of sham-operated rats (sham) and BNx rats with (BNx + AST-120) or without (BNx) oral administration of AST-120 (2.5 g/kg) at 48 h. Scale bars represent 20 µm; (B) Quantitative analysis of histological changes in lung tissue. Each column represents the mean ± SD for three rats in each group. * p < 0.05 versus BNx.
Optimization approaches for defining storage strategies in maritime container terminals
In maritime container terminals, yards have a primary role in permitting the efficient management of import and export flows. In this work, a mixed 0/1 linear programming model and a heuristic approach are proposed for defining storage rules in order to minimize the space used in the export yard. The minimization of land space is pursued by defining the rules to allocate containers into the bay-locations of the yard, in such a way as to minimize the number of bay-locations used and the empty slots within them. The main aim of this work is to propose a solution approach that permits the yard manager to compare yard storage strategies for different transport demands, so as to be able to evaluate and, eventually, modify the storage strategy when the characteristics of the transport demand change. Computational experiments, based on both real and generated instances, are presented. All instances are derived from a case study related to an Italian terminal.
Introduction and literature review
Maritime container terminals are generally recognized as crucial intermodal nodes in logistic chains, handling the greater part of world sea trade, about 80% of the total (UNCTAD, 2018).
Storage yards have a primary role in permitting the efficient management of import and export flows (Carlo et al. 2014) and, in recent years, thanks to advancements in quayside equipment and technologies, the bottleneck of port operations seems to have moved from the quayside to the yard side (Tan et al. 2017). This means that the typical operations performed in the yard, such as the storage and retrieval of containers and the dispatching and routing of material handling equipment, must be managed with improved efficiency, so as not to compromise the efficiency of the whole terminal system and the competitiveness of the terminal in the logistic chain.
The yard, the intermediate area between the quayside and the landside of a terminal, is used to store, control and handle containers and occupies a considerable part of the terminal area. It is usually divided into separate zones for inbound and outbound containers, corresponding to the import and export processes, respectively.
The container yard is divided into numerous Blocks, each composed of a given number of Bays. Each bay is formed by several Rows, and containers are stacked in Tiers. A container position in the yard is thus identified by three indicators: Bay, Row and Tier. In modern container terminals, the maximum stacking height in a block is 4 tiers and the utilization ratio ranges from 70% to 90%.
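As a rough illustration (not taken from the paper), the Bay/Row/Tier addressing scheme and the utilization ratio can be expressed as follows; the names are ours, while the 4-tier limit and the 70-90% range come from the text.

```python
# Illustrative sketch of the Bay/Row/Tier addressing scheme described above.
# Names and the capacity check are ours; the 4-tier limit comes from the text.
from dataclasses import dataclass

MAX_TIER = 4  # modern terminals stack at most 4 tiers per block

@dataclass(frozen=True)
class Slot:
    bay: int
    row: int
    tier: int

    def __post_init__(self):
        if not (1 <= self.tier <= MAX_TIER):
            raise ValueError(f"tier must be in 1..{MAX_TIER}")

def utilization(occupied_slots: int, bays: int, rows: int) -> float:
    """Share of the block's slot capacity in use (typically 70-90%)."""
    return occupied_slots / (bays * rows * MAX_TIER)

print(Slot(bay=3, row=2, tier=1))
print(f"utilization = {utilization(58, bays=4, rows=5):.0%}")
```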
Blocks can be positioned either perpendicular or parallel to the quay, and the location of the input/output container points (i.e. points for the exchange of containers between transfer vehicles and yard cranes) can be either at the end of the blocks or in the middle; thus, two layout configurations are possible, generally known as the European and Asian layouts, respectively. (More details can be found in Carlo et al. 2014; Wiese et al. 2010.) As far as storage strategies are concerned, most of the literature is devoted to export containers. Many terminals store containers in the yard according to their loading vessels. In this case, the terminal has to assign sub-blocks to vessels and then organize the storage of containers inside every sub-block. This problem is known as yard template planning (Moorthy and Teo 2006; Zhen 2014), and it represents a tactical-level decision problem. Yard template planning has generally been solved under deterministic assumptions (i.e. the number of containers to load on a vessel is known).
At the operational level, given a yard template, the terminal solves the storage allocation problem (Zhang et al. 2003; Lee and Tan 2006), generally based on the sub-blocks. Some authors refer to these sub-blocks as loading clusters. In Yu et al. (2020), the authors model the choice of loading clusters in such a way as to obtain a more flexible allocation strategy for organizing the space in the export yard. They describe in detail the concept of loading clusters and loading operations, and the link between these two important activities. In He et al. (2020) and Tan et al. (2019), the authors try to develop more flexible yard management by simultaneously determining the size of the loading clusters and their allocation to specific blocks. A loading cluster is a stretch of bays in a specific yard block. The word loading is used to stress the importance of coordinating the bay configuration in the yard with the slots on the ship in a given bay. The ideal situation is to have the containers in the same yard stack put in the same ship bay. Thus, the yard manager has to optimize the choice of loading clusters while considering their loading operations. In Ambrosino and Sciomachen (2003), the authors evaluate the impact of the yard organisation on container loading operations by computing the total stowage time when different picking sequences are considered. In Han et al. (2008), the authors optimize the yard template and the yard storage allocation problems simultaneously.
More recent papers deal with robust yard templates facing uncertainty (Zhen 2014). In Petering (2009), the authors evaluate how block widths affect terminal performance by means of a discrete event simulation model, while Petering and Murty (2009) show how the length of the blocks in the storage yard affects it.
The template of the terminal is organized according to the handling equipment used. In the analysed literature, the terminals use Rubber Tyred Gantry (RTG) cranes. The template of a terminal using reach stackers for picking up export containers is quite different, since the blocks are operated from one side. The pick-up operations and the number of re-handles needed to pick up a container are affected by the type of terminal equipment used.
In this paper, we consider a terminal with blocks parallel to the quay, where the import and export yards are independent, and we deal with standard export containers. Handling operations in the export yard are performed by reach stackers.
The yard template is given; this means that the export yard is organized in blocks of different capacities and, for each vessel, there is a subset of dedicated blocks, that is, the containers that will be loaded on that vessel must be stored in the dedicated subset of blocks. Containers can be stored in the dedicated blocks under different storage strategies. From now on, we consider only the subset of blocks dedicated to a vessel and the containers that must be loaded on it.
Each container is characterized by its type, size, weight and destination; these characteristics are important when defining the storage strategy. The ideal rule is to store together containers having the same characteristics, to reduce the operation time and avoid a bottleneck in the terminal when loading the ship (Zhang et al. 2003; Saanen and Dekker 2007). This strategy is known as the consignment strategy. Note that this strategy requires large storage space (for example, more than random policies; De Koster et al. 2007), but on the other hand it permits improving the storage yard operations during vessel loading, in terms of productivity of both pick-up operations in the bays and movements of material handling equipment among bays. Note that when a random policy is used, another strategy follows to improve the efficiency of the loading process; this can be either a pre-marshalling strategy, which reorganizes the container stacking beforehand in order to reduce reshuffles, or a re-marshalling strategy, which moves containers from their current storage location to a location closer to their vessel. Generally this happens in European layout terminals.
Among the papers dealing with storage allocation for export containers, in Kim et al. (2000) a consignment strategy, based on Light, Medium and Heavy weight classes, destination and size, is used to decide an exact slot for each container; Kim and Park (2003) try to increase the loading operation efficiency by considering the travel distances of equipment, while in other work the optimal storage location is determined taking the container handling schedules into consideration.
In Woo and Kim (2011), four rules to determine the number of blocks in which to allocate groups of export containers are proposed. The rules are fixed, and the main aim is to optimize the movements of yard equipment and the distance between the yard and the quay. Moreover, the authors evaluate the influence of the yard size on the efficiency of loading operations.
In earlier work, an optimization model for defining storage strategies for export containers was proposed, focusing on the definition of the rules of the consignment strategy with the aim of minimizing the space used.
Starting from that model, in the present paper a new formulation is proposed for defining the best allocation of containers to storage spaces, while simultaneously defining the best consignment strategy to use.
The main purpose of this work is to propose a solution approach, based on a mathematical model, able to determine the best set of rules for defining which containers to store together, while determining the loading cluster for each group of containers. From a managerial point of view, the proposed approach permits the yard manager to compare storage strategies and, in particular, to evaluate and, eventually, change the storage strategy when the characteristics of the transport demand change.
The remainder of this paper is organized as follows. The problem under investigation is described in Sect. 2. The 0-1 linear model is presented in Sect. 3, while the solution approach is described in Sect. 4. Finally, the experimental tests are reported in Sect. 5 and conclusions and future works are outlined in Sect. 6.
The general context
Let us consider an export yard and the blocks dedicated to a particular vessel. Blocks are characterized by different capacities, depending on bays, rows and tiers. Generally, the number of rows ranges from 2 to 5, while 4 tiers are considered.
A bay-locations is a set of cells belonging to the same bay of a block. Thus, the capacity of each bay-locations varies according to the number of rows in the block; the possible capacities are 8, 12, 16 and 20 containers. Figure 1 represents two different blocks with several bays. The yellow part is a block composed of 4 bays, each one characterized by 2 rows; thus the capacity of each bay-locations is 8. Meanwhile, the blue part is a block composed of bay-locations with 3 rows, and thus the capacity of each bay-locations is 12.
Note that we refer to 20' bay-locations. For the storage of 40' containers, two contiguous 20' bay-locations are required. Figure 2 shows how a block composed of 6 bay-locations (Block1 in Fig. 2a) can be used when both 20' and 40' containers have to be stored: the block can be used for the storage of 20' containers, with all 6 bay-locations occupied, or pairs of contiguous bay-locations can be dedicated to 40' containers. Summarizing, the yard consists of a given number of 20' bay-locations (here called simply bay-locations) of different capacities.
The yard manager assigns containers to the bay-locations following the storage rules adopted by the terminal. The storage rules consist of a list of characteristics that containers must share to be stored together. These rules ensure homogeneous containers in each bay-locations, i.e. containers that can be picked up in sequence for their loading on board of the vessel, optimizing the work of the reach stackers during pick-up in the yard (it is generally preferred to complete the pick-up process in a bay-locations and empty it before moving the reach stacker to another bay-locations).
The most common characteristics used when defining a storage strategy are the following:
• Size: 20 feet (20') and 40 feet (40') containers; only stacks (and bay-locations) of one size (i.e. either 20' or 40') are permitted.
• Type: standard containers, i.e. 20' and 40' box and 40' HC containers (special containers follow different rules derived directly from the particular requirements for their storage: plugs for reefers, special locations for hazardous and out-of-gauge cargo, etc.).
• Destination: containers are grouped by their destination. Containers on board are generally grouped by homogeneous port of discharge, i.e. either a bay of the vessel or a part of it is dedicated to containers for the same destination.
• Weight: containers stored in the same bay have similar weights, to respect the safety requirement that the weight of a container stored in a given tier must be no greater than the weight of the container stored in the tier below it, within a given tolerance. Many terminals group containers according to weight classes, i.e. containers belonging to the same weight class can be stored together; the most common configuration is based on three classes: Light, Medium and Heavy. For each class, lower and upper weight limits are given; for example, the weight of containers in the Medium class ranges from 15 to 25 tons, containers in the Light class weigh less than 15 tons, while containers heavier than 25 tons belong to the Heavy class (a small sketch of this grouping follows the list).
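As announced above, the following minimal Python sketch (ours, not part of the paper) illustrates how the weight-class grouping and the resulting storage pattern of a container can be computed; the class bounds follow the Light/Medium/Heavy example in the text, and the destination name is purely illustrative.

    # Grouping by the three-class example (Medium ranges from 15 to 25 tons).
    def weight_class(weight_tons: float) -> str:
        if weight_tons < 15:
            return "Light"
        if weight_tons <= 25:
            return "Medium"
        return "Heavy"

    # A storage pattern is the tuple of characteristics stored together.
    def pattern(destination: str, ctype: str, size: int, weight_tons: float):
        return (destination, ctype, size, weight_class(weight_tons))

    print(pattern("Genoa", "Box", 20, 18.3))  # ('Genoa', 'Box', 20, 'Medium')

Containers sharing the same pattern are candidates for the same bay-locations; changing the class configuration or the bounds changes both the number of patterns and the number of containers per pattern.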
It is easy to understand that the elements with the greatest impact on space utilization are the following:
• the yard template and layout of blocks: the capacities of the bay-locations and the number of bay-locations of each capacity;
• the consignment strategy: the rules adopted impact the number of containers of each group and the number of groups to manage (as shown in Fig. 3). The number of groups to manage corresponds to the required bay-locations' patterns.
In Fig. 3, containers are grouped by their destination, their type (Box or High Cube), their size and their weight class. For each destination, nine patterns must be managed. The higher the number of patterns to manage, the more yard space is required. The required space is also a function of the bay-locations' capacities. How is it possible to act on these elements to reduce the yard space without penalising the efficiency of loading operations?
The number of weight classes used for defining the storage rules has a direct impact on the number of patterns, while acting on the weight limits of each class modifies the number of containers in each pattern, and thus the number of bay-locations required for each pattern. Hence, these two elements can have a great impact on the space used in the yard, and we have decided to investigate the possibility of optimizing both the number of weight classes to use and the lower and upper weight bounds of each class.
In the following, we will refer to the number of weight classes as the Class configuration; that is, 3 class configurations can be chosen when defining the storage strategy: containers can be grouped into 2 weight classes (i.e. Light and Heavy), 3 weight classes (i.e. Light, Medium and Heavy) or 4 weight classes (i.e. Light, Medium, Heavy and Extra). Then, the best set of lower and upper bounds for each weight class of the chosen class configuration must be adopted. This means that for each class configuration many weight limits, here called Weight configurations, are possible and only one can be adopted in the storage strategy.
The analysed context
As explained above, the key elements of a storage strategy are the class configuration (i.e. the number of weight classes used to split containers) and the weight limits associated with each class (lower and upper bounds), together with container characteristics such as destination, type and size. In very general terms, the problem under investigation can be described as follows. Given the export yard blocks, characterized by a set of bay-locations of different capacities dedicated to storing the containers waiting for loading on a specific vessel, and given a set of containers representative of the average transport demand for the considered vessel, the problem consists in deciding the class configuration and the weight configuration to use for grouping the containers, in order to minimize the space used in the export yard.
Let us now introduce the problem in more detail. As far as the yard is concerned, the following elements are given: the set of bay-locations of the blocks and the number of bay-locations having a given capacity; the set of the possible class configurations and weight configurations, together with the weight limits. Moreover, each container to store in the yard is characterized by its size, type, weight and destination.
The problem consists in deciding the assignment of each container to a specific bay-locations, while simultaneously determining the class configuration of the blocks dedicated to the vessel under investigation and its weight limits, as well as the characteristics of each bay-locations in terms of destination, type, size, capacity (among the set of capacities available in the blocks of the yard) and weight class (among those belonging to the chosen class configuration), in order to respect the yard capacity and minimize both the number of bay-locations used and the total number of empty slots.
Note that this problem emerges at the tactical level, for defining rules to use in the operative context. These rules are not fixed once and forever; the idea is to modify them following the trend of the export flow demand. Consider, for example, a service served by the terminal, and suppose that the number of containers for a destination of this service increases; we are interested in observing which groups of containers increase (i.e. 20' box containers, heavy 20' ones, etc.). Only in this way are we able to decide whether the existing rules are adequate or not, and in the latter case, how to modify them thanks to an optimization approach. The model and the solution approach useful for performing this analysis are presented in the following sections.
The mathematical model
In this section, a basic 0-1 linear programming model to solve the problem described in Sect. 2.2 is presented. The useful notation is the following:
h_i: height of container i, ∀i ∈ C
w_i: weight of container i, ∀i ∈ C
u_p: weight upper bound of weight limit p, ∀p ∈ P
l_p: weight lower bound of weight limit p, ∀p ∈ P
δ_fw: ∀f ∈ F, ∀w ∈ W, equal to 1 if the weight configuration w belongs to class configuration f, 0 otherwise
γ_wp: ∀w ∈ W, ∀p ∈ P, equal to 1 if the weight limits p belong to configuration w, 0 otherwise
α: weight used in the objective function for penalising the empty slots in the bay-locations
Here C denotes the set of containers, B the bay-locations, D the destinations, H the heights (types), S the sizes, Q the capacities, F the class configurations, W the weight configurations and P the weight limits.
Let us introduce the decision variables of the model; in particular, z_j denotes the number of empty slots in bay-locations j.
The resulting model minimizes the objective function (1), which sums, over j ∈ B, d ∈ D, h ∈ H, s ∈ S and q ∈ Q, the number of bay-locations used and penalizes, through the weight α, the empty slots in the bay-locations, subject to constraints (2)-(13) described below.
Thanks to constraints (2), each container must be stored in exactly one bay-locations. Constraints (3) assign at most one destination, one size, one type and one capacity to each bay-locations. Constraints (4) verify that the number of containers assigned to a bay-locations is less than or equal to the capacity assigned to it.
The yard capacity, in terms of the number of bay-locations of the different capacities available (i.e. 8, 12, 16, 20 containers), is verified thanks to (5).
Constraints (6), (7) and (8) refer to the choice of a class configuration together with a weight configuration.
Only one pair of weight limits can be assigned to each bay-locations (9), and thanks to (10), the weight limits are assigned to each bay-locations following the weight configuration chosen for the blocks of the yard. Thanks to (11) and (12), a container can be assigned to a bay-locations only if its weight is within the maximum and minimum weight limits imposed on the bay-locations by the pair of weight limits assigned to it through (9). In (13), the number of empty slots in each bay-locations is computed.
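To make the formulation concrete, the following is a minimal sketch, in Python with the PuLP library, of a simplified variant of model (1)-(13): a single destination, type and size, one bay-locations capacity, and a fixed set of weight limits. All names and the toy data are ours, not the paper's; the "cf." comments only indicate which constraint family each line loosely corresponds to.

    import pulp

    weights = [8, 12, 14, 18, 22, 27, 30]     # container weights (tons), illustrative
    limits = [(0, 15), (15, 25), (25, 99)]    # Light / Medium / Heavy bounds
    Q, ALPHA = 4, 0.1                         # bay-locations capacity, empty-slot penalty
    C, B, P = range(len(weights)), range(4), range(len(limits))

    m = pulp.LpProblem("storage", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (C, B), cat="Binary")   # container i in bay j
    y = pulp.LpVariable.dicts("y", (B, P), cat="Binary")   # bay j uses weight limits p
    u = pulp.LpVariable.dicts("u", B, cat="Binary")        # bay j is used
    z = pulp.LpVariable.dicts("z", B, lowBound=0)          # empty slots in bay j

    m += pulp.lpSum(u[j] for j in B) + ALPHA * pulp.lpSum(z[j] for j in B)  # cf. (1)
    for i in C:
        m += pulp.lpSum(x[i][j] for j in B) == 1                    # cf. constraints (2)
    for j in B:
        m += pulp.lpSum(y[j][p] for p in P) == u[j]                 # cf. (3)/(9)
        m += pulp.lpSum(x[i][j] for i in C) <= Q * u[j]             # cf. (4)
        m += z[j] == Q * u[j] - pulp.lpSum(x[i][j] for i in C)      # cf. (13)
        for i in C:                                                 # cf. (11)-(12)
            m += x[i][j] <= pulp.lpSum(y[j][p] for p in P
                                       if limits[p][0] < weights[i] <= limits[p][1])
    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.value(m.objective), {j: pulp.value(z[j]) for j in B})

On this toy instance the solver stores the three Light, two Medium and two Heavy containers in three separate bay-locations, exactly the behaviour the consignment rules prescribe.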
Model (1)-(13) can be solved to optimality only for small instances (as shown in Sect. 5); in the following section, a heuristic procedure that can be used for solving real-size instances is described.
Heuristic approach
To define the best storage strategy rules for real-size problems, we propose a heuristic approach based on model (1)-(13). From the computational results (see Sect. 5), it is clear that the number of destinations and the different capacities of bay-locations have a great impact on the CPU time. Due to these considerations, and to the fact that each consignment strategy groups containers by destination and size, we propose a solution approach that decomposes the problem into sub-problems. In particular, we solve model (1)-(13) for each destination and for each container size (i.e. 20' and 40'). Moreover, we relax the capacity constraints due to the layout of the yard; thus, we suppose to have an unlimited number of 16- and 20-capacity bay-locations. By taking the union of the sub-problem solutions, we obtain a solution for the original problem; unfortunately, this solution can be unfeasible with respect to constraints (5), which verify the yard capacity. Moreover, the obtained solution may present different class configurations for the different destinations (i.e. a violation of constraints (6)) and, as a consequence, different weight limits. In the proposed heuristic, only the unfeasibility concerning the yard capacity is eliminated.
The main steps of the proposed heuristic procedure are the following:
Step 1: construct a solution and verify its feasibility;
Step 2: remove unfeasibility by re-assigning the containers belonging to the bay-locations used in greater quantities than available in the real yard layout.
Before describing the solution approach, let us introduce the following additional notation:
x̄: complete current solution
u_q: number of 20' bay-locations of capacity q used in the current solution
E_q: number of bay-locations of capacity q used in excess with respect to the available ones (n_q)
A_q: number of bay-locations of capacity q left with respect to the available ones (n_q)
β: coefficient used to manage the size of containers (20' and 40')
L_q: list of bay-locations of capacity q used in the current solution
m_j: number of containers stored in bay-locations j
Step 1: construct a solution and verify its feasibility
After having solved model (1)-(13) for each destination d and each size s, we construct, by the union of the obtained solutions, the current solution x̄, characterized by a given number of used 16- and 20-capacity bay-locations.
To verify the feasibility of solution x̄, it is necessary to compute the number of 20' bay-locations used for each capacity q (u_q) and compare it with n_q.
This check is detailed in the following procedure, described in C-like pseudocode.
Check feasibility:
  For each capacity q: set u_q = 0
  For each bay-locations j used in the current solution x̄:
    If the bay-locations j is used for 20' containers: set β = 1
    If the bay-locations j is used for 40' containers: set β = 2
    Compute u_q = u_q + β, where q is the capacity of j
  End For
  For every capacity q, calculate E_q and A_q:
    If n_q − u_q > 0: set A_q = n_q − u_q
    If n_q − u_q < 0:
      Set E_q = u_q − n_q
      Create the list L_q, in descending order with respect to the empty slots (z_j)
  End For
  If E_q > 0 for at least one q: the current solution is unfeasible, go to Step 2
  Otherwise: STOP, the current solution is feasible

Step 2: obtaining feasibility
When the current solution x̄ does not respect the yard layout, i.e. there is at least one E_q > 0, it is necessary to modify the usage of bay-locations in the yard. Since one of the aims is to minimize the number of empty slots in the bay-locations, the idea is to start by replacing the bay-locations with capacity q such that E_q > 0 and with large numbers of empty slots with bay-locations of different and more adequate capacities. Thus, all bay-locations with capacity q such that E_q > 0 are put in the list L_q, in descending order with respect to their empty slots (z_j), so as to reduce the number of bay-locations with capacity q used, starting from those having the largest number of empty slots.
For example, let us suppose we have a 20-capacity bay-locations with 9 empty slots (i.e. 11 containers stored). If a 12-capacity bay-locations is available in the yard, we can swap them and reduce the number of empty slots from 9 to 1. If no bay-locations with a capacity greater than 11 containers is available and we need to remove the 11 containers assigned to the 20-capacity bay-locations, we can try to split the 11 containers between two bay-locations, the most adequate among those available in the yard; for example, we can use two 8-capacity bay-locations.
These ideas, used to modify the current unfeasible solution in order to obtain feasibility while improving it, are detailed in the following procedure, described in C-like pseudocode.
Obtaining feasibility by removing over-used bay-locations:
For each capacity q such that E_q > 0:
  While E_q > 0:
    Select the first element of the list L_q; let ĵ be the selected bay-locations
    Search the most adequate available bay-locations; let j_new be the selected bay-locations
    If no single bay-locations fits, search the 2 most adequate bay-locations for splitting m_ĵ;
      let j_new1, j_new2 be the selected bay-locations and q_new1, q_new2 their capacities
    Fix and compute the new assignment; update the solution
    If neither move is possible: STOP, there is not a feasible solution
  End While
End For
The search among the available bay-locations in the yard is realized by comparing the capacity of the available bay-locations with m_ĵ, the number of containers stored in bay-locations ĵ, so as to minimize the empty slots in the newly selected bay-locations. If there is no bay-locations with capacity greater than m_ĵ, the idea is to split the m_ĵ containers between two bay-locations; thus, it is necessary to select the two most adequate bay-locations, again with the aim of minimizing the number of empty slots. If it is not possible to split m_ĵ, this means that we are not able to construct a feasible solution starting from x̄ for the current layout.
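As an illustration of this repair move, the following sketch (our simplification, not the paper's code) relocates the containers of an over-used bay-locations either into the best-fitting available bay-locations or, failing that, into the pair of bay-locations that minimizes the empty slots.

    from itertools import combinations_with_replacement

    def repair(m_j, available):
        """Relocate the m_j containers of an over-used bay-locations.

        available: dict capacity -> number of free bay-locations (A_q)."""
        fits = [q for q, n in available.items() if n > 0 and q >= m_j]
        if fits:
            best = min(fits)                     # single bay with fewest empty slots
            available[best] -= 1
            return [(best, m_j)]
        candidates = []                          # otherwise try to split m_j in two
        for q1, q2 in combinations_with_replacement(sorted(available), 2):
            if q1 + q2 < m_j:
                continue
            if q1 == q2 and available[q1] < 2:
                continue
            if q1 != q2 and (available[q1] < 1 or available[q2] < 1):
                continue
            candidates.append((q1 + q2 - m_j, q1, q2))   # empty slots of the pair
        if not candidates:
            return None                          # no feasible repair from this solution
        _, q1, q2 = min(candidates)
        available[q1] -= 1
        available[q2] -= 1
        return [(q1, min(q1, m_j)), (q2, m_j - min(q1, m_j))]

    # Example from the text: 11 containers, no free bay of capacity > 11,
    # two free 8-capacity bays -> split as 8 + 3.
    print(repair(11, {8: 2}))   # [(8, 8), (8, 3)]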
Remember that we are referring to the layout of the yard reserved for a vessel; thus, in the operative context, the capacity of this yard does not represent a hard constraint. This does not mean that it is possible to modify the layout on request, but in critical situations a part of a block dedicated to another vessel can be temporarily used (as sometimes happens in the terminal under investigation).

Experimental tests

Two experimental campaigns are presented. In the first campaign, we evaluate the behaviour of the model in defining the storage strategy when considering different yard layouts and increasing instance sizes in terms of the number of containers. Moreover, we try to show the benefits for the terminal yard manager of having more degrees of freedom in choosing storage strategies, evaluating the impact on space utilization. In the second campaign, we test the model by using a particular scenario (i.e. Scenario 3), increasing both the number of containers and the number of destinations.
The model introduced in Sect. 3 and the solution approach described in Sect. 4 have been implemented in MPL (Mathematical Programming Language) and Excel spreadsheets, and solved by the commercial solver GUROBI on a device with an Intel Core i7 at 2.6 GHz and 16 GB of memory. All experiments have been conducted using instances generated with the real cases solved by a container terminal of an Italian port in mind. Some instances derived from a case study are also reported to validate the proposed approach.
First experimental campaign
In the first campaign, we use small-scale instances. In particular, we refer to instances SS, characterized by 86 containers to load on the same vessel, and instances MS with 320 containers. The main aim of this campaign is to investigate the behaviour of the model presented in Sect. 3, either with fixed and predetermined weight and class configurations or with different weight and class configurations to choose from. For this analysis, we investigate four scenarios of increasing difficulty and complexity. Details of the parameters of each scenario are specified in Fig. 9 in the Appendix. In particular, in Scenario 1 only one fixed class configuration is used, with fixed weight limits. Scenario 2 permits choosing the weight limits for a given class configuration. Scenario 3 permits choosing among different class configurations, each one characterized by fixed weight limits. The last scenario, Scenario 4, offers the largest degree of flexibility: it is possible to choose both the best class configuration and the best weight configuration. Note that the model proposed in Sect. 3 handles this most general case for defining the best storage strategy.
Moreover, in this analysis we suppose to have two different terminal layouts in terms of bay-locations capacities. In fact, thanks to a historical data analysis of the terminal under investigation, we know that bay-locations with capacities of 12, 16 and 20 are in common usage. Thus, we compare performances under two different situations: a capacity set 1 (named CS1), in which the blocks of the terminal are composed of bay-locations with capacities of 16 and 20 containers, and a capacity set 2 (named CS2), in which bay-locations have capacities of 12, 16 and 20 containers. The terminal layout characteristics, in terms of the number of bay-locations of each capacity, are summarized in Table 1. Model (1)-(13) has been solved with time limits that differ by scenario; in particular, for the first scenario the maximum CPU time is set to 3600 seconds, for the second scenario to 10800 seconds, while for the last two scenarios the time limit is fixed to 14400 seconds.
The detailed results of the different solved scenarios are reported in the Appendix in Fig. 10 and summarized in the graphs of the following figures. In particular, we can note from the graph in Fig. 4 that all SS instances can be solved to optimality in the four analysed scenarios. More flexibility in the class and weight configuration choice requires more CPU time, and from the graph it is easy to note that instances characterized by layout CS2 require longer CPU times. In the case of medium-sized instances MS, model (1)-(13) can be used to solve to optimality only instances characterized by the simple layout (CS1), with both the class configuration and the weight configuration fixed. The trends of the CPU time and the optimality gap are reported in the graph in Fig. 6.
Finally, we have investigated how the space is used in the yard when the different layouts are implemented. From the results reported in Fig. 10, we obtain the graphs depicted in Figs. 7 and 8. From these graphs, we can note that the number of empty slots is lower when bay-locations with 12, 16 and 20 container capacities are available in the yard.
Fixing a priori either the number of classes to use or the weight limits for each class can cause an inefficient usage of the space in the yard. In fact, if we consider the SS instances, we can note that, without optimizing the storage strategy, up to 94 empty slots can be generated (126 with layout CS1), while the optimal solution presents only 14 empty slots (30 with CS1) when layout CS2 is used. These numbers grow for the MS instances. We can also note that smaller capacities are obviously more attractive in order to reduce vacancy in each bay-locations. However, the number of bay-locations generated remains almost the same under the two different capacity sets. The influence of the variety of weight configurations on the space and bay-locations utilization can be deemed small.
Second experimental campaign
This campaign is executed by using the proposed model to solve different instances for Scenario 3, with a particular layout of the yard in which only one bay-locations capacity is present, i.e. 20-container bay-locations. The effectiveness of the proposed model is evaluated for instances of increasing size, in terms of both the number of containers and the number of destinations. We conduct experiments with 200, 400, 800 and 1600 containers.
The CPU time limit is fixed to 3600 seconds for instances with 200 and 400 containers, 7200 seconds for instances with 800 containers, and 14400 seconds for the largest instances with 1600 containers. Results and detailed information are listed in Table 2, where the columns refer to the number of containers (Cntr), the number of destinations (D.), the CPU time (CPU), the number of bay-locations used (B-l), the empty slots (E.s.), the class configuration chosen (Conf.) and the optimality gap (Gap).
Looking at Table 2, we can note that all instances with 200 and 400 containers have been solved to optimality, requiring an average CPU time of 77 and 311 seconds, respectively. Larger instances present high gaps even though the time limit has been increased up to 7200 and 14400 seconds. We can note that larger instances with only one destination present lower gaps than those with either 4 or 8 destinations.
Summarizing, from both the results shown in Table 2 and those in Figs. 5 and 6 (where CS1 and CS2 were compared), we can conclude that the number of destinations and the variety of bay-locations capacities strongly affect the computational effort required to solve the model.
Some real case instances
As a final test, we present a comparison of the results obtained by model (1)-(13) and the proposed heuristic approach. We have solved some real instances, belonging to the sets of SS and MS, and one larger instance characterized by 646 containers and 9 different destinations. For one instance of each size, we are able to compare the obtained solutions also with the storage strategy adopted by the terminal under investigation. The data reported in Table 3 are the averages of the results of three SS and three MS instances and permit comparing the performances of the model and the heuristic method in terms of empty slots (E.s.) and bay-locations used (B-l). From this comparison, we can note that the heuristic approach is worse in terms of empty slots, while almost the same number of bay-locations is used.
In Table 4, we compare the obtained results with the solutions adopted by the terminal. This comparison shows that both the proposed model and the heuristic approach outperform the current storage plan of the terminal under investigation in terms of empty space and bay-locations. The solutions obtained by using the proposed heuristic approach yield savings ranging from 7% to 56% in empty slots and from 16% to 53% in bay-locations used. The proposed approach therefore seems promising and helpful for yard storage managers.
Conclusions
In this paper, we have proposed a solution approach for helping yard managers define the best storage strategy for minimizing the space used. We have shared the obtained results with the maritime terminal under investigation, which, facing many problems due to the lack of space in the yard, has appreciated this approach. The idea is to solve this problem each time there is a significant change in the transport demand that may require a change in the storage strategy. The number of classes and the weight limits, defined thanks to the proposed approach, are inserted as parameters in the TOS (Terminal Operating System) of the terminal, which manages the real-time storage of the flow of containers reaching the terminal by train and by truck. The proposed approach provides maximum freedom to terminal managers in choosing different storage strategies in accordance with numerous requests. It permits deciding the most appropriate combination of characteristics and configurations for reorganizing the storage plan, granting better space utilization.
As future work, it would be interesting to extend this problem so as to consider it dynamically. A vessel of a service may visit the terminal, for example, once a week, and it can occur that containers arrive too early at the terminal; since they have to be loaded on the vessel of the following week, they have to wait. Thus, it is necessary to manage together containers that must be loaded on two different vessels of the same service. This is a new requirement of the terminal we are working with.
Funding This research has been supported by MUR-Italy, project PRIN2015 research program SPORT: Smart PORt Terminals.

Availability of data and material Real data are not available for privacy reasons. Generated instances are available on request.
Conflict of interest The authors declare that they have no conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Appendix A Details on instances and computational tests
See Figs. 9 and 10.
In Fig. 9, the details of the parameters characterizing each scenario are reported. The TestID identifies each type of solved instance by indicating the status of both the class configuration and the weight configuration. In the last two columns, the different classes and the possible weight limits are shown. In Fig. 10, the results of each solved scenario are reported.
Goal-Oriented Conjecturing for Isabelle/HOL
We present PGT, a Proof Goal Transformer for Isabelle/HOL. Given a proof goal and its background context, PGT attempts to generate conjectures from the original goal by transforming the original proof goal. These conjectures should be weak enough to be provable by automation but sufficiently strong to prove the original goal. By incorporating PGT into the pre-existing PSL framework, we exploit Isabelle's strong automation to identify and prove such conjectures.
With the strategy DInd, PSL keeps applying variations of the induct method until auto discharges all remaining sub-goals or DInd runs out of variations, as shown in Fig. 1a.
This approach works well only if the resulting sub-goals after applying some induct are easy enough for Isabelle's automated tools (such as auto in DInd) to prove. When proof goals are presented in an automation-unfriendly way, however, it is not enough to set a certain combination of arguments to the induct method. In such cases engineers have to investigate the original goal and come up with auxiliary lemmas, from which they can derive the original goal.
In this paper, we present PGT, a novel design and prototype implementation of a conjecturing tool for Isabelle/HOL. We provide PGT as an extension to PSL to facilitate the seamless integration with other Isabelle sub-tools. Given a proof goal, PGT produces a series of conjectures that might be useful in discharging the original goal, and PSL attempts to identify the right one while searching for a proof of the original goal using those conjectures.
Identifying Valuable Conjectures via Proof Search
To automate conjecturing, we added the new language primitive Conjecture to PSL. Given a proof goal, Conjecture first produces a series of conjectures that might be useful in proving the original theorem, following the process described in Section 2.2. For each conjecture, PGT creates a subgoal_tac method and inserts the conjecture as the premise of the original goal. When applied to "itrev xs [] = rev xs", for example, Conjecture generates the following proof method along with 130 other variations of the subgoal_tac method:

apply (subgoal_tac "!!Nil. itrev xs Nil = rev xs @ Nil")

where !! stands for the universal quantifier in Isabelle's meta-logic. Namely, Conjecture introduced a variable of name Nil for the constant []. Applying this method to the goal results in the following two new sub-goals:

1. (!!Nil. itrev xs Nil = rev xs @ Nil) ==> itrev xs [] = rev xs
2. !!Nil. itrev xs Nil = rev xs @ Nil

Conjecture alone cannot determine which conjecture is useful for the original goal. In fact, some of the generated statements are not even true or provable. To discard these non-theorems and to reduce the size of PSL's search space, we combine Conjecture with Fastforce (corresponding to the fastforce method) and Quickcheck (corresponding to Isabelle's sub-tool quickcheck [3]) sequentially, as well as DInd, as follows:

strategy CDInd = Thens [Conjecture, Fastforce, Quickcheck, DInd]

Importantly, fastforce does not return an intermediate proof goal: it either discharges the first sub-goal completely or fails by returning an empty sequence. Therefore, whenever fastforce returns a new proof goal for a sub-goal resulting from subgoal_tac, it guarantees that the conjecture inserted as a premise is strong enough for Isabelle to prove the original goal. In our example, the application of fastforce to the aforementioned first sub-goal succeeds, changing the remaining sub-goals to the following:
!!Nil. itrev xs Nil = rev xs @ Nil
However, PSL still has to deal with many non-theorems: non-theorems are often strong enough to imply the original goal due to the principle of explosion. Therefore, CDInd applies Quickcheck to discard easily refutable non-theorems. The atomic strategy Quickcheck returns the same sub-goal only if Isabelle's sub-tool quickcheck does not find a counterexample, and returns an empty sequence otherwise.
Now we know that the remaining conjectured goals are strong enough to imply the original goal and that they are not easily refutable. Therefore, CDInd applies its sub-strategy DInd to the remaining sub-goals and it stops its proof search as soon as it finds the following proof script, which will be printed in Isabelle/jEdit's output panel.
Conjecturing
Section 2.1 has described how we identify useful conjectures. Now, we will focus on how PGT creates conjectures in the first place. PGT introduced both automatic conjecturing (Conjecture) and automatic generalization (Generalize). Since the conjecturing functionality uses generalization, we will only describe the former. We now walk through the main steps that lead from a user defined goal to a set of potentially useful conjectures, as illustrated in Fig. 2. We start with the extraction of constants and sub-terms, continue with generalization, goal oriented conjecturing, and finally describe how the resulting terms are sanitized.
Extraction of Constants and Common Sub-terms. Given a term representation T of the original goal, PGT extracts the constants and sub-terms that appear multiple times in T. In the example from Section 1, PGT collects the constants rev, itrev, and [].
Generalization. Now, PGT tries to generalize the goal T . Here, PGT alone cannot determine over which constant or sub-terms it should generalize T . Hence, it creates a generalized version of T for each constant and sub-term collected in the previous step. For [] in the running example, PGT creates the following generalized version of T : !!Nil. itrev xs Nil = rev xs.
Goal Oriented Conjecturing. This step calls the function conjecture, illustrated in Fig. 3, with the original goal T and each of the generalized versions of T from the previous step (C_0, ..., C_n). The following code snippet shows part of conjecture:

fun cnjcts t = flat (map (get_cnjct generalisedT t) consts)
fun conj (trm as Abs (_,_,subtrm)) = cnjcts trm @ conj subtrm
  | conj (trm as App (t1,t2)) = cnjcts trm @ conj t1 @ conj t2
  | conj trm = cnjcts trm

For each T and C_i, 0 ≤ i ≤ n, conjecture first calls conj, which traverses the term structure of each T or C_i in a top-down manner. In the running example, PGT takes some C_k, say !!Nil. itrev xs Nil = rev xs, as an input and applies conj to it. For each sub-term, the function get_cnjct in cnjcts creates new conjectures by replacing the sub-term (t in cnjcts) in T or C_i (generalisedT) with a new term. This term is generated from the sub-term (t) and the constants (consts). These are obtained from simplification rules that are automatically derived from the definition of a constant that appears in the corresponding T or C_i.
In the example, PGT first finds the constant rev within C_k. Then, PGT finds the simp rule rev.simps(2) relevant to rev, which states rev (?x # ?xs) = rev ?xs @ [?x], in the background context. Since rev.simps(2) uses the constant @, PGT attempts to create new sub-terms using @ while traversing the syntax tree of !!Nil. itrev xs Nil = rev xs in a top-down manner.
When conj reaches the sub-term rev xs, get_cnjct creates new sub-terms using this sub-term, @ (an element in consts), and the universally quantified variable Nil. One of these new sub-terms would be rev xs @ Nil. Finally, get_cnjct replaces the original sub-term rev xs with this new sub-term in C_k, producing the conjecture: !!Nil. itrev xs Nil = rev xs @ Nil.
Note that this conjecture is not the only conjecture produced in this step: PGT, for example, also produces !!Nil. itrev xs Nil = Nil @ rev xs, by replacing rev xs with Nil @ rev xs, even though this conjecture is a non-theorem. Fig. 4 illustrates the sequential application of the generalization described in the previous paragraph and the goal-oriented conjecturing described in this paragraph.
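For readers who want the shape of these two moves outside Isabelle/ML, the following is a small Python sketch (ours, not PGT's code). Terms are nested tuples, 'app' marks application, and the hint constant stands in for @ harvested from the simp rules; every name here is illustrative.

    def generalize(term, const, var):
        """Replace every occurrence of a constant by a fresh variable."""
        if term == const:
            return var
        if isinstance(term, tuple):
            return tuple(generalize(t, const, var) for t in term)
        return term

    def mutate(term, hint, var):
        """Yield copies of term in which one sub-term s becomes (hint s var)."""
        def walk(t, rebuild):
            yield rebuild(('app', ('app', hint, t), var))   # replace t itself
            if isinstance(t, tuple):
                for k, sub in enumerate(t):
                    yield from walk(sub, lambda s, k=k, t=t: rebuild(t[:k] + (s,) + t[k+1:]))
        yield from walk(term, lambda s: s)

    # itrev xs [] = rev xs, with [] written as 'nil'
    goal = ('eq', ('app', ('app', 'itrev', 'xs'), 'nil'), ('app', 'rev', 'xs'))
    g = generalize(goal, 'nil', 'Nil')            # !!Nil. itrev xs Nil = rev xs
    conjectures = list(mutate(g, 'append', 'Nil'))
    # Among many ill-typed candidates, this list contains the analogue of
    # !!Nil. itrev xs Nil = rev xs @ Nil; type checking and quickcheck
    # would prune the rest, as in the Clean & Return step below.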
Clean & Return. Most produced conjectures do not even type-check. This step removes them, as well as duplicates, before passing the results to the following sub-strategy (Then [Fastforce, Quickcheck, DInd] in the example).
Conclusion
We presented an automatic conjecturing tool PGT and its integration into PSL. Currently, PGT tries to generate conjectures using previously derived simplification rules as hints. We plan to include more heuristics to prioritize conjectures before passing them to subsequent strategies.
Most conjecturing tools for Isabelle, such as IsaCoSy [6] and Hipster [7], are based on the bottom-up approach called theory exploration [2]. The drawback is that they tend to produce uninteresting conjectures: in the case of IsaCoSy, the user is tasked with pruning these by hand, while Hipster uses the difficulty of a conjecture's proof to determine or measure its usefulness. Contrary to their approach, PGT produces conjectures by mutating original goals. Even though PGT also produces unusable conjectures internally, the integration with PSL's search framework ensures that PGT only presents conjectures that are indeed useful in proving the original goal. Unlike Hipster, which is based on a Haskell code base, PGT and PSL form an Isabelle theory file, which can easily be imported into any Isabelle theory. Finally, unlike Hipster, PGT is not limited to equational conjectures.
Gauthier et al. described conjecturing across proof corpora [4]. While PGT creates conjectures by mutating the original goal, Gauthier et al. produced conjectures by using statistical analogies extracted from large formal libraries [5].
Swing-phase detection of locomotive mode transitions for smooth multi-functional robotic lower-limb prosthesis control
Robotic lower-limb prostheses, with their actively powered joints, may significantly improve amputee users’ mobility and enable them to obtain healthy-like gait in various modes of locomotion in daily life. However, timely recognition of the amputee users’ locomotive mode and mode transition still remains a major challenge in robotic lower-limb prosthesis control. In the paper, the authors present a new multi-dimensional dynamic time warping (mDTW)-based intent recognizer to provide high-accuracy recognition of the locomotion mode/mode transition sufficiently early in the swing phase, such that the prosthesis’ joint-level motion controller can operate in the correct locomotive mode and assist the user to complete the desired (and often power-demanding) motion in the stance phase. To support the intent recognizer development, the authors conducted a multi-modal gait data collection study to obtain the related sensor signal data in various modes of locomotion. The collected data were then segmented into individual cycles, generating the templates used in the mDTW classifier. Considering the large number of sensor signals available, we conducted feature selection to identify the most useful sensor signals as the input to the mDTW classifier. We also augmented the standard mDTW algorithm with a voting mechanism to make full use of the data generated from the multiple subjects. To validate the proposed intent recognizer, we characterized its performance using the data cumulated at different percentages of progression into the gait cycle (starting from the beginning of the swing phase). It was shown that the mDTW classifier was able to recognize three locomotive mode/mode transitions (walking, walking to stair climbing, and walking to stair descending) with 99.08% accuracy at 30% progression into the gait cycle, well before the stance phase starts. With its high performance, low computational load, and easy personalization (through individual template generation), the proposed mDTW intent recognizer may become a highly useful building block of a prosthesis control system to facilitate the robotic prostheses’ real-world use among lower-limb amputees.
Introduction
Around the world, millions of people are living with major lower-limb loss due to various causes such as injury and disease (Ziegler-Graham et al., 2008). Traditionally, passive (i.e., non-powered) prosthetic devices were used to restore the lost limb and joint (e.g., knee and ankle) functions. Due to the passive prostheses' inability to generate active mechanical power, their users typically suffer from multiple issues in gait, e.g., asymmetric gait, increased hip power, and elevated metabolic energy consumption (Waters et al., 1976; Hof et al., 2007; Winter, 2009). Further, amputees fitted with passive prostheses experience significant difficulty in energetically demanding locomotive activities such as stair climbing (Bae et al., 2009), causing major inconveniences in their daily life. Motivated by these significant issues, multiple robotic (powered) lower-limb prostheses were developed by researchers in academia and industry [e.g., Vanderbilt Leg (Lawson et al., 2014) and Open-Source Leg (Azocar et al., 2020)], providing the potential to significantly improve amputee users' mobility and quality of life through actively powered prosthetic joints. With two commercial products in clinical use, the Ossur Power Knee (Cutti et al., 2008) and the Otto Bock Empower Ankle, the capability of powered prostheses in restoring healthy-like gait in walking has been demonstrated in multiple studies (Johansson et al., 2005; Silver-Thorn and Glaister, 2009).
With their actively powered joints, robotic prostheses can potentially function like healthy biological limbs in locomotive modes beyond regular level walking. For example, powered prosthetic joints may enable amputees to climb stairs in a more natural way (Lawson et al., 2013). However, to support such multi-functional operation in amputee users' daily life, reliable identification of users' motion intent (as represented by the desired mode of locomotion) is indispensable, as each locomotive mode requires a specifically designed motion control strategy to fit its unique dynamic characteristics. Furthermore, an even greater challenge is the timely recognition of the amputee user's intent of locomotive mode transition. When a prosthesis motion controller transitions from the current mode of operation to a new mode (e.g., walking to stair climbing), such transition needs to occur on a timely basis (with minimal time delay) to avoid disrupting the amputee user's overall gait control. Considering the weak gait and stability control capability of lower-limb amputees as well as the increased risk of fall during such transitional movements, the ability to recognize mode transitions and take the corresponding control actions is critical for the amputees' mobility and safety in daily living.
Motivated by the importance of the topic, intent recognition for prosthesis control has been investigated by numerous investigators in the area. Two types of sensor signals were used as the major sources of information. The first is the muscle activation signals acquired through surface electromyography (sEMG). For example, Huang et al. developed phase-dependent sEMG pattern recognition methods using linear discriminant analysis (LDA) and artificial neural network (ANN) classification techniques to recognize multiple modes of locomotion, including standing, level walking, and stair ascent/descent (Huang et al., 2009; Huang et al., 2011); recently, Zhang et al. developed a dynamic adaptive neural network algorithm for the multi-feature fusion-based processing of sEMG signals (Zhang et al., 2022). With the sEMG serving as a noninvasive interface to the user's nervous system, the acquired sEMG signals may directly reflect his/her intent for the desired joint motion. However, sEMG also suffers from multiple issues such as low reliability and weak signals susceptible to noise and motion artifacts, affecting its practical use in amputees' daily life. The other type of sensor signals is the signals from mechanical sensors, most of which are embedded in the prosthesis itself (joint angles/angular velocities, accelerations/angular velocities measured through inertia measurement units, ground reaction forces, etc.). For example, Varol et al. developed a Gaussian Mixture Model (GMM)-based supervisory controller of powered lower-limb prostheses to infer users' intended motion modes (stand, sit, or walk) based on the signals from the prosthesis-embedded joint motion and interaction force sensors (Varol et al., 2010). More recently, Su et al. developed a convolutional neural network (CNN)-based method to recognize human motion intent utilizing the signals from the inertia measurement units (IMUs) mounted on the healthy legs of lower-limb amputees (Su et al., 2019); Cheng et al. developed a biomechanically intuitive activity recognition approach using the signals from a thigh-mounted IMU and a force-sensing resistor as the input (Cheng et al., 2021). Additionally, fusion of the sEMG and mechanical sensor signals has also been investigated to improve the intent recognition performance (Huang et al., 2011; Young et al., 2013a; Young et al., 2013b).
Despite the large body of research work dedicated to the topic, reliable real-time recognition of user motion intent, especially of the desired locomotive transition, still remains a challenging issue that affects robotic prostheses' practical use in amputees' daily life, as the majority of existing approaches are only capable of recognizing the current (on-going) locomotive mode. Further, the heavy computation load associated with many intent recognition approaches also hampers their implementation in prosthesis control systems due to the limited computational power of the onboard microcontrollers. To overcome these challenges, the authors present a new lightweight intent recognizer to detect the user's desired locomotive mode transition early in the swing phase, such that a robotic lower-limb prosthesis may assist the amputee user to complete the power-demanding portion of the gait cycle with its powered joint actions. Such early and timely recognition of the locomotive transition may form an important building block for a future versatile (multi-modal) prosthesis control system to facilitate robotic prostheses' use in amputees' daily life.
As the basis of the intent recognizer development, the authors completed a multi-modal gait data collection study, including a variety of locomotive modes and mode transitions (detailed in the subsequent section). To facilitate the intent recognizer's implementation in real-time prosthesis control, the proposed intent recognizer only involves the signals from common mechanical sensors, including joint angle and inertial measurement data. Utilizing these sensor signals, the authors developed a multi-dimensional dynamic time warping (mDTW) method to provide timely detection of possible walking-to-stair ascent/descent transitions in the swing phase (Section 2), such that the robotic prosthetic joints may assist the user to complete the potentially power-demanding actions during the subsequent stance phase (e.g., lifting of the body center of mass during stair ascent). The mDTW method also provides the additional advantage of facilitating personalized and continuous adaptation through supplemental template generation, which may be especially useful for amputee prosthesis users with highly diverse and evolving gait patterns.
Gait data collection study
To support the development of the intent recognizer, a study was conducted to collect the related gait data. In this multi-modal gait data collection study, the sensors were selected primarily based on their availability in robotic lower-limb prostheses. The majority of sensors used in this study were those embedded in a lower-limb exoskeleton, which are able to measure limb movement with high accuracy and reliability (Haque et al., 2019; Haque et al., 2021). Considering the fact that most lower-limb amputees are unilateral, a single exoskeleton was attached to each participant to measure the knee and ankle joint movement (using rotary magnetic encoders), the shank and thigh 3D movement (using two inertia measurement units (IMUs)), and the foot plantar pressure (using two force-sensing resistors (FSRs) embedded in the shoe). These sensor signals are expected to be available from a robotic lower-limb prosthesis. Further, wearable sensors were attached to other parts of the human body to provide additional gait information, including two IMUs attached to the contralateral leg (shank and thigh), an additional IMU attached to the chest, and two FSRs embedded in the shoe of the contralateral foot (on the heel and first metatarsal head, to facilitate the detection of important gait events such as heel strike and toe-off). Details of the sensor placement are shown in Figure 1. Nine subjects with no physical and cognitive abnormalities (anthropometric data shown in Table 1) participated in the study. Note that the study was conducted on healthy subjects for two main reasons: 1) the target users of the proposed mDTW method (individuals with amputation fitted with future robotic prostheses) may be able to walk like healthy individuals, and thus the corresponding gait data would be similar to those of healthy individuals as well; and 2) it is difficult to recruit participants from the target user population, as the use of robotic prostheses is still very limited (note that the walking gait of amputees fitted with traditional passive prostheses is significantly different from that of amputees fitted with robotic prostheses). The study was approved by the Institutional Review Board (IRB) at the University of Alabama. After the exoskeleton and the wearable sensors were attached, each subject was asked to walk freely for 3-5 min to get comfortable with the setup. Subsequently, the subject performed the following locomotive activities: a) walking on a treadmill at self-selected slow, moderate, and fast speeds (each speed for 30 s); b) performing a total of four sequences of motion activities comprising all three motion states, i.e., level ground walking, walking to stair climb transition, and walking to stair descend transition.
The activities within the four sequences were organized in different orders to avoid bias in the data collection. The experiments were conducted using two staircases (one with a left-hand turn and the other with a right-hand turn) connected by a long straight hallway. For each staircase, the activity sequence started from stair descent in one sequence and from stair ascent in the other. As such, a total of four activity sequences were tested based on the starting points of the sequences. Similarly, the four sequences ended at four different stopping points. The walking speeds within the sequences were randomized among three self-selected speeds (fast, normal, and slow). The participants were free to rest whenever necessary. The entire experiment was videotaped with a handheld camera. Before starting the data collection, the camera and the exoskeleton system were time-synchronized: a desktop computer was used to send timestamps to the sensor system, while the camera used its own application to synchronize the time. The activities to be recognized and the corresponding durations are listed in Table 2.
On average, 40 min of data were recorded per subject. The data contain the accelerometer and gyroscope values for the x, y, and z-axes from five IMUs, the joint positions of the knee and ankle, as well as the heel and ball pressures from the FSRs under the left/right feet. In addition to the signals from the sensors, we also extracted the thigh angle (with respect to the vertical direction) from the corresponding IMU signals, considering the fact that the thigh movement directly reflects a person's intended motion (e.g., raising the thigh higher in stair climbing). Specifically, a complementary filter and a Kalman filter were used to extract the thigh movement. First, accelerometer data were used to calculate the angles (roll and pitch). Subsequently, gyroscope data were integrated to obtain the pitch and roll rates. Following this, the accelerometer and gyroscope data were combined to obtain a filtered estimate. Next, the Kalman filter was initialized by defining the state variables, matrices, and initial conditions. After that, gyroscope data were used to predict the angle estimate at each step, which was then corrected with the accelerometer measurements.
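As a rough illustration of this fusion step (our sketch with illustrative gains, not the authors' implementation), a complementary filter blending the integrated gyroscope pitch with the accelerometer pitch estimate can be written as:

    import numpy as np

    def thigh_pitch(acc, gyro_rate, dt=0.01, alpha=0.98):
        """acc: Nx3 accelerometer samples; gyro_rate: N pitch rates (rad/s)."""
        theta = np.zeros(len(gyro_rate))
        # Accelerometer-only pitch (valid when motion acceleration is small)
        theta_acc = np.arctan2(acc[:, 0], np.sqrt(acc[:, 1]**2 + acc[:, 2]**2))
        theta[0] = theta_acc[0]
        for k in range(1, len(theta)):
            gyro_part = theta[k - 1] + gyro_rate[k] * dt   # integrate angular rate
            theta[k] = alpha * gyro_part + (1 - alpha) * theta_acc[k]
        return theta

A Kalman filter replaces the fixed blending gain alpha with a gain computed from the modeled process and measurement noise, but the predict-with-gyro, correct-with-accelerometer structure is the same.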
Data labeling and augmentation
Human locomotion is cyclic in nature. In data processing, the data sequences were segmented into gait cycles by monitoring the shank IMU accelerometer z-axis data and the heel pressure data. Note that, different from the traditional method of starting each gait cycle from ground contact (i.e., stance phase first), the segmented gait cycles in this work start from the event of toe-off (i.e., swing phase first). Such a method of segmentation enables the proposed intent recognizer to recognize the desired mode (or mode transition) early in the swing phase, which, in turn, enables the prosthesis motion controller to regulate the actuator power output to assist the amputee user in completing the often power-demanding stance-phase motion. Subsequently, the cycles were manually labeled with the MATLAB signal labeling toolbox, using the video as the reference. Related to the intent recognition in this work, three types of cycles were utilized: level walking (LW), level walking to stair climbing (LW-SC) transition, and level walking to stair descending (LW-SD) transition. In the data set, the number of LW cycles was significantly larger than that of the transitional motion cycles. To address this imbalanced-dataset issue, all transitional motion cycles were augmented by (a) scaling (96%, 98%, 102%, and 104% of the original amplitude), (b) resampling, and (c) white Gaussian noise augmentation (SNR 30 dB, 35 dB, 40 dB, and 45 dB) (Wen et al., 2021). The post-augmentation dataset contained 30,461 LW cycles, 2392 LW-SC cycles, and 2184 LW-SD cycles.
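A minimal sketch of the three augmentation operations (ours; the parameter values follow the ones listed above):

    import numpy as np
    from scipy.signal import resample

    def augment(cycle, scale=1.02, new_len=None, snr_db=35, seed=0):
        """Return augmented copies of a 1-D gait-cycle signal."""
        rng = np.random.default_rng(seed)
        out = [cycle * scale]                        # (a) amplitude scaling
        if new_len is not None:
            out.append(resample(cycle, new_len))     # (b) resampling
        p_signal = np.mean(cycle ** 2)               # (c) noise at a target SNR
        p_noise = p_signal / (10 ** (snr_db / 10))
        out.append(cycle + rng.normal(0.0, np.sqrt(p_noise), size=cycle.shape))
        return out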
Dynamic time warping
Leveraging the cyclic nature of human locomotion, the proposed intent recognition algorithm was developed by comparing the real-time sensor signals with the known patterns of locomotion based on the progression in a gait cycle. The comparison was conducted with the method of Dynamic Time Warping (DTW) (Liberman, 1983), which was developed to compute the optimal match between two given signal sequences. DTW is very efficient in time-series similarity measurement even if the two time series are not aligned in the time axis despite being very similar in shape. While the Euclidean distance assumes the nth point in one sequence is aligned with the nth point in the other, DTW alignment allows a more intuitive distance measure to be calculated, as shown in Figure 2. The DTW distance is expected to be much smaller compared with the Euclidean distance after optimally matching the signal sequences. The equations below outline the core steps of Dynamic Time Warping, providing a mathematical framework for aligning sequences (Jang et al., 2017; Weng et al., 2023).
In the proposed DTW-based intent recognition algorithm, we express any movement gait cycle from a continuous movement sequence and a template cycle as two time series X and Y.
X = (x_1, x_2, ..., x_m)
Y = (y_1, y_2, ..., y_n)
where m is the length of X and n is the length of Y. A distance matrix D of size (m×n) is formulated using the single-point Euclidean distance between x_i and y_j of the sequences X and Y:
D(i, j) = |x_i − y_j|
The cumulative distance matrix C is calculated, where each element C(i, j) represents the cumulative distance from the starting point (1, 1) to cell (i, j), using dynamic programming:
C(i, j) = D(i, j) + min{C(i−1, j), C(i, j−1), C(i−1, j−1)}
The initial conditions are C(1, 1) = D(1, 1), C(i, 1) = D(i, 1) + C(i−1, 1), and C(1, j) = D(1, j) + C(1, j−1). The optimal warping path is calculated through the cumulative distance matrix, using backtracking to trace the path with the minimum total distance. This path represents the alignment between the two sequences. The total distance along the optimal warping path is the sum of the distances between the aligned elements.
As described above, the DTW algorithm calculates the warping path which gives the lowest distance/cost measure between X and Y. The measured cost/distance should be low if X and Y are alike and high if they are dissimilar. The multi-dimensional Dynamic Time Warping (mDTW) algorithm is an extension of the regular DTW algorithm that takes all dimensions into account when finding the optimal match between two series. As multiple sensor signals are available to support the intent recognition, mDTW was adopted to make full use of the rich information embedded in the sensor signals. Further, based on the key requirement of identifying the possible locomotive mode transition early in the swing phase, the cumulated real-time sensor signals from the start of the gait cycle were compared with the corresponding templates as the gait cycle progressed, which generates valuable information on the intent recognition performance and its improvement as more sensor information becomes available over the gait cycle.
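A direct implementation of the recurrence above is short; the sketch below (ours) uses the squared Euclidean distance summed over the signal channels, one common convention for the dependent multi-dimensional variant:

    import numpy as np

    def mdtw_distance(X, Y):
        """DTW cost between X (m, k) and Y (n, k) multi-channel sequences."""
        m, n = len(X), len(Y)
        C = np.full((m + 1, n + 1), np.inf)
        C[0, 0] = 0.0
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                d = np.sum((X[i - 1] - Y[j - 1]) ** 2)   # D(i, j) over channels
                C[i, j] = d + min(C[i - 1, j], C[i, j - 1], C[i - 1, j - 1])
        return C[m, n]

Comparing a partial cycle (e.g., the signals cumulated over the first 30% of the swing phase) against the corresponding prefix of each template yields the similarity scores used for early recognition.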
Template generation
As described earlier, a total of three locomotive states (mode or mode transitions) were investigated in this study: a steady state (level walking, LW) and two transitional states (level walking to stair climbing, LW-SC, and level walking to stair descending, LW-SD). Leveraging the cyclic nature of human locomotion, we also segmented the gait data into individual cycles starting from the event of toe-off (exoskeleton side). Note that this definition was applied to all gait cycles in this work, including the steady-state LW cycles as well as the transitional (LW-SC and LW-SD) cycles.
Utilizing the collected gait data, templates were generated to represent the characteristics of each mode (or mode transition). Specifically, the templates were generated by averaging (sample by sample) all gait cycles for the respective mode or mode transition. To address the slight variation in cycle length, the gait cycles were resampled to the average cycle length using the MATLAB 'resample' function prior to averaging. Templates were generated for all motion modes and all sensor signals, including the thigh angle, all axes of the gyroscopes and accelerometers in the IMUs, joint angles, and foot pressures. As an example, the templates of the thigh angle for the three locomotion states are shown in Figure 3. Figure 3A shows the signal cycles from the study, while Figure 3B shows the generated templates for the respective locomotion modes.
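A sketch of this template-generation step is shown below; SciPy's `resample` stands in for MATLAB's `resample`, and the data layout (a list of 1-D arrays per signal and mode) is an assumption.

```python
import numpy as np
from scipy.signal import resample

def make_template(cycles):
    """Average variable-length gait cycles into one template signal."""
    avg_len = int(round(np.mean([len(c) for c in cycles])))
    # Resample every cycle to the average length before sample-wise averaging.
    resampled = np.stack([resample(c, avg_len) for c in cycles])
    return resampled.mean(axis=0)
```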
Classification with multiple templates and leave-one-out validation
With the availability of multiple templates from different test participants (a total of 111 templates for all signals and locomotive states from each participant), the standard mDTW method was augmented with a voting mechanism, which exploits the individual predictions to make the final prediction (Figure 4). The majority voting scheme has been used in a variety of methods, and it can be shown that majority voting improves the probability of correct classification regardless of the type of classifier used (Narasimhamurthy, 2005). Incorporating the voting mechanism is expected to improve the performance of the mDTW intent recognizer by accommodating the variation of human gait patterns among different individuals. When multiple sets of templates are obtained from multiple human subjects, the voting mechanism can be combined with the standard mDTW method to make full use of the available template sets and provide more accurate and responsive intent recognition. To classify an unknown motion cycle, the corresponding sensor signals are compared with the available template sets to compute the similarity scores, generate individual predictions, and finally produce the final prediction through voting. Specifically, comparison with each template set (associated with each subject) yields an individual prediction (LW, LW-SC, or LW-SD) based on the similarity score within the set; the final prediction is then determined from the collection of individual predictions through voting. In the event of a tie, the final decision is made by comparing the average similarity scores.
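The voting scheme can be sketched as follows, reusing `mdtw_distance` from above; representing each subject's template set as a dictionary mapping a mode label to its template is an illustrative data layout, not the authors' implementation.

```python
from collections import Counter

def classify(cycle, template_sets, modes=("LW", "LW-SC", "LW-SD")):
    """Vote across per-subject template sets; break ties by mean distance."""
    votes, dists = [], {m: [] for m in modes}
    for templates in template_sets:               # one template dict per subject
        scores = {m: mdtw_distance(cycle, templates[m]) for m in modes}
        votes.append(min(scores, key=scores.get))  # individual prediction
        for m in modes:
            dists[m].append(scores[m])
    top = Counter(votes).most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:    # tie: compare average distance
        tied = [m for m, c in top if c == top[0][1]]
        return min(tied, key=lambda m: sum(dists[m]) / len(dists[m]))
    return top[0][0]
```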
Considering the large number of signals available, a forward selection process was implemented to identify a subset of signals with the most significant contributions to the classification. This was an iterative process, starting from the signal with the best classification performance when used as the single input to the algorithm. In each iteration, the signal that best improved the model was added, until adding a new signal no longer improved the performance of the model.
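A sketch of this greedy forward-selection loop; `evaluate` is a placeholder for the cross-validated classification accuracy obtained with a given signal subset, and all names are assumptions.

```python
def forward_select(all_signals, evaluate):
    """Greedy forward selection of sensor signals."""
    selected, best_score = [], -float("inf")
    while True:
        candidates = [s for s in all_signals if s not in selected]
        if not candidates:
            break
        # Try adding each remaining signal; keep the best-performing one.
        score, signal = max((evaluate(selected + [s]), s) for s in candidates)
        if score <= best_score:        # stop when no candidate improves the model
            break
        selected.append(signal)
        best_score = score
    return selected
```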
For the validation of the intent recognition algorithm, the standard leave-one-subject-out method was adapted to this specific application. Specifically, for the performance characterization on each subject, the corresponding template set (i.e., the one generated from his/her own data) was excluded. Such cross-validation is expected to generate an unbiased evaluation of the intent recognizer while making full use of the available gait data.
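The adapted leave-one-subject-out protocol can be sketched as follows, reusing `classify` from above; the data structures (per-subject labeled cycles and template sets) are assumptions for illustration.

```python
def leave_one_subject_out(data_by_subject, template_sets_by_subject):
    """Per-subject accuracy with the held-out subject's own templates excluded."""
    accuracies = {}
    for subject, labeled_cycles in data_by_subject.items():
        # Exclude the held-out subject's templates from the voting pool.
        pool = [t for s, t in template_sets_by_subject.items() if s != subject]
        correct = sum(classify(c, pool) == label for c, label in labeled_cycles)
        accuracies[subject] = correct / len(labeled_cycles)
    return accuracies
```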
Results
As described in Section 2.5, the forward selection method was utilized to determine the most significant set of signals, along with the optimal number of dimensions, for implementing the mDTW method. The sensor signals selected by the forward selection method are tabulated in Table 3. Maximum accuracy was achieved when six sensor signals were used. Table 4 shows the overall accuracy and F1-score of the proposed method as the dimension is increased from one to six using the forward selection method. The table also shows that the accuracy decreases when a sensor signal other than those chosen by the forward selection method is added to the mDTW model; hence, only six sensor signals were considered in this model. The last column in Table 4 shows the inference (classification) time for the algorithm executed in MATLAB 2023a (DTW function from the Signal Processing Toolbox) on a 3.2 GHz Intel Core i9 processor. Note that, when implementing the proposed intent recognizer in prosthesis control, the accumulated real-time sensor signal data from the beginning of the gait cycle will be compared with the templates for locomotive mode recognition, and reliable recognition early in the swing phase is highly desirable. As such, the performance of the proposed mDTW algorithm was investigated for different segment sizes (percentages from the initiation of the gait cycle). Figures 5 and 6 show the performance (accuracy and F1-score, respectively) of the mDTW algorithm with respect to the percentage of the gait cycle. For most subjects, both the accuracy and the F1-score increase with the progression in the gait cycle. At 30% of the gait cycle, the classification accuracy for all subjects exceeded 98%, suggesting that the mDTW algorithm was able to recognize the potential transitions (LW-SC and LW-SD) well before reaching the stance phase [typically starting at ∼40% of the gait cycle when starting from toe-off (Winter, 2009)].
To provide more quantitative performance information for the proposed mDTW algorithm, the cumulative confusion matrix at 30% of the gait cycle is shown in Figure 7. As can be clearly observed in this figure, the data obtained in the first 30% of the gait cycle enabled the mDTW algorithm to recognize the locomotive mode and mode transitions with high accuracy, providing sufficient time for the lower-level prosthesis motion controller to switch to the correct mode of operation and complete the power-demanding portion (the typical stance phase) of the gait cycle.
As the final part of the testing and validation, we investigated the potential of personalization using a subject's own (personalized) templates. Specifically, half of the subject's data were used to generate a set of personalized templates, while the other half were used for validation and performance characterization. The performance of this personalized mDTW algorithm (with user-generated templates) was then compared with the performance of the user-independent mDTW described above, with typical results shown in Figure 8. As can be observed in this figure, the personalized mDTW algorithm was able to improve the recognition performance in a certain range of gait cycle percentages, but the magnitude of improvement was not significant. The reason, presumably, is that the study only involved healthy subjects with similar normal walking gaits, which diminished the performance enhancement provided by the personalization.
Discussion
Leveraging the data generated by multiple sensors in a multimodal gait data collection study, we developed an mDTW intent recognizer to detect the locomotive mode or mode transition early in the swing phase. Note that a variety of machine learning methods (models) have been developed for lower-limb prosthesis control-oriented user intent recognition. For example, Bhakta et al. (2020) compared the performances of three machine learning algorithms in lower-limb prosthesis control-oriented intent recognition.
Based on the results reported in that paper, when leave-one-out validation was conducted, the XGBoost method outperformed the other two methods, with errors of 10.12% in recognizing steady-state locomotive modes and 15.78% in recognizing mode transitions. Note that the intent recognition models tested in that paper only recognize which mode (or mode transition) each step belongs to, using the sensor signal data collected in locomotive experiments. In comparison, the mDTW intent recognizer in this paper is capable of detecting the ongoing mode transition during the transitional gait cycle itself, and thus avoids the typical one-step delay in mode transition recognition. Regarding recognition accuracy, our mDTW model was able to provide a lower error (<6.5%) than the models tested in Bhakta et al. (2020) when only the level walking to stair climbing and stair descending transitions were considered. Finally, the proposed mDTW model features a low computational load and fast inference time (Table 4), beneficial for its implementation in the real-time control of robotic lower-limb prostheses.
While 36 collected sensor signals were investigated in this study, not all sensor signals are equally useful in classifying the intended motion states. Besides, as the number of signals increases, the computational load required for robotic prosthesis control also increases. Therefore, it is necessary to select the most useful signals while keeping the number of signals to a minimum. The signals selected using the forward selection algorithm are tabulated in Table 3. Table 4 shows a clear upward trend of the accuracy and F1-score as more dimensions are added to the mDTW model. The results show that the accuracy of the model improved significantly (from 68.90% to 99.08%) when six dimensions were used instead of a single dimension. However, the accuracy does not improve beyond these six sensor signals.
Based on the results shown in Figure 6, the accuracy and F1-score show increasing trends with respect to the percentage of progression in the gait cycle (starting from toe-off), irrespective of the participants. A few participants (subject-6, subject-2, and subject-8) show low accuracy and low F1-score at the 10% and 20% segment sizes; however, all participants show more than 98% accuracy and a 0.97 F1-score at 30% into the gait cycle. The figure also shows that the accuracy and F1-score do not improve significantly when the progression in the gait cycle exceeds 30%; hence this size could be considered the optimal decision point for this method. This also suggests that this method can predict the intended modes with high accuracy within the swing phase, providing sufficient time for the prosthesis motion controller to switch modes if necessary.
One of the major advantages of this classification model is that it can operate in a user-independent manner. As described in Section 2.5, this method did not use any templates generated from the participant's own gait data during the validation process. User-independent classification allows an intent recognition system to be used in an "off-the-shelf" fashion, where a single, generic intent recognition system for a robotic lower-limb prosthesis reduces personalized training times, which would otherwise be burdensome to the user and the clinician.
However, the model facilitates the use of personalized templates to further improve the performance. Figure 8 shows how the classification accuracy changes after introducing personalized templates into the model. The results did not show a significant improvement of the overall accuracy after introducing personalized templates for the able-bodied participants. However, considering the diverse and time-varying gait patterns of lower-limb amputees, template personalization may become a useful way to improve intent recognition performance in the real-world application of robotic lower-limb prosthesis control. In fact, the lack of available templates may be a significant challenge when implementing the mDTW intent recognizer in robotic lower-limb prostheses, as the amputee users of such robotic prostheses may display substantially different gait patterns from healthy individuals. A possible solution, presumably, is to incorporate a data collection session when tuning prosthesis controllers for individual users, such that the mDTW intent recognizer can be personalized through the generation of individual templates.
For future work, the proposed mDTW intent recognizer is expected to be implemented as the upper-level controller in the real-time control system of future robotic lower-limb prostheses. The algorithm will be executed in cycles, with each cycle initiated at toe-off (i.e., the start of the swing phase, per the segmentation convention used in this work). With the progression of the gait cycle, template comparison will start at approximately 10% of the gait cycle to allow enough data to be accumulated. Subsequently, template comparison will be conducted continuously to recognize possible gait mode transitions. Note that the proposed mDTW intent recognizer can be easily integrated with the finite-state impedance controller (FSIC), the most widely used lower-limb prosthesis motion control approach, as the FSIC's controller behavior (mimicking the combination of a virtual spring and a virtual damper) is also gait phase-specific (i.e., controller behavior changes are triggered by certain gait events such as toe-off and heel strike) (Sup et al., 2008). Such compatibility is expected to facilitate the mDTW intent recognizer's future application in prosthesis control and generate a greater impact in the field. Considering the importance of intent recognition in prosthesis control, a fault detection module (similar to that described in Zhang and Huang, 2015) may be incorporated to detect signal anomalies and improve the algorithm's reliability through the use of possible recovery mechanisms. Finally, the proposed mDTW method's functionality may be expanded to recognize other locomotive modes and mode transitions (e.g., stair climbing/descending to walking), and its application may also be expanded to the control of other types of devices assisting the user's lower-limb motion (e.g., robotic knee exoskeletons).
Conclusion
In this paper, we developed a new mDTW intent recognition method to recognize the locomotive mode and mode transition in the swing phase of the gait cycle, with the purpose of enabling a robotic prosthesis to assist its user in completing the potentially power-demanding actions during the subsequent stance phase. Through a multimodal gait data collection study, we obtained the necessary data from multiple mechanical sensors to support the subsequent classifier development. When developing the mDTW algorithm, feature selection was conducted to identify the six most useful sensor signals as the input, and a voting mechanism was used to augment the standard mDTW algorithm to make full use of the gait data obtained from the multiple subjects (through the corresponding templates). Through validation, it was shown that the proposed mDTW algorithm can recognize the locomotive mode or mode transition within 30% progression of a gait cycle with 99.08% accuracy and a 0.9730 F1-score. As such, when used in a hierarchical prosthesis control system, such early-swing-phase detection is expected to provide sufficient time for the lower-level motion controller to switch operation mode, if necessary, before the initiation of the stance phase. Finally, with the algorithm's low computational load and ease of personalization through individual template generation, the proposed mDTW intent recognizer may become a basic building block of future prosthesis control systems and facilitate the real-world application of robotic prostheses among the large amputee population.
FIGURE 1 Prototype of the measurement exoskeleton.
FIGURE 2 Euclidean distance vs. dynamic time warping distance.
FIGURE 3 Thigh angle: signal gait cycles of the three locomotion modes (each shaded region represents one standard deviation) (A) and their respective templates (B) (all normalized).
FIGURE 4 Intent recognition classification model.
FIGURE 5 Accuracy vs. gait segment size for different participants.
FIGURE 6 F1-score vs. gait segment size for different participants.
FIGURE 7 Confusion matrix of the testing for 30% gait segments.
FIGURE 8 Comparison of classification accuracy between the personalized model and the user-independent model.
TABLE 1 Anthropometric data of the participants.
TABLE 3 List of sensor signals used in mDTW.
TABLE 4 Performances of the method for different dimensions.
Genetic Programming for Vegetated Channel
The problems in river hydraulics are too complicated to develop exact, generalized empirical or regression equations. Regression analysis has been used extensively in the past to address these problems, but owing to its inadequacies, pertaining to first-hand determination of the functional form and the clustering effect of influential points and groups of points that could even have been erroneous, regression has failed to deliver satisfactory results. A precise model allows for the best description of variability as well as reasonable predictability. The present work uses genetic programming for flow prediction in a vegetated channel. The developed model shows very high predictability of flow resistance.
I. INTRODUCTION
Channel vegetation refers to vegetation, typically emergent aquatic plants or herbs, trees, and shrubs, that exists inside and in proximity to water. Such vegetation may be completely submerged or non-submerged (emergent) and varies in density, height of submergence, stem flexibility, stem geometry, surface characteristics, and spacing. These characteristics influence the flow and morphology of a channel in different ways. Submerged and emergent vegetation cause different velocity profiles in a channel, governed by their height and flexibility.
The literature is replete with works on flow-vegetation interactions. Huang and Nanson [1] found that vegetation affected channel hydraulic geometry exponents. Specific case studies, such as those of Malkinson [2], McKenney [3], and Erskine [4], have also documented the effects of riparian vegetation on channel properties such as the cross-sectional characteristics of streams, channel morphogenesis, and channel widening. Channel mobility decreases with increasing vegetation density. It has been reported [5,6] that the deviation of the velocity profile in the outer layer over a gravel bed with vegetation cover on the walls is much larger than in the case of flow over a gravel bed without vegetation cover on the walls. Studies have also shown that vegetation spacing affects turbulence by changing the three-dimensional flow patterns [7]. Laboratory flume experiments have been carried out by various researchers to quantify flow-vegetation interactions [8,9]. Green [10] carried out field measurements in naturally vegetated channels. Detailed numerical simulations of flow through vegetation have also been performed [11,12]. Several empirical [13] and theoretical relations [14,15] have been proposed to describe flow-vegetation interactions. Bennett [16] designed a flume-based study to alter the flow pattern within a straight, degraded stream corridor by using simulated emergent vegetation of varying density placed at key locations within the channel. That study showed that flow velocity can be markedly reduced within and near the vegetation zones, that flow can be diverted toward the opposite bank, and that vegetation density controlled the magnitude of these effects.
Huthoff et al. [17] ascertained that, because of their simplicity, empirical equations have better field applicability than theoretical ones. Galema [18], based on data available in the literature, compared different predictors of flow characteristics and concluded from the comparison of predicted and measured values that no simple predictor exists for both conditions [18]. Empirical equations have the general problem of being applicable only to the restricted circumstances under which they were formulated. In most cases, an empirical study is carried out to grasp the trend of the dependence of resistance on the measurable vegetation parameters, and an equation is developed that explains the observed behavior as accurately as possible in the given context. The equation is then used to predict the behavior of the system, given the values of the independent parameters. This methodology lacks generalization of the behavior of the physical phenomena being explained. Other than empirical techniques, Computational Fluid Dynamics (CFD) models have been used to solve the Navier-Stokes equations in specified domains. Due to computational limitations, the vegetation characteristics, most importantly the vegetation geometry, have to be kept simple, leading to idealizations that differ substantially from actual vegetation characteristics.
A lack of proper understanding of such natural phenomena has led hydrologists to employ soft computing techniques for tasks like non-linear data modeling, which make it possible to predict parameters based on the significant recognizable variables. Over the last two decades, soft computing methods such as artificial neural networks (ANN), genetic algorithms (GA), fuzzy logic, and particle swarm optimization (PSO) have attracted growing interest in hydrological studies and have been applied as powerful alternative computational tools. ANNs were used to predict scour holes around bridge piers by Lee [19]. They were also used for rainfall forecasting by French [20] and by Lee [21] to carry out long-term tidal predictions. GA and fuzzy control were applied to a combined sewer pumping station by Yagi [22], and a groundwater management problem was addressed using a genetic algorithm by Sidiropoulos [23]. A genetic algorithm was used for predicting hourly flow discharge hydrographs from level data by Tayfur [24]. A penalty-type genetic algorithm was used to guide rational reservoir flood operation by Chang [25]. ANN models for monthly streamflow time series prediction were developed by Wu [26]. Other recent developments came in the soft computing paradigm with the introduction of a variety of optimization algorithms, including PSO, by Kennedy & Eberhart [27]. Since then, PSO has been used extensively as an alternative optimization algorithm. A PSO-based simulation-optimization model for the solution of groundwater management problems was proposed by Gaur [28]. PSO was also applied to the optimization of a gravity dam and sluice gate by Wu et al. [29].
The paradigm of genetic programming is a descendant of genetic algorithms but differs fundamentally from them in approach and structure. While genetic algorithms search for solution values directly, genetic programming aims to find programs that yield the best solutions. It is also evident from the search space of genetic programming that it designs and searches for expressions rather than values, although numeric constants are calculated by a multi-gene regression procedure after genetic programming has searched for the best population.
Genetic programming is a relatively new domain in soft computing and has gained popularity in a variety of applications, including river hydraulics and sediment dynamics in fluvial systems. Azamathulla [30] used linear genetic programming for discharge prediction in compound channels. Kisi et al. [31] developed suspended sediment models using genetic programming. Aytek [32] attempted sediment modeling using a genetic programming approach.
The primary objective of the present work is to develop the flow predictor for vegetative channel using the multi-gene genetic programming approach using historical datasets from flume experiments with vegetation. This is done by first selecting optimum parameters for the algorithm and then assessing model efficiency with the help of several criteria.
II. GENETIC PROGRAMMING
The basic search strategy behind genetic programming [33] is a genetic algorithm, which imitates biological evolution. It differs from the traditional genetic algorithm in that it typically operates on parse trees instead of bit strings. GP initializes a population consisting of random members known as chromosomes (individuals), and the fitness of each chromosome is evaluated with respect to a target value. The principle of Darwinian natural selection is used to select and reproduce "fitter" programs. GP creates computer programs of equal or unequal length that consist of variables (terminals) and several mathematical operators (functions) as the solution. The function set of the system can be composed of arithmetic operations (+, −, /, *) and function calls (such as e^x, sin, cos, tan, log, sqrt, ln, power). Each function implicitly includes an assignment to a variable, which facilitates the use of multiple program outputs in GP, whereas in tree-based GP those side effects need to be incorporated explicitly [34]. The fitness of a GP individual may be evaluated using Equation (1), where X_j is the value returned by a chromosome for fitness case j and Y_j is the expected value for fitness case j.
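Equation (1) is not legible in the source; a common GP fitness measure consistent with the description below, where fitness is the deviation of the model output from the actual output summed over all fitness cases, is the sum of absolute errors. The sketch assumes that form.

```python
def fitness(individual_outputs, expected_outputs):
    """Sum of absolute deviations over all fitness cases (lower is fitter).

    Assumes Equation (1) is the sum-of-absolute-errors form; the paper's
    exact expression is not recoverable from the source text.
    """
    return sum(abs(x - y) for x, y in zip(individual_outputs, expected_outputs))
```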
An initial population of expressions is generated with a randomized selection procedure. A fitness function is used to assess the individual expressions; it is usually the deviation of the model output from the actual output, i.e., the error, which must be minimized. An algorithm based on the Darwinian model of reproduction, survival of the fittest, and genetic recombination is used to create a new population of individuals from the current population. A part of the parent population is selected based on fitness values and used to create the offspring population, which replaces the old generation. Then each individual is assessed for fitness, and the process is repeated. This algorithm produces populations which, over generations, tend to exhibit improving average fitness and adapt themselves to changes in their environment.
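The evolutionary loop just described can be summarized in the following sketch; `random_expression`, `evaluate`, `crossover`, `mutate`, and `tournament_select` are placeholders for the operators detailed in the sections below, and the loop structure is a generic illustration rather than the specific gptips implementation.

```python
import random

def evolve(pop_size, n_generations, crossover_frac, mutation_frac, elite_frac,
           random_expression, evaluate, crossover, mutate, tournament_select):
    """Generic GP loop: evaluate, select, breed, replace."""
    population = [random_expression() for _ in range(pop_size)]
    for _ in range(n_generations):
        ranked = sorted(population, key=evaluate)   # lower error = fitter
        offspring = ranked[:int(elite_frac * pop_size)]  # elite copied unchanged
        while len(offspring) < pop_size:
            r = random.random()
            if r < crossover_frac:                  # recombination of two parents
                a = tournament_select(ranked, evaluate)
                b = tournament_select(ranked, evaluate)
                offspring.append(crossover(a, b))
            elif r < crossover_frac + mutation_frac:
                offspring.append(mutate(tournament_select(ranked, evaluate)))
            else:                                   # fitness-based reproduction
                offspring.append(tournament_select(ranked, evaluate))
        population = offspring
    return min(population, key=evaluate)            # best individual found
```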
Individuals in the population
The structures that undergo adaptation are expressions containing functions, operators, (independent) variables, and numeric coefficients, whose form, size, and complexity can change dynamically during the process. The search space of expressions is the set of all possible compositions that can be formed recursively from the available set of n functions F = {f_1, f_2, f_3, ..., f_n} and the available set of m terminals T = {a_1, a_2, a_3, ..., a_m}. While the terminals are variables, numeric constants, or universal constants (e.g., pi), the functions comprise mathematical operators and standard mathematical functions. It should be noted that the set of functions and terminals used in a particular problem should be selected so as to be capable of solving the problem.
The search space
The search space consists of the hyperspace of all valid expressions that can be formed using the available set of functions and terminals. It is important to decide the maximum level/depth of the expressions, since later generations may otherwise produce long, complicated expressions that take much more time to compute and slow the algorithm down. To allow genetic operators like mutation and crossover to operate, a tree data structure is used to represent and store each expression. The formation of an expression involves choosing a function first and then choosing another function or a terminal as each parameter of the original function.
The fitness function
The available dataset of input-output combinations allows the algorithm to assess the fitness of each individual expression created. In effect, the expression created by the algorithm can be evaluated for the specified inputs, and the output yielded can be compared with the actual correct output in the dataset. The same is carried out for each of the data points, and the sum total of the errors serves as an inverse indicator of fitness: the closer the sum of errors is to zero, the better the expression is assessed to be.
Operations that modify the expressions
The genetic programming algorithm operates on the population of expressions using several operations. These exploit the structure of the expressions and are used to create, select, and breed populations in order to produce new individuals. The individual operations are described in this section.
Fitness proportionate reproduction
This operation copies the best individuals from the parent population to the offspring population. The proportion of the population to be copied is specified as a parameter of the algorithm. This procedure ensures that an individual with a sufficiently high fitness value can survive through generations.
Crossover or Recombination
This operation is responsible for creating variation in the population by producing expressions in the offspring generation that combine traits from two parent individuals. The process starts by choosing two parent individuals from the entire parent population, with probability proportionate to their fitness values, and then combines parts of the expressions from each individual at randomized positions. The result is a combination of two crossover fragments, one from each parent individual.
The state of the system

At any stage, the state contains the individuals of one (current) generation only. The best result, if it needs to be found, must come from the current population, since there is no bookkeeping of individuals from the past.
Algorithm termination
The result of the algorithm is the best individual from the state of the system (the current population). The algorithm may be terminated when a previously specified fitness has been reached, depending on the tolerance values, or when the specified number of generations has been completed. The single best individual is considered to be the output of the algorithm.
Parameters in the algorithm
The genetic programming algorithm operates on populations of symbolic expressions by creating and changing them. The following parameters decide the extent of such operations.
Population size
It is the number of expressions in a generation. This value remains constant throughout the algorithm run.
Number of generations
This parameter indicates the number of iterations of the algorithm, where in each iteration new sets of individuals are created using the operators mentioned above.
Maximum number of genes
In multi-gene genetic programming, more than one expression is formed using the genetic programming algorithm, and linear regression of the expressions in the final population is then carried out to yield a composite weighted expression that fits the data best. Since one gene represents one expression, the maximum number of genes sets the maximum number of expressions combined in the final model.
Maximum depth of genes
A gene stores an individual expression. The depth of a gene indicates the level of composition of functions utilized for framing expressions. A gene with higher depth represents more convoluted expressions than one with lower depth. A limit on the depth keeps the evolved expressions from becoming excessively complex and slow to evaluate.
Mutation-Crossover-Copy fractions
The fractions of the entire population that must undergo the above three operations constitute another important set of parameters, controlling the rate at which the algorithm converges toward the best solution. By changing these fractions, one can control the rate at which variation is introduced from one generation to the next. These fractions must be decided in advance and are used in every iteration to create the population of the offspring generation. The three values add up to 1.
Tournament size
The genetic programming algorithm utilizes a selection technique called tournament selection, in which a subset of the population is chosen, based on a fitness criterion, to create the new generation. The tournament size parameter specifies what percentage of the population is used to create the new population.
Elite fraction
The elite fraction is the fraction of the initial (parent) population that will be copied without any changes being made to them. This is done for the best individuals in the population and makes bookkeeping of older 'good' individuals unnecessary, since they are copied every time a new generation is created.
Build method
The initial population can be created in three separate manners: the 'full', 'grow', and 'ramped-half-and-half' methods. In the 'grow' method, either a function or a terminal may be selected as long as the maximum depth has not yet been reached, producing more diverse tree structures with some branches longer than others. In the 'full' method, the trees created are of full size and are randomly generated. The third method uses the first two in equal proportions.
Parameter Selection
The parameters of the algorithm have to be selected so that the best individual may be found. Unfortunately, there are no general guidelines on parameter estimation yet; hence, a trial-and-error procedure was used to reach the best combination of parameters. Since the number of parameters is large (a total of ten parameters in all), a stepwise selection of the parameters was carried out: with the other nine parameters kept constant at nominal values, the parameter to be optimized was scanned thoroughly across values, and the results were compared to assess the effect of that one parameter only. For most of these parameters, the individual runs were repeated three times each, since the results varied between runs, to remove random correlations.
III. RESULTS AND DISCUSSION
Soft computing needs a functional form of a physical system describing its dependent and independent variables. Yen [35] has analyzed several equations in terms of their dependent and independent variables. The physical modeling of channel flow through vegetation is very complex. According to Lopez and Garcia [36], it is a function of many variables, covering the fluid properties, flow properties, vegetation characteristics, and channel characteristics:

$$u = f(h, k, i, D, m, C_d),$$

where u is the mean velocity, h is the flow depth, k is the height of the vegetation, i is the channel slope, D is the diameter of the cylindrical vegetation, m is the number of cylinders per m² of horizontal area, and C_d is the non-dimensional drag coefficient. Symbolic regression was used to develop a flow predictor model as an expression relating the flow explicitly to these variables. The entire multi-gene genetic programming modeling was implemented in MATLAB® using "gptips", a genetic programming MATLAB® toolbox. The historical dataset compiled by Galema [18] served as the modeling data, and a population of 1000 was adopted for modeling. The model performance is shown in Figure 2. As can be seen from the figure, the model predicts the phenomenon well. Vegetation-flow interactions are central to many problems of practical interest to hydraulic engineers, including flood risk studies, sediment transport studies, and the analysis of the hydraulic performance of river restoration schemes. Existing predictors are limited in terms of their applicability. Though a data-driven technique like symbolic regression does not elucidate or utilize the physics of the system, it is primarily useful for its sufficiently good prediction capabilities. Based on a large database, the present work identifies a symbolic regression model using genetic programming based on statistical datasets. The generalization capacity of the model is very good, considering the many different parameters affecting flow-vegetation interactions.
Patients’ Satisfaction Regarding Oral Healthcare Services in the North-East Region of Romania: A Preliminary Questionnaire Survey
This research addresses a gap in the literature by conducting a comprehensive analysis of patients' level of satisfaction with dental care. Methods: By combining quantitative and qualitative survey methods with a Patient Satisfaction Questionnaire (PSQ), this study aims to augment ongoing initiatives to enhance dental patients' experiences by painting a more comprehensive picture of patients' level of satisfaction. Results: When asked about their overall level of satisfaction, 77.1% of the patients said that they received excellent services from office personnel and 72.2% said they trusted their doctors. Conclusions: Assessing patient satisfaction in the realm of dental service quality is crucial for enhancing service quality and accuracy, which would benefit both patients and dentists and, ultimately, improve public health.
Introduction
Defining and measuring satisfaction is a challenging task due to its complexity. It is a psychological construct that is shaped over time by personal experiences [1]. It indicates the extent to which expected objectives have been achieved. Satisfaction includes cognitive and emotional aspects and is influenced by past experiences, expectations, and social connections [2,3]. Recently, there has been a growing emphasis on patient-centered care in the healthcare industry, emphasizing the significance of understanding and improving patient satisfaction. Dental treatments, being a crucial part of healthcare, must meet these expectations; thus, high-quality dental services are essential for maintaining patient health, satisfaction, and general well-being [4,5].
The quality of dental care has a direct impact on patients' oral health results and also plays a significant role in shaping their perception of the service provided, which, in turn, affects their loyalty and the probability of referring the service to others [6].
Although there is agreement on the significance of patient satisfaction in dental care, the research indicates a lack of thorough comprehension regarding the aspects that influence it [7]. Prior research has mostly concentrated on clinical results and the technical proficiency of dental practitioners, frequently neglecting aspects like interpersonal communication, service availability, and the dental practice environment [8,9].
Patient satisfaction is a critical indicator of healthcare quality and plays a vital role in the evaluation of medical services. It reflects patients' perceptions of their healthcare experiences and is influenced by various factors, including the quality of care provided, the effectiveness of communication, and the interpersonal skills of healthcare providers [10].
High levels of patient satisfaction are associated with better adherence to treatment plans, improved clinical outcomes, and increased patient loyalty to healthcare providers and institutions [11]. Conversely, low levels of satisfaction can lead to poor health outcomes, reduced compliance with medical advice, and decreased utilization of healthcare services [12].
The relationship between patients and their primary care physicians is particularly important in determining patient satisfaction. Primary care physicians often serve as the first point of contact within the healthcare system and play a crucial role in coordinating and managing patients' overall care [13]. The quality of this relationship can significantly impact patients' overall healthcare experiences. Effective communication, empathy, and trust between patients and their physicians are essential components of a positive patient-physician relationship [14]. Patients who feel heard, understood, and respected by their physicians are more likely to be satisfied with their care and to follow medical advice.
Numerous studies have highlighted the importance of patient satisfaction as a determinant of healthcare utilization, adherence to treatment, and overall health outcomes. Satisfaction levels are influenced by various factors, including the physician's communication skills, empathy, and competence, as well as the time spent with patients [15-17]. Additionally, the organizational aspects of healthcare services, such as accessibility, waiting times, and administrative support, also contribute significantly to patient satisfaction [18].
Understanding these dynamics is crucial for healthcare providers and policymakers to ensure that patient-centered care remains at the forefront of medical practice.
Educational institutions benefit both students and patients by providing training opportunities for students and by addressing patients' dental care needs. It is crucial to assess patient satisfaction with the dental services offered in order to meet patient expectations, enhance patient cooperation, and maintain the dental institution's performance [19]. It is also important to enable students to fulfill their clinical requirements promptly, as contented patients are more likely to comply and attend their visits. While dental clinics and hospitals prioritize patient satisfaction, educational settings prioritize student learning, which may occasionally lead to impaired patient satisfaction [20].
This study fills a need in the literature by providing an in-depth evaluation of dental treatment satisfaction among patients. Dental professionals and healthcare authorities can enhance the quality of care and patients' experiences by recognizing and addressing the complex factors that contribute to satisfaction.
In order to fill these gaps, this study uses a mixed-methods approach to investigate in more depth the factors that contribute to dental patients' satisfaction with their treatment. A more sophisticated understanding that is specific to different patient demographics and situations is needed, since the weights assigned by patients to different satisfaction factors vary.
The purpose of this research is to add to the continuing efforts to improve dental patients' experiences by providing a more complete picture of patient satisfaction through the use of quantitative and qualitative surveys with the help of a PSQ.
✓ Research design
This study aims to thoroughly investigate patient satisfaction with dental services. Due to the lack of dependable data on patient satisfaction in the multifaceted dental care system in Romania, our study focused on assessing patient satisfaction in a university dental clinic in Iasi and the factors that impact it. The research seeks to measure a wide range of factors affecting patient satisfaction by quantitative methods. The quantitative aspect includes a structured survey to measure the significance of different satisfaction factors in order to obtain a more profound understanding of patient experiences and perceptions.
✓ Survey instrument
The data were collected through a Patient Satisfaction Questionnaire consisting of 46 items (PSQ-46), which uses a 5-point Likert response scale ranging from strongly agree to totally disagree. The questions are divided into six dimensions measuring patients' satisfaction toward physicians (20 items), access (8 items), nurses (4 items), appointments (4 items), and facilities (4 items), plus a separate subscale of 6 items measuring overall satisfaction with the service provided by the practice. The PSQ was translated from English into Romanian following existing guidelines to maintain equivalence [21].
✓ Participants
The study sample consisted of patients who had received dental services within the last 10 months. Participants were recruited from the medical center of the Gr. T. Popa Dental University, Iasi, Romania, and included patients from urban and suburban areas to ensure diversity in demographics and healthcare experiences. The inclusion criteria included adults aged 18 and above who had visited the dental clinic at least once in the past year. A total of 306 survey respondents were selected to represent various age groups, genders, and socioeconomic statuses. Their selection took into consideration the following inclusion and exclusion criteria:

Inclusion criteria
− Patients who had completed their dental treatment and who willingly consented to participate in the study.
− Patients aged eighteen years or older.
Exclusion criteria
− Patients unwilling to participate in the study or unable to provide informed consent.
✓ Data collection
A self-administered questionnaire was developed based on a review of the literature, then translated into the Romanian language and validated. The questionnaire included items on factors contributing to satisfaction and dissatisfaction, such as the frequency of dental visits, the quality of dental services, the role of communication with dental staff, and information on waiting rooms [22]. The study received approval from the Institutional Review Board of UMF Gr. T. Popa, Iasi (No. 318/30.05.2023). Participants were informed about the study's purpose, their rights, and confidentiality measures prior to data collection. Informed consent was obtained from all participants, and personal identifiers were removed from the data to ensure anonymity.
✓ Statistical analysis
Descriptive statistics were used to summarize the demographic information and responses to survey items. Inferential statistics, including regression analyses, were employed to identify significant predictors of patient satisfaction.
Following data entry into an Excel spreadsheet, the data were processed with the Statistical Package for the Social Sciences, version 29 (SPSS Inc., Chicago, IL, USA).
Percentages, means, and standard deviations were calculated for the qualitative and quantitative data. Chi-square (χ²) tests were performed to statistically analyze the qualitative data. A p-value below 0.05 was considered statistically significant.
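As an illustration of the chi-square testing described above, the snippet below uses SciPy as a stand-in for the SPSS workflow; the contingency table is hypothetical and does not reproduce study data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x3 contingency table (NOT study data): rows = gender,
# columns = satisfaction level (low / medium / high).
table = [[10, 40, 129],
         [12, 35,  80]]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A p-value below 0.05 would indicate a statistically significant association.
```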
Results
The study group consisted of 306 patients, who were predominantly women (58.5%), from urban areas (63.1%), and had a high school or university education (89.9%). Half of the patients were over 50 years old (50.3%), and a third (34.6%) were aged between 30 and 50 years. Additionally, half of the patients were employed (51.3%), and among the others, the majority were retirees (35.9%). Almost half of the patients had made between 5 and 10 visits to the dentist (44.8%), a third had made between 1 and 5 visits (30.4%), and a quarter of the patients (24.8%) had made more than 10 visits to the dentist (Table 1). Questions with negative connotations were unfavorably received by patients, who expressed their partial or total disagreement with them; thus, it emerged that the vast majority of patients were satisfied with their doctor (77.1%). It can therefore be stated with certainty that the vast majority of patients declared a high overall satisfaction score, as shown in Table 2 and Figure 1. This overall satisfaction score is reflected in the specific opinions of patients toward doctors, which were investigated in the second section of the questionnaire.
In full correlation with these results, it is noteworthy that negative or critical statements about doctors were rejected by patients, who expressed their agreement with these statements only at very low percentages.
Therefore, patients believed that the doctor was indifferent to them only in isolated cases, and only a small percentage of patients (between 10 and 15%) were willing to claim that the doctor withheld some information from them or was not sufficiently empathetic toward their pathology and problems; however, it is clear that the vast majority of patients had favorable opinions about their communication with the doctor (see Table 3 and Figure 2).

In general, patients considered it simple to schedule an appointment at the clinic when they wished, although only 40% of them maintained this opinion when it came to more specific situations, believing that they could easily schedule an appointment when they had a specific need or when they wanted to meet with a certain doctor (see Table 4 and Figure 3).
On the other hand, about half of the patients most likely faced the need to resolve emergencies and had a favorable interaction with the clinic in such situations, and about a third of the patients needed to discuss issues privately with the doctor and had a positive interaction, while another third probably did not face such situations, giving neutral responses to the items that targeted them; there was also a smaller percentage of patients who probably faced such situations but did not obtain the interaction they wished for from the doctor (see Table 5 and Figure 4).
Table 5. Patient responses to the fourth section of the questionnaire-Opinions about accessibility (frequency distributions).Patients' evaluation of the accessibility in the clinic were slightly more reserved; thus, the predominance of patient agreement was only manifested in relation to items 6 and 7.
On the other hand, about half of the patients most likely faced the need to resolve emergencies and had a favorable interaction with the clinic in such situations, and about a third of the patients needed to discuss issues privately with the doctor and had a positive interaction, while another third probably did not face such situations, giving neutral responses to the items that targeted them; there was also a smaller percentage of patients who probably faced such situations but did not obtain the interaction they wished for from the doctor (see Table 5 and Figure 4).Patients' opinions toward nurses in dental offices were also generally favorable.
Very small percentages of patients had negative opinions about nurses: only 2.7% thought that they did not explain things carefully, only 2.3% felt that they made them feel like they were wasting their time, and 12.8% (a somewhat higher percentage) believed that the nurse did not always listen attentively when they talked about their problems (see Table 6 and Figure 5).Patients' opinions toward nurses in dental offices were also generally favorable.Very small percentages of patients had negative opinions about nurses: only 2.7% thought that they did not explain things carefully, only 2.3% felt that they made them feel like they were wasting their time, and 12.8% (a somewhat higher percentage) believed that the nurse did not always listen attentively when they talked about their problems (see Table 6 and Figure 5).The final questions of the survey addressed the facilities offered by the dental office.The main issue reported by patients was the lack of seating in the waiting room-56.3% of them pointed out this aspect.It thus emerges that patients' unfavorable opinions primarily target the reduced capacity of the waiting room and, subsequently, the seats, which were perceived as being uncomfortable (see Table 7 and Figure 6).The final questions of the survey addressed the facilities offered by the dental office.The main issue reported by patients was the lack of seating in the waiting room-56.3% of them pointed out this aspect.It thus clearly emerges that patients' unfavorable opinions primarily target the reduced capacity of the waiting room and, subsequently, the seats, which were perceived as being uncomfortable (see Table 7 and Figure 6).Based on the responses recorded in the survey, we calculated, using arithmetic means, quantitative scores at the level of each patient, reflecting their general opinion toward the six dimensions evaluated by the survey, namely general satisfaction and opinions about doctors, the appointment system, the level of accessibility in the office, the behavior of the nurses, and the facilities offered.
These general scores have a variation range between 1 and 5, with a value of 1 meaning a completely favorable opinion and a value of 5 meaning a completely unfavorable opinion toward each of the investigated dimensions (to obtain such a result, the questionnaire items that targeted negative aspects were recoded, so that patient responses had uniform meanings across the entire survey).For the interpretation of the calculated quan- Based on the responses recorded in the survey, we calculated, using arithmetic means, quantitative scores at the level of each patient, reflecting their general opinion toward the six dimensions evaluated by the survey, namely general satisfaction and opinions about doctors, the appointment system, the level of accessibility in the office, the behavior of the nurses, and the facilities offered.
These general scores have a variation range between 1 and 5, with a value of 1 meaning a completely favorable opinion and a value of 5 meaning a completely unfavorable opinion toward each of the investigated dimensions (to obtain such a result, the questionnaire items that targeted negative aspects were recoded, so that patient responses had uniform meanings across the entire survey).For the interpretation of the calculated quantitative scores, the corresponding interquartile ranges were used, as shown in Table 8.The overall satisfaction score of patients, total and comparative, by demographic characteristics is shown in Table 9.
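The per-patient dimension scores described above can be computed as in the following sketch, which reverse-codes negatively worded items as 6 − x before averaging (so that 1 remains the fully favorable pole); the item identifiers are hypothetical.

```python
import numpy as np

def dimension_score(responses, negative_items):
    """Mean score over one dimension's items, reverse-coding negative items.

    responses: dict mapping item id -> Likert response (1-5).
    negative_items: set of item ids worded negatively.
    """
    values = [6 - v if item in negative_items else v
              for item, v in responses.items()]
    return float(np.mean(values))   # 1 = fully favorable, 5 = fully unfavorable

# Example with hypothetical item ids and responses:
print(dimension_score({"q1": 2, "q2": 1, "q3": 4}, negative_items={"q3"}))
```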
Discussion
Donabedian outlines four distinct rationales for examining patient satisfaction: satisfaction is a goal of care, an outcome of care, a means of enhancing the impact of care by increasing patient compliance, and the patient's own evaluation of the care received.

Studies on patients' satisfaction with their dental care have existed since at least the 1980s, but until recently, researchers have mostly concentrated on how sociodemographic factors affect patients' opinions of their dentists.

Researchers at multiple institutes have studied patient satisfaction with dental care. Patient satisfaction is influenced by various aspects beyond treatment quality, including facilities, personnel demeanor, and fundamental environmental requirements [23]. In the majority of previous studies, most patients seeking treatment at dental training schools were between the ages of 30 and 40 [24]. More than half of the participants in this study were aged between 50 and 70 years (50.3%). This area requires development and should be addressed by the public health department and college management. Health camps should be organized to raise awareness of the services available at our institutions among the younger population in the region.
The notion of consumerism, which involves incorporating the patient's perspective in the evaluation of services, has become more prominent in the last years.Patients can contribute to assessing the quality of oral health care by establishing standards of care, offering information for evaluation, and expressing satisfaction or dissatisfaction with the care received [25][26][27].Most replies came from female patients (68%) because there was a higher volume of patients in the female division.This aligns with the findings of Naguib et al.'s study [28] but is in contrast to Habib et al.'s study, where the female response rate was 55.7% [29].
Patient satisfaction has been studied at several dental schools in different countries. The investigations revealed that the primary reason for seeking care in these clinics is the perceived high quality of service and the patients' health concerns [30][31][32][33]. Patients who struggled to schedule appointments readily expressed a low level of satisfaction. The patients in this survey expressed a high level of satisfaction with their appointments (p = 0.034). They were also happy with appointment options that fit well with their schedules (35.9%).
The reception desk and team typically handle appointments and are the initial point of contact for patients at the clinics. They play a crucial role in the team, and the high satisfaction levels noted in our study are promising. It is important to relay positive feedback to the reception team.
When it comes to the facilities, it has been found that patients are more satisfied when the facilities are pleasant, modern, and have comfortable waiting areas. The level of satisfaction with facilities found here is significantly lower than in the studies conducted by Al-Refeidi et al. [34] and Mahrous et al. [35], but similar to that reported by Naguib et al. [28].
Several factors that affect dental patients' satisfaction have been studied. One of the most important is the dentist's communication abilities, which should include thorough explanations of procedures and treatments [9]. Bradshaw et al. found that patients are more satisfied when treatments are provided quickly and there is less waiting time [36]. The dental clinic's physical setting, such as the comfort of the waiting rooms and the level of cleanliness, is also crucial [35].
One of the most important things a healthcare provider should have is good communication skills, to ensure that patients are satisfied with the treatment they receive. A high degree of satisfaction has been linked to the dentist's attitude and attention to the patient's needs, according to previous research [37,38]. Consumers' willingness to use dental clinics is an area where little empirical data is available, according to Pinkerton et al. [39]. Despite widespread agreement that surveys of patients' opinions are useful for gauging the quality of healthcare providers' and facilities' offerings, Holden et al. [40] found that researchers have paid surprisingly little attention to how satisfied dental patients are with their treatment. Othman and Abdel Razak [41] found that 45.6% of patients were satisfied with their dentists' ability to explain treatment plans to them before they began. In the same manner, 70.6% of patients in our study reported being satisfied with how the doctor clearly explained everything before any treatment.
This might be because the study is taking place in a classroom setting, where teaching students how to properly communicate and engage with patients is a major emphasis. Patients dislike dentists who start treatments without explaining them, as pointed out by Hellyer [42].
A prior study reported that unhappiness with the way patients were treated by their dentists was frequently cited as the reason for switching dentists by 46% of the dentists polled. Patients reported being "unhappy with the dentist" as the primary motivation for seeking out a new dentist in more recent research [43].
The care our patients receive is of the utmost importance to us. This has led to very positive feedback from our patients about the treatment they received. Thus, 69.3% of our patients felt perfectly satisfied with how they were treated.
The Patient Satisfaction Questionnaire, consisting of 46 items (PSQ-46), is a widely used tool for measuring patient satisfaction with healthcare services [44]. It was developed by Ware and colleagues in the 1970s and 1980s as part of the Medical Outcomes Study and has proved to be a robust tool for measuring patient satisfaction [45]. Its development was driven by the need for a reliable and valid instrument that could capture the multifaceted nature of patient experiences with healthcare services [46].
The quality of medical services can be assessed by considering the level of patient satisfaction and the success rate of treatments. Attaining satisfactory patient outcomes and averting disease effects hinge on the crucial factor of satisfaction [47]. Furthermore, it serves as a primary objective of therapeutic activities and is a noteworthy measure of the standard of care.
The scale's multidimensional approach, patient-centered focus, and established reliability make it a valuable tool for researchers and healthcare providers aiming to improve the quality of care and patient satisfaction [47].
Compared with other scales, PSQ-46 provides a more detailed analysis of patient satisfaction. For instance, SERVQUAL measures service quality across tangibles, reliability, responsiveness, assurance, and empathy, but it is more general and not healthcare-specific [48].
HCAHPS includes 29 items focusing on communication, responsiveness, environment, pain management, medication communication, discharge information, overall hospital rating, and willingness to recommend, primarily for public reporting and hospital comparisons in the U.S. [49]. PSSUQ focuses on system usability, with 16 items covering system usefulness, information quality, and interface quality, mainly for technology and electronic health records [50].
While SERVQUAL and HCAHPS are useful for broader service quality and standardized hospital comparisons, and PSSUQ is specific to technology usability, PSQ-46 stands out due to its detailed, patient-centered approach that is specifically designed for healthcare, making it particularly valuable for in-depth quality assessments.
One limitation of this study is that the five-point Likert scale can provide a wide range of responses. Also, the study relies on self-reported data from patients, which can be subject to response bias, including social desirability bias, where respondents may give answers that they believe are more socially acceptable.
On the other hand, there can also be recall bias, as patients' satisfaction levels could be influenced by their memory of past experiences. More recent visits may be overemphasized compared with older ones.
With regard to geographic and demographic limitations, the study is confined to the northeast region of Romania, and the results may not be applicable to other regions with different healthcare systems, cultural contexts, or demographic profiles.
Addressing these limitations in future research could help improve the robustness and applicability of these findings in assessing patient satisfaction with oral healthcare services.
Conclusions
According to the study's findings, participants were satisfied with the services, staff, treatment, and patient-dentist interaction in dental clinics run by the College of Dentistry of Iasi University.
To maintain a high level of satisfaction and to make further improvements, patient satisfaction should be evaluated on a regular basis. Additionally, more qualitative research is needed to identify the psychological, behavioral, and social aspects that contribute to dental patients' satisfaction with their treatment.
Figure 1. Percentage of responses to the first section of the questionnaire.
Figure 2. Percentage of responses to the second section of the questionnaire.
Figure 3. Percentage of responses to the third section of the questionnaire.
Figure 4. Percentage of responses to the fourth section of the questionnaire.
Figure 5. Percentage of responses to the fifth section of the questionnaire.
Figure 6. Percentage of responses to the sixth section of the questionnaire.
Table 1. Demographic characteristics of the study sample.
Table 2. Patient responses to the first section of the questionnaire-General satisfaction level (frequency distributions).
Table 3. Patient responses to the second section of the questionnaire-Opinions about doctors (frequency distributions).
Table 4. Patient responses to the third section of the questionnaire-Opinions about appointments (frequency distributions).
Table 5. Patient responses to the fourth section of the questionnaire-Opinions about accessibility (frequency distributions).
Table 6. Patient responses to the fifth section of the questionnaire-Opinions about nurses (frequency distributions).
Table 7. Patient responses to the sixth section of the questionnaire-Opinions about facilities (frequency distributions).
Table 9. The overall satisfaction score of patients, total and comparative, by demographic characteristics. Kruskal-Wallis tests were used; the significance level was set at p = 0.05.
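As a sketch of the comparative analysis summarized in Table 9, per-patient satisfaction scores can be compared across demographic groups with a Kruskal-Wallis test; the group labels and score values below are made up for illustration, not the study's data.

```python
from scipy.stats import kruskal

# Illustrative per-patient overall scores (1 = favorable) grouped by age band.
scores_by_age = {
    "18-34": [1.4, 1.8, 2.1, 1.6],
    "35-49": [1.9, 2.4, 2.0, 2.2],
    "50-70": [1.5, 1.7, 1.3, 1.9],
}

# Nonparametric comparison of the score distributions across groups.
stat, p = kruskal(*scores_by_age.values())
print(f"H = {stat:.2f}, p = {p:.3f}")  # compare against the 0.05 level
```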
Borders and boundaries in the lives of migrant agricultural workers
In 2018, roughly 72% of the 69,775 temporary migrant agricultural labourers arriving in Canada participated in the Seasonal Agricultural Workers Program (SAWP). Despite having legal status in Canada, these individuals are often systematically excluded from community life and face barriers when accessing health and social services. SAWP workers' exclusion from many public spaces and their incomplete access to the benefits of Canadian citizenship or residency provide us a unique opportunity to examine social and political mechanisms that construct (in)eligibility for health and protection in society. As individuals seeking to care for the sick and most marginalized, nurses need to understand how migrant agricultural workers are positioned and imagined in society. We argue that the structural exclusion faced by this population can be uncovered by examining: (1) border politics that inscribe inferior status onto migrant agricultural workers; (2) nation-state borders that promote racialized surveillance; and (3) everyday normalization of exclusionary public service practices. We discuss how awareness of these contextual factors can be mobilized by nurses to work towards a more equitable health services approach for this population.
Borders and Boundaries in the Lives of Migrant Agricultural Workers
Temporary migrant agricultural workers are a major labour force in Canada, doing the vast majority of farm work in the country (Canadian Agricultural Human Resources Council, 2016). This group primarily works under the Seasonal Agricultural Workers Program (SAWP). In 2018, 50,550 of the 69,775 temporary migrant agricultural labourers arriving in Canada participated in the SAWP (Government of Canada, n.d.). The federal government first piloted this program in 1966 in Ontario to address what were perceived as temporary labour shortages in the agricultural sector. Administered through bi-lateral agreements between Canada, Mexico, and 11 Caribbean nations, the SAWP has expanded to all provinces in Canada (Employment and Social Development Canada [ESDC], 2019). Each year, workers migrate to and from Canada, some remaining in the SAWP for decades.
For these thousands of workers, the degree to which they feel integrated or isolated from the wider community greatly impacts their quality of life. Yet they are often excluded from public spaces and their access to public services, including health services, is limited by a variety of complex factors. These include direct and indirect coercion from bosses and government officials, geographic and linguistic isolation, exclusionary policies, lack of access to transportation, lack of social networks, and experiences of racism (Caxaj & Diaz, 2018;Hennebry, 2012;Hjalmarson et al., 2015;Robillard et al., 2018). SAWP workers' exclusion from many community spaces and their incomplete access to the benefits of Canadian citizenship or residency require nurses and other healthcare professionals to consider the unique social and political mechanisms that threaten the health and wellbeing of this group. For nurses, understanding the structural and socio-cultural forces that create inequities in the lives of migrant agricultural workers becomes a foundation for both equitable practices and policy advocacy (Caxaj & Plamondon, 2020).
In this commentary, we explore inequities as social and political mechanisms that make visible the boundaries of who is written inside and outside of 'community.' Specifically, we discuss three key processes of marginalization faced by migrant agricultural workers: (1) borders that inscribe inferior status onto migrant agricultural workers; (2) borders that justify racialized surveillance of this population; and (3) exclusion reinforced through everyday health and social care practices. By reflecting on our own scholarship and experiences supporting the health of migrant agricultural workers, we consider how these workers are positioned in our communities, and furthermore, how the conditions faced by this group construct them as outsiders. Lastly, we suggest some strategies for nurses who want to challenge forces of marginalization through political action and provide equity-oriented care to migrant agricultural workers.
Nation-state Borders Inscribe Inferior Status
Borders are not only fixed, invisible lines that divide the world into nation-states, but also ideological constructs that create and reinforce differences. Borders exist both at national territorial boundaries and within the geographic and political centers of nations (Balibar & Swenson, 2004). Where borders create an 'us' and 'them' conceptualization, they then become a justification to keep 'them' out and 'us' in, however these groupings are defined. As restrictions on human movements, borders have historically had limited impact on preventing people from migrating to places where more opportunities exist. Indeed, more people migrate today than ever before. In 1990, the International Organization for Migration (IOM) (2018) estimated that there were just over 152 million international migrants. By 2017, that number had jumped by more than 100 million to an estimated 257 million (IOM, 2018). It is notable that this shift has occurred in the context of increasing global restrictions on movement across borders. Rather than keeping people out, borders work to control people's relationship to the state and limit their ability to claim rights and entitlements once they are within the boundaries of the nation-state (Anderson et al., 2009). In the case of migrant agricultural workers and other temporary foreign workers, their relationship with the state is an incomplete one: the act of crossing the border into an industrialized nation (albeit legally) solidifies their positions as only partially included in society. While many migrant farmworkers spend more time in Canada than in their 'home' countries, they are barred from becoming citizens with full access to legal, political, and social rights. They are "permanently temporary" (Hennebry, 2012). In short, though borders are porous for the 'legal' migrant farmworker, these borders also present as rigid obstacles to these individuals' freedom by creating a large pool of workers that are disposable, precarious, and 'flexible' (see Walia, 2013; Faraday, 2012; McLaughlin & Hennebry, 2013). In this way, borders are better understood as a structural means of creating inequitable status rather than a mechanism that effectively restricts flows of migration.
Workers' limited access to health and social services keeps them on the periphery in many ways (Sargeant & Tucker, 2010). Constraints on the assertion of their labour, housing, and human rights (Moyce & Schenker, 2018) further narrow their everyday world by forcing workers to focus on their physical, economic, or emotional survival. For instance, lack of enforcement of labour and housing standards makes it difficult for workers to refuse unsafe work, report injuries sustained on the job, or demand appropriate safety equipment or training (Hennebry, 2012; Hennebry & Preibisch, 2012). Workers' day-to-day attempts to stay safe and physically recover from exploitative conditions will also limit their ability to feel connected to the wider community.
In addition to legally subordinating a large group of people within the nation-state, borders are also responsible for creating ideas about who belongs and who does not. Since its inception, the category of 'migrant worker' marked some migrants as deserving of citizenship and full and permanent inclusion within the state, and others as undesirable for permanent inclusion, justifying differential rights for each group (Sharma, 2008). This inequity, while not explicitly labeled in these terms, nonetheless contributes to the creation of racialized and class hierarchies of workers. Migrants who fall into the categories of permanent residents or citizens have high levels of formal education, are proficient or fluent in English, practice a profession deemed by the government as 'high-skilled', and/or have a significant amount of capital to start a business venture or invest in the Canadian economy. Disproportionately, migrants who meet these requirements come from countries in the Global North or the wealthiest families in the Global South (Bhuyan et al., 2017;Costigan et al., 2016). On the other hand, migrants deemed suitable for temporary status only are those with lower levels of education or whose qualifications are not recognized in Canada. Many speak little to no English and come from countries in the Global South whose economies have been devastated by neoliberal economic policies (Lewis et al., 2015).
Nation-state Borders Promote Racialized Surveillance and Limit Access to Public Services
When people move away from their place of birth, especially poor people moving across national boundaries into wealthier regions, they are often framed as dangerous invaders. These xenophobic perspectives ignore the complex histories of human migration that span thousands of years (Goldberg, 2002). Where migrants are conceptualized as a risk or a danger to others, the state and the public are portrayed as victims, particularly in the case of undocumented migrants. These perceptions manifest in structural violence when they are used to justify increasingly restrictive immigration policies as well as the widespread criminalization and incarceration of migrants (Walia, 2013).
Borders as a construct of inequity can be seen in how criminalization extends to 'legal' migrants as well, including temporary migrant farmworkers. The conflation of temporary migrant and 'illegal' migrant, which are both state-produced categories, has long been part of popular discourse. In addition to being racially profiled and discriminated against, migrant agricultural workers may be suspected or assumed to be 'illegal,' and subjected to cultural narratives that paint them as deviants (Anderson, 2010; Forcier & Doufour, 2016). A clear example of this surveillance occurred in 2016 when four migrant agricultural workers left a farm in British Columbia where they had worked. Despite the workers' legal right to refuse work, their departure from the farm was treated as a criminal activity. Some local media published their names and passport numbers, and in their media release, the police advised the community to stay vigilant for signs of their whereabouts (Handschuh, 2015; Sthankiya, 2015). This example also suggests that implicit discourses of Whiteness and nationhood converge with private interests to extend the surveillance and policing of migrant workers, whose bodies are made focal points of racialization in public spaces. Other delineations of 'difference' may include depictions of workers as passive objects, outsiders, or 'charity cases' (Aguiar et al., 2005; Caxaj & Plamondon, 2020; Inouye, 2012), which can serve to justify workers' limited access to public services. Like undocumented workers, migrant agricultural workers in Canada also live with a fear of deportation. Despite enjoying legal, albeit temporary, status in Canada, SAWP workers have no easy pathway to permanent residency or citizenship and are liable to have their employment terminated at any time for no reason and with no appeals process (Basok et al., 2014; Faraday, 2012). As SAWP participants' work permits are incredibly restricted, the loss of employment often means immediate mandatory return to one's country of origin. Consular officials and employers take advantage of migrants' precarity by reminding them of their disposability and explicitly threatening to send them home (Cohen & Caxaj, 2018). These threats are not hollow, as many migrants each year find out. Orkin et al. (2014) were able to gain access to detailed records from the Ontario growers association, Foreign Agricultural Resource Management Services (FARMS). These records illuminated details of medical repatriation (involuntary return to 'home' countries due to an illness or injury) in Ontario between 2001 and 2011, showing that more than 780 migrant farmworkers were medically repatriated during this period. The potential of deportation is sufficient to ensure even these migrant workers, who are legally employed in Canada, feel disposable and remain compliant (Basok et al., 2014). Legislated processes of deportation of those legally admitted to Canada for temporary labour demonstrate that inequity is an active rather than a passive social process.
Since workers' job status is insecure, they may assess the risk of officially reporting a concern to be more dangerous than enduring workplace abuse or mistreatment (Hennebry, 2012; McLaughlin, 2007). In fact, even though hundreds of complaints are documented by non-government agencies, migrant agricultural workers rarely make official complaints because of the systemic onus put on workers to navigate the system and absorb the risks of reporting (Faraday, 2012). Key barriers include a lack of oversight mechanisms and an over-reliance on worker-initiated complaints, workers' geographic isolation, transportation barriers, language barriers and limited access to interpreters, limited knowledge of, or ability to navigate, services, and poor networks of support (Robillard et al., 2018). The cumulative impact of these regulatory and contextual barriers is a ritualized and systemic lack of access to medical care and legal protections for this group. These conditions consequently fortify the notion that migrant agricultural workers belong only to the periphery, despite being formally entitled to certain rights 'on paper'.
Exclusion Reinforced through Everyday Practices
Migrant agricultural workers' experiences of segregation and isolation are complex and multi-faceted. Several studies indicate that migrant agricultural workers are largely restricted to the farm due to geographic, linguistic, and workplace restrictions (Horgan & Liinamaa, 2017). And workers' mobility outside of the farm, whether to access health care services or to buy groceries, is typically mediated by, and dependent upon, their employers (Hennebry et al., 2016). Furthermore, migrant agricultural workers may endure long hours without breaks, stretches of time without days off, as well as expectations that they must always be 'on call.' This workplace climate, in combination with a permit tied to a particular employer, has the effect of restricting workers to their employer's property while placing their health at risk (Sikka, 2013; Strauss & McGrath, 2017). Curfews and other 'house rules,' commonly posted in migrant agricultural workers' lodgings, pose further barriers for workers to access both formal and informal health supports (Cohen & Caxaj, 2018; Perry, 2018). Posted rules may include instructions banning workers from having visitors, requiring them to be locked into their residence after a certain time of night, and prohibiting them from consuming alcohol. Our current research parallels prior findings from Ontario, demonstrating that these types of house rules tend to be stricter on farms that hire migrant women (Cohen & Caxaj, 2018; Encalada Grez, 2011). These rules limit workers' mobility and connection to the wider community, and perhaps most concerning, impinge on this group's ability to access health services to which they would otherwise have a right.
The social and political conditions that maintain migrant agricultural workers on the margins are also self-perpetuating in that they prevent workers' interests and needs from being considered. For instance, we have seen public deliberations related to local food security initiatives, rural housing governance, and the location of community health clinics in contexts where migrant agricultural workers' voices are notably absent. In addition, our engagement with health and social care providers has indicated that principles of confidentiality are continuously violated by a systematic lack of third-party translators and a default inclusion of employers' preferences in workers' care plans and recreational activities. So, the exclusion and marginalization of migrant agricultural workers is both normalized and reinforced by everyday practices in social and healthcare settings. In healthcare settings, this exclusion can be enacted even when migrant agricultural workers are physically present by prioritizing the preferences and involvement of those in positions of power over those of migrant agricultural workers who are precariously positioned.
Migrant agricultural workers' differential access to health and social services reveals hidden eligibility criteria that reinforce their partial and precarious status in society. In large part, this is most evident through the power given to employers to mediate migrant agricultural workers' access to basic services and amenities. For example, workers' access to the grocery store, medical attention, and workplace training are all formally recognized as responsibilities of the employer (Reid-Musson, 2017), which reinforces the employer's position of power as gatekeeper (McLaughlin, 2007; Reid-Musson, 2017). Further complicating this dynamic, conflicts of interest arise when an employer is expected to act as a service liaison, particularly in terms of weighing risks to productivity and profit against the wellbeing and entitlements of their employees. For instance, migrant agricultural workers have reported that employers encouraged them to work through an injury or illness, not to report a workplace incident, to use over-the-counter treatments instead of seeking medical attention, or to postpone seeking medical help until a less busy time of the season (Caxaj & Cohen, 2019). Although migrant agricultural workers are technically eligible for provincial health coverage, national pension plans, unemployment insurance, and other government programs, very few of them are able to secure access to these benefits (Hennebry et al., 2016; Robillard et al., 2018). In Ontario, migrant agricultural workers rely on their bosses to help them register in the provincial health care program, which in practice has meant that very few workers are actually enrolled (Robillard et al., 2018). These realities illustrate that both the employer and the state are active gatekeepers in maintaining an elusive yet effectively restrictive boundary that excludes migrant agricultural workers from accessing health and social services.
Implications for Nursing
While we developed this manuscript pre-Coronavirus disease (COVID-19), the marginalizing forces that we have highlighted above have ultimately limited the possibility of a cohesive and dignified public health response for migrant workers. Consider Ontario Premier Doug Ford's public statement that urged migrant workers to get tested (Jeffords & Jones, 2020) and suggested workers 'hid' from testing (Jeffords, 2020), oblivious to workers' limited access to transportation and unmediated access to services. And despite calls by experts (Haley et al., 2020; Weiler et al., 2020), all levels of government and various health units have failed to systematically employ effective outreach and communication methods that would ensure adequate monitoring and follow-up for workers who are exposed to COVID-19. Often, public health units have screened migrant agricultural workers for symptoms or assessed housing conditions in the presence of their boss or supervisor. Ultimately, the circumstances resulting in the deaths of three migrant agricultural workers infected by COVID-19, Juan Lopez Chaparro, Rogelio Munoz Santos, and Bonifacio Eugenio Romero, suggest that these deaths were preventable had more investment of time and resources been put into protecting this vulnerable workforce (Migrant Worker Health Expert Working Group, 2020).
Nurses have a vital role in questioning the way that health services have been traditionally delivered to this population without addressing the unique needs and challenges faced by this group. For instance, nurses can advocate for programming that helps build rapport and relationships with this population, such as face-to-face translation that enables follow-up care and two-way communication, critical during both individual and public health emergencies. Nurses can also develop proactive public policies and practices that anticipate and counter the discriminatory and xenophobic reactions towards migrant worker populations that have been seen across the country because of COVID-19.
Nurses have an important role as advocates by questioning the racist logics that differentiate between 'deserving' and 'undeserving' immigrants, noting the ways that legal migration does not guarantee adequate access to health services. Even being aware that migrant agricultural workers are entitled to the same workplace health and safety protections and benefits that they pay into can help foster a different ethic of care when working with migrant agricultural workers. Given the inherent interdisciplinarity of our role, nurses are uniquely equipped to consider the holistic and multi-faceted needs of this population and to work to develop public health responses and models of care that are better suited for migrant agricultural workers. In select regions in Ontario and other provinces, nurses are already involved in the provision of targeted care for this population as well as primary care programming for migrant agricultural worker programs. Yet overall, health services for this population are still largely inaccessible and rife with barriers, some of which stem from health care professionals' limited understanding of the needs of this group (McLaughlin & Tew, 2018). Across healthcare settings, nurses can help ensure that service delivery be unmediated, non-coercive, accessible and confidential (Caxaj & Plamondon, 2020).
Looking to the long term, the current context highlights the ways that migrant agricultural workers have often not been integrated into our mandate of nursing care. While nurse scholars, particularly in the US context, have highlighted the unique risks that this group faces for certain communicable diseases, the analysis of the root causes underlying these risks has often been limited (see for example, Albarran & Nyamathi, 2011; Moyce et al., 2019). The unintentional consequence of this type of scholarship has been to shift the blame for health disparities on to this population. While some researchers have identified relevant determinants of health for migrant agricultural workers, limited attention has been given to the structural/historical forces at play, such as border politics, that are determining more proximal risk factors (e.g., see Ballestas, 2008, on health-seeking, and Kilanowski, 2013, on health education). Key to effectively delivering care for this population is to understand the ways in which this group may be precariously positioned because of social isolation, the nature of their employment, or their status in Canada. Concepts such as structural violence (Farmer et al., 2009), intersectionality (Crenshaw, 2017), and marginalization, championed and adopted by many critical nurse scholars (see for example, Varcoe et al., 2014; Hall, 1999), are key to bridging an examination of macro-level forces shaping migrant workers' day-to-day lives with an application of equity-oriented care in both policy and practice (see for example, Holmes, 2013; Robillard et al., 2018; Salami et al., 2018). Given that migrant agricultural workers represent a displaced population, many of whom are Indigenous (Asad & Hwang, 2019), applying a cultural safety lens (Papps & Ramsden, 1996) in caring for this population may also help nurses think through assumptions, privileged positions, and systemic exclusion that hinder their health trajectories (Browne et al., 2009). Our paper confirms the importance of these concepts in helping nurses provide more relevant and responsive care to migrant agricultural workers. It invites greater attention to border politics, temporary status, and precarious labour among critical nursing scholars, in order to work towards a more comprehensive notion of inclusion in our advocacy and mandate for health and social justice. Ultimately, nurses have a mandate to protect and advocate for patients who have been under-served and marginalized. To advocate effectively, nurses must understand the ways that border politics and so-called routine practices create health inequities that uniquely affect migrant agricultural workers. With this awareness, nurses can work with migrant agricultural workers to challenge the complicity of the healthcare system with these marginalizing forces.
Conclusion
In this paper, we examined the ways migrant agricultural workers are 'written out' of our communities. This knowledge provides a foundation for nurses to understand the structured inequities experienced by this population. We outlined three key sociopolitical forces that marginalize migrant agricultural workers and undermine their ability to access services. First, we discussed how borders create racialized hierarchies of workers who are given only partial rights. Second, we discussed how nation-state borders, particularly the ideologies underpinning them, help to frame migrant agricultural workers as potentially deviant, criminal, and requiring surveillance. Finally, we discussed how exclusion is normalized through everyday health and social practices that are often coercively mediated by the employers of migrant workers and the nation-state. Each of these forces ultimately exacerbates migrant agricultural workers' marginalization, limits their ability to participate in the wider community, and poses significant obstacles for them to access health and social services. Nurses can and must play a role in challenging these taken-for-granted practices that undermine migrant workers' access to services, rights to protections, and ability to meaningfully participate in society.
The impact of an ICME on the Jovian X‐ray aurora
Abstract We report the first Jupiter X‐ray observations planned to coincide with an interplanetary coronal mass ejection (ICME). At the predicted ICME arrival time, we observed a factor of ∼8 enhancement in Jupiter's X‐ray aurora. Within 1.5 h of this enhancement, intense bursts of non‐Io decametric radio emission occurred. Spatial, spectral, and temporal characteristics also varied between ICME arrival and another X‐ray observation two days later. Gladstone et al. (2002) discovered the polar X‐ray hot spot and found it pulsed with 45 min quasiperiodicity. During the ICME arrival, the hot spot expanded and exhibited two periods: 26 min periodicity from sulfur ions and 12 min periodicity from a mixture of carbon/sulfur and oxygen ions. After the ICME, the dominant period became 42 min. By comparing Vogt et al. (2011) Jovian mapping models with spectral analysis, we found that during ICME arrival at least two distinct ion populations, from Jupiter's dayside, produced the X‐ray aurora. Auroras mapping to magnetospheric field lines between 50 and 70 R J were dominated by emission from precipitating sulfur ions (S7+,…,14+). Emissions mapping to closed field lines between 70 and 120 R J and to open field lines were generated by a mixture of precipitating oxygen (O7+,8+) and sulfur/carbon ions, possibly implying some solar wind precipitation. We suggest that the best explanation for the X‐ray hot spot is pulsed dayside reconnection perturbing magnetospheric downward currents, as proposed by Bunce et al. (2004). The auroral enhancement has different spectral, spatial, and temporal characteristics to the hot spot. By analyzing these characteristics and coincident radio emissions, we propose that the enhancement is driven directly by the ICME through Jovian magnetosphere compression and/or a large‐scale dayside reconnection event.
Introduction
The Einstein Observatory first permitted the identification of Jupiter's X-ray emission during the 1980s [Metzger et al., 1983]. Since then, the Röntgen satellite, Chandra, and XMM-Newton X-ray observatories have provided the opportunity to study the spatial, spectral, and temporal characteristics of this X-ray emission in more detail [Waite et al., 1994; Gladstone et al., 1998, 2002; Elsner et al., 2005; Branduardi-Raymont et al., 2004, 2007a, 2007b; Bhardwaj et al., 2005, 2006]. Jupiter's X-ray emission consists of two components: an equatorial/disk component and a high-latitude north and south auroral component [Metzger et al., 1983; Waite et al., 1994]. The disk emission is found to be dominated by elastic and fluorescent scattering of solar X-ray photons in the upper atmosphere, meaning that changes in the Sun's X-ray emission induce changes in Jupiter's disk emission [Maurellis et al., 2000; Branduardi-Raymont et al., 2007b; Bhardwaj et al., 2005, 2006; Cravens et al., 2006]. The majority of the auroral X-ray emission above ∼60∘ latitude is thought to be due to charge exchange (CX) interactions between precipitating ions and atmospheric neutral hydrogen molecules [Waite et al., 1994; Cravens et al., 1995, 2003; Cravens and Ozak, 2012]. The origin of the ions, however, has been a matter of debate; they could either come from the magnetosphere or from the solar wind. In this work we explore this question. Bunce et al. [2004] proposed that pulsed dayside reconnection could produce the polar X-ray hot spot, predicting that […] intensity would be from sulfur in the outer magnetosphere. The authors also indicated that pulsed reconnection could explain the period observed by Gladstone et al. [2002], suggesting that a 30-50 min period would be expected from this process. Bonfond et al. [2011] suggest that the quasiperiodic UV flaring with timescales of 2-3 min found poleward of the main oval, in a region close to the X-ray hot spot, may also be caused by pulsed dayside reconnection.
When investigating the Jovian X-ray auroral spectra, Branduardi-Raymont et al. [2004, 2007b] and Elsner et al. [2005] showed a slight preference for sulfur and therefore a magnetospheric origin, but Elsner et al. [2005] concluded that they were unable to rule out carbon. Further modeling [Hui et al., 2009; Kharchenko et al., 2006, 2008; Ozak et al., 2010, 2013] has demonstrated that a good fit to the spectra can be found with a combination of 1-2 MeV/amu oxygen and sulfur ion lines. Hui et al. [2010] also found that the majority of spectra could be well fitted without carbon lines, although one set of spectra had a better fit with a carbon-oxygen model. They also noted significant variation between observation dates and between northern and southern auroras. This north-south pole variation may be expected because Jupiter's 9.6∘ dipole tilt ensures that the viewing geometry of one pole is always significantly impaired relative to the other. This means that additional spatial features (and the spectral lines associated with them) can be viewed more clearly for one pole than the other. Additionally, the magnetic field footprints in the north pole feature a significant kink structure between 90∘ and 150∘ S3 longitude [Pallier and Prangé, 2001], which is well fitted by a magnetic anomaly; this is absent from the south pole, which may relate to its more diffuse X-ray emission.
Hard X-Rays and the Main Oval
Equatorward of the hot spot is the UV main oval. By comparing Chandra auroral X-ray events, Branduardi-Raymont et al. [2008] showed that hard X-rays (energies greater than 2 keV) map well to the main UV oval. This emission is found to be well fitted by bremsstrahlung radiation from precipitating electrons [Branduardi-Raymont et al., 2007b], implying a spatial coincidence of the X-ray and UV-emitting electron populations.
The main oval is well evidenced as mapping to 20-30 R J , where upward field-aligned currents in the corotation breakdown region could generate downward precipitation of 20-100 keV electrons [Hill, 2001; Cowley and Bunce, 2001; Nichols and Cowley, 2004]. This region is significantly separated from the calculated 63-92 R J standoff distance, and thus, emission might not be expected to be directly influenced by the solar wind. However, in contrast with this apparent isolation, Branduardi-Raymont et al. [2007b] note that in 2003 XMM-Newton observations showed that both hard and soft X-ray emissions varied at a time of increased solar activity [Branduardi-Raymont et al., 2004, 2007b]. UV main emissions connected with the hard X-ray emission are also known to be modulated by the solar wind [Pryor et al., 2005; Nichols et al., 2007; Clarke et al., 2009; Nichols et al., 2009].
Connecting Solar Wind and Auroral Variations
While a southward turning interplanetary magnetic field and the pressure pulse induced by an interplanetary coronal mass ejection (ICME) are known to produce auroral brightening at Earth [Elphinstone et al., 1996; Chua et al., 2001], the impact on Jupiter's larger magnetosphere is not well understood. There are two key challenges associated with examining relationships between solar wind conditions and the Jovian aurora. First, the timescales for the propagation of a solar wind-induced shock through the Jovian magnetosphere are not well understood. Second, without in situ measurements of the solar wind conditions close to Jupiter, we rely on propagation models to estimate the solar wind conditions upstream of Jupiter. The propagation of the solar wind beyond the inner heliosphere becomes increasingly complex, meaning that outside of certain limiting conditions (e.g., Jupiter in opposition) the uncertainty associated with propagation models can be on the order of days, making it difficult to precisely correlate solar activity with auroral intensification. However, Gurnett et al. [2002] found that Jovian hectometric radio emission bursts (0.3-3 MHz) coincided with maxima in solar wind density. Prangé et al. [2004] and Lamy et al. [2012] have used these enhancements in radio emission to trace the progress of ICME-induced shocks through the solar system. Further, Echer et al. [2010] and Hess et al. [2012, 2014] found that non-Io decametric radio emission bursts are correlated with periods of increased solar wind dynamic pressure.
Jupiter's auroral variations in response to changes in solar wind pressure are well catalogued at other wavelengths [Barrow et al., 1986; Ladreiter and Leblanc, 1989; Kaiser, 1993; Prangé et al., 1993; Baron et al., 1996; Zarka, 1998; Pryor et al., 2005; Nichols et al., 2007; Clarke et al., 2009; Nichols et al., 2009; Hess et al., 2012, 2014], but X-ray emission is yet to be investigated in this manner. In particular, there have been few previous opportunities to connect X-ray observations of high-latitude precipitating ions with solar wind conditions. There has also been limited analysis of how the spatial morphology of X-ray features varies over time. In the current work, we analyze auroral spatial features, connect them with spectral features, and compare their morphology and evolution over time to better understand how solar wind conditions and local time magnetosphere variation might drive them.
In section 2, we consider the propagation of an ICME to Jupiter and describe how two Chandra X-ray observations and radio measurements were timed to coincide with the expected arrival time of the ICME at the planet. In section 3, we present polar projections of the X-ray events, identifying changes in their spatial distribution between the observations. In section 4, we compare the auroral lightcurves for each observation and find connections between a bright X-ray auroral enhancement and decametric radio emission thought to be induced by the ICME. In section 5, we compare the spectra for the hot spot and the auroral enhancement, identifying changes between the observations, which are possibly induced by the ICME. We then compare the X-ray polar projections for specific energy ranges (section 6), based on the different precipitating particle species generating the emission. For instance, we provide polar projections for X-ray emission only from oxygen ions, in order to compare this with other ion species and electrons. By doing this, we find that there is an X-ray auroral region closer to the UV main oval that is dominated by emission from high charge-state ions of sulfur or carbon, while poleward of this the population is more of a mixture of high charge-state oxygen and high charge-state carbon/sulfur ions. In section 7, we bin the X-ray events based on the timing of specific subsolar longitudes (noon times) and use these to identify how auroral developments relate to the evolution of the magnetosphere. Using the Vogt et al. [2011] model, we map the magnetospheric source and local time dependencies of the hot spot region and the auroral enhancement region. This indicates to what extent X-ray emission may be driven by the opening/closing of magnetic field lines, the location of the Sun relative to Jupiter's magnetosphere, and the magnetosphere's auroral footprints. In section 8, we investigate periodicities in the X-ray emission and their relationships to specific ion species. In sections 9-11, we summarize results, provide discussion, and draw conclusions.
October 2011 Jupiter Observations
The two Chandra X-ray observations reported here were undertaken to attempt to establish if, and to what extent, the solar wind drives Jupiter's X-ray aurora. Having previously observed variations in X-ray emission possibly associated with increased solar activity [Branduardi-Raymont et al., 2007b], we expected that an extreme solar event such as an ICME would provide the opportunity to better understand this connection. To minimize the uncertainty associated with models that propagate the solar wind conditions to Jupiter and to maximize the X-ray flux and spatial resolution, it is important that Jupiter is observed close to opposition, with the smallest possible Earth-Sun-Jupiter angle. Opposition occurred in October 2011, so a Chandra Target of Opportunity (TOO) proposal was submitted to observe Jupiter at the time when an ICME was predicted to arrive.
We used the 1.5-D MHD mSWiM model [Zieger and Hansen, 2008; http://mswim.engin.umich.edu/] to determine the solar wind parameters at Jupiter. This allowed us to propagate solar wind measurements from 1 AU to Jupiter. Inspection of the solar wind density, velocity, and interplanetary magnetic field (IMF) timelines (Figure 1) indicated the predicted arrival of an ICME at Jupiter over 2 and 3 October 2011, day of year (DOY) 275-276. At this time, the Earth-Sun-Jupiter angle was ∼25∘ and Jupiter was ∼4.07 AU from the Earth, meaning that the propagation model offered a relatively low uncertainty of 10-15 h and that Jupiter was within the angular extent of the ICME [Robbrecht et al., 2009a, 2009b]. To account for this uncertainty, we smooth the mSWiM propagations over a 30 h moving average.
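A minimal sketch of this kind of smoothing is shown below, assuming hourly samples so that a 30 h window corresponds to 30 points; the synthetic density series is illustrative only, not mSWiM output.

```python
import numpy as np
import pandas as pd

# Illustrative hourly solar wind density series (cm^-3); not real mSWiM output.
hours = pd.date_range("2011-09-29", periods=240, freq="h")
density = pd.Series(
    0.05 + 0.16 * np.exp(-(((np.arange(240) - 150) / 30.0) ** 2)),
    index=hours,
)

# 30 h centered moving average to absorb the 10-15 h propagation uncertainty.
smoothed = density.rolling(window=30, center=True, min_periods=1).mean()
print(smoothed.loc["2011-10-02":"2011-10-03"].max())
```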
The most accurate parameter is solar wind velocity, followed by density and the tangential component of the magnetic field (B T ) [Zieger and Hansen, 2008], which points toward the cross product of the solar rotation vector and the direction radially away from the Sun toward Jupiter. Inspecting the mSWiM model propagations of the solar wind reveals an increase in density from 0.03 cm−3 on DOY 274.5 to a peak of 0.21 cm−3 on DOY 276.75 (Figure 1a). Density then decreases from this peak back to a minimum of 0.015 cm−3 on DOY 279.0. The median densities measured upstream of Jupiter by Pioneer 11, Voyager 1, and Voyager 2 were 0.13, 0.14, and 0.15 cm−3, respectively, indicating that the mSWiM averaged solar wind density is above the median value [Jackman and Arridge, 2011] (see supporting information for these distributions). There is also a modest increase in solar wind velocity during this time from 490 km/s on DOY 274.5 to 500 km/s on DOY 276.0 (Figure 1b). This then decreases gradually to 450 km/s by DOY 279.0. These solar wind velocities are similar to the median velocity upstream of Jupiter measured by Pioneer 11 (493 km/s) but represent an increase over the Voyager 1 and 2 median velocities (439 and 441 km/s, respectively). The mSWiM-predicted density and velocity are much closer to the means from Pioneer 11, Voyager 1, and Voyager 2 upstream measurements (0.26, 0.23, and 0.25 cm−3 and 497, 446, and 448 km/s, respectively), suggesting that the variation in solar wind conditions represents a more modest ICME.
The B T magnetic field plot appears to show a rotation in the solar wind magnetic field at this time, with the field oriented in the positive B T direction from DOY 274.5 to DOY 277 and a negative B T direction from DOY 277 to 280, before returning to a positive orientation again (Figure 1c). This variation in IMF along with the simultaneous increase in density and velocity is consistent with an ICME with flux rope-like interior structure [Hanlon et al., 2004].
We also note that the mSWiM model shows that a much stronger ICME was incident at Jupiter from DOY 268 to 272 and the solar wind can be seen to be returning to non-ICME conditions from DOY 272.5. The arrival of this preceding ICME is also accompanied by bursts of Jovian radio emission [Lamy et al., 2012]. It is possible that this preceding ICME may also have driven changes in the Jovian magnetosphere, which are still observable in the X-ray observations reported here.
Chandra X-Ray Observations
Based on the predicted arrival of the ICME at Jupiter, two TOO observations were made by the Chandra X-ray Observatory Advanced CCD Imaging Spectrometer (ACIS). Each observation lasted 11 h, providing coverage of at least one full Jupiter rotation (∼9 h 55 min). Two observations separated by a couple of days were requested in order to optimize our chances to observe Jupiter during the ICME impact and during relaxed conditions. Both observations were made with the back-illuminated (S3) CCD, which has the highest sensitivity to low-energy X-rays. To simplify the analysis, the observatory was oriented so that the moving image of Jupiter remained on the same output node of the CCD throughout each observation. The first observation was timed to coincide with the predicted arrival of the ICME at Jupiter and lasted from ∼21:55 on 2 October to 09:30 on 3 October 2011 (day of year 275.9-276.4). The second observation ran from 14:35 on 4 October until 02:20 on 5 October 2011 (day of year 277.6-278.1). Figure 1 shows the times of these observations between red (first observation) and blue (second observation) dotted lines plotted onto the mSWiM solar wind propagation diagram. These suggest that the density peak occurred toward the end of the first observation. The second observation occurred when solar wind density was returning to conditions outside of an ICME-induced shock. Figure 1c also shows that the tangential component of the solar wind magnetic field was aligned in an opposite direction for the two observations. However, we note that the 10-15 h uncertainty could lead features to be shifted into or out of the observations. The ability of ACIS to detect soft X-rays from optically bright, extended targets is hampered by substantial transmission through its optical blocking filters (OBFs) at wavelengths between 0.8 and 0.9 μm. Jupiter at opposition fills some 6000 pixels of an ACIS CCD. In the 1999-2000 observations, each of these pixels received an average charge equivalent to a 140 eV X-ray. The value has gradually decreased since then, due most probably to contamination buildup on the OBFs. By November 2014, it had fallen to ∼70 eV/pixel, as estimated from observations of Betelgeuse.
To distinguish X-rays from charged particles passing through the CCDs, an on-board digital filter scans the charge distribution in each CCD image, seeking local maxima surrounded by charge patterns peculiar to X-rays. The extra optical signal turns all genuine X-ray events into nonevents, which are never reported to the ground. The solution, outlined in Elsner et al. [2005], has been to (a) take CCD bias frames with Jupiter out of the field of view and (b) increase the digital filter's threshold levels by 140 eV, allowing the software to compensate for the optical signal. During subsequent ground processing, the 5 × 5 block of pixels reported for each event candidate is used to subtract the background charge, including the optical signal.
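The ground-processing step can be pictured with a small sketch: find local maxima in a CCD frame and subtract a per-event background estimated from the surrounding 5 × 5 pixel block. The threshold value and median background estimator here are illustrative assumptions, not the actual Chandra pipeline.

```python
import numpy as np

def extract_events(frame, threshold=140.0):
    """Toy event finder: local maxima above a threshold, background-corrected
    with the median of the surrounding 5 x 5 pixel block."""
    events = []
    for y in range(2, frame.shape[0] - 2):
        for x in range(2, frame.shape[1] - 2):
            center = frame[y, x]
            # Must be a local maximum over its 3 x 3 neighbourhood.
            if center < frame[y - 1:y + 2, x - 1:x + 2].max():
                continue
            # Median of the 5 x 5 block stands in for the optical background.
            background = np.median(frame[y - 2:y + 3, x - 2:x + 3])
            if center - background > threshold:
                events.append((x, y, center - background))
    return events

rng = np.random.default_rng(0)
frame = rng.normal(140.0, 5.0, size=(64, 64))  # ~140 eV/pixel optical signal
frame[30, 30] += 400.0                         # one injected X-ray-like event
print(extract_events(frame))
```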
If the optical contamination were exactly 140 eV/pixel, the energy of an X-ray could be recovered without any additional systematic error. In practice, Jupiter exhibits strong limb darkening in the near infrared, and most Jovian X-ray emission comes from the polar regions, which are observed close to the limb. Also, the optical point spread function of the Chandra mirrors is strongly diffracted by the intermirror gaps, adding to the limb darkening. The result is that some low-energy X-rays, especially those whose charge is split between pixels, are still filtered out. The loss incurred has been estimated by reprocessing a group of eight ACIS observations (a total of 104 ks) of the supernova remnant E0102-72.3, an extended source similar in angular size to Jupiter which exhibits a strong low-energy thermal bremsstrahlung component, adding successive levels of "optical" contamination and measuring the resulting change in low-energy spectral flux. The correction came to less than 1% for X-rays above 600 eV, 5 ± 1% at 430 eV, and 10 ± 2% at 220 eV, below which energy the sensitivity of the ACIS CCDs drops off rapidly. To account for this, we applied a correction to the auroral spectra (section 5).
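The quoted energy-dependent losses can be folded into a spectrum as a simple correction curve; the linear interpolation below is our own illustrative choice, not the authors' exact procedure.

```python
import numpy as np

# Measured fractional losses: ~10% at 220 eV, ~5% at 430 eV, <1% above 600 eV.
energy_ev = np.array([220.0, 430.0, 600.0])
loss_fraction = np.array([0.10, 0.05, 0.01])

def correction_factor(e):
    """Multiply observed flux by this factor to recover the incident flux.
    Losses above 600 eV are approximated as negligible."""
    loss = np.interp(e, energy_ev, loss_fraction,
                     left=loss_fraction[0], right=0.0)
    return 1.0 / (1.0 - loss)

for e in (220.0, 430.0, 700.0):
    print(e, round(correction_factor(e), 3))
```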
Radio Observations
Alongside the Chandra X-ray observations, a series of multi-instrument, multiplanet observations were conducted and were initially reported in Lamy et al. [2012], including radio observations of Jupiter during the same interval. Using both ground-based observations, from the Nançay decameter array, and space-based observations, from WIND, STEREO A and B, Jupiter was found to display intensifications of auroral decametric to hectometric emission close to three successive ICMEs, the second of which is investigated here. These enhancements driven by solar wind activity were consistent with those evidenced by Gurnett et al. [2002] for hectometric emission with Galileo and more recently by Hess et al. [2012, 2014] for decametric to hectometric emission from Galileo, Cassini, and Nançay observations. The radio observations obtained at the time of the Chandra observations (Figure 2) were shifted to account for light travel time from Jupiter to Earth. Since non-Io decametric radio emission has been found to be correlated with solar wind pressure [Hess et al., 2012, 2014], investigating this radio emission helps to constrain the arrival time of the ICME-induced shock.

(Figure 2 caption: Bursts of non-Io decametric radio emission suggest the arrival of a forward shock at Jupiter [Hess et al., 2012, 2014]. "Io" indicates Io decametric radio emission associated with activity from Io. The black horizontal arrows indicate the timings of the Chandra X-ray observations. The first non-Io decametric burst occurs 0.1 DOY before the end of the first Chandra observation, suggesting that a forward shock arrived at Jupiter during the first X-ray observation.)
Non-Io decametric emission is arc shaped in the time-frequency plane, and the shape of this arc is indicative of the side of the magnetosphere from which it originates. The vertex early or vertex late curvature of these arcs indicates whether the emission source was located westward (Jovian dawn) or eastward (Jovian dusk) of the observer (in the direction of Earth). Hess et al. [2012, 2014] showed that forward shocks (where the magnetosphere may be compressed by increased solar wind pressure) are often followed by emission from only one side of the magnetosphere. They showed that reverse shocks (where the solar wind pressure decreases and the magnetosphere may expand) are often followed by emission from both sides of the magnetosphere (i.e., both vertex early and vertex late emission would be observed). At DOY ∼276.3 and 276.7, STEREO A and B data showed two bursts of decametric emission with only vertex early morphology, which suggests the incidence of two solar wind forward shocks at these times. The first of these two bursts coincided with our first X-ray observation, occurring 2.5 h (0.1 DOY) before the end of the observation (see Figure 2). At ∼276.2 there is also a fainter burst of non-Io decametric emission.
Two additional radio bursts also featured in the STEREO data: a burst of Io-D decametric emission at 276.0 and a less intense burst at DOY 277.7, which was only observed by STEREO B (where both spacecraft observed the other bursts) and was difficult to classify as either Io or non-Io decametric emission. This second, ambiguous burst occurred one Io orbit after the burst on DOY 276.0, which may suggest that Io is the source. If Io is not the source, then it may suggest that a magnetospheric disturbance had been maintained over one Jupiter rotation and that Jupiter's magnetosphere was therefore not completely quiet during the second observation. A corresponding auroral X-ray enhancement would go undetected for the burst on DOY 276.0 because the auroral footprints had not rotated into view at this time. It would also be very difficult to distinguish the burst on DOY 277.7, since the auroral footprint will have been on the limb of the Jovian disk at this time.

(Figure 3 caption: System III (S3) coordinate projections onto Jupiter's geographic north pole (plot center) for the (left) first observation, during which the ICME arrived at Jupiter, and the (right) second observation, 1.2 days later. Lines of constant Jovian S3 longitude radiate outward from the pole, increasing clockwise in increments of 30∘ from 0∘ at the bottom of the projection. Concentric dotted circles outward from the pole represent lines of 80∘, 70∘, 60∘, and 30∘ latitude. The alternate green and black contours indicate VIP4 model magnetic field strength in Gauss. The outer red oval is the Grodent et al. [2008] contour of Io's footprint (5.8 R J). The inner red contour is the footprint for the 30 R J field line from Vogt et al. [2011] mapping using the Grodent et al. [2008] anomaly model. The thick orange contour is the average location of the UV main oval from two HST observation campaigns in 2007 [Nichols et al., 2009]. The projections show more X-ray events in the hot spot (160∘-180∘ S3 longitude, 60∘-70∘ latitude) during the first observation than the second. The events appear to spread from the hot spot into the region from 150∘ to 160∘. More clearly identifiable is the bright change in emission in the Auroral Enhancement Quadrant (180∘-270∘ S3 longitude, 55∘-90∘ latitude). The distribution of this emission is not only enhanced in the main oval but also poleward of this and at lower latitudes near Io's magnetic footprint.)
North Pole Projections
Using the technique applied in Gladstone et al. [2002], Elsner et al. [2005], and Branduardi-Raymont et al. [2008], time-tagged Chandra X-ray events were reregistered into Jupiter's System III (S3, 1965) spherical latitude-longitude coordinates centered on the rotation poles. A sky-projected disk of 1.01 R J was used for both observations (shown in the supporting information). It should be noted that when reregistering to S3 coordinates, events emitted close to the limb of the Chandra-facing disk will have larger spatial uncertainties because of the increased obliquity of the planet's surface relative to the observer.
We estimated spatial uncertainties on events based on Chandra's spatial resolution, by perturbing the Jupiter-centered disk by two pixels in the x and y directions, then reregistering the events into S3 coordinates.
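The following sketch illustrates that perturbation approach, assuming a deliberately simplified orthographic disk geometry (spherical planet, sub-observer latitude zero, illustrative sign conventions); the pixel values are hypothetical and the real analysis also handles Jupiter's oblateness and full viewing geometry.

```python
import numpy as np

def pixel_to_s3(dx_pix, dy_pix, r_disk_pix, cml_deg):
    """Simplified orthographic deprojection of a detector offset from the
    disk center into planetocentric latitude and System III longitude.
    Assumes a spherical planet viewed at sub-observer latitude 0; the
    longitude sign convention here is illustrative only.

    dx_pix, dy_pix : offsets from disk center in pixels (x east, y north)
    r_disk_pix     : planet radius in pixels (1.01 R_J disk)
    cml_deg        : central meridian (sub-observer) S3 longitude
    """
    x, y = dx_pix / r_disk_pix, dy_pix / r_disk_pix
    z = np.sqrt(np.clip(1.0 - x**2 - y**2, 0.0, None))  # toward observer
    lat = np.degrees(np.arcsin(y))
    dlon = np.degrees(np.arctan2(x, z))                 # offset from CML
    return lat, (cml_deg - dlon) % 360.0

# Estimate the S3 uncertainty of one event by perturbing its pixel
# position by +/-2 pixels, as described in the text.
lat0, lon0 = pixel_to_s3(30.0, 45.0, 60.0, cml_deg=160.0)
perturbed = [pixel_to_s3(30.0 + px, 45.0 + py, 60.0, 160.0)
             for px in (-2, 2) for py in (-2, 2)]
dlat = max(abs(p[0] - lat0) for p in perturbed)
print(f"lat = {lat0:.1f} deg, lon = {lon0:.1f} deg, ~{dlat:.1f} deg lat uncertainty")
```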
To identify the spatial distribution of auroral X-rays for the two observations, we present projections looking down onto the rotational north pole of Jupiter. Figure 3 shows these projections for both observations. Figure 4 shows counts versus latitude plots to quantify the latitudinal concentrations of X-rays. During these observations the south pole emission was obscured by the viewing geometry, so we focus on the north pole projections only.
We observe a range of differences in the spatial distribution of X-rays between the observations (Figures 3 and 4). A surprising difference is a broad bright auroral enhancement in the first observation between 180 ∘ and 270 ∘ longitude and above 60 ∘ latitude. The emission in this area is much dimmer in the second observation. This enhancement is significantly spatially separated from the hot spot (S3 longitude: 160 ∘ -180 ∘ , latitude 60 ∘ -70 ∘ [Gladstone et al., 2002;Elsner et al., 2005;Branduardi-Raymont et al., 2008]), where the brightest X-ray emission was previously observed. The region above 60 ∘ latitude and with longitudes 180 ∘ -270 ∘ features 201 ± 14 X-ray counts in the first observation compared to 76 ± 9 counts in the second. Given the changing solar wind conditions throughout the observations (section 2) and our lack of knowledge concerning the processes governing both the hot spot and the auroral enhancement, we shall analyze the two separately. We refer to the 90 ∘ -180 ∘ longitude quadrant as the "Hot Spot Quadrant" (HSQ) and to the quadrant between 180 ∘ and 270 ∘ longitude as the "Auroral Enhancement Quadrant" (AEQ). However, we note that there is brightening across both quadrants and that this may be connected.
We focus first on the HSQ. For both observations, the majority of the auroral emission (above 60∘ latitude) occurs poleward of the 30 R J contour (the inner red oval in Figure 3), indicating that the precipitating particles originate farther away from Jupiter than this. The whole region of the HSQ inside the 30 R J contour contains 113 ± 11 counts in the first observation compared to 78 ± 9 counts in the second. Previously [Gladstone et al., 2002; Elsner et al., 2005], the hot spot was defined as located between 160∘ and 180∘ S3 longitude and 60∘ and 70∘ latitude, where we find 52 ± 7 counts in the first observation and 37 ± 6 counts in the second observation. We find that the hot spot appears to spread out spatially in the first observation. The outer edge of the hot spot (at longitudes 150∘-160∘ and latitudes 55∘-60∘) is where the greatest change occurs, with 55 ± 7 X-ray counts in the first observation compared to 28 ± 5 counts in the second. This changing emission occurs between the 30 R J contour and the hot spot, in a region where the poleward edge of the UV main oval was observed during a 2007 Hubble Space Telescope (HST) observing campaign [Nichols et al., 2009]. The second observation appears to have its events much more concentrated in the previously defined hot spot. UV observations have shown that when solar wind compression regions onset, the UV auroras brighten in the "active region" close to this X-ray region, near noon and poleward of the main oval [Grodent et al., 2003; Nichols et al., 2007].

(Figure 5 caption: The lightcurves were generated by placing events above 60∘ latitude in S3 coordinates into 1 min bins. These were then shifted to account for the Jupiter-Earth light travel time of 34 min (UT − 34 min). The subsolar longitude at the time of the observations is indicated along the top of each plot. The green vertical dashed line indicates the onset of the brightest burst of non-Io decametric emission in the STEREO A data. The projected area of each quadrant (as a percentage of the total area of Jupiter) is indicated by the blue (HSQ) and red (AEQ) dashed lines. At the point of maximum visibility each quadrant above 60∘ latitude takes up a projected area that is ∼3% of the total observable Jovian disk.)
For the Auroral Enhancement Quadrant, the first observation displays additional bright features with respect to the second. The difference is most evident in Figure 4, which shows that the emission is up to a factor of 5 brighter across all latitude regions from 55∘ to 85∘ during the first observation relative to the second. Additionally, Figure 4 shows that in the first observation the levels of emission observed in the AEQ are comparable to those in the same latitude range in the HSQ. Comparing the changes in counts for the HSQ and AEQ could suggest that the HSQ is less sensitive to the ICME than the AEQ. Alternatively, it could suggest that the changes the ICME drives in the X-ray aurora develop with time or with varying solar wind parameters: as Jupiter rotates, the HSQ is visible first and the AEQ rotates into view slightly later (Figure 5).
One other aspect to note from the HSQ latitude-count plot (Figure 4) is that there appears to be increased emission from the disk/equatorial region. This suggests the presence of increased solar X-ray flux, which is fluoresced and elastically scattered in the Jovian atmosphere. The occurrence of a solar flare at a time consistent with the increase is confirmed by inspection of GOES X-ray lightcurves (see supporting information). Analysis of the polar projections for discrete energy regimes (section 6) shows that the flare is not a significant contributing factor to the increased auroral emission, confirming that the changing auroral activity is genuine. We note that this solar flare is a distinct event from the ICME and directly introduces additional solar X-ray photons to the Jovian disk, while the ICME enhances the X-rays indirectly.
Auroral X-Ray Lightcurves
To generate the auroral X-ray lightcurves, we took those events which occurred above S3 latitudes of 60∘ in the polar projections (section 3) and placed them into 1 min time bins. We then shifted the lightcurves to account for the Jupiter-Earth light travel time. During the first observation, the X-ray emission was brighter and more variable, with multiple enhancements that contain twice as many counts as similar enhancements in the second observation. To distinguish between variation in emission from the HSQ and the AEQ, we produced separate lightcurves for each quadrant (Figure 5). To help identify any local time dependencies, we also indicate the subsolar longitude (SSL) corresponding to the timing of the observations. Figure 5 shows that the first half of each observation was dominated by the hot spot. In the first observation, the hot spot became visible shortly before DOY 276.04 and 80∘ SSL, and the counts increased by up to a factor of 6, from ∼4 c/ks to peaks of 19-27 c/ks. For the second observation, the hot spot rotated into view shortly before DOY 277.7, and the counts increased by up to a factor of 4.5, from 4 to 18 c/ks.
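A minimal sketch of this binning and light-time correction, with hypothetical event arrays standing in for the Chandra event list:

```python
import numpy as np

def auroral_lightcurve(event_times_doy, event_lats_deg, bin_min=1.0,
                       light_time_min=34.0):
    """Bin auroral events (S3 latitude > 60 deg) into fixed-width time bins
    and shift for the Jupiter-Earth light travel time, following the text.
    Times are in day-of-year; the 34 min light time is the value quoted
    for these observations.
    """
    mask = event_lats_deg > 60.0
    t = event_times_doy[mask] - light_time_min / (24.0 * 60.0)  # emission time
    width = bin_min / (24.0 * 60.0)
    edges = np.arange(t.min(), t.max() + width, width)
    counts, _ = np.histogram(t, bins=edges)
    return edges[:-1] + width / 2.0, counts

# Hypothetical events spanning ~0.5 day of the first observation.
rng = np.random.default_rng(1)
times = 275.9 + 0.5 * rng.random(500)
lats = rng.uniform(40.0, 90.0, 500)
centers, counts = auroral_lightcurve(times, lats)
print(len(centers), counts.sum())
```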
The AEQ shows the most striking difference between the lightcurves. The second observation was generally quiet, with ∼3-5 c/ks, with the exception of a single peak containing 9 c/ks at 277.93. In contrast, the first observation contained a prominent single peak of 33 c/ks at DOY 276.24, which lasted 15-25 min and was higher than the peak emission from the hot spot. Prior to the peak, there was a gradual increase from DOY 276.2 to 276.22. After the peak there was an abrupt drop to 17 c/ks and then a gradual decrease for 0.1 DOY afterward, as the region rotated out of view. From the moment the region began to be observable it was emitting 6 c/ks, while in the second observation it emitted only 1-2 c/ks, suggesting that the whole region was brighter throughout the first observation.
The peak of the enhancement occurred 1-1.5 h before the non-Io decametric radio burst at DOY ∼276.3 (indicated in Figure 5 by the dashed line). We also note that the fainter burst of non-Io decametric emission at DOY 276.2 coincides well with the preceding peak on the AEQ auroral lightcurve, suggesting a further possible connection between X-ray emission and non-Io decametric emission. The previously recognized connections between this non-Io decametric emission and forward shocks induced by ICMEs [Hess et al., 2012, 2014] suggest that the heightened X-ray emission is also likely to be directly connected with the ICME.
We also detect periodicity in these lightcurves on the order of tens of minutes for both observations, and this is discussed and analyzed in section 8.
Spectral Extraction and Modeling
For analysis of the Chandra spectra we divided Jupiter's observed disk emission into three sections: a northern auroral zone, an equatorial region, and a southern auroral zone (see supporting information for regions selected). Given the limited visibility of the southern aurora, only the northern aurora is presented.
Using the CIAO software package (provided by the Chandra X-ray Center), we followed the standard procedures to extract spectra, which were then analyzed using the XSPEC package [Arnaud, 1996]. We applied a correction to the effective area to account for the increased energy thresholds applied within ACIS to circumvent optical light leaks through the OBFs (as discussed in section 2.1). To do this, we weighted energies below 0.7 keV based on fitting for the signal degradation to E0102-72.3, which provided a best fit curve of 1 − Y(x − 0.7)^2, with Y = 0.50 and x the channel energy in keV.
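A short sketch of this weighting, assuming (as we read the text) that the correction applies only below 0.7 keV:

```python
import numpy as np

def effective_area_weight(e_kev, y=0.50):
    """Weighting applied to the effective area below 0.7 keV, from the
    E0102-72.3 degradation fit quoted in the text: 1 - Y * (x - 0.7)^2,
    with Y = 0.50 and x the channel energy in keV. Above 0.7 keV no
    weighting is applied (our reading of the text).
    """
    e = np.asarray(e_kev, dtype=float)
    w = 1.0 - y * (e - 0.7) ** 2
    return np.where(e < 0.7, w, 1.0)

print(effective_area_weight([0.3, 0.5, 0.7, 1.0]))
# -> [0.92, 0.98, 1.0, 1.0]
```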
We again treated the HSQ and AEQ separately. To do this, we separated each observation into two halves based on the time at which the emission from the hot spot dimmed (Figure 5). We fitted the spectra between 240 and 2000 eV with a combination of lines with half widths fixed at 20 eV. This produced two challenges. First, the low count rates and large error bars produced unrealistically low reduced χ² values of 0.4-0.6 (for 105-111 degrees of freedom). Second, Chandra's spectral resolution and energy cutoff at ∼210 eV led us to ignore the region from 210 to 250 eV, since the sharp drop in counts in this region inhibited good fitting. Table 1 and Figure 6 show the best fits.
Spectral Analysis
Inspecting the HSQ spectra (Figures 6a and 6b) first, we find that both observations featured a large peak between 250 and 350 eV, which could be from sulfur and/or carbon ions.
Between 500 and 900 eV there was a range of oxygen lines. Both observations contained lines near 600 eV and between 700 and 730 eV, which are likely to be from O VII and possibly also O VIII transitions. The first observation showed an additional spectral line at ∼860 eV, which could either have been from O VIII transitions or evidence for solar X-ray scattering from the disk. While the best fit model contained only one line at 730 eV, we were also able to obtain similar reduced χ² values by fitting two lines at ∼700 eV (O VII) and ∼780 eV (O VIII), which may suggest that the additional line at 860 eV was also an O VIII transition.
As mentioned in section 3, a solar X-ray flare reached Jupiter during the time covered by this spectrum (see supporting information for further details) and may have imprinted solar lines onto the spectrum. The additional emission above 700 eV could have been from Fe XVII, Fe XXI, or Ne X solar photons or a combination of oxygen and solar photons. We also observed a magnesium (Mg XI) line in the spectra near 1350 eV, which would be expected from a solar flare [Branduardi-Raymont et al., 2007a; Bhardwaj et al., 2005, 2006]. These solar features are absent or much less prominent in the AEQ and throughout the second observation.
For the AEQ, the difference between the spectra of the two observations is clear (Figures 6c and 6d). The first observation shows a prominent peak between 200 and 300 eV that is 3-4 times higher than in the second. We were unable to model this accurately because of the low-energy cutoff and low spectral resolution, meaning that comparing fluxes and differentiating between sulfur and carbon was not possible. Between 300 and 500 eV there are additional transitions of carbon or sulfur which do not appear in the HSQ spectra or in the AEQ spectrum for the second observation.
The morphology of the AEQ spectrum between 380 and 700 eV is particularly interesting. The emission between 550 and 600 eV is mostly O VII, and the line appeared to be asymmetric, with a sharp decline after 600 eV, which led the fit to underestimate the flux for this line in Table 1. This region of the spectrum is similar to that of comets LINEAR S4 and McNaught-Hartley displayed by Elsner et al. [2005]. This similarity to cometary solar wind charge exchange spectra could suggest a solar wind origin for some of the precipitating ions [Kharchenko et al., 2008; Branduardi-Raymont et al., 2007b]. The 775 eV line appeared to be a good match for the O VIII transition. GOES data (supporting information) show that the heightened solar X-ray flux from the first half of the observation was returning to normal at these times, so it is unlikely that solar photons caused the 700-900 eV morphology in this spectrum.
For the AEQ in the second observation, the spectrum is best fitted by a set of low flux sulfur/carbon and oxygen lines. Some of this emission may be contamination from the HSQ, which was still partially visible during these times.
Connecting Spatial and Spectral Features
Given that Chandra's spectral resolution is insufficient to definitively separate the spectral lines of carbon and sulfur ions, we now examine the auroral morphology in different energy bands. By combining this with magnetic field mapping, we tried to establish the magnetospheric or solar wind origins of specific ion species. To do this, we binned X-rays into four broad energy bins for carbon/sulfur, oxygen, solar X-ray lines, and hard X-rays. We then plotted the polar projections for each energy range separately. The specific energy ranges were chosen based on (a) the ease with which regions could be differentiated in the spectrum, (b) the relevant spectral lines for different species, (c) Chandra's energy resolution limitations, and (d) the solar X-ray lines from the equatorial region spectrum.
We estimated the carbon or sulfur emission from the spectra between ∼200 and 500 eV. We found that photons below 300 eV mapped almost exclusively to the auroral zone, with very little disk component (Figure 7), so we included these photons in our analysis. We considered the ∼800-1500 eV emission to come from fluoresced or scattered solar photons because this energy range contains the peak of the disk spectrum [Bhardwaj et al., 2005; Branduardi-Raymont et al., 2007a]. It should be noted that some O VIII lines from completely stripped oxygen also fall in this energy range and may contribute some of the observed auroral emission. Finally, we consider 1500-5000 eV emission to be hard X-rays from precipitating electrons generating bremsstrahlung radiation [Branduardi-Raymont et al., 2007b].
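These band boundaries translate directly into event masks; a minimal sketch, where the dictionary keys and the example energies are our own illustrative choices:

```python
import numpy as np

# Broad energy bands used in the text for the energy-binned polar
# projections (eV): carbon/sulfur, oxygen, solar/disk photons, hard X-rays.
BANDS = {
    "C/S":   (200.0, 500.0),
    "O":     (500.0, 800.0),
    "solar": (800.0, 1500.0),
    "hard":  (1500.0, 5000.0),
}

def split_by_band(event_energies_ev):
    """Return a boolean mask per band, ready for separate polar projections."""
    e = np.asarray(event_energies_ev, dtype=float)
    return {name: (e >= lo) & (e < hi) for name, (lo, hi) in BANDS.items()}

masks = split_by_band([260.0, 620.0, 910.0, 2200.0, 150.0])
print({k: int(v.sum()) for k, v in masks.items()})
# -> {'C/S': 1, 'O': 1, 'solar': 1, 'hard': 1}; the 150 eV event falls below all bands
```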
We look first at the polar projections of 200-500 eV carbon/sulfur X-ray events ( Figure 7a) and find that for both observations almost all emission originated in the aurora, with very little equatorial emission. This confirms that the changing emission in this part of the spectra was unrelated to solar flares. We find that carbon/sulfur is the source of the brightening on the edge of the hot spot, between 150 ∘ and 160 ∘ S3 longitude (introduced in section 3). This emission lies in a region which during the 2007 HST observations [Nichols et al., 2009] featured the poleward edge of the UV main oval.
In the AEQ, for the first observation we find a large number of carbon/sulfur events between the Io footprint (∼5.8 R J ) and both the UV main oval and 30 R J contour. For the AEQ, we also find ion emission poleward of the 30 R J contour. This is unexpected, since previous observations showed that the majority of ion emission originated in the Hot Spot Quadrant. Emission from carbon/sulfur in the AEQ is largely absent from the second observation.
For the 500-800 eV oxygen emission (Figure 7b), events are also concentrated into the auroral zone. In the first observation, the events occur poleward of the 30 R J contour and the main oval reference contour in both the HSQ and AEQ, while in the second observation the auroral events are almost solely concentrated into the hot spot. Comparing the oxygen with the carbon/sulfur emission, we find that where there is some carbon/sulfur emission closer to the polar edge of the 30 R J contour, the oxygen emission generally originates poleward of this carbon-/sulfur-dominated emission region and appears to be more diffusely distributed across the entire polar region. Figure 7c shows the 800-1500 eV emission, dominated by solar photons, distributed across the disk, and not concentrated into the aurora, as expected. The hard X-rays (Figure 7d) cluster in two regions parallel with the 30 R J contour in the first observation and are less prevalent in the second. Figure 8 shows carbon/sulfur and oxygen latitude-count plots: the change between observations in carbon/sulfur emission is similar in both quadrants, while oxygen emission stays almost constant in the HSQ but changes by a factor of 3 in the AEQ. This differing behavior and mapping for carbon/sulfur emission and oxygen emission may suggest different sources for each.
Local Time Variation: Noon-Binned Projections and Magnetosphere Mapping
The configuration of Jupiter's magnetosphere will evolve throughout the observations. As Jupiter rotates, a specific S3 longitude-latitude auroral position will map to changing magnetospheric local time sources. To identify how this rotation, and the associated change in local time, changes the X-ray aurora and to identify possible magnetospheric local time origins for features, we mapped the magnetosphere footprint configuration at distinct subsolar longitudes (SSL). The SSL indicates which Jovian S3 longitude is directly facing the Sun at a given time-the location of noon.
To do this, we subdivided each 11 h observation into 50 min time bins. For each time bin, we compared the S3 coordinates of auroral spatial and spectral features with their mapped source regions using the Jovian magnetosphere-ionosphere model from Vogt et al. [2011].
The Vogt model maps contours of constant radial distance from the magnetic equator to the ionosphere by ensuring that magnetic flux at the equator equals magnetic flux in the ionosphere. This enabled us to map ionospheric footprints to their equatorial magnetospheric origins up to 150 R J from the planet, whereas the VIP4 model [Connerney et al., 1998] used for previous Jupiter X-ray observations was limited to 30 R J [Gladstone et al., 2002; Elsner et al., 2005; Branduardi-Raymont et al., 2008]. The Vogt model accounts for the bend-back of Jupiter's field lines in order to map field lines to their magnetospheric local time origins. For instance, this could tell us that a specific ionospheric footprint maps to an equatorial magnetospheric source 50 R J from the planet at dawn magnetospheric local time.
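The flux-equivalence mapping itself is beyond a short example, but a pure dipole gives a feel for how footprint latitude maps to equatorial distance, and shows why a dipole is inadequate at these latitudes; this is a pedagogical stand-in, not the Vogt et al. [2011] model.

```python
import numpy as np

def dipole_equatorial_distance(lat_deg):
    """Equatorial crossing distance (in R_J) of a dipole field line with
    ionospheric footprint latitude lat_deg: L = 1 / cos^2(lambda).
    Pedagogical only: the Vogt et al. [2011] mapping instead equates
    magnetic flux between equator and ionosphere and includes field line
    bend-back, which a dipole cannot capture.
    """
    return 1.0 / np.cos(np.radians(lat_deg)) ** 2

for lat in (60.0, 70.0, 80.0):
    print(f"{lat:.0f} deg -> L = {dipole_equatorial_distance(lat):.0f} R_J")
# A dipole maps 70-80 deg to only ~9-33 R_J, far short of the 60-150 R_J
# sources discussed here, illustrating why a realistic model is needed.
```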
Using NASA Jet Propulsion Laboratory Horizons ephemerides data, we chose the start and end times of 50 min X-ray bins to coincide with 30 ∘ increments of SSL. X-rays emitted at times when the SSL was 15 ∘ -45 ∘ were compared to the Vogt et al. [2011] mapping model at SSL 30 ∘ to identify the sources for these X-rays and so on for each 30 ∘ SSL increment.
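A minimal sketch of this SSL binning, assuming an ephemeris sampled as (time, SSL) pairs; the hypothetical rotation rate simply reflects Jupiter's ∼9 h 55 min period.

```python
import numpy as np

def assign_ssl_bins(event_times_doy, ephem_times_doy, ephem_ssl_deg,
                    bin_width_deg=30.0):
    """Assign each X-ray event to the nearest bin_width_deg increment of
    subsolar longitude (SSL), interpolating SSL from ephemeris samples.
    Events at SSL 15-45 deg fall in the 30 deg bin, and so on, as in the
    text. Unwrapping handles the 360 -> 0 deg rollover.
    """
    ssl_unwrapped = np.degrees(np.unwrap(np.radians(ephem_ssl_deg)))
    ssl_at_events = np.interp(event_times_doy, ephem_times_doy, ssl_unwrapped)
    return (np.round(ssl_at_events / bin_width_deg) * bin_width_deg) % 360.0

# Hypothetical ephemeris: SSL advancing ~36.3 deg/h (9 h 55 min rotation).
t_eph = np.linspace(275.9, 276.4, 100)
ssl_eph = (80.0 + (t_eph - t_eph[0]) * 24.0 * 36.3) % 360.0
print(assign_ssl_bins(np.array([275.95, 276.1, 276.3]), t_eph, ssl_eph))
```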
Joy et al. [2002] showed that the magnetopause location of Jupiter is bimodal. During periods of low solar wind dynamic pressure, the nose of the magnetopause standoff is expected to reach ∼92 R J (an expanded magnetosphere), while for the high dynamic pressure periods, it will be as close as ∼63 R J (a compressed magnetosphere). Vogt et al. [2011] account for these two different possible magnetopause standoff distances by moving the magnetopause location based on the measured distances of Joy et al. [2002].
The plotted projections in Figures 9-11 show the expanded magnetosphere mapping of Vogt et al. [2011]. The magnetopause is indicated by a thick purple contour. Jupiter's closed magnetic field lines map to latitudes equatorward of the magnetopause mapping. Toward noon (at the nose of the magnetopause), these closed field lines are shown as contours from 15 R J (red contour) to 95 R J (green contour), in increments of 5 R J . For the compressed magnetosphere ( Figure 12) closed field line contours at the nose of the magnetosphere extend only as far as 65 R J (yellow contour). In the Jovian tail we mapped closed field contours up to 150 R J . X-ray emission that maps to closed contours is likely to be produced by precipitating particles on closed field lines originating in Jupiter's magnetosphere. X-ray emission that maps poleward of the magnetopause, to the region absent of contours, is from precipitating particles that are more likely to be on open field lines.
Since Jupiter was close to opposition, the SSL and subobserver longitude were only ∼6 ∘ separated, so that the noon position on the planet was close to the center of the observed disk. This means that counts originating near the limb of the Chandra-facing disk are easily identifiable on the time-binned projections and their larger uncertainties can be accounted for in the context of the magnetic footprint at that moment.
Analyzing the SSL-binned polar projections with Vogt mapping revealed previously unreported relationships. First, for both the expanded and compressed magnetospheres we find emission that mapped to the open field lines and also emission that mapped to the magnetosphere, suggesting that both could be sources for Jovian auroral X-rays. For the expanded model (Figures 10 and 11) the majority of the emission originated on the magnetosphere side of the magnetopause, while for the compressed model ( Figure 12) the majority of emission originated on open field lines.
This may be particularly noteworthy for the ICME arrival observation. During this observation a compression may be expected to shift the magnetopause boundary from ∼92 R J to ∼63 R J . It is this region mapping to 60-90 R J , across which the magnetopause would be compressed, which contained the hot spot expansion during the first observation and where we observed increased X-ray emission. The closeness of the emission to the magnetopause, our spatial uncertainties, and our uncertainty in the choice of expanded or compressed magnetosphere inhibited us from precisely quantifying the relative importance of a solar wind versus a magnetospheric origin. The Vogt et al. [2011] models showed, however, that the majority of X-ray-producing ions originate beyond 60 R J .
Figures 10 and 11 also show, and particularly for the first observation, that emission clusters along the open-closed field line boundary and seems to move with SSL, suggesting a local time dependence and relationship with processes in this region. The emission seems to follow the region where field lines would be opening or where closed field lines occur in the afternoon to dusk flank.
Noon-Binned Hot Spot Projections
For our observations, we considered the hot spot to be above 60∘ latitude and between S3 longitudes 150∘-180∘. We found for both observations that the hot spot had a strong local time dependence and emitted 78 of 100 X-rays (first observation) and 51 of 74 X-rays (second observation) before noon (165∘ SSL). After this time the hot spot became dimmer, despite the region remaining observable on the Jovian disk for several hours. Looking at the development of the magnetic field leading up to 165∘ SSL (Figure 13), we found that the majority of the hot spot emission originated on the dayside of Jupiter, with magnetospheric local times (MLTs) between 10:30 and 18:00. Later in the observation, when the field lines that mapped to MLTs after 18:00 were still observable in the hot spot, we found significantly less emission from the region.
Having found that the hot spot emission occurred predominantly in the projections 90∘-150∘ SSL (Figures 10 and 11), prior to mapping to MLTs of 18:00, we analyzed these more closely. For the 90∘ SSL projection, the hot spot was close to the limb of the disk, so there was a large uncertainty of 10∘-20∘ in the X-ray coordinates. Based on this, we focused our attention on the projections of 120∘ and 150∘ SSL (Figures 12 and 13), where the uncertainty was closer to 5∘ in latitude-longitude.
Considering the first observation 120∘ SSL projection (Figure 12), we find emission clustering in the outer magnetosphere, with emission poleward of this between 70 and 120 R J (green contours) and also on open field lines. The emission was weaker in the second observation for this SSL projection (Figure 13).

(Figure 12 caption fragment: closed field line contours extend to 65 R J (yellow; left column) or 95 R J (green; right column). For color coding and plot details see Figure 9.)
For the 150∘ SSL projection, both observations (Figure 13) contained clustering of X-rays between 160∘ and 170∘ S3 longitude and 60∘ and 70∘ latitude, from the afternoon-dusk flank of the magnetosphere. Given that the time binning is broad (50 min across 30∘ SSL), it is uncertain whether these field lines were open or closed for most of this X-ray emission. Considering uncertainties in the spatial location, this region would map either to the solar wind or to closed field lines between 90 and 150 R J. The similar source in both the 120∘ and 150∘ SSL projections may suggest that the underlying processes are persistent.
Finally, inspecting the 210 ∘ SSL projection (Figure 13), we found that the hot spot contained very little emission, despite remaining on the observable disk. The emission appeared to have followed those field lines that mapped to MLT regions from 12:00 to 18:00 as Jupiter rotated, and we found emission in both the outer magnetosphere and on open field lines in this area.
To reflect our spatial uncertainties, the timing spread of events and their broad spatial distribution in each region, we found a broad range of MLT sources for the emission. For the 120 ∘ and 150 ∘ SSL projection, most ion emission originated from magnetosphere locations with local times between 10:30 and 18:00. For the 210 ∘ SSL projection, events mapped to MLTs of 8:30-19:00 ( Figure 13). However, we note that none of these MLTs account for ion travel time from regions near the magnetopause to Jupiter's pole. During this time, the magnetosphere will rotate and so the origins for the particles may be at earlier MLTs than we have suggested. Without knowing the location of the energization region for the ions, it is difficult to quantify this time lag.
Noon-Binned Auroral Enhancement Projections
To identify the source(s) and development of the auroral enhancement, we focus on the 240 ∘ , 270 ∘ , and 300 ∘ SSL projections ( Figure 14). Unfortunately, the auroral region had just begun to rotate out of view at this time, so a lot of the brightening occurred close to the limb of the disk, meaning that there were uncertainties of 10 ∘ -20 ∘ on the S3 coordinates of many X-rays.
The 270 ∘ SSL projection, when the auroral enhancement occurred, contained a broad spread of emission from closed lines in the outer magnetosphere and field lines that were open to the solar wind. This showed both oxygen and carbon/sulfur emissions from the open field line region. The emissions broadly mapped across the dayside of the planet between 06:00 and 16:00 MLT.
The 300 ∘ SSL projection had almost all the emission close to the limb, making it challenging to determine the location of the events because of the S3 uncertainties. Carbon/sulfur and oxygen emissions appeared to originate from the magnetosphere, from lower latitude regions than the 15 R J footprint and from the open regions.
While we cautiously note that the counts were much lower for the hard X-ray emission from electrons (green), the hard X-rays appeared to cluster on the dawnside of the disk. This can be seen on the polar projections for SSLs 120 ∘ , 210 ∘ , and 240 ∘ (Figures 13 and 14). These regions mapped to MLTs 02:00-06:30 h. This is on the opposite side of the magnetosphere to the origin for the precipitating ions but is consistent with the vertex early dawn origin for the non-Io decametric emission that is observed coincident with the first observation and which is also produced by electrons.
Timing Variation and Periodicity
Following the lead of Gladstone et al. [2002] and Elsner et al. [2005], we searched the observations for periodicities by selecting a circle (radius: 6.5∘, center: 67∘ latitude, 170∘ longitude; see supporting information for further details) in S3 coordinates centered on the hot spot and then Fourier transforming the lightcurve from this region to generate power spectral density (PSD) plots. We found that the area used by Gladstone et al. [2002] and Elsner et al. [2005] showed periodicity at two significant timescales during our first observation, at 12 and 26 min. The significance of these periods increased when we expanded the circle to a radius of 8∘, centered on 65∘ latitude and 163∘ S3 longitude. This larger region included more of the broad spatial spread of hot spot emission in the first observation, showing that the period was also present in the emission between the hot spot and the 50 R J contour. For the second observation, we found that the most statistically significant period occurred using the same S3 circle as Gladstone et al. [2002] and Elsner et al. [2005].
To estimate the single-frequency probability of chance occurrence (PCO) for the detected periods, we used the statistical methods of Leahy et al. [1983]. The results are shown as dotted horizontal lines in Figures 15a-15d. The lowest statistical significance and therefore highest PCO of 10 −1 is at the bottom of the plot, and the highest statistical significance and therefore lowest PCO of 10 −6 is toward the top of the plot.
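A minimal sketch of a Leahy-normalized PSD and the corresponding single-frequency PCO, applied to a synthetic lightcurve with an injected 26 min modulation; the binning and signal parameters are illustrative.

```python
import numpy as np

def leahy_psd(counts):
    """Leahy-normalized power spectral density of a binned lightcurve.
    With this normalization, pure Poisson noise powers follow a chi-squared
    distribution with 2 degrees of freedom, so the single-frequency
    probability of chance occurrence of a power P is exp(-P / 2)
    [Leahy et al., 1983].
    """
    n_photons = counts.sum()
    ft = np.fft.rfft(counts)
    power = 2.0 * np.abs(ft) ** 2 / n_photons
    return power[1:]  # drop the zero-frequency (mean) term

# Hypothetical 1 min binned lightcurve with an injected 26 min modulation.
rng = np.random.default_rng(2)
t = np.arange(660.0)                        # 11 h of 1 min bins
rate = 0.3 * (1.0 + 0.6 * np.sin(2 * np.pi * t / 26.0))
counts = rng.poisson(rate)
power = leahy_psd(counts)
freqs = np.fft.rfftfreq(t.size, d=1.0)[1:]  # cycles per minute
peak = np.argmax(power)
print(f"Peak period: {1.0 / freqs[peak]:.1f} min, "
      f"PCO ~ {np.exp(-power[peak] / 2.0):.1e}")
```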
For the first observation, we found two strong periods (Figure 15a). The most prominent occurred with a period of 26 min and a PCO of less than 10−6. This is shorter and more significant than the Gladstone et al. [2002] period (∼45 min, 4 × 10−6). The second period had a timescale of 12 min and a PCO of 10−5. We tested a range of locations and sizes of regions encompassing the hot spot and found that these two periods dominated, although which of the two peaks was the more dominant varied. The 26 min peak was more dominant on the edge of the hot spot, where the carbon/sulfur particles were more concentrated than oxygen. The 12 min period was more dominant above 70∘ latitude, where the carbon/sulfur and oxygen are more evenly distributed.
Periodicities in the second observation were weaker than in the first (Figure 15b). The most prominent period was at 42 min, with a PCO of 5 × 10 −4 , not as significant as the period in the first observation or that reported by Gladstone et al. [2002]. There was also indication of a shorter period of 19 min, but this was even lower in significance.
To determine whether one period was associated with one particle population, we used the same 8∘ radius region centered on 65∘ latitude and 163∘ S3 longitude and generated PSDs for discrete energy ranges. Figure 15c shows a prominent 26 min period at high significance for the carbon/sulfur ions, with a PCO of 10−5. It also shows a much weaker 12 min period with a PCO of 2 × 10−3. Conversely, the oxygen emission (Figure 15d) exhibited no 26 min period, and the strongest period was at 12 min with a PCO of 5 × 10−3. This suggests that one dominant sulfur/carbon population produced the 26 min period, while a second combined population of sulfur/carbon and oxygen generated the 12 min period. For the second observation, the number of X-ray events was too low to provide reliable results when separating the carbon/sulfur and oxygen populations. The paucity of hard X-rays from precipitating electrons also made it difficult to identify a significant period for them, although there is a suggestion of some 5-10 min periodicity for the first observation (see supporting information). We also tested regions across the rest of the auroral zone and disk and found no other significant periods (see supporting information).

(Figure 15 caption: During the first observation two periods were detected at 12 and 26 min. The 26 min peak was more significant than the 45 min period reported by Gladstone et al. [2002]. The second observation contains a less distinctive periodicity, with the most prominent period at 42 min. The hot spot region was found to be much broader during the first observation, so a different region was used for each PSD to maximize the significance of the periods and to utilize as much emission from the expanded hot spot as possible (see text for details). Carbon/sulfur emissions are dominated by the 26 min period and also feature a less significant 12 min period. The oxygen emissions feature no 26 min period but do feature the less significant 12 min period. When the two populations are combined, the 12 min period becomes much more significant. The dotted horizontal lines show single-frequency probabilities of chance occurrence (PCO) for the detected periods [Leahy et al., 1983]. The lowest statistical significance and therefore highest PCO of 10−1 is at the bottom of the plot, and the highest statistical significance and therefore lowest PCO of 10−6 is toward the top of the plot.)
The two periods in the first observation could have been due to harmonics, although in this case it is challenging to explain how the period is divided between the two separate particle populations in this manner. This division by energy also suggests that they are unlikely to be from instrumental influence.
Summary of Results
We summarize results separately for the Hot Spot Quadrant (S3 longitude: 90-180 ∘ ) and the Auroral Enhancement Quadrant (S3 longitude: 180-270 ∘ ), since solar wind conditions may have been different for each (see Figure 1) and the spatial, spectral and temporal features differ.
Hot Spot Quadrant
1. Spatial Emission. The change in emission in the hot spot is not as significant as in the AEQ (Figure 4). This increased emission is concentrated between the previously reported hot spot location [Gladstone et al., 2002; Elsner et al., 2005] and the 50 R J footprint. This gives the appearance of the hot spot having expanded for the first observation.

2. Spectra. Both observations feature prominent 200-400 eV carbon/sulfur peaks and a prominent peak in the O VII spectral region between 550 and 620 eV. The first observation features either increased O VIII emission or increased solar photon emission.

3. Energy-Binned Polar Projections (Figure 7). The 200-500 eV (carbon/sulfur) emission is mostly responsible for the increased emission between the normal hot spot location and the 50 R J footprint in the first observation. Generally, 500-800 eV (oxygen) emission occurs poleward of this concentrated carbon/sulfur emission. We also find that the carbon/sulfur emission does not behave like the oxygen emission, with the carbon/sulfur brightness more enhanced than the oxygen emission for this expanded hot spot.

4. SSL Projections With Vogt et al. [2011] Model Mapping. 78% (first observation) and 69% (second observation) of hot spot emission occurs before noon in the region. This timing coincides with the region mapping to magnetospheric local times between 10:30 and 18:00 h. Most of the carbon/sulfur emission originates in the outer magnetosphere between 50 and 90 R J and on open field lines, while the oxygen emission originates farther from Jupiter (70-120 R J) or on open field lines (with identification of an open or closed origin depending on uncertainties in spatial resolution and the choice of compressed/expanded magnetosphere mapping). The expansion of the hot spot occurs on field lines mapping to the region across which the magnetopause has been found to move during compression, from 92 R J to 63 R J. The Vogt et al. [2011] model mapping showed that the majority of X-ray-producing ions originate beyond 60 R J.

5. PSDs. The first observation features two significant periods at 12 and 26 min, shorter timescales than previously reported [Gladstone et al., 2002]. The second observation shows a less significant period of 42 min, closer to the 45 min timescale of Gladstone et al. [2002]. The 26 min period is strong in carbon/sulfur emission in the hot spot but not in oxygen emission. The 12 min period is present for both carbon/sulfur and oxygen, but with much lower significance for each. When the two populations are combined, the period becomes significant.
Auroral Enhancement Quadrant
1. Lightcurves. An auroral enhancement occurs during the first observation, the peak of which is ∼8 times brighter than the emission in the region during the second observation. This occurs 1-1.5 h before a non-Io decametric radio burst, a previously recognized signature of ICME-induced forward shocks [Hess et al., 2012, 2014; Lamy et al., 2012].

2. Spectra. The spectra from the first and second observations are different: there is an enhanced 200-400 eV carbon/sulfur double peak and a prominent peak in the O VII spectral region between 550 and 620 eV during the first observation. These peaks are much less prominent in the second observation. Between 380 and 700 eV the spectrum appears similar to cometary spectra from solar wind charge exchange.

3. Energy-Binned Polar Projections. Both the 200-500 eV (carbon/sulfur) and 500-800 eV (oxygen) emissions are increased by a factor of at least 4 in the first observation relative to the second. This is different from the hot spot emission, where carbon/sulfur is preferentially enhanced.

4. SSL Projections With Vogt et al. [2011] Model Mapping. The enhancements broadly map across the dayside of the planet between 06:00 and 16:00 MLT, parallel with the open-closed boundary. The emission maps to open field lines and closed field lines in the outer magnetosphere and also to low-latitude regions between Io's footprint and the 15 R J contour.

5. Hard X-Rays. The 1500-5000 eV (electron bremsstrahlung) emission is observed in clusters in the main oval region. It coincides with dawn on the surface and originates at MLT 02:00-06:30 h. This is on the opposite side of the magnetosphere to the source of the X-ray charge-exchanging ions.

6. PSDs. No significant periodicity was detected from the AEQ ion emission.
Discussion
In the discussion, we attempt to address the following questions: What are the source regions for Jupiter's X-ray aurora? What processes in these regions produce X-rays and how might these relate to the ICME?
The spectral, spatial, and temporal differences between the hot spot and the auroral enhancement emission lead us to treat the two features separately. Our analysis of the periodicity and of the spectral and spatial origins of the emission suggests that the hot spot and the auroral enhancement each have multiple X-ray source regions.
Throughout the first observation, the SSL-binned projections with Vogt et al. [2011] mapping show clustering of ion precipitation in the open field line region (Figure 10). This appears to indicate that there is at least some level of precipitation of ions from both the open and closed field lines throughout the first observation. This is less clear for the second observation, where there appear to be lower levels of open field line emission and more is instead concentrated on closed field lines. The Vogt et al. [2011] models showed that the majority of X-ray-producing ions originate beyond 60 R J. If we assume a compressed magnetosphere (with a standoff distance at 63 R J), the open field lines therefore contribute a large proportion of the emission, while for an expanded magnetosphere (with a standoff distance at 92 R J), closed field lines are the dominant source.

(Figure 16 caption: Summary of X-ray source mapping (not to scale), accounting for uncertainties in photon spatial mapping. The x axis indicates the equatorial radial distance from Jupiter that the source regions map to. The different X-ray regions are indicated by the striped blocks: the hard X-ray region (green), the region dominated by high charge-state sulfur (red), and the mixed high charge-state carbon/sulfur and oxygen regions (red and blue).)
The X-Ray Hot Spot
Where Is the Hot Spot Source?
While the auroral enhancement emission appeared to originate from several regions that map to different magnetospheric locations, the hot spot remained confined to a more limited region fixed in the planet's rotating frame. This spatial confinement permits more precise identification of possible sources for the precipitating ions that produce X-ray emission in this region.
The 200-500 eV sulfur and/or carbon emission features an additional component from lower latitudes than the 500-800 eV oxygen emission. If we assume an expanded magnetosphere, we find that most of the 200-500 eV emission maps to a region between the outer magnetosphere and the magnetopause, originating between 50 and 90 R J (Figure 12). This model suggests that most 200-500 eV emission is from precipitation of high charge-state sulfur ions in the outer magnetosphere, as proposed by Cravens et al. [2003]. It also suggests that there may be some slight precipitation from open field lines, and therefore possibly from carbon ions in the solar wind, but that this is a smaller proportion of the emission. In the case of a compressed magnetosphere, however, the emission is more evenly distributed between carbon ions in the solar wind and sulfur ions from the outer edge of the magnetosphere (for a compressed magnetosphere this is 50-63 R J at the standoff point).
The observed strong 26 min periodicity for these 200-500 eV X-rays may support a sulfur source, since if the period originated in the solar wind, we would expect to also observe oxygen exhibiting it (as the most abundant heavy ion in the solar wind). The absence of oxygen emission from the 26 min period and spatial separation between these two species suggests that the lower latitude feature is from a dominant sulfur population, which does not include oxygen of a sufficiently high charge state. The 12 min period increases in significance when oxygen is combined with carbon/sulfur, suggesting that there is a second population consisting of a mixture of both oxygen and carbon/sulfur. Alongside the periodicity, the spatial mapping suggests a different origin for each population: one solely sulfur population with 26 min periodicity from 50 to 70 R J and the other an oxygen + carbon/sulfur population from closer to the magnetopause and possibly from open field lines. Comparison of the two observations would seem to suggest that the lower latitude sulfur-dominated population is more sensitive to changes in the solar wind conditions, since it is much more prevalent in the first observation.
Io injects both oxygen and sulfur into the Jovian magnetosphere, so if both X-ray-producing populations originate in the outer magnetosphere, there needs to be an explanation for why the 50-70 R J region is dominated by sulfur emission and features less oxygen emission. Oxygen ions that produce X-rays have a higher ionization energy than sulfur ions. For instance, O 6+ requires 739 eV to become ionized [Drake, 1988], while S 6+ -S 9+ require only 281-447 eV [Biémont et al., 1999]. This means that it is possible to have a magnetospheric region where there is sufficient energy for charge stripping and X-ray production from sulfur, but not from oxygen. More energy would be expected to be injected closer to the magnetopause, either through pulsed dayside reconnection, where the field lines closer to the magnetopause would be more perturbed [Bunce et al., 2004], or through field-aligned potentials, which would be expected to increase with radial distance from Jupiter. It is therefore possible that either of these mechanisms could create a higher-energy region closer to the magnetopause and a lower-energy region deeper in the magnetosphere. It is also possible that quenching and opacity effects, as suggested by Kharchenko et al. [2008] and Ozak et al. [2010], may need to be considered to explain the spatial and periodic differences between the two populations. Figure 16 summarizes the equatorial mapping of the sources for the different precipitating particles generating the observed X-rays. Findings from recent work by T. Kimura et al. (Dynamics and source location of Jupiter's high energy X-ray aurora investigated by Chandra, XMM-Newton and Hisaki satellite, manuscript in preparation) similarly identify both closed field lines in the outer magnetosphere and open field lines beyond the magnetopause as possible X-ray sources.
The presence of both magnetospheric and cusp precipitation is not precluded by the findings of Cravens et al. [2003], Bunce et al. [2004], or Kharchenko et al. [2006], but cusp precipitation would only be the dominant source of emission during auroral UV flare-like conditions or heightened solar wind conditions. The mSWiM propagation and radio emission show that solar wind densities increased at Jupiter during the first observation, suggesting that these heightened solar wind conditions may have been present. Cusp precipitation would include precipitation from protons, which are highly abundant in the solar wind and would be expected to generate bright polar UV flares. Without coincident UV observations at the time of the X-ray observations reported here, it is difficult to determine levels of proton precipitation and therefore to further separate a solar wind or magnetosphere source for the higher-latitude mixed population of high charge-state oxygen and carbon/sulfur.
The precise magnetospheric origin of each particle depends not only on the spatial uncertainties but also on the internal field model used to initialize the Vogt et al. [2011] mapping. Vogt et al. [2015] analyze the differences between the models (VIP4 [Connerney et al., 1998], the Grodent Anomaly Model, and VIP Anomaly Longitude (VIPAL) [Hess et al., 2011]) and highlight the differences between each. From a simple X-ray hot spot comparison, we found that the Grodent Anomaly Model used in this work normally mapped X-rays closer to Jupiter, while VIPAL and VIP4 often mapped emission beyond the magnetopause. When the Grodent Anomaly Model did map X-rays more distantly than VIP4, there was often less than 10 R J separation, and local times were often 0.5-3 h later than for VIP4 or VIPAL.
What Process Drives the Hot Spot X-Ray Emission?
We find that in both observations the ions that precipitate to produce the hot spot originate from locations between 10:30 and 18:30 MLT in the dayside magnetosphere. In particular, we find that emission occurs alongside locations where recently opened field lines may occur or on closed field lines in the afternoon flank (but still close to the magnetopause and on the dayside of the planet). Bonfond et al. [2011] map quasiperiodic auroral flares in the far UV to the same region in Jupiter's magnetosphere, at local times between 10:00 and 18:00, and note the similarity between these flares and flux transfer events observed by the Pioneer and Voyager probes. They suggest possible connections between these UV and X-ray features and the Jovian cusp.
Combined with the dayside origin, the periodicities observed may also be a clue to the mechanisms driving the emission. Using Ulysses, Marhavilas et al. [2001] found dual periods of 15-20 min and 40 min in energetic particles upstream of the Jovian bow shock. This may indicate a solar wind connection for the emission. Ulysses also detected 20 min and 40 min periodicities in the dusk magnetosphere [Anagnostopoulos et al., 1998; Karanikola et al., 2004]. Alternatively, the 12 min period falls within the 10-20 min timescale of Jovian global ultra-low-frequency oscillations [Khurana and Kivelson, 1989]. High-energy ions have also been previously observed to have periods within this range [Wilson and Dougherty, 2000]. At Earth, ultra-low-frequency waves are often associated with dayside reconnection [Prikryl et al., 1998] or with either compression from shock events or Kelvin-Helmholtz instabilities [Dungey and Loughhead, 1954; Chandrasekhar, 1961; Kivelson and Russell, 1995]. It seems possible that one or more of these mechanisms may contribute to the detected hot spot periods in our observations. Bunce et al. [2004] found that pulsed dayside reconnection perturbing outer magnetosphere field lines would generate arc-like emission with an ∼30-50 min period, not dissimilar to the 26 min period we observe. They also suggest that this is more likely to occur during high solar wind pressure, such as during our first observation. At this time, in support of a reconnection origin, emission appears to cluster close to regions where reconnection could occur (Figure 10). Desroche et al. [2012] found that reconnection was possible in the afternoon to dusk region based on plasma flow shear speeds, for β = 10 and β = 1, which may suggest that the local time dependence of hot spot emission could be connected with this process.
If the 26 min periodicity were instead related to bounce times on field-aligned potentials, then it remains challenging to explain the shared 12 min oxygen and carbon/sulfur period in this way. This is because the different masses of oxygen and sulfur/carbon would produce different bounce times for ions that originated in the same region. Their shared period may therefore favor a non-bounce-time-related mechanism for the 12 min period in the first observation. We note that this 12 min period is of the same order of magnitude as the Alfvén wave transit times calculated by Bunce et al. [2004]. If the periodicity does relate to the Alfvén transit time, then the shift in period from 12 or 26 min to 42 min may make sense in the context of a shift in magnetopause distance because of solar wind-induced compression/expansion of the magnetosphere.
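As a rough illustration of this mass argument (a sketch, assuming ions of equal kinetic energy E bouncing on a field line of length l_b, and neglecting pitch angle and acceleration details):

t_bounce ∼ l_b / v, with v = √(2E/m), so t_O / t_S = √(m_O / m_S) = √(16/32) ≈ 0.71.

Under these assumptions, oxygen (16 u) and sulfur (32 u) ions of comparable energy would differ in bounce time by ∼40%, making a single shared 12 min period difficult to attribute to a common bounce time.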
For the second observation, when the solar wind was returning to pre-ICME conditions, emission still originates from the dayside of the planet but more prominently from locations in the magnetosphere closer to 15:00-18:00 MLT, along recently closed field lines ( Figure 13). Kimura et al. (manuscript in preparation) suggest that flow shear effects such as Kelvin-Helmholtz instabilities (KHIs), also found at the magnetopauses of Saturn [Masters et al., 2010;Wilson et al., 2012] and Earth [Hasegawa et al., 2004], may be an important factor, and thus an explanation for the periodicity in the Jovian X-ray emission. KHIs are expected to develop on both the dawn and dusk flanks of the planet and are expected to become more substantial moving down the flanks, where the velocity shears are larger, as the magnetosphere and solar wind become progressively more rolled-up [Miura, 1984;Nykyri et al., 2006]. These structures could either inject solar wind particles directly into the magnetosphere, through small-scale reconnection events [Fairfield et al., 2000;Nykyri and Otto, 2001], or could facilitate the transport of momentum across the magnetopause boundary layer [Miura, 1984;Chen and Kivelson, 1993], during the linear phase prior to rollup. Multiple current systems are generated by KHIs [Masters et al., 2010], which may provide the needed energization source to create the high charge-state ions that can produce X-rays.
At Earth, Taylor et al. [2012] reported a dawn-dusk asymmetry in the detection of rolled-up vortices, with a higher occurrence frequency on the postnoon dusk flank, while a previous study by Hasegawa et al. [2006] reported as many KHIs on either flank. Unlike Earth, the Jovian magnetosphere is populated by highly corotating plasma [Thomsen et al., 2010; Mauk et al., 2009], which contributes to a larger shear at the dawnside, where the corotation is sunward [Johnson et al., 2014]. As a result, this larger shear is expected to favor the generation of KHI on the dawnside rather than on the duskside [Desroche et al., 2012, 2013]. However, based on the development timescale of Kelvin-Helmholtz vortices relative to the Jovian orbital period, the structures at the dawn and dusk flanks may primarily originate from the same location [Johnson et al., 2014], which could result in observation of rolled-up vortices at earlier MLTs.
KHIs similar to those at Earth are less able to explain the emission in the first observation, which originates closer to the nose of the magnetosphere, near noon MLT. Cowley et al. [2007], however, find that flow shear along the open-closed field line boundary would be important at Jupiter and capable of generating high-latitude aurora. The shear increases when the magnetosphere is compressed, due to increased angular velocity of the magnetospheric plasma, which could cause auroral emission to brighten [Nichols et al., 2009], so it may be that flow shear is also relevant close to the nose.
It remains unclear why the hot spot feature is localized in these and previous observations [Gladstone et al., 2002; Branduardi-Raymont et al., 2008] and restricted to limited longitudes of the Jovian pole. If the hot spot is driven by KHIs or dayside reconnection, then this may imply either that these processes are localized for the Jovian magnetosphere or that the high-energy downward current region that produces X-rays is localized.
The high-energy electrons that generate the bremsstrahlung emission originate on the opposite side of the planet to the ion emission, in regions between 02:00 and 06:30 magnetospheric local time. At Earth, similar features are associated with whistler mode waves and the dawn chorus. The possible periodicity in the 5-10 min range may be consistent with this explanation. Dawn storms at Jupiter have been observed in the UV on several prior occasions [e.g., Gustin et al., 2006;Clarke et al., 2009;Nichols et al., 2009] and may be capable of supplying sufficiently energetic electrons for X-ray bremsstrahlung emission. The hard X-ray emission from high-energy electron precipitation also increased during the first observation. Brightening of the UV main emission has been observed to occur coincident with solar wind shocks [e.g., Nichols et al., 2009]. Simultaneous UV-X-ray observations would help to further constrain these connections between brightness variation in the UV main oval and increased hard X-ray emission from high-energy electrons in this region. They would also help to identify global current systems, with UV helping to highlight upward currents (away from the planet) and X-rays from ions helping to identify downward currents (toward the planet).
The Auroral Enhancement
Where Is the Auroral Enhancement Source?
In the quadrant from 180° to 270° S3 longitude, we note the largest change in auroral emission between the two observations: the bright auroral enhancement on day of year 276.25. The brightest peak of this event lasts ∼20 min, 2-4 times longer than the flare reported by Elsner et al. [2005]. Figures 7 and 14 show that the ion emission originates from a range of different latitudes and therefore maps to several different closed and open field line regions, suggesting that at this time, there may be several downward current regions onto which the ions can precipitate. The precipitating particles also originate from a range of different magnetospheric local times across the dayside of Jupiter, from dawn to close to dusk.
What Process Connected to the ICME Drives the Observed Auroral Enhancement?
The auroral enhancement occurs 1-1.5 h prior to a bright non-Io decametric radio burst (Figure 2), which has previously been found to relate to the impingement of a solar wind forward shock [Gurnett et al., 2002; Lamy et al., 2012; Hess et al., 2012, 2014]. The mSWiM propagation also suggests the arrival of an ICME close to this time. The combination of this radio emission and the mSWiM-predicted solar wind density peak leads us to believe that the bright X-ray auroral enhancement is driven directly by this ICME.
What process could be directly responsible for this X-ray brightening? The driver does not seem to be a continuation of the same process that produces the hot spot emission because the properties of the two emissions differ. The prominent differences between the AEQ and HSQ emission include a different population of precipitating particles (Figures 6 and 8); the enhancement emission is spatially less localized than the hot spot emission (Figures 3, 7, 8, and 10-14); and the enhancement emission seems to increase temporally into a concentrated flare-like event, with no significant periodicity in the ion emission (Figures 5 and 15), while the hot spot emission exhibits clear pulsations.
The AEQ features also seem atypical when compared with other X-ray observations [Gladstone et al., 2002; Branduardi-Raymont et al., 2008]. While the hot spot may be driven by KHIs or pulses of dayside reconnection close to a downward current region, we suggest that the auroral enhancement is driven by a less common process that is directly associated with the changing solar wind parameters induced by the ICME. Inspecting the mSWiM propagation (Figure 1) implies that the driver relates to either increased solar wind density or changing interplanetary magnetic field angle (as suggested by the rotation in B_T). We propose two possible drivers based on these changing solar wind parameters, but note that they might not be independent drivers: (1) an ICME-induced compression event or (2) an ICME-induced instance of large-scale dayside reconnection.
Increased ram pressure from the heightened solar wind density (Figure 1a) could drive a Jovian magnetosphere compression. The Vogt et al. [2011] mapping shows X-ray emission from several regions inside the magnetosphere, suggesting that the ICME transfers energy into the magnetosphere, so that ions are sufficiently energetic for X-ray production. This also raises questions as to the location of the downward currents (on which the ions precipitate) at this time. Compression events have been suggested to drive changes in Jupiter's current system and therefore acceleration processes [Cowley and Bunce, 2003a, 2003b]. Adjustments to the location of downward currents, induced by the compression, may therefore explain the observed broad spatial spread of ion emission, which during the auroral enhancement is not restricted to the hot spot as it normally is.
Alternatively, or in combination with a compression, a large-scale instance of dayside reconnection may explain the observations. Desroche et al. [2012] showed that dayside reconnection would be confined to local regions on the magnetopause for certain IMF orientations, but varying IMF angle could lead dayside reconnection to occur across a larger proportion of the magnetopause. Masters [2015] further shows for Saturn that changing IMF angle can lead to increased reconnection voltages and a larger spatial scale of magnetopause reconnection. This could result in increased injection of solar wind particles and energization of a larger region of the outer magnetosphere plasma, explaining the observations of the larger spatial scale of emission and the observed change in the precipitating population from the spectra. The inverse of this mechanism may also help to explain reduced emission from the hot spot for some observations, since a less favorable IMF angle would suppress reconnection and therefore emission from the hot spot. Further comparison of Jupiter X-ray emission with upstream IMF measurements would help to investigate this relationship.
The Vogt et al. [2011] mapping also lends weight to the argument that solar wind-magnetosphere coupling is at work during this interval. It is possible that the solar wind compression and/or an associated dayside reconnection under a favorable IMF direction can lead to an opening of magnetic flux on the dayside, and concurrent X-ray flaring. Cravens et al. [2003], addressing charge exchange, show that X-ray emissivity from solar wind particles depends on solar wind velocity and density, which is in line with our observation of increased emission. We also found that the magnetospheric mapping suggests an open field line origin for at least some of the emission. This is supported by similarities between the AEQ spectrum and cometary spectra, which are known to be produced by solar wind charge exchange (from direct solar wind precipitation). However, we are cautious to note that the complex configuration of the Jovian magnetosphere at this time may not be accurately represented by the Vogt et al. [2011] mapping model, so the magnetospheric mapping at this time may be less reliable.
The low frequency of such ICME events, relative to the timescales of X-ray observations, may help to explain why these features have not been previously reported in the literature and why the second observation seems to have an AEQ that is again largely devoid of emission. We also note that such events may be confused with hot spot emission, if they occur at a time when the hot spot is in the observable quadrant, as opposed to this observation where the hot spot was rotating out of view when the auroral enhancement occurred.
While we suggest that the solar wind does drive several changes in Jupiter's X-ray aurora, we note that the importance of the solar wind as a driver of magnetospheric dynamics, and the existence of Dungey cycle processes at Jupiter, remain subjects of debate [McComas and Bagenal, 2007, 2008; Cowley et al., 2008].
Given that our findings are based on only two observations with this type of analysis, application of this approach to other observations would help to determine whether these features persist, how and where they originate, and whether there are systematic trends between the X-ray aurora and solar wind.
Conclusion
We report the first X-ray observation that was planned to coincide with an ICME arrival at Jupiter and find evidence for ICME-induced changes in the northern X-ray aurora. We observe changes in the morphology, spectra, and periodicity of the emission at this time. We particularly find an auroral enhancement by a factor of 8, occurring 1-1.5 h before a bright burst of non-Io decametric radio emission, often associated with the arrival of an ICME-induced fast-forward shock [Hess et al., 2012, 2014; Lamy et al., 2012], and at a time when solar wind propagation models indeed predict an ICME arrival.
We have used Vogt et al. [2011] magnetospheric mapping to identify the origin of the X-ray emission. This mapping suggests that most auroral X-ray emissions came from precipitating ions with origins beyond 60 R J on both open and closed field lines. Spatial uncertainties and uncertainties as to whether the magnetosphere is compressed or expanded at this time inhibit us from quantifying from which side of the magnetopause the majority of emission originates. The region between 50 and 70 R J is dominated by 200-500 eV emission, which we attribute to precipitating high charge-state magnetospheric sulfur ions. At higher latitudes that map between 70 and 120 R J and to open field lines, there is a mixture of precipitating high charge-state carbon/sulfur and oxygen ions.
In the hot spot, these separate origins for ions of different species are supported by periodicity measurements. In the first observation, we find a strong 26 min period associated with the carbon/sulfur (200-500 eV) emission, but not with the oxygen (500-800 eV) emission. We do, however, find a 12 min period at a low level of significance in both the oxygen and carbon/sulfur emission. When the two populations are combined, the 12 min period becomes significant. The periods of 12 and 26 min in the first observation are distinctly shorter than the 42 min period we detect in the second observation, which is close to the 45 min timescale found by Gladstone et al. [2002].
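To make the periodicity analysis concrete, below is an illustrative sketch (not the paper's actual pipeline) of how one might search for periods of this order in a list of X-ray photon arrival times, using a Lomb-Scargle periodogram on a binned light curve; the bin width, period range, and input format are assumptions.

```python
# Illustrative sketch (not from the paper): search for ~5-60 min
# periodicities in an X-ray light curve built from photon arrival
# times (assumed to be in seconds, e.g., from a Chandra event list).
import numpy as np
from astropy.timeseries import LombScargle

def period_search(photon_times_s, bin_s=60.0, p_min_s=300.0, p_max_s=3600.0):
    # Bin the photon arrival times into a count light curve.
    edges = np.arange(photon_times_s.min(), photon_times_s.max() + bin_s, bin_s)
    counts, _ = np.histogram(photon_times_s, bins=edges)
    t_mid = 0.5 * (edges[:-1] + edges[1:])
    # Evaluate the periodogram on a grid of trial frequencies (Hz).
    freq = np.linspace(1.0 / p_max_s, 1.0 / p_min_s, 2000)
    power = LombScargle(t_mid, counts).power(freq)
    best_period_min = 1.0 / freq[np.argmax(power)] / 60.0
    return best_period_min, power.max()
```

The significance of any peak would still have to be assessed, for example against periodograms of shuffled or simulated light curves, as is standard practice.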
X-ray emission is concentrated in regions near to open field lines. On the basis of the magnetospheric local time of the source and the origin close to the magnetopause, alongside the periodicities and heightened solar wind conditions, we suggest that pulses of dayside reconnection [Bunce et al., 2004;Desroche et al., 2012] near a magnetospheric downward current region could be driving the X-ray hot spot emission. We also suggest that the spectral, spatial, and temporal differences between the hot spot emission and auroral enhancement emission imply that they are not created by a continuation of the same process. Instead, we suggest that the auroral enhancement is directly driven by the ICME through a compression event and/or a larger-scale instance of dayside reconnection than that producing the hot spot emission.
Other mechanisms in the outer magnetosphere, near the magnetopause, such as KHIs, may also have an important role in transferring momentum and energy in our observations, given that the Dungey cycle may well be less important for Jupiter than Earth [McComas and Bagenal, 2007, 2008; Delamere and Bagenal, 2010; Johnson et al., 2014].
We believe that the approach of applying Vogt et al. [2011] model mapping to energy-binned, subsolar longitude-binned X-rays offers excellent possibilities for mapping the origins of the Jovian X-ray aurora and thus better understanding the Jovian outer magnetosphere and the processes occurring close to the magnetopause. Similar analysis on new and archival X-ray observations is required to determine whether the features observed in these observations persist and how they relate to systematic trends in solar wind conditions. Combining observations of this kind with the approach and arrival of the Juno spacecraft in 2016 will offer further opportunities to understand the processes governing Jovian auroral X-rays.
Macrophages in xenotransplantation
Xenotransplantation refers to organ transplantation across species. Immune rejection of xenografts is stronger and faster than that of allografts because of significant molecular differences between species. Recent studies have revealed the involvement of macrophages in xenograft and allograft rejections. Macrophages have been shown to play a critical role in inflammation, coagulation, and phagocytosis in xenograft rejection. This review presents a recent understanding of the role of macrophages in xenograft rejection and possible strategies to control macrophage-mediated xenograft rejection.
INTRODUCTION
Xenotransplantation is the transplantation of organs, tissues, or cells across species. Pigs are considered to be an ideal organ source for xenotransplantation due to their physiological similarity to humans and the feasibility of pig breeding. However, immune rejection of xenografts is believed to be stronger and faster than that of allografts. This can be attributed to the significant molecular differences between species. Xenograft rejection can be temporally divided into four interlinked immune rejection types: hyperacute rejection (HAR), acute vascular and cellular rejections, and chronic rejection [1].
HAR is mediated by human antibodies against carbohydrate moieties, such as galactose α1,3-galactose (α-gal) and N-glycolylneuraminic acid, present on the surfaces of pig endothelial cells [2]. Binding of pre-existing antibodies to these antigens and subsequent complement activation destroy pig endothelial cells, resulting in xenograft rejection within a few minutes [3]. During the last two decades, outcomes of pig-to-non-human primate organ transplantation have been markedly improved owing to the development of genetically engineered pigs; transplants from such pigs help avoid HAR [4].
However, many aspects of immune responses to xenografts still pose a critical problem for successful transplantation.
Macrophages are phagocytic innate immune cells that play a crucial role in host defense. Recent studies have revealed the involvement of macrophages in immune rejection of organ transplants. In animal allotransplant models, macrophages recognize allogeneic antigens, induce immune responses, and thus contribute to graft rejection [5,6]. In addition, they are able to kill allogeneic cells by phagocytosis [7]. Clinical studies show a positive correlation between macrophage infiltration and graft rejection [8-11]. The emerging role of macrophages in allograft rejection is thus a topic of considerable interest in transplant immunology. Therefore, the purpose of this paper is to review the recent understanding of the role of macrophages in xenograft rejection and possible strategies to control macrophage-mediated xenograft rejection.
HIGHLIGHTS
• Damage-associated molecular pattern (DAMP) release during ischemia reperfusion injury is one of the main causes of activation of macrophages, which play a critical role in inflammation and coagulation in xenograft rejection.
• Cross-talk between macrophages, hepatocytes, and vascular endothelial cells through the production of immune mediators, such as monocyte chemoattractant protein 1 (MCP-1), interleukin (IL)-6, and C-reactive protein (CRP), may play a critical role in inflammatory responses and coagulation in pig-to-baboon organ transplantation.
• Early generation of MCP-1, IL-6, and CRP as well as DAMPs needs to be controlled to avoid inflammation and coagulation.
Inflammation and Coagulation
Inflammation, triggered by innate immune cells as a defense mechanism against infectious agents or tissue damage, is a major problem in organ transplantation.
Damaged host cells release or secrete various damage-associated molecular patterns (DAMPs) [12]. In organ transplantation, DAMP release from injured tissues is inevitable during ischemia reperfusion injury (IRI) and levels of the released DAMPs increase following IRI.
Unlike other DAMPs, adenosine triphosphate (ATP) is a relatively small molecule and is recognized by specific cell surface receptors, such as P2X and P2Y. Binding of extracellular ATP to these receptors induces inflammatory responses of macrophages [25]. A study using a murine liver allotransplantation model has suggested that extracellular ATP is involved in increased graft dysfunction and that a reduction of regulatory T cell frequency affects overall graft survival [26].
The role of the coagulation pathway in IRI and its crosstalk with the inflammatory pathway have been recently proposed [27]. Tissue factor (TF), which is the primary initiator of coagulation and is expressed on both monocytes/macrophages and endothelial cells, is a central player in providing a bridge between these pathways [28]. Macrophages play a critical role in coagulation as well as inflammation in xenograft rejection (Fig. 1).
During IRI, DAMPs activate monocytes/macrophages, which then produce proinflammatory cytokines. In response to DAMPs and the pro-inflammatory cytokines, TF is rapidly induced by these cells in the graft recipients and becomes exposed to blood [29,30]. Cell surface TF can complex with factor VIIa, and thereby trigger coagulation by activating factor X and the subsequent coagulation cascade [31]. The interplay between the inflammatory responses and the coagulation system plays a significant role in xenograft rejection [33-36].
(Table. Designated organs where indicated DAMPs have been identified and studied in clinical solid organ transplantation.)
Phagocytosis
The mechanism by which macrophages distinguish between self and allogeneic non-self organs, tissues, cells, or antigens and promote organ rejection has been recently clarified [37]. Mice that lack T, B, and natural killer cells could distinguish allogeneic antigens from those of self-tissues and induce an innate response. This innate allo-activation is triggered by mismatch between donor and recipient signal regulatory protein α (SIRPα), which is a cell surface molecule interacting with CD47.
Similarly, macrophages are able to recognize and destroy xenografts through cell surface interactions between CD47 and SIRPα (Fig. 1B). Due to their molecular incompatibility, an impaired interaction between pig CD47 and human SIRPα can result in the phagocytic killing of pig endothelial cells by human macrophages [38].
However, pig hematopoietic cells expressing human CD47 could be protected from phagocytic killing by human macrophages in hematopoietic cell engraftment experiments [41].
Chemotaxis and Acute Phase Responses
Macrophages are the major cells infiltrating into an allograft during severe rejection [42]. Similarly, macrophage infiltration occurs just after IRI of a xenograft and persists until graft rejection [43,44]. The infiltration level of macrophages was significantly higher in α-gal knockout xenogeneic islets than in allogeneic islets [45]. The mechanism of monocyte accumulation within a xenograft is thought to be associated with the production of chemokines, such as monocyte chemoattractant protein 1 (MCP-1), in the graft [35,46].
In pig-to-baboon heart and kidney transplantation, it was observed that early elevated serum levels of MCP-1, IL-6, and C-reactive protein (CRP), which is an acute phase protein synthesized by hepatocytes in response to proinflammatory cytokines [47], precede consumptive coagulopathy [35]. In addition, increased numbers of monocytes were associated with enhanced expression of TF [35]. These results, taken together with those of previous reports, indicate that IL-6 provokes liver cells to produce CRP [48], which stimulates endothelial cells to produce MCP-1 [49], and that both IL-6 [50] and CRP promote TF expression, linking the inflammatory response to coagulation.
STRATEGIES TO CONTROL MACROPHAGE-MEDIATED XENOGRAFT REJECTION
A recent study has suggested that there is increasing evidence for a sustained inflammatory response in pig-to-baboon xenograft recipients, and this systemic inflammation is a critical hurdle for successful xenotransplantation [52]. Therefore, therapeutic prevention of inflammation is necessary to achieve successful pig organ xenotransplantation. TF expression is induced in activated monocytes/macrophages, while CD154 is also expressed on monocytes/macrophages during inflammation [54]. Blockade of the IL-6 receptor with the anti-IL-6 receptor mAb tocilizumab resulted in a reduction in the levels of CRP [36] and serum histones [55] upon pig-to-non-human primate xenotransplantation.
Anti-inflammatory Drugs
Other anti-inflammatory drugs have also been tested. The nuclear factor kappa B inhibitor parthenolide significantly suppressed histone-induced pig endothelial cell death in an in vitro study [55]. Cobra venom factor (CVF), which had originally been used to deplete complement causing HAR following xenotransplantation, was found to reverse the increased IL-6 and MCP-1 levels in pig-to-baboon heart and artery patch transplantation [56].
Since the effective treatment of an established inflammatory response to DAMPs is relatively difficult, selective and rapid blocking or scavenging of released DAMPs would be a more promising therapeutic strategy.
Indeed, anti-histone therapy was found to prevent histone-induced inflammation in xenotransplantation [55].
Mice treated with HMGB1 antibody were protected against pulmonary dysfunction and had improved lung allograft outcomes [16]. Blockade of HMGB1 secretion by a small-molecule inhibitor was found to be beneficial in preventing the loss of islet grafts and reversing diabetes in murine syngeneic islet transplantation [57]. Administration of an ATP antagonist to a recipient mouse for 2 weeks led to prolonged survival of the transplanted allogeneic heart [58].
Laboratory studies have suggested possible manipulation of the inflammatory response by using a DAMP antigen rather than using an antibody or antagonist.
Ischemic preconditioning with HMGB1 protected grafts from IRI through TLR4 signaling in renal and hepatic allotransplantation [59,60]. In addition, genetic overexpression of HSP27 could reduce IRI-induced apoptosis of graft cells and delay the onset of acute rejection in murine heart allotransplantation [61].
Targeting Macrophages
Deletion or inhibition of macrophages can attenuate graft injury and prolong graft survival [62]. In recent animal and clinical studies, some macrophage subsets have been reported to act as regulatory cells, and the adoptive transfer of these macrophages significantly prolonged graft survival. A subset of macrophages was found to suppress allogeneic T cell proliferation and inhibit dendritic cell maturation [63,64]. Furthermore, adoptive transfer of these macrophages promotes graft survival and minimizes immunosuppression [64,65].
Although immunological memory has long been thought to be driven exclusively by adaptive immunity, new evidence suggests that various tissue-derived factors can induce epigenetic changes, leading to the formation of innate memory of macrophages [66].
Conflict of Interest
Jae Young Kim is an editorial board member of the journal but did not involve in the peer reviewer selection, evaluation, or decision process of this article. No other potential conflicts of interest relevant to this article were reported.
Funding/Support
This study was supported by research grant from the Korean Society for Transplantation (2019-04-03001-002).
Absolute value linear programming
We deal with linear programming problems involving absolute values in their formulations, so that they are no more expressible as standard linear programs. The presence of absolute values causes the problems to be nonconvex and nonsmooth, so hard to solve. In this paper, we study fundamental properties on the topology and the geometric shape of the solution set, and also conditions for convexity, connectedness, boundedness and integrality of the vertices. Further, we address various complexity issues, showing that many basic questions are NP-hard to solve. We show that the feasible set is a (nonconvex) polyhedral set and, more importantly, every nonconvex polyhedral set can be described by means of absolute value constraints. We also provide a necessary and sufficient condition when a KKT point of a nonconvex quadratic programming reformulation solves the original problem.
Introduction
Mangasarian [11] introduced absolute value programming as mathematical programming problems involving absolute values. So far, researchers have paid attention primarily to absolute value equations. More general systems or even optimization problems have been studied quite rarely; some of the few works include [12,28]. Our aim is to change this focus and turn the attention to linear programs with absolute values.
Notation Given a matrix A, we use A i * for its i-th row and A * i for its i-th column. Next, diag(v) is the diagonal matrix with entries given by vector v, I n is the identity matrix of size n × n, e i is its i-th column and e = (1, . . . , 1) T is the vector of ones (with convenient dimension). Given a set M, we use conv M for a convex hull of M. The sign of a real r is sgn(r) = 1 if r ≥ 0 and sgn(r) = −1 otherwise. The positive and negative parts of a real r are defined r + = max(r, 0) and r − = max(−r, 0), respectively. For vector or matrix arguments, the absolute value, the sign function, the positive and negative parts are applied entrywise.
Absolute value linear programming We introduce an absolute value LP problem in the form

max c^T x subject to Ax − D|x| ≤ b, (1)

where c ∈ R^n, b ∈ R^m and A, D ∈ R^{m×n}. By f* we denote the optimal value, and by M we denote the feasible set. Throughout the paper, we assume that D is nonnegative, D ≥ 0. Notice that our assumptions are made without loss of generality for the following reason. The objective function is considered as a linear function since otherwise it can be transformed into the constraints by standard techniques. Equations can also be split into double inequalities by standard means (this need not be the best way from the numerical point of view, but for mathematical analysis we can do it with no harm).
Nonnegativity of matrix D also does not cause harm to generality. If this is not the case, we write D = D^+ − D^−, where D^+ ≥ 0 and D^− ≥ 0 are the entrywise positive and negative parts of D, respectively. Problem (1) then reads

max c^T x subject to Ax − D^+|x| + D^−|x| ≤ b, (2)

which is equivalent to

max c^T x subject to Ax − D^+|x| + D^− y ≤ b, x ≤ y, −x ≤ y. (3)

Due to nonnegativity of D^−, this transformation is equivalent: If x solves (2), then x and y := |x| solve (3). Conversely, if x, y solves (3), then x solves (2). Problem (3) follows the structure of (1), which concludes the explanation.
Roadmap The aim of this paper is to address the fundamental solvability, geometric and computational properties of the problem. In particular, the paper is organized as follows. Section 1.1 provides a motivation, showing that many problems can be naturally expressed as absolute value LP problems. In Section 2, we study the geometric structure of the feasible set and give some conditions for boundedness, solvability, connectedness and convexity. We also address the computational complexity issues and propose a certain type of duality. Section 3 is devoted to the relation of the feasible set and general (nonconvex) polyhedral sets; we show that every polyhedral set admits an absolute value description. Section 4 handles integrality of the vertices of (2). In Section 5, we consider a quadratic programming reformulation of the absolute value LP problem and provide a characterization when KKT points are optimal solutions of (2). Eventually, Section 6 focuses on the special situation of the so-called basis stability, in which the problem is efficiently solvable.
Motivation
Many (computationally hard) problems can easily be reformulated by means of absolute value LP. In this section, we mention some of them.
Absolute value equations The feasibility problem

Ax + |x| = b

is called the absolute value equation and has attracted the attention of many researchers in recent years [7,13,11,16,20,25,29,30]. Despite its simple formulation, the problem is NP-hard to solve [11]. Obviously, it can be solved by absolute value LP since problem (1) is more general. Further, as observed many times (see [13,11,20]), the problem of absolute value equations is equivalent to the standard linear complementarity problem. Therefore, (1) has the potential to handle various optimization problems with complementarity constraints.
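As a concrete illustration of how such equations are typically attacked numerically, the following sketch implements a generalized Newton iteration for Ax + |x| = b, in the spirit of methods studied for absolute value equations; the starting point, tolerance, and iteration cap are assumptions, and convergence is not guaranteed for general A.

```python
# Illustrative sketch (not from the paper): generalized Newton
# iteration for the absolute value equation Ax + |x| = b.
import numpy as np

def ave_generalized_newton(A, b, x0=None, tol=1e-10, max_iter=100):
    n = len(b)
    x = np.zeros(n) if x0 is None else np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        # Linearize |x| as diag(sgn(x)) x at the current iterate,
        # using the paper's convention sgn(0) = 1.
        S = np.diag(np.where(x >= 0, 1.0, -1.0))
        x_new = np.linalg.solve(A + S, b)  # may fail if A + S is singular
        if np.linalg.norm(x_new - x, np.inf) <= tol:
            return x_new
        x = x_new
    return x
```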
Integer linear programming Consider a 0-1 integer linear program

max c^T x subject to Ax ≤ b, x ∈ {0, 1}^n. (4)

The condition x ∈ {0, 1}^n equivalently states |2x − e| = e, so the problem reads

max c^T x subject to Ax ≤ b, |2x − e| = e, (5)

which is an absolute value linear program. In view of this transformation, many other NP-hard problems are directly reformulated by means of absolute value LP. This is particularly the case for problems arising in graph theory, including the maximum clique, the maximum cut, the vertex cover, the maximum matching, or the graph coloring problem.
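A quick numerical sanity check of the binarity encoding |2x − e| = e (an illustrative snippet, not part of the paper):

```python
# The constraint |2x - e| = e holds exactly for 0-1 vectors and
# fails for any fractional entry, here checked on a small grid.
import itertools
import numpy as np

n = 3
e = np.ones(n)
for x in itertools.product([0.0, 0.5, 1.0], repeat=n):
    x = np.array(x)
    satisfies = np.all(np.abs(2 * x - e) == e)
    assert satisfies == bool(np.all((x == 0) | (x == 1)))
```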
Disjunctive programming [1] Since min(a, b) = (1/2)(a + b − |a − b|), we can formulate the disjunctive inequality min(a_1^T x − b_1, a_2^T x − b_2) ≤ 0 as the absolute value inequality a_1^T x − b_1 + a_2^T x − b_2 − |a_1^T x − b_1 − a_2^T x + b_2| ≤ 0. The formula for the minimum is recursively expanded, e.g., min(a, b, c) = min(a, min(b, c)); thus, disjunctions of more terms are easily handled. The expression becomes rather cumbersome, but with the help of additional variables a convenient form is derived. For equations, we can effectively handle disjunctions of more than one equation. To be concrete, a disjunction of two systems of equations admits an absolute value reformulation consisting of m equations.

Interval linear programming [4,23] Let [A ± D] := {Ã; A − D ≤ Ã ≤ A + D} and consider a class of linear programs

max c^T x subject to Ãx ≤ b, (6)

where Ã ∈ [A ± D]; this is a type of an interval LP problem. By the theory of interval linear programming on the range of the optimal values [2,4,17,21], the value f* is the best achievable optimal value of the class of LP problems. That is,

f* = max {f*(Ã); Ã ∈ [A ± D]}, (7)

where f*(Ã) denotes the optimal value of (6). Further, the feasible set of (1) is the union of the feasible sets of (6), ∪_{Ã ∈ [A ± D]} {x; Ãx ≤ b}.
Basic properties
In this section, we discuss basic properties of problem (1) and particularly of the feasible set

M = {x ∈ R^n; Ax − D|x| ≤ b}.

We use the notation M(b) when the right-hand side vector b is subject to some changes. First, observe that problem (1) is a nonconvex and nonsmooth optimization problem. Even worse, the feasible set M can be disconnected. Consider, for example, the constraint |x| = e, which characterizes 2^n isolated points (±1, . . . , ±1)^T.
The problem becomes tractable provided that we restrict to any orthant; we then get rid of the absolute value, and (1) turns into an LP problem. More concretely, let s ∈ {±1}^n and consider the orthant defined by the sign vector s, that is, the orthant diag(s)x ≥ 0. Within this orthant, the feasible set M reads {x; (A − D diag(s))x ≤ b, diag(s)x ≥ 0} since we substitute |x| = diag(s)x. As a consequence, we have

M = ∪_{s ∈ {±1}^n} {x; (A − D diag(s))x ≤ b, diag(s)x ≥ 0}. (8)

Observation 1. The feasible set M is the union of at most 2^n convex polyhedra; in particular, it is a polyhedral set that is convex within each orthant.

As another consequence, we can solve (1) directly by a reduction to the 2^n LP problems

max c^T x subject to (A − D diag(s))x ≤ b, diag(s)x ≥ 0, s ∈ {±1}^n. (9)

Obviously, if the ith column of D is zero, then we do not need to distinguish the sign of s_i and the overall complexity decreases. Therefore, the complexity amounts to solving 2^d linear programs, where d is the number of nonzero columns of D.
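A direct implementation of this reduction might look as follows; this is an illustrative sketch rather than the authors' code, SciPy's linprog is an assumed dependency, and detection of unbounded orthant subproblems is omitted for brevity.

```python
# Illustrative sketch (not from the paper): solve
#   max c^T x  subject to  Ax - D|x| <= b
# by enumerating sign vectors over the nonzero columns of D.
import itertools
import numpy as np
from scipy.optimize import linprog

def solve_avlp(c, A, D, b):
    n = len(c)
    free = [j for j in range(n) if np.any(D[:, j] != 0)]  # only these signs matter
    best_val, best_x = -np.inf, None
    for signs in itertools.product([-1.0, 1.0], repeat=len(free)):
        s = np.ones(n)
        s[free] = signs
        # Within the orthant diag(s) x >= 0 we have |x| = diag(s) x, so the
        # constraints become (A - D diag(s)) x <= b and -diag(s) x <= 0.
        A_ub = np.vstack([A - D * s, -np.diag(s)])
        b_ub = np.concatenate([b, np.zeros(n)])
        res = linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=(None, None))
        if res.status == 0 and -res.fun > best_val:
            best_val, best_x = -res.fun, res.x
    return best_val, best_x  # -inf if infeasible in every orthant
```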
This simplification is not artificial since many absolute value linear programs arising from other fields may have a naturally reduced number of nonzero columns of D. Consider the integer linear program (4). Using the reformulation (5), we get an absolute value linear program in the matrix form of (1). Even though the number of constraints and variables increases, the number of nonzero columns of D remains the same. Hence the orthant-by-orthant decomposition complexity remains the same, too. Eventually, the orthant-by-orthant decomposition partially reveals the structure of the vertices of the convex hull of M.

Proposition 1. Let x* ∈ M and s := sgn(x*). If x* is a vertex of conv M, then it is a vertex of the convex polyhedral sets

{x; (A − D diag(s))x ≤ b, diag(s)x ≥ 0} (10)

and

{x; (A − D diag(s))x ≤ b}. (11)

Proof. First notice that x* is feasible for both (10) and (11). Since x* is a vertex of conv M, it must be a vertex of both these sets. Indeed, (10) is a subset of M ⊆ conv M, since every x satisfying (10) belongs to M.

Boundedness The orthant-by-orthant decomposition approach applies to boundedness, too. The feasible set M is bounded if and only if the feasible set of (9) is bounded for every s ∈ {±1}^n. In other words, for every s ∈ {±1}^n, the system (A − D diag(s))x ≤ 0, diag(s)x ≥ 0 has only the trivial solution x = 0. Equivalently, we state it as follows.

Proposition 2. Checking boundedness of M is a co-NP-complete problem.

Proof. We use a reduction from the Set-Partitioning problem: Given a ∈ Z^n, does there exist x ∈ {±1}^n with a^T x = 0?
We formulate it as |x| = e, a^T x = 0. We claim that its feasibility is equivalent to the non-trivial feasibility of

x ≤ ey, −x ≤ ey, a^T x = 0, ey ≤ |x|, y ≥ 0. (12)
The certificate for unboundedness of M is any feasible solution and a non-trivial solution of (12). Therefore, the problem is co-NP-complete.
Solvability Proposition 3. The feasible set M(b) is nonempty for each right-hand side vector b ∈ R^m if and only if the system Ax − D|x| ≤ −e is solvable.
Proof. "If." Let b ∈ R n be arbitrary and let x * ∈ R n be such that Ax * − D|x * | ≤ −e. If b ≥ 0, then x * ∈ M(b) and we are done. Otherwise, there exists k such that b k < 0. Define "Only if." Obvious.
However, it turns out that checking feasibility of the single instance with b = −e is a computationally hard problem.

Proposition 4. Checking whether M(−e) ≠ ∅ is an NP-complete problem.
Proof. Eventually, the reduction is rewritten into the canonical form of M. The certificate for M(−e) ≠ ∅ is a solution of M(−e); in view of the orthant decomposition, it is a solution of a system of type (9), so it has polynomial size.
The proof also reveals that the problem remains intractable even when D has at most one nonzero row. On the other hand, in view of Observation 1, the complexity grows in the number of nonzero columns of D. That is, provided the number of nonzero columns of D is fixed, the problem is polynomially solvable by the orthant-by-orthant decomposition approach.
Connectedness As we observed, the feasible set need not be connected, so one may be interested in conditions for connectedness. Clearly, if b ≥ 0, then M is connected via the origin. A stronger condition follows.

Proposition 5. The feasible set M is connected if the system of linear inequalities

Ax + Dy ≤ b, x ≤ y, −x ≤ y

is solvable.
Proof. In view of (8), the feasible set M can be viewed as the united solution set of an interval system of linear inequalities. The rest follows directly from [6, Prop. 2], which gives a sufficient condition for connectedness in the context of interval inequalities.
Convexity There are two trivial examples where the feasible set M is convex: the matrix D is zero, or the whole feasible set lies in one orthant. Nevertheless, the set can sometimes be convex even when it intersects the interiors of at least two orthants and D ≠ 0. These situations are hard to characterize, but in essence they somehow combine the above two trivial examples.
Proposition 6. Let M be convex and denote by M_s the set described by (A − D diag(s))x ≤ b. Let x^1 and x^2, respectively, be any vertices of M_{s^1} and M_{s^2}, corresponding to bases B_1 and B_2, and such that they lie in the orthants determined by the sign vectors s^1 and s^2.
Proof. Suppose to the contrary that D_{ij} > 0 for certain i, j. From the assumptions of the proposition, for any strict convex combination x* := λ_1 x^1 + λ_2 x^2, where λ_1, λ_2 > 0 and λ_1 + λ_2 = 1, we obtain a contradiction with the convexity of M.

The condition presented in Proposition 6 is necessary for convexity of M, but not sufficient. Consider, for example, the system depicted in Figure 1: the condition is satisfied, but the set characterized by this system is not convex.
The situation where the feasible set M is convex and intersects several orthants may happen, for example, when different constraints are active in different orthants.

Proposition 7. Let M be convex and let x^1, x^2 ∈ M be such that the ith constraint is active at both of them. Then, for each j ∈ {1, . . . , n}, we have D_{ij} = 0 or x^1_j x^2_j ≥ 0.

Proof. From the assumptions of the proposition, A_{i*}x^1 − D_{i*}|x^1| = b_i and A_{i*}x^2 − D_{i*}|x^2| = b_i. From convexity of M, we have for any convex combination λ_1 x^1 + λ_2 x^2, where λ_1, λ_2 ≥ 0 and λ_1 + λ_2 = 1,

A_{i*}(λ_1 x^1 + λ_2 x^2) − D_{i*}|λ_1 x^1 + λ_2 x^2| ≤ b_i.

Thus, we derive D_{i*}|λ_1 x^1 + λ_2 x^2| ≥ λ_1 D_{i*}|x^1| + λ_2 D_{i*}|x^2|. From the triangle inequality, the above holds as an equation. Hence for each j ∈ {1, . . . , n} we have D_{ij}|λ_1 x^1_j + λ_2 x^2_j| = D_{ij}(λ_1 |x^1_j| + λ_2 |x^2_j|). This can happen only if D_{ij} = 0 or x^1_j x^2_j ≥ 0.
In [8], intractability of checking convexity of M was shown for the particular case of absolute value equations.
Complexity Since the absolute value equation problem Ax + |x| = b is NP-hard [11], it is also intractable to solve absolute value LP. In particular, it is NP-hard to check feasibility of (1). Further, we show that it is also hard to verify that a certain value is the optimal value.

Proposition 8. Checking whether f* = 0 is an NP-complete problem.

Proof. We again utilize a reduction from the Set-Partitioning problem: Given a ∈ Z^n, does there exist x ∈ {±1}^n with a^T x = 0?
We formulate it as |x| = e, a^T x = 0.
Consider the absolute value LP problem max a^T x subject to |x| = e, a^T x ≤ 0.
Its optimal value is zero if and only if the Set-Partitioning problem is feasible. Therefore it is NP-hard to check if f* = 0. The certificate for f* = 0 is any solution of the Set-Partitioning problem, which proves NP-completeness.
Duality The interval linear programming viewpoint (7) allows us to introduce a certain kind of duality in absolute value LP (cf. duality in interval LP [19,27,21]). Let

min b^T y subject to Ã^T y = c, y ≥ 0 (14)

be the dual problem to (6). Based on weak duality in LP and (7), we get weak duality for (1). Strong duality can be derived under a certain assumption: basically, we need to ensure strong duality in the LP instances (6) and (14). Checking this property is known to be co-NP-hard [19], but there are cheap sufficient conditions. One of them is feasibility of (6) for each Ã ∈ [A ± D], which was addressed in the proof of Proposition 5; if the dual counterpart (15) is feasible as well, then strong duality holds for (1).
Further geometric properties
By Observation 1, we know that the system

Ax − D|x| ≤ b (16)

describes a polyhedral set that is convex within each orthant. A natural question is whether the converse holds as well: Can every polyhedral set that is convex inside each orthant be described by a system of the form (16) without an increase of dimension (additional variables)? The answer is negative.
Example 1. Consider the set M′ depicted in Figure 2. If this set could be formulated as (16), then there would be an absolute value inequality reducing to the inequality −x_1 + x_2 ≤ 0 in the nonnegative orthant; this determines its coefficients up to a parameter α at the absolute value term. In the orthant associated with the sign vector (−1, 1), however, there is no α such that the resulting inequality is satisfied for every point x ∈ M′; note that the problematic points are those on the half-line parallel to the x_2 axis. Thus, there is no simple way to express this set using (16). Nevertheless, we can still reformulate M′ by means of absolute value inequalities, but at the cost of increasing the number of variables. With the help of an additional variable x_3, we can describe M′ by a suitable absolute value system in which the case x_3 ≥ 0 characterizes the left part of M′ (as depicted in Figure 2) and the case x_3 ≤ 0 characterizes the right part of M′.
This example indicates that the problematic polyhedral sets are those that are unbounded with an unbounded direction perpendicular to an axis. Indeed, avoiding such cases, we can prove the property to be true.

Theorem 1. Let M ⊆ R^n be a polyhedral set that is convex in each orthant. Suppose there is no unbounded direction in the boundary of M that is orthogonal to an axis. Then M can be described by means of (16).

Proof. The set M is characterized by a union of convex polyhedra described by linear inequalities. Consider any inequality a^T x ≤ b from the description of M. Without loss of generality, assume that this inequality characterizes a convex part of M in the nonnegative orthant. Consider now the absolute value inequality

a^T x ≤ b + α(e^T|x| − e^T x),

where α > 0 is large enough. Let us focus on an arbitrary but fixed orthant associated with a sign vector s ∈ {±1}^n, s ≠ e; the corresponding orthant is characterized by the inequality diag(s)x ≥ 0. Within this orthant, the absolute value inequality takes the form of

a^T x ≤ b − 2α ∑_{i∈I} x_i,

where I = {i; s_i = −1}. We claim that any feasible point x ∈ M, diag(s)x ≥ 0, satisfies this inequality. It is sufficient to prove it for vertices and extremal directions only. If ∑_{i∈I} x_i = 0, then the point x lies on the border of the nonnegative orthant and the inequality 0 ≤ b − a^T x obviously holds. Thus we can assume that ∑_{i∈I} x_i < 0. Since −2α ∑_{i∈I} x_i > 0 and α > 0 is large enough, the inequality is satisfied for any vertex. Hence it remains to inspect the extremal directions. In order that the inequality be violated, there must be an unbounded edge x* + λy*, λ ≥ 0, such that a^T y* > 0 and y*_i = 0 for every i ∈ I. However, this means that the edge is orthogonal to the axes x_i, i ∈ I; a contradiction.
The technique described in the proof provides an efficient representation of M by absolute value inequalities. If M is characterized by m facets, then the resulting system (16) consists of m inequalities. Figure 3 illustrates the application of Theorem 1. Example 1 indicated that we can overcome the orthogonality assumption of Theorem 1, but at the cost of additional variables. Indeed, only n additional variables are needed to rewrite any polyhedral set that is convex within any orthant in the form of (16). In fact, we present a stronger result on reformulation of an arbitrary polyhedral set.
Theorem 2. Let

M = ∪_{i=1}^m {x; A^i x ≤ b^i} (17)

be a union of m convex polyhedral sets. It can be characterized as the absolute value system (16) with at most log(m) additional variables.
Proof. Suppose first that m = 2^k for some natural k. Each convex polyhedral set {x; A^i x ≤ b^i} can be uniquely associated with a vector s^i ∈ {±1}^k by a suitable bijection; notice that s^i is not interpreted as the sign vector of some feasible solution now. We claim that M is characterized by the system

A^i x ≤ b^i + (e^T|z| − (s^i)^T z) e, i = 1, . . . , m, (18)

in variables x and z ∈ R^k. If x ∈ M, then the point x satisfies a particular system A^k x ≤ b^k for some particular k.
We simply put z := αs^k, where α > 0 is sufficiently large. Then (x, z) satisfies system (18); we need α > 0 large enough in order that (x, z) satisfies (18) for i ≠ k. Conversely, let (x, z) satisfy (18) and let s^i := sgn(z) be the sign vector of z. Then x lies in the convex polyhedral set associated with s^i since the corresponding slack term e^T|z| − (s^i)^T z vanishes. If m is not a power of 2, we proceed analogously; we just omit some of the inequalities in (18). For example, if m = 3, then the corresponding system consists of three such inequalities.
We simply put z := αs k , where α > 0 is sufficiently large. Then (x, z) satisfies system (18); we need α > 0 large enough in order that (x, z) satisfies (18) for i = k. Conversely, let (x, z) satisfy (18) and let s i := sgn(z) be the sign vector of z. Then x lies in the the convex polyhedral set associated with s i since If m is not a power of 2, we proceed analogously; we just omit some of the terms in the above summation in (18). For example, if m = 3, then the corresponding system reads Example 2. Consider the union of the convex polyhedral sets described as in (17). The set can be reformulated by means of integer linear programming using the constraints and additional continuous and binary variables The direct way to express it as an absolute value linear system produces This system employs m(n + 1) variables, and the reformulation to the canonical form (16) increases the number by m more. In contrast, the technique from Theorem 2 requires merely log(m) new variables.
Integrality
This section aims to make a link with integer programming. In particular, we are interested in integrality of the vertices of the feasible set M. In the theory of integer linear programming, integrality of vertices is related to unimodular and totally unimodular matrices [26]. Recall that a matrix A ∈ Z^{m×n} is unimodular if each basis (nonsingular submatrix of order m) has the determinant +1 or −1 [26, Sect. 21.4]. Such matrices naturally appear in the context of absolute value programs, too.
Throughout this section, we assume that A, D ∈ Z^{m×n}. By a vertex of M, we mean a vertex of (11) for some s ∈ {±1}^n, but the following characterization is also valid when we employ (10) instead.
Therefore, x* is a vertex of (A − D)x ≤ b, and because it lies in the correct orthant, it is also a vertex of M. As shown above, it is non-integral.
Due to row linearity of the determinant, we have that if the condition of Proposition 10 is satisfied, then the matrix (A − D diag(s))^T is unimodular for each s ∈ {±1, 0}^n. In particular, A^T is unimodular. Otherwise, some basis determinant would take a fractional value in {±1/2}. However, this is not possible since the matrix A_{i0} is integral and thus its determinant should be integral as well. Using this approach iteratively, we obtain that the matrix (A − D diag(s))^T is unimodular for each s ∈ {±1, 0}^n.
It is known [26] that unimodularity of a matrix is a polynomially decidable problem. In our case of the absolute value problem, the characterization of Proposition 10 is exponential.
It is an open problem what the real complexity of the problem is: polynomial or NP-hard? Anyway, for the selection of the sign vector s, we cannot reduce the set {±1}^n to a fixed subset of polynomial size.

Proposition 11. There is no subset S ⊆ {±1}^n of size at most 2^{n−1} − 1 such that the condition of Proposition 10 can be reduced to s ∈ S.
Proof. Suppose to the contrary that such a set S exists for some n. We first show that there are two vectors s^1, s^2 ∈ {±1}^n \ S such that |s^1 − s^2| = 2e_i for a certain i ∈ {1, . . . , n}. To see it, observe that the set {±1}^n consists of all vertices of the n-dimensional hypercube. It is known that the size of a maximum independent vertex set of the hypercube is 2^{n−1} (and the maximum independent vertex set consists of either those vectors that have an odd number of minus ones, or those with an even number). Since the cardinality of the set {±1}^n \ S is at least 2^{n−1} + 1, there must be two vertices inside that are connected by an edge, and these two vertices s^1 and s^2 have the required form.
Without loss of generality, suppose that s^1 = e and s^2 = e − 2e_1, and let A and D be chosen accordingly. For any s ∈ S, the matrix (A − D diag(s))^T is unimodular; in fact, it is unimodular for each s ∉ {s^1, s^2}. However, it is not unimodular for s ∈ {s^1, s^2}, so the reduction to S is not sufficient.
Regarding the complexity, one polynomially decidable subclass is that where D has fixed rank. We first discuss the case with D having rank one, and then we extend it to an arbitrary fixed rank. In the following, ‖s‖_0 denotes the 0-norm of s, that is, the number of nonzero entries in s. Second, suppose that Ã is singular, but Ã + uv^T diag(e_i) is nonsingular for some i ∈ {1, . . . , n}. We proceed in the same way as in the previous case; we just substitute Ã ≡ A + uv^T diag(e_i).
Eventually, suppose that Ã and the matrices Ã + uv^T diag(e_i), i = 1, . . . , n, are singular. If all matrices of the form Ã + uv^T diag(s), s ∈ {±1}^n, are singular, then we are done. So suppose there is s* ∈ {±1}^n such that C := Ã + uv^T diag(s*) is nonsingular; from the assumption, we know that det(C) = ±1. Now, the function s ↦ det(Ã + uv^T diag(s)) is zero at s = 0 and s = ±e_i, i = 1, . . . , n. Hence the linear function v^T diag(s − s*)C^{-1}u is constantly −1 at s = 0 and s = ±e_i, i = 1, . . . , n. This means that it is constant for each s ∈ R^n, which contradicts the case s = s*.
"Only if." This is clear from Proposition 10 and the discussion below it.
In the statement of Proposition 12, considering only those vectors s ∈ {±1, 0}^n such that ‖s‖_0 ≤ 1 would not be sufficient. As an example, one can take matrices A and D for which the characterization of Proposition 10 is not satisfied (take, e.g., s = (1, −1)^T), but for each s ∈ {±1, 0}^n with ‖s‖_0 ≤ 1 the matrix (A − D diag(s))^T is unimodular.
Quadratic programming reformulation
A common technique to relax the absolute value |x| is to substitute x := x^1 − x^2, x^1, x^2 ≥ 0, and replace |x| with x^1 + x^2. Herein, x^1 approximates the positive part and x^2 the negative part of x. In this way, problem (1) is simplified to the linear program

max c^T(x^1 − x^2) subject to A(x^1 − x^2) − D(x^1 + x^2) ≤ b, x^1, x^2 ≥ 0, (19)

which provides an upper bound on f*. The bound can be very weak: If x^1 and x^2 are feasible solutions of (19), then x^1 + βe and x^2 + βe are feasible for every β ≥ 0. Provided each row of D is nonzero, problem (19) is feasible (just take β large enough), even when (2) is infeasible. In order to obtain an equivalent reformulation, we include an additional term in the objective function, yielding a quadratic program

max c^T(x^1 − x^2) − α(x^1)^T x^2 subject to A(x^1 − x^2) − D(x^1 + x^2) ≤ b, x^1, x^2 ≥ 0, (20)

where α > 0 is a large constant; by means of Schrijver [26], it can be a priori determined having a polynomial size. The additional term α(x^1)^T x^2 ensures the complementarity (x^1)^T x^2 = 0, so that x^1 − x^2 is the optimal solution of (1). In this case, we say that a solution (x^1, x^2) of (20) yields an optimum of (1). General nonconvex quadratic programs are hard to solve. In the following theorem, we characterize the class of problems for which any KKT solution automatically produces a feasible solution. In the formulation, we make use of the vector relation a ≩ b defined as a ≥ b, a ≠ b.
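To illustrate the substitution concretely, the sketch below builds the LP relaxation (19) with SciPy and checks whether the computed pair is complementary, i.e., whether x = x^1 − x^2 is feasible for the original problem; this is an illustrative sketch, not the authors' code, and the tolerance is an assumption.

```python
# Illustrative sketch (not from the paper): LP relaxation (19) and a
# complementarity/feasibility check of the recovered point x1 - x2.
import numpy as np
from scipy.optimize import linprog

def relaxation_19(c, A, D, b, tol=1e-9):
    n = len(c)
    obj = np.concatenate([-c, c])          # maximize c^T (x1 - x2)
    A_ub = np.hstack([A - D, -A - D])      # A(x1 - x2) - D(x1 + x2) <= b
    res = linprog(obj, A_ub=A_ub, b_ub=b)  # default bounds give x1, x2 >= 0
    if res.status != 0:
        return None
    x1, x2 = res.x[:n], res.x[n:]
    x = x1 - x2
    complementary = float(x1 @ x2) <= tol
    feasible = bool(np.all(A @ x - D @ np.abs(x) <= b + tol))
    return x, complementary, feasible
```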
Theorem 3. In problem (20), for any b ∈ R^m, each KKT point yields a feasible solution of (1) if and only if the system (21) is infeasible.
Proof. First notice that the KKT conditions of the quadratic program (20) read as the system (22), where (22d) is the complementary slackness.

"Only if." Let w be a solution of system (21) and define b, x^1, x^2 accordingly. Then the KKT conditions (22) are satisfied, including the complementarity conditions. In addition, due to the definition of b, the pair (x^1, x^2) is feasible to (20). By the definition of x^1 and x^2, we have x^1_i > 0 and x^2_i > 0 for some i. Therefore the complementarity (x^1)^T x^2 = 0 is not satisfied. This also means that the point x* := x^1 − x^2 does not belong to M.

"If." Let x^1, x^2, u, v, w satisfy the KKT conditions (22). From u^T x^1 = 0, we have x^1_i = 0 or u_i = 0 for each i; the former implies c_i − ((A + D)^T w)_i ≥ 0. Suppose to the contrary that the KKT point (x^1, x^2) does not produce a feasible solution. Thus we have (x^1)^T x^2 > 0, that is, there is i such that x^1_i > 0 and x^2_i > 0. Then u_i = v_i = 0, and (21) is feasible.
Notice that solvability of (21) can be checked in polynomial time by means of linear programming: the strict inequality in (21) is handled by introducing an ε > 0 that is small enough and of polynomial size (cf. [26]).
Due to NP-hardness of the absolute value LP problem and the strong conditions of Theorem 3, the system (21) is often feasible. However, the class of infeasible instances is nontrivial. It comprises not only the case D = 0, but also the instances for which the optimum x* satisfies Ax* ≤ b, that is, x* is an optimum of the LP problem max{c^T x; Ax ≤ b}.
Proposition 13. Suppose that (1) has an optimum. If (21) is infeasible and Ax ≤ b is feasible, then there is an optimum x * of (1) such that Ax * ≤ b.
Proof. Suppose to the contrary that no optimum x * of (1) satisfies Ax * ≤ b. That is, the system is infeasible. By the Farkas lemma, the dual system has a solution (w * , z * ). If z * = 0, then again by the Farkas lemma applied to the resulting system A T w = 0, w ≥ 0, b T w < 0 we obtain that the system Ax ≤ b is infeasible; a contradiction. Thus z * > 0 and we can assume without loss of generality that z * = 1.
Premultiplying the inequality Ax* − D|x*| ≤ b by w* ≥ 0, we get (w*)^T Ax* − (w*)^T D|x*| ≤ b^T w*. If D^T w* = 0, then c^T x* ≤ b^T w*; a contradiction. Therefore, in view of D ≥ 0, we have D^T w* ≩ 0, meaning that (21) is feasible; a contradiction.
Special situation of basis stability
We already observed a connection between absolute value LP and interval LP. Utilizing this relation, we can identify a class of problems which are efficiently solvable: the so-called basis stable problems. In interval LP, basis stability refers to a situation in which there is a common optimal basis of (6) for each Ã ∈ [A ± D]. In this case, the absolute value LP problem is easily resolved. However, there are two drawbacks: first, the situation is rare, and, second, it is co-NP-hard to check for basis stability [5]. The good news is that there are sufficient conditions that work well [9,10]; we adapt them to our problem.
How to check for basis stability Let B be a basis. By A_B we mean the restriction of A to the rows indexed by B, and similarly for A_N, where N := {1, . . . , m} \ B are the nonbasic indices. Basis B is optimal for the LP problem (6) with a certain Ã ∈ [A ± D] if and only if Ã_B is nonsingular and the following two conditions hold:

(Ã_B^{-1})^T c ≥ 0, (24)
Ã_N Ã_B^{-1} b_B ≤ b_N. (25)

To verify basis stability w.r.t. basis B, we have to check validity of these conditions for every Ã ∈ [A ± D].
A condition for (24) works as follows. Consider the interval system of linear equations [A ± D]_B^T y = c. There exist many methods to solve it; see [14,15,18,24]. A solution to such a system is an enclosing interval vector [y] such that (Ã_B^{-1})^T c ∈ [y] for every Ã ∈ [A ± D]. Thus we solve the interval system, and then we just check that the lower bound of [y] is nonnegative, which shows stability of (24).
For condition (25), we proceed analogously, employing an enclosure of the solution set of the interval system [A ± D]_B x = b_B.

How to find the optimal value and optimal solution Once stability of an optimal basis B is verified, the optimal value f* can be expressed as the largest value of c^T Ã_B^{-1} b_B over Ã_B ∈ [A ± D]_B. To solve this optimization problem efficiently by means of linear programming, we substitute y := (Ã_B^{-1})^T c ≥ 0 and write f* as the maximum of b_B^T y over the admissible vectors y. By the properties of the united solution set of interval systems of linear equations [14,18,24], we can express the feasible set of the above optimization problem as

(A_B − D_B)^T y ≤ c ≤ (A_B + D_B)^T y, y ≥ 0. (26)

In this way, we obtain an LP formulation for f*.
After computing f*, we determine the optimal solution x*, too. Let y* be an optimum of (26). Determine Ã_B ∈ [A ± D]_B such that Ã_B^T y* = c, which is an easy task [24]. Finally, we have x* = Ã_B^{-1} b_B.
Conclusion
In this paper, we thoroughly investigated geometric and computational-complexity properties of the absolute value LP problems. In particular, we presented various conditions for convexity, connectedness, boundedness and feasibility of the feasible set. We also investigated the formulation power of absolute value inequalities in characterizing nonconvex polyhedral sets. In linear programming, integrality of vertices relates to unimodular matrices, and in case of absolute value LP problems the unimodularity property extends to matrices of certain form. Absolute value LP problems can be reformulated by means of integer programming or quadratic programming; for the latter, we proposed a necessary and sufficient condition when a KKT point automatically produces feasible solutions of the original problem. Below, we sum up some of the problems that remain open; they mostly regard the feasible set M:

• Necessary and sufficient condition for connectedness of M.
So far, only a simple sufficient condition is known. Some more results would be very desirable because handling disconnectedness in optimization is a hard task.
• Necessary and sufficient condition for convexity of M.
We proposed two necessary conditions, but a complete characterization of convexity is unknown.
• The computational complexity (polynomial vs. NP-hard) of checking integrality of the vertices of M for every b ∈ Z^m. The characterization proposed in this paper is exponential in n, which, however, does not exclude the possibility of a polynomial characterization.
Polymorphisms in Autophagy Genes and Susceptibility to Tuberculosis
Recent data suggest that autophagy is important for intracellular killing of Mycobacterium tuberculosis, and polymorphisms in the autophagy gene IRGM have been linked with susceptibility to tuberculosis (TB) among African-Americans, and with TB caused by particular M. tuberculosis genotypes in Ghana. We compared 22 polymorphisms of 14 autophagy genes between 1022 Indonesian TB patients and 952 matched controls, and between patients infected with different M. tuberculosis genotypes, as determined by spoligotyping. The same autophagy polymorphisms were studied in correlation with ex-vivo production of TNF, IL-1β, IL-6, IL-8, IFN-γ and IL-17 in healthy volunteers. No association was found between TB and polymorphisms in the genes ATG10, ATG16L2, ATG2B, ATG5, ATG9B, IRGM, LAMP1, LAMP3, P2RX7, WIPI1, MTOR and ATG4C. Associations were found between polymorphisms in LAMP1 (p = 0.02) and MTOR (p = 0.02) and infection with the successful M. tuberculosis Beijing genotype. The polymorphisms examined were not associated with M. tuberculosis induced cytokines, except for a polymorphism in ATG10, which was linked with IL-8 production (p = 0.04). All associations found lost statistical significance after correction for multiple testing. This first examination of a broad set of polymorphisms in autophagy genes fails to show a clear association with TB, with M. tuberculosis Beijing genotype infection or with ex-vivo pro-inflammatory cytokine production.
Introduction
Mycobacterium tuberculosis (M. tuberculosis), the main cause of tuberculosis (TB) worldwide, is an intracellular pathogen that primarily infects macrophages [1,2]. This pathogen resides and multiplies within a host-derived phagosome, where it persists through interference with phagosome-lysosome biogenesis [3,4]. Recent studies suggest that autophagy, a homeostatic process involved in nutrient regeneration and immune responses, is involved in intracellular killing of M. tuberculosis [3,5,6], and that physiological or pharmacological induction of this process in vitro (e.g., with ATP, IFN-γ, vitamin D3) promotes fusion of phagosomes containing M. tuberculosis with lysosomes and subsequent killing of the pathogen in characteristic double-membrane autolysosomes [1,3,7]. In addition, intracellular survival of M. tuberculosis was shown to depend on its ability to escape or inhibit autophagy [5,8], and a study by Kumar et al. found that genes that regulate intracellular survival of M. tuberculosis, regardless of its genotype, are in the autophagy pathway itself or in pathways that affect autophagy [9].
Susceptibility to TB is partly genetically determined, and variations in genes involved in the autophagic pathway may affect the host response to M. tuberculosis infection. Indeed, mice deficient in autophagy and autophagy-related genes were found to be more susceptible to infection with M. tuberculosis [10,11], and human mononuclear cells with certain polymorphisms in autophagy-related genes displayed an impaired ability to control M. tuberculosis growth [12,13], thus suggesting that polymorphisms in autophagy and autophagy-related genes may be associated with TB. This appears to be the case, as various polymorphisms in one autophagy gene, IRGM, a downstream effector of IFN-γ, have been associated with increased protection against M. tuberculosis infection in African-American [14] and Chinese individuals [15] and with infection by particular M. tuberculosis genotypes in Ghana [16]. In addition, polymorphisms in a number of genes which affect autophagy, such as P2RX7, have also been associated with TB [17,18]. However, to our knowledge, besides IRGM no other gene of the autophagy pathway itself has been examined in TB patients. We have therefore examined a selection of autophagy genes in a large cohort of TB patients and healthy controls in Indonesia. Since susceptibility to TB may depend on the interplay between host and mycobacterial genotype [2,9,19], we also grouped patients' M. tuberculosis isolates into W-Beijing genotype strains, which account for one-third of all M. tuberculosis infections in Indonesia [20,21], and non-W-Beijing genotypes. Furthermore, in a Caucasian cohort that was genotyped for the 22 SNPs, we measured cytokine production in peripheral blood mononuclear cells (PBMCs) stimulated with M. tuberculosis.
Subject Recruitment
We previously recruited consecutive TB patients diagnosed in two outpatient clinics and two hospitals in Jakarta and Bandung (Indonesia) from January 2001 to December 2006, for a series of genetic studies examining host susceptibility to TB [19,22,23].
Diagnosis of pulmonary TB (PTB) was done according to World Health Organization criteria by clinical presentation and chest radiograph examination, followed by confirmation with microscopic detection of acid-fast bacilli in Ziehl-Neelsen-stained sputum smears and positive culture of M. tuberculosis on 3% Ogawa medium. For M. tuberculosis genotype analysis, mycobacterial DNA was extracted by suspending 2 loops of bacterial mass from an M. tuberculosis culture in saline solution and subsequently heating it at 95°C for 5 min. M. tuberculosis genotype was determined by using a commercially available spoligotyping kit (Isogen Bioscience, Maarssen, The Netherlands) as previously described [20]. M. tuberculosis Beijing genotype was defined as a spoligo-pattern showing hybridization to at least 3 of the 9 spacers 35-43 and absence of hybridization to spacers 1-34. Spoligotyping was done at the Hasan Sadikin Hospital, Bandung, Indonesia. In addition, for quality control purposes, spoligotyping of 10% of the isolates and of all isolates lacking hybridization was also done at Gelre Hospital, Apeldoorn, The Netherlands. We excluded from the genetic studies patients with a confirmed diagnosis of extra-pulmonary TB (n = 93), diabetes mellitus (fasting blood glucose > 126 mg/dL) (n = 139) and HIV-positive subjects (n = 10). The standard regimen for treatment of TB, consisting of isoniazid, rifampin, pyrazinamide, and ethambutol (2HRZE/4H3R3), was administered free of charge to all patients according to the Indonesian National TB program.
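The Beijing definition above is mechanical enough to express in code. Here is a minimal Python sketch (the function name and the 43-element boolean input format are our own illustration):

```python
def is_beijing(spoligo):
    """Classify a 43-spacer spoligotype pattern (sequence of 43 booleans,
    True = hybridization to that spacer) using the rule stated above:
    Beijing = absence of hybridization to spacers 1-34 and hybridization
    to at least 3 of the 9 spacers 35-43."""
    assert len(spoligo) == 43
    head, tail = spoligo[:34], spoligo[34:]   # spacers 1-34 and 35-43
    return not any(head) and sum(tail) >= 3

# Example: a pattern lacking spacers 1-34 and carrying all of 35-43
pattern = [False] * 34 + [True] * 9
print(is_beijing(pattern))   # True
```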
During the above-mentioned period we also recruited 1000 randomly selected, age- and gender-matched, but genetically unrelated control subjects from the same, mostly poor and densely populated areas where TB is abundant. All control individuals were subjected to the same physical examination, blood tests and chest radiography as the TB patients. A total of 952 control subjects were enrolled in the study after excluding individuals with symptoms or chest X-ray abnormalities suggesting active TB or a history of TB.
A structured questionnaire was used for patients and control subjects to record clinical information, age, gender, self and parental ethnicity, socio-economic status and concurrent medical history.
Ethics Statement
All individuals recruited signed a written informed consent. The study protocol was reviewed and approved by the local institutional review boards of the Medical Faculty of the University of Indonesia and the Eijkman Institute for Molecular Biology in Jakarta, Indonesia, and by the Medical Ethical Committee Arnhem-Nijmegen in The Netherlands.
Genotyping

Blood samples were obtained by venipuncture. Genomic DNA was isolated from EDTA blood of patients, controls and a cohort of healthy volunteers using standard methods, and 5 ng of DNA was used for genotyping. Multiplex assays were designed using MassARRAY Designer Software (Sequenom), and genotypes were determined using Sequenom MALDI-TOF MS according to the manufacturer's instructions (Sequenom Inc., San Diego, CA, USA). Briefly, the SNP region was amplified by a locus-specific PCR reaction. After amplification, a single base extension from a primer adjacent to the SNP was performed to introduce mass differences between alleles. This was followed by salt removal and product spotting onto a target chip with 384 patches containing matrix. MALDI-TOF MS was then used to detect mass differences, and genotypes were assigned in real time using Typer 4 software (Sequenom Inc., San Diego, CA, USA). As quality control, 5% of samples were genotyped in duplicate, and each 384-well plate also contained at least 8 positive and 8 negative controls; no inconsistencies were observed. DNA samples for which half or more of the SNPs failed (n = 90) were excluded from analyses. Variants with call rates below 90% were also excluded from further analyses (n = 0).
For quality control purposes, the genotypes of at least two samples for each homozygous genotype were confirmed by sequencing using the Sanger method with BigDye Terminator version 3 (Applied Biosystems). After the cycle sequencing reaction, the samples were purified by ethanol precipitation and analysed on a 3730 Sequence Analyzer (Applied Biosystems).
Previously, polymorphisms in various genes were genotyped in a two-stage genome-wide association study (GWAS) using Illumina's GoldenGate Assay according to the manufacturer's instructions, aiming to discover genes relevant to pulmonary TB susceptibility in the same Indonesian cohort involved in the current study [29]. Among the SNPs studied, five were in autophagy genes and were included in our data analysis (Table 1). The overlap of study subjects between the current study and the GWAS is shown in Figure 1.
Cytokine Production by M. tuberculosis Stimulated PBMC
Cells isolated from healthy Caucasian volunteers bearing various genotypes were examined for cytokine production induced by sonicated M. tuberculosis H37Rv (n = 67). These individuals were aged 23-73 years, 77% were male, and none had a known TB contact. All gave written informed consent, and the study was approved by the Ethical Committee of the Radboud University Nijmegen Medical Centre, Nijmegen, The Netherlands.
Blood samples were obtained by venipuncture. The mononuclear cell fraction was isolated by density centrifugation over Ficoll-Paque (Pharmacia Biotech, PA, USA) of blood diluted 1:1 in pyrogen-free saline. Cells were washed twice in saline and resuspended in culture medium (RPMI, Invitrogen, CA, USA) supplemented with gentamicin 10 μg/ml, L-glutamine 10 mM, and pyruvate 10 mM. Cells were counted in a Coulter counter (Coulter Electronics) and the number was adjusted to 5×10⁶ cells/ml. A total of 5×10⁵ mononuclear cells in a 100 μl volume of RPMI was added to round-bottom 96-well plates (Greiner) with or without sonicated M. tuberculosis H37Rv (final concentration: 1 μg/ml). After 24 hours, 48 hours (both without serum) or 7 days of incubation (in the presence of 10% serum), supernatants were stored at −20°C. Cytokine concentrations were assessed in the supernatants using enzyme-linked immunosorbent assay (ELISA). Measurements of TNF, IL-1β, IL-6 and IL-8 (after 24 hours of incubation), IFN-γ (after 48 hours of incubation), and IL-17 (after 7 days of incubation) were performed in the supernatants using commercial ELISAs from R&D Systems, MN, USA (TNF, IL-1β, IL-8, and IL-17) or Sanquin, Amsterdam, The Netherlands (IL-6 and IFN-γ).
Statistical Analysis
All data collected from the questionnaires and genotyping were analysed using SPSS version 17.0 (SPSS Inc., Chicago, IL, USA). The Hardy-Weinberg equilibrium (HWE) was checked for each SNP using the program HWE Version 1.10 (Rockefeller University, New York). The program Conting was used to calculate the χ² and the associated values for a contingency table. Patient data were stratified for the M. tuberculosis genotype with which they were infected (Beijing or non-Beijing strains), and the χ² was calculated with SPSS. Differences in cytokine production were analyzed using the Wilcoxon signed rank test. All statistical analyses were 2-sided, and P < 0.05 was considered to be statistically significant.
The available number of study subjects allowed us to detect a 5% difference in allele frequency between patients and controls for the SNP in IRGM (rs4958847), based on the previously reported allele distribution in the general population, a power (β) of 0.80 and a significance level (α) of 0.05.
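As an illustration of the allele-based contingency testing described above, here is a minimal Python sketch with hypothetical allele counts; the numbers merely mimic a 5% allele-frequency difference at roughly this study's sample size and are not data from this study.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 allele-count table for one SNP:
# rows = TB patients / controls, columns = minor / major allele counts.
# With ~1022 patients and ~952 controls there are ~2044 and ~1904
# alleles per group; a 5% allele-frequency difference looks like this:
table = np.array([[511, 1533],    # patients: 25% minor allele frequency
                  [381, 1523]])   # controls: 20% minor allele frequency

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.4g}")  # p < 0.05 suggests association
```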
Study Subjects
A total number of 1022 confirmed pulmonary TB patients and 952 age-and gender matched community controls were included in the data analysis. As shown in Table 2, 78% of patients and control subjects were Javanese (a population group with relatively low genetic variance in Indonesia [30]) with similar age, gender distribution, and likelihood of having a BCG scar. Furthermore, both groups also had a similar socioeconomic status (not shown) and previous analysis in this cohort [29] showed that population stratification was minimal.
Association between Polymorphisms in Autophagy Genes and Susceptibility to TB
Polymorphisms rs11235604 (in ATG16L2), rs77228473 and rs77833427 (in ATG2A), rs74719094 (in ATG2B), rs72553867 (in IRGM), rs10493328 and rs10493329 (in ATG4C) were rare in the study subjects. With the exception of the SNP rs3759601 in ATG2B (HWE: 2p = 0.034), all polymorphisms were in Hardy-Weinberg equilibrium in the healthy controls. The distribution of the alleles for all polymorphisms analyzed in the current study is presented in Table 3. After Chi-square testing we did not detect significant associations between any genetic polymorphism and susceptibility to TB. This was also the case when the largest group (Javanese) was analysed separately (data not shown).
Association between Polymorphisms in Autophagy Genes and M. tuberculosis Genotype
To examine a possible association between host and mycobacterial genotype, autophagy gene polymorphisms were compared between patients infected with M. tuberculosis Beijing genotype and other M. tuberculosis genotypes. One hundred and sixty-one patients (33%) were infected with M. tuberculosis Beijing genotype strains, 322 with a non-Beijing strain, while no strain information was available for the remainder (n = 540). Patients infected with M. tuberculosis Beijing and non-Beijing strains were not significantly different in terms of age, sex, or history of previous tuberculosis treatment. The distribution of the alleles for all polymorphisms among patients infected with M. tuberculosis Beijing and non-Beijing strains is shown in Table 4. The polymorphism in LAMP1 (rs9577229) showed an association with TB caused by M. tuberculosis Beijing strains, when the TC was combined with the low prevalent TT genotype (p = 0.02). The same was true for the polymorphism in MTOR (rs6701524); when combining the AG with the low prevalent GG genotype, MTOR was significantly associated with infection with M. tuberculosis Beijing strains (p = 0.02). However, both associations lost statistical significance after correction for multiple testing.
Polymorphisms in Autophagy Genes and M. tuberculosis Induced Cytokine Production
Association between host genotype and M. tuberculosis-induced cytokine production by PBMCs was examined in healthy Caucasian individuals. Table 5 shows the differences in M. tuberculosis-induced production of TNF, IL-1β, IL-6, IL-8, IFN-γ and IL-17 by PBMCs isolated from individuals stratified for different genotypes of autophagy-related genes. Six of these polymorphisms showed no polymorphic distribution in the Caucasian individuals and could therefore not be analysed. With the exception of ATG10 (rs1864183), for which a significant difference was found in IL-8 production between individuals with an AA and GG genotype (p = 0.04), no associations were observed between the investigated cytokines and the autophagy-related polymorphisms. Figure 2 presents scatter plots of TNF, IFN-γ, and IL-17 stratified for genotypes of both investigated polymorphisms in IRGM, which was previously linked with susceptibility to TB.
Discussion
In-vitro data strongly support a role for autophagy in control of M. tuberculosis, and a study involving 2010 patients with pulmonary TB and 2346 control subjects from Ghana has previously reported an association between a polymorphism in the autophagy gene IRGM and TB [16]. To further explore a role of autophagy in TB, we examined polymorphisms in a number of autophagy genes in TB patients and matched controls from Indonesia. Among almost 2000 subjects, no association was found between TB and 22 SNPs in 14 different autophagy and autophagy-related genes, including IRGM and P2RX7, which were previously associated with TB. When TB patients were stratified according to M. tuberculosis genotype, associations were observed between SNPs in LAMP1 and MTOR and infection with the M. tuberculosis Beijing genotype, but statistical significance was lost after correction for multiple testing. No significant correlation was found between M. tuberculosis-induced cytokine production and genotype of autophagy-related genes in a separate cohort of healthy Caucasian volunteers. IRGM, a downstream effector protein of IFN-γ, induces autophagy and subsequent generation of large autolysosomal organelles as a mechanism for the elimination of intracellular M. tuberculosis [31]. In a cohort of 2010 Ghanaian patients and 2346 controls, a polymorphism (rs9637876) in IRGM was associated with decreased susceptibility to TB caused by the M. tuberculosis Euro-American (EUAM) lineage, although not for M. tuberculosis East-African-Indian (EAI), Beijing, Delhi, M. africanum and M. bovis lineages [16]. In a study in the US, a polymorphism in IRGM (rs10065172) was more common in 370 African-American TB patients compared to controls, but not in 177 Caucasian patients compared to 110 Caucasian controls [14]. We did not find an association between TB, which in Indonesia is mainly caused by the M. tuberculosis Beijing lineage, and two different polymorphisms in IRGM.
P2RX7 is an autophagy-related gene. It encodes the P2X7 receptor, a plasma membrane receptor which mediates ATP-induced autophagy and subsequent intracellular killing of M. tuberculosis upon upregulation in mature macrophages [32,33,34]. P2RX7 displays a high genetic heterogeneity [12], and a polymorphism with a C allele at position −762 in the P2RX7 promoter region was found to have a protective effect against TB in over 300 TB patients and 160 ethnically matched control subjects from The Gambia [18]. However, no association was found between the same polymorphism and TB in our cohort of Indonesian subjects. It is noteworthy that the protective effect of this polymorphism in Gambian subjects was weak and that it did not correlate with altered receptor expression or activity, suggesting the effect of this SNP might be influenced by other host and pathogen factors [18]. In addition, the relative importance of the role of the P2X7 receptor in the control of M. tuberculosis growth is still debated, since mice deficient for the P2X7 receptor displayed a similar ability to control pulmonary M. tuberculosis infection compared to wild-type mice [35]. Unfortunately, studies on the effect of P2RX7 polymorphisms on susceptibility to pulmonary TB in humans have not yet been done either in vivo or in other ethnic groups.
Polymorphisms in various genes have been associated with TB, but only polymorphisms in VDR [36,37,38], NRAMP1 [39,40] and MBL [41,42] were found to be associated with TB in different geographic regions and ethnic groups. However, the effect of SNPs in these genes varies among racial groups. SNPs in NRAMP1 were associated with an increased risk of PTB in Gambians [39] but were found to have a protective effect in Cambodians [40]; polymorphisms in MBL were associated with protection against TB in South Africans [41] but with increased susceptibility to this disease in South Indians [42]; and SNPs in VDR were found to increase susceptibility to PTB in three African countries [36] but to have no effect in Cambodians [40]. As suggested by Fernando et al., these contrasting findings between different ethnic groups may be due to differences in allele frequencies [17]. In addition, the phylogeography of mycobacteria implies that M. tuberculosis lineages have become differentially adapted to genetic variations among racial groups [2].
The development of TB is the result of a complex interaction between the host and pathogen influenced by environmental factors [43]. After stratification according to M. tuberculosis genotype, we found a suggestive association between TB caused by M. tuberculosis Beijing genotype and a polymorphism in LAMP1, similar to what we have previously shown for polymorphisms in SLC11A/NRAMP1 [19]. However, nine major M. tuberculosis genotypes have been previously identified in Indonesia [20] and some polymorphisms analysed here may be associated with TB caused by other genotypes not identified in this study.
LAMP1 and LAMP2 are two major protein components of late endosome and lysosome membranes, thought to form a protective barrier against degradation by hydrolytic enzymes [44,45]. Mice lacking Lamp2 display impaired autophagy and lysosome biogenesis, while deletion of both Lamp1 and Lamp2 is embryonically lethal [44]. However, the contribution of these two lysosomal membrane proteins to phagosomal maturation and killing of intracellular pathogens still needs to be clarified.
Our group recently showed that inhibition of autophagy (genetically or with either siRNA or 3MA) increased IL-1β production [46,47,48]. However, with the exception of a SNP in ATG10 and IL-8, no differences in cytokine production were observed in M. tuberculosis-stimulated PBMCs of healthy volunteers stratified for genotype of autophagy-related genes.
Our paper has several limitations. First and most importantly, no tuberculin skin testing was performed in the control population. However, exposure to tuberculosis must be common in this group, as the majority of controls lived in households of tuberculosis patients, who mostly had a productive cough (98%) for a median of 3 months before first presentation at the TB clinic [49]. Second, as we powered our study on an expected 5% difference in allele frequency between the groups, we cannot exclude possible associations amongst SNPs with a lower frequency. This is the first paper to investigate the relation of different SNPs in a broad set of 14 autophagy genes with susceptibility to TB, as well as with the infecting M. tuberculosis genotype and ex-vivo cytokine production. These data further support the notion that susceptibility to TB has a polygenic nature and that polymorphisms in more than one gene may be required to render individuals more or less susceptible to developing active disease.
Prognostic Indicators in Acute Renal Failure
BACKGROUND Acute renal failure complicates 45% of cases in the general setting and up to 70% of cases in the intensive care unit setting. Knowing the possibility of death is essential to determine the line of treatment and to explain prognosis to the patient and relatives. Multiple organ failure is a grave prognostic indicator in acute renal failure. We wanted to study the relation of indices to outcome in patients with acute renal failure. METHODS All patients above 18 years of age with acute renal failure who were admitted to hospital over a period of 1 year were included in the study. Patients with pre-existing chronic renal failure were excluded from the study. The Statistical Package for the Social Sciences, Version 14, was used for statistical analysis. RESULTS Need for respiratory support, comatose state, thrombocytopenia, and an increasing number of complications are significant prognostic indicators according to this study. The mortality rate of patients with acute renal failure in this study was 26%. Acute renal failure continues to be a leading cause of mortality in a hospital setting. Prognostic scoring will help not only to explain prognosis but also to triage patients in case of natural or man-made catastrophes causing a massive influx of patients to hospitals.
Acute renal failure is a syndrome characterized by a rapid (hours to weeks) decline in glomerular filtration rate. 1 It is a condition in which a patient with no known previous renal impairment develops rapidly failing renal function with an acute increase in serum levels of substances excreted by the kidney. 2 An increase in creatinine to more than 3 times the normal range, a decrease in glomerular filtration rate of greater than 75%, urine output less than 400 ml in 24 hours, or anuria for 12 hours is evidence of acute renal failure. 3 Acute renal failure (also called acute kidney injury) complicates approximately 5% of hospital admissions and 30% of admissions to intensive care units. 4 While acute renal failure complicates around 45% of cases in general series and close to 70% of cases in intensive care unit series, functional outcome is usually good among the surviving patients. As is true for any severe clinical condition, a prognostic estimation of acute renal failure is of great utility, both for the patients and their families and for the medical specialists analysing therapeutic manoeuvres and options. 5 A ≥101% increment of creatinine with respect to its baseline before nephrology consultation is associated with a significant increase in in-hospital mortality. 6 Multiple organ failure is a poor prognostic factor in patients with acute renal failure in the setting of the intensive care unit. 7 Aminoglycosides are the single biggest cause of drug-induced acute renal failure. 8 We wanted to study the relation of indices to outcome in patients with acute renal failure.
METHODS
The study included 50 patients aged 18 years and above who were admitted to a Medical College Hospital in South India over a period of one year with acute renal failure, or who developed it during their stay in the hospital, as evidenced by an increase in creatinine to more than 3 times normal, a decrease in glomerular filtration rate of greater than 75%, urine output less than 400 ml in 24 hours, or anuria for 12 hours. Eligible participants were approached, and informed consent was obtained before enrolment in the study. Subjects underwent detailed history taking and physical examination. The prognostic indicators to be correlated with outcome (recovery or death) were age, gender, hypotension, coma, jaundice, oliguria, nephrotoxic medication, respiratory support, and thrombocytopenia. Patients with chronic renal failure were excluded from the study.
Statistical Methods
The data collected were analysed with SPSS Ver. 14 using the chi-square test and Pearson's test. A p value less than 0.05 was considered significant.
RESULTS
In the 18-49 years age group, 6 of 24 patients expired, and in the 50-80 years age group, 7 of 26 expired; there was no significant difference in mortality between the younger and older age groups. More males (31) than females (19) suffered from acute renal failure; however, the difference in mortality between the two gender groups was not significant. There was no statistically significant difference between normotensives and hypotensives, as 7 of 35 normotensive and 6 of 15 hypotensive patients expired. The consciousness level of the patient was a significant predictor of mortality in this study, with 7 of 8 comatose patients expiring. More than half (61.54%) of the patients requiring some kind of respiratory support expired, showing that the need for respiratory support was another statistically significant independent predictor of mortality. While none of the patients with normal urine output expired, 13 of 47 oliguric renal failure patients expired; however, statistical significance could not be attributed, since almost all cases (47/50) were oliguric. 9 of 38 patients with normal and 4 of 12 patients with elevated bilirubin levels expired. 5 of 31 patients with a normal platelet count and 8 of 19 patients with thrombocytopenia expired, making thrombocytopenia an important predictor of mortality in acute renal failure. The mortality rate increased as the number of complications increased: patients with only one complication had no mortality, patients with 2 or 3 complications had just under 20% mortality, and patients with 4 or 5 complications had 100% mortality. 66% of the patients diagnosed with acute renal failure underwent dialysis. In total, 13 of 50 patients expired, giving a mortality rate of 26%.
DISCUSSION
Patients were divided into 2 groups: those aged 18-49 years and those aged 50-80 years. There was no significant difference in mortality between the younger and older age groups in this study. This finding does not correlate with those of Stott et al 9 in London, UK, and Chertow et al 10 in Massachusetts, wherein increased age corresponded with higher mortality. This finding does, however, correlate with the findings of Oliveira et al 11 in London and Obialo et al 12 in Georgia, USA, where in both instances the elderly did not have a poorer prognosis compared to the younger age groups. This is significant as it means that aggressive treatment need not be withheld in the elderly.
Obialo et al 12 also found significantly more men than women affected with acute renal failure, but they found that mortality was higher in females than in males. While this study agrees with Obialo et al that more men than women are affected, it does not find any statistically significant difference in mortality between men and women. Hypotension was defined as blood pressure lower than or equal to 90 mmHg systolic or the requirement of inotropic support. While Vincent et al 13 showed that hypotension can itself cause acute renal failure, this study finds that the difference in mortality between hypotensives and normotensives is not statistically significant.
Patients were divided into 2 groups: those with a Glasgow coma scale score equal to or less than 8 and those with a score of 9 or above. Samimagham et al 14 found that a low Glasgow coma score was an important predictor of mortality in acute renal failure, and our study came to the same conclusion that consciousness level is an independent predictor of mortality. Respiratory support was defined as the requirement of any support to maintain oxygen saturation, whether by venturi mask or ventilator. There was a significant correlation between the need for respiratory support and mortality, which is in agreement with Kuiper et al, 15 who found that mechanical ventilation may aggravate or even initiate acute renal failure.
Oliguria was defined as urine output less than 400 ml/day. While oliguria was found to be an early predictor of mortality in critically ill patients by Macedo et al, 16 statistical significance could not be made out in this study, as almost all (94%) cases of acute renal failure were oliguric and there were not sufficient non-oliguric patients with whom to compare the findings. Jaundice was defined as total bilirubin greater than 1.5 mg/dl. While Amerio et al 17 found that a rise in total bilirubin was directly proportional to a rise in mortality, this study did not find any such difference in mortality.
Thrombocytopenia was defined as a total platelet count less than 1.5 lakhs/cumm. While a little over 16% of the patients with a normal platelet count expired due to acute renal failure, as many as 42% of patients with thrombocytopenia expired. These findings correlate with Chertow et al, 18 where thrombocytopenia was associated with increased mortality. Complications included hypotension (cardiovascular system), coma (central nervous system), decreased urine output (nephrology), need for respiratory support (respiratory system), thrombocytopenia (haematology) and jaundice (hepatology). Each complication represents a different organ system in the body. No patient had all 6 complications. Brivet et al 19 found that mortality increased with an increase in the number of organ systems involved. This study agrees with the findings of Brivet et al, as there was a significant correlation between an increase in the number of complications and an increase in mortality. While patients with only 1 organ system involved had no mortality, those with 2 or 3 organ systems involved had a mortality rate of just under 20%, and those with 4 or 5 organ systems involved had 100% mortality.
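As a sketch of how the 'number of complications' tally could be operationalised, the following illustration maps a complication count to the mortality bands observed in this study; the function names are our own, and this is a descriptive summary of the study's data, not a validated score.

```python
# The six complications named above, one per organ system.
COMPLICATIONS = ("hypotension", "coma", "oliguria",
                 "respiratory_support", "thrombocytopenia", "jaundice")

def complication_count(patient):
    """patient: dict mapping each complication name to True/False."""
    return sum(bool(patient.get(c)) for c in COMPLICATIONS)

def observed_mortality_band(n):
    """Mortality observed in this study by number of complications:
    0-1 complications: 0%, 2-3: just under 20%, 4-5: 100%."""
    if n <= 1:
        return "0% observed mortality"
    if n <= 3:
        return "~20% observed mortality"
    return "100% observed mortality"

patient = {"coma": True, "respiratory_support": True, "oliguria": True}
print(observed_mortality_band(complication_count(patient)))  # ~20%
```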
66% of the patients diagnosed with acute renal failure required haemodialysis. This finding correlates with the Robertson et al 20 study, in which 63.9% of patients in acute renal failure required dialysis. The mortality rate in this study was 26%, as compared to 34% by Levy et al. 21 The lower rate of mortality was probably due to increased awareness among hospital staff, early detection of renal failure, interdepartmental coordination and early intervention in the management of acute renal failure.
CONCLUSIONS
Need for respiratory support, comatose state, thrombocytopenia and an increasing number of systems involved are reliable predictors of mortality. This study is useful as it reveals a prognosticating system in which the 'number of complications' may be utilized to predict the possibility of mortality in a patient with acute renal failure. The system consists of only 6 variables, and its simplicity makes it practical to employ on the wards; it may also be used to explain prognosis to the patient and his/her relatives.
A Hybrid Convex Hull Algorithm for Fingertips Detection
Objectives: This article presents a hybrid convex hull algorithm to reduce computational resources in fingertips detection from an image. Methodology: In this paper, we suggest to reduce the computational resources by leveraging two proven algorithms and techniques in order to extract the convex hull vertices directly from a binary image without going through the edge detection process. This is done by embedding the Bresenham algorithm within Jarvis March to replace most of the work required in the edge detection process. Findings: The hybrid convex hull algorithm which we have suggested requires only four global extreme points to begin with, and thus the pre-processing step takes much less resources. The new algorithm yields a time complexity of O(N²). Novelty/Improvement: The hybrid convex hull algorithm offers a direct way to detect the convex hull of the original image without the edge detection process.
Introduction
Convex hull of a finite set of planar points S is defined as the smallest convex polygon P that encloses S 1 . It is a common structure which is widely used in many applications 2 . Various convex hull algorithms have been developed, and the first algorithm was proposed in 1967 by Bass and Shubert 3 . Later, in 1972, Graham was the first to introduce an O(n log n) convex hull algorithm, which is considered an important algorithm in both accuracy and efficiency 1,4 . Graham uses a radial sort on the points and checks repeatedly for convexity of every three subsequent points along the polygon perimeter. In the same year, Sklansky introduced an O(n) convex hull algorithm using an 8-connected concavity tree technique 5 . The algorithm, though simple, fails on some self-intersecting polygons 6 . In 1983, Sklansky introduced a modified version 7 adding a process to create a polygon monotonic in both horizontal and vertical directions prior to the concavity tree technique of his previous algorithm. However, this modified algorithm does not always work and sometimes even yields non-simple polygons 8 . Despite these weaknesses, OpenCV, which is a widely used image processing tool, uses the Sklansky algorithm due to its simplicity 9 . In 1973, a simple gift wrapping algorithm (Jarvis March) with O(nh) complexity, where h is the number of convex hull edges, was introduced by Jarvis. The algorithm measures the angle of a line rotating about an anchor extreme point, and takes the point which forms the smallest angle as another extreme point 10 . In 1985, the Quickhull algorithm, which has a complexity of O(n log n), was introduced. It is based on the quicksort methodology, processing the set of planar points by dividing the points according to two left and right extreme points and discarding points strictly inside the upper and lower hulls recursively 11 . In 1977, a divide-and-conquer convex hull algorithm was introduced by Preparata and Shamos with complexity O(n log n) 12 . This algorithm uses the divide-and-conquer technique to divide the x-sorted points into two nearly equal halves repeatedly, and then it finds the lower tangent of each side to merge the two sides together to form a polygon.
Methodology
Our algorithm takes in a binary image produced as in 17 , in which de-noising procedures have been applied. Our pre-processing steps involve merely defining a bounding box based on four global extreme values and the global maximum point of the hand image under detection. To find the convex hull, the global maximum point is used as the first convex hull vertex. Bresenham lines are then drawn from the first vertex to the right edge of the bounding box with the purpose of looking for an intersection point with the hand. Following the Jarvis March algorithm, the intersection point is the second convex hull vertex found, and the process is repeated until all four bounding box edges have been processed. The following sections describe the two major steps involved: pre-processing and convex hull detection.
Pre-processing Step: Defining the Bounding Box
In binary image processing, for example fingertips detection, pre-processing in terms of key/feature point extraction is always required before a convex hull algorithm can be performed on the points extracted. In 13 , the pre-processing scans the 2D image clockwise to check for the extreme points in order to form a polygon; the proposed convex hull algorithm is then performed on the extracted polygon to check for the convexity of the polygon and to make necessary adjustments. In 14 , feature points of the object are extracted for generating the convex hull before a viewpoint-invariant Fourier descriptor is used to calculate the set of invariants for three-dimensional planar object recognition. In 15 , the pre-processing scans the image for eight extreme points, and then divides the region within the extreme points into 5 regions; further scans are carried out on every region to find the boundary pixels for the convex hull. In other projects, the typical way is to apply edge detection on an image prior to the convex hull algorithm. In general, edge detection involves filtering techniques such as Laplacian or gradient filters, which require a great amount of processing work 16 .
Proposition
Our project intends to bypass the edge detection process and apply the convex hull algorithm directly on a binary image to extract the fingertip vertices. In this paper, we present a hybrid algorithm to form the convex hull by embedding the Bresenham algorithm within the Jarvis March algorithm, directly on an image with minimal pre-processing (Figure 1). The binary image we have used is considered to be free from noise after the partitioning method is applied 17 . The rest of the paper is organized as follows. Section 2 describes the methodologies in pre-processing and convex hull formation. The time complexity analysis is discussed in Section 3, and the conclusion is drawn in Section 4.
The rest of the paper is organized as follows. Section 2 describes the methodologies in pre-processing and convex top vertex of the convex hull as p 0 , where m x ≤ p 0x ≤ M x , p 0y = M y . The rubber band will form a horizontal line, L, intersecting with the right edge of the bounding box, H. As p 0 is the rightmost top vertex of the convex hull, let l be a point on L, ∀l∈L, l ≠ p 0 , p 0x <l x ≤ M x and l y = M y , l∉S. Thus, L is the exterior line of the convex polygon. While p 0 is pinned, L rotates around p 0 clockwise until it intersects with a point p 1 where p 1 ∈S. In other words, L rotates θ angle with respect to its original position where θ is the smallest angle before it meets the first point p 1 where p 1 ∈S. It is clear that p 1 is a vertex of the convex hull as mentioned in Jarvis March algorithm 10 .
Convex Hull Formation
From Proposition 1, as p0 = (hand_x, M_y) is the global maximum point, it is also a vertex of the convex hull. To search for the subsequent vertex:

1. Let r = (M_x, M_y), the corner point at the top of the right edge of the bounding box H.
2. The line L from p0 to r is checked; the Bresenham algorithm is used to step through the points from p0 to r.
3. If H(p) = 0 for every point p ∈ L, move r one pixel down vertically, that is, let r = (r_x, r_y − 1), and repeat step 2 until a point with H(p) = 255 is encountered; this intersection point is the next convex hull vertex and becomes the new pivot.
4. Repeat until all four edges of H have been examined.

Figure 4 shows the algorithm framework for the steps mentioned above for convex hull formation. In point detection of an image, there will be many convex hull vertices found. As we are expecting a hand image, the regions with a high concentration of vertices are fingertip regions. Thus, any point within a group of vertices can be used; for simplicity, we use the last point of a group as the fingertip point.

Proposition 2. Pick a convex hull vertex, name it p0, and draw a straight line L from p0 until L intersects with one edge of the bounding box. Ensure L is an exterior line of the convex polygon, i.e., L does not intersect with the convex polygon. By using p0 as a pivot point, rotate L in the clockwise direction. The first intersection point encountered by L with a hand pixel is a convex hull vertex (Figure 3).
Proof. Pin one end of a rubber band on a vertex, p0, pull the other end of the rubber band horizontally towards an edge of the bounding box, and ensure the rubber band stays as an exterior line to the convex polygon. This can be done from Proposition 1 by selecting the rightmost top vertex of the convex hull as p0, where m_x ≤ p0_x ≤ M_x and p0_y = M_y. The rubber band will form a horizontal line, L, intersecting with the right edge of the bounding box, H. As p0 is the rightmost top vertex of the convex hull, every point l on L with l ≠ p0 satisfies p0_x < l_x ≤ M_x and l_y = M_y, and l ∉ S. Thus, L is an exterior line of the convex polygon. While p0 is pinned, L rotates around p0 clockwise until it intersects with a point p1 ∈ S. In other words, L rotates by an angle θ with respect to its original position, where θ is the smallest angle before it meets the first point p1 ∈ S. It is clear that p1 is a vertex of the convex hull, as in the Jarvis March algorithm 10 .
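To make the embedding of Bresenham within Jarvis March concrete, here is a minimal Python sketch of the probing step; it is our own illustration: is_hand abstracts the pixel lookup H(p) = 255, only the right edge of the bounding box is shown, and the other three edges are handled analogously.

```python
def bresenham(p0, p1):
    """Integer-only Bresenham line from p0 to p1 (inclusive)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if (x0, y0) == (x1, y1):
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy

def probe_right_edge(is_hand, p0, box):
    """Walk r down the right edge of the bounding box and probe the
    Bresenham line from p0 to r; the first hand pixel encountered is
    the next convex hull vertex (the Jarvis March rotation step)."""
    (m_x, m_y), (M_x, M_y) = box
    for r_y in range(M_y, m_y - 1, -1):       # r = (M_x, r_y) walks the edge
        for p in bresenham(p0, (M_x, r_y)):
            if p != p0 and is_hand(*p):
                return p                       # intersection with the hand
    return None

# Usage sketch: is_hand could wrap a binary image array, e.g.
# is_hand = lambda x, y: binary[y, x] == 255
```

Only the '+' operator and integer comparisons appear in the inner loop, which is the efficiency argument made in Section 3.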
Time Complexity Analysis
In the pre-processing step, we look for the global extreme values of an N×N image. The step scans through the whole image once to find the four extreme values, thus having complexity O(N²). The scanning activity involves only checking the brightness value of each pixel.
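A minimal sketch of this pre-processing pass follows (our own illustration, using a vectorised scan over a numpy array rather than an explicit pixel loop; rows are assumed to increase downward, so the image "top" is the minimal row index):

```python
import numpy as np

def bounding_box_and_top(binary):
    """Single pass over an NxN binary image (hand pixels = 255):
    return the four global extreme values defining the bounding box
    and the global maximum (topmost) hand point, used as the first
    convex hull vertex p0."""
    ys, xs = np.nonzero(binary == 255)
    m_x, M_x = int(xs.min()), int(xs.max())
    m_y, M_y = int(ys.min()), int(ys.max())
    hand_x = int(xs[ys == m_y][0])   # any x on the topmost hand row
    return (m_x, m_y, M_x, M_y), (hand_x, m_y)
```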
For convex hull formation, only some of the black pixels are checked, i.e., the black pixels in between the fingers will not be checked. Thus, only the black pixels outside of the convex hull but within the bounding box are checked. We have captured a few hand images as examples in Figure 5, and the black pixel percentages are shown in Table 1.
From Table 1, the maximum percentage of total black pixels outside of the convex hull in the bounding box in all five images is 46.58%. We make the reasonable conclusion that for any outstretched hand image, the total black pixels outside the convex hull but within the bounding box are always less than 50%. Thus, in the worst-case scenario, the number of black pixels to be processed is N²/2. The time complexity is O(N²). However, we must take into consideration that the checking algorithm in Bresenham involves only integers and the '+' operator, thus the processing time will be much faster.
Conclusion
A hybrid convex hull algorithm by using Bresenham algorithm embedded in Jarvis March has been developed.
The new algorithm is expected to reduce the resources allocated for edge detection and to apply the convex hull algorithm directly on the pixels of a binary image with minimal processing. Though the time complexity of our algorithm is O(N²), our algorithm uses only integers and the '+' operator and thus is expected to be efficient as far as computer processing is concerned.
Molecular mechanisms of luteolin-7-O-glucoside-induced growth inhibition on human liver cancer cells: G2/M cell cycle arrest and caspase-independent apoptotic signaling pathways
Luteolin-7-O-glucoside (LUT7G), a flavone subclass of flavonoids, has been found to increase anti-oxidant and anti-inflammatory activity, as well as cytotoxic effects. However, the mechanism of how LUT7G induces apoptosis and regulates cell cycles remains poorly understood. In this study, we examined the effects of LUT7G on the growth inhibition of tumors, cell cycle arrest, induction of ROS generation, and the involved signaling pathway in human hepatocarcinoma HepG2 cells. The proliferation of HepG2 cells was decreased by LUT7G in a dose-dependent manner. The growth inhibition was due primarily to the G2/M phase arrest and ROS generation. Moreover, the phosphorylation of JNK was increased by LUT7G. These results suggest that the anti-proliferative effect of LUT7G on HepG2 is associated with G2/M phase cell cycle arrest by JNK activation. [BMB Reports 2013; 46(12): 611-616]
INTRODUCTION
Apoptosis or programmed cell death is a normal component of the development and health of multicellular organisms. Cells die in response to a variety of stimuli, and in apoptosis, they do so in a controlled, regulated fashion. This makes apoptosis distinct from necrosis, another form of cell death, in which uncontrolled cell death leads to the lysis of cells, inflammatory responses, and potentially, to serious health problems. Apoptosis, in contrast, is a process in which cells play an active role in their own death. Apoptosis plays an important role in embryogenesis, metamorphosis, cellular homeostasis, tissue atrophy, and tumor regression. It is defined by morphological changes that include cell shrinkage, chromatin condensation, nuclear fragmentation, membrane blebbing, and apoptotic body formation (1)(2)(3). Oxidative stress and cell-cycle regulation are two essential elements in the apoptosis process. Apoptosis may be triggered by oxidative insults (4). Reactive oxygen species (ROS) are important chemical messengers in normal cells. They keep the balance with antioxidants in healthy cells (5). The accumulation of ROS results in oxidative stress, which mostly results in cell apoptosis (6). In addition, cell cycle arrest and apoptosis are closely linked to cell proliferation in mammalian cells (7)(8)(9). Because cancer involves deregulated cell proliferation and survival, inducing cell-cycle arrest is a feasible treatment to forestall continued tumor proliferation (10). A variety of natural and chemical compounds has been reported to interfere with the cell cycle, promote or inhibit apoptosis, produce important effects on their signal transduction and development progress, and even result in the death of tumor cells (11)(12)(13). Recent scientific efforts have focused on the potential roles of extracts of traditional herbs as alternative and complementary medications for cancer treatment. Flavonoids, a kind of polyphenol, have three phenolic subcomponents and are also commonly referred to as bioflavonoids (14). Phytochemicals in the flavonoid family have noted bioactivities to suppress the ROS, inflammation, and growth of tumors (15,16). Luteolin-7-O-glucoside (LUT7G), a flavone subclass of flavonoids, can be found in wild edible vegetables such as Ailanthus altissima. LUT7G possesses potential antibacterial, antifungal (17), antioxidant (18), and anti-inflammation effects (19). However, there has been no report related to the regulation of cell cycle and apoptosis on hepatocarcinoma cells. We investigated the anti-proliferation of LUT7G on tumor cells and the cellular mechanism of the cytotoxicity of LUT7G in HepG2 cells.
Cytotoxic effects of LUT7G on HepG2 cell lines
To determine the cytotoxic effects of LUT7G on HepG2 cells, the cells were exposed to various concentrations of LUT7G (50, 100, and 200 μM) for 24 h. Cells treated with 1% DMSO were used as controls. As shown in Fig. 1A, LUT7G decreased cell viability in HepG2 cells in a dose-dependent manner. HepG2 cell proliferation was reduced by 39.8% after exposure to 200 μM LUT7G for 24 h. In addition, there was no cytotoxic effect of LUT7G on normal cell lines (Huh7 cells) at the concentrations tested (Fig. 1B). Microscopic image analysis revealed that LUT7G caused cell shrinkage with a condensed nucleus and a rough plasma membrane, which are indicative of apoptosis (Fig. 1C).
Induction of apoptosis by LUT7G in HepG2 cells
In order to determine whether the anti-proliferative effect of LUT7G was due to apoptosis, HepG2 cells were treated with LUT7G for 24 h, and nuclear Hoechst 33342 staining was performed. As shown in Fig. 2A, nuclei with condensed chromatin and apoptotic bodies, which are typical of apoptosis, were observed in HepG2 cells incubated with LUT7G. The number of apoptotic cells increased as the concentration of LUT7G increased. Next, we investigated DNA fragmentation in the nucleus using TUNEL staining. As shown in Fig. 2A, LUT7G significantly induced DNA fragmentation in HepG2 cells (yellow-green, TUNEL). Propidium iodide (PI) was used as the counterstain for all nuclei. In agreement with the results in Fig. 2A, LUT7G also increased DNA laddering in HepG2 cells in a dose-dependent manner (Fig. 2B). This suggests that HepG2 cells may undergo apoptosis after LUT7G treatment, and that there is a good correlation between the extent of apoptosis and the inhibition of cell growth.
LUT7G induces apoptosis in HepG2 via a caspase-independent pathway
Because apoptosis can proceed via either caspase-dependent or caspase-independent signaling pathways (20,21), the involvement of caspases in LUT7G-induced HepG2 cell apoptosis was assessed. Expression of the intracellular proteins related to apoptosis, such as PARP and caspase-3, -8, and -9, was investigated to understand the mechanisms by which LUT7G induces apoptosis in HepG2 cells. As shown in Fig. 3A, the level of PARP was decreased, and the level of cleaved PARP was increased, in LUT7G-treated HepG2 cells. In contrast, the expression of caspase-3, -8 and -9 was not changed. These results suggest that the HepG2 cell apoptosis induced by LUT7G is not dependent on the activation of the caspase family of proteins.
To analyze other possible causes of growth inhibition, we examined the apoptotic effect of LUT7G on ROS generation and cell cycle arrest, which are known to be essential evidence of apoptosis. To investigate the intracellular levels of ROS, the cell-permeable probe DCF-DA was utilized. Non-fluorescent DCF-DA, hydrolyzed to DCFH inside the cells, yields highly fluorescent DCF in the presence of intracellular hydrogen peroxide and related peroxides (22). As shown in Fig. 3B, HepG2 cells treated with LUT7G for 24 h revealed that ROS generation was dose-dependently induced by LUT7G. Next, we investigated the effect of LUT7G on the cell cycle using flow cytometry. LUT7G treatment arrested HepG2 cells at the G2/M phase (Fig. 3C). The maximum G2/M phase percentage of 34.01% occurred with 200 μM LUT7G treatment for 24 h. Therefore, LUT7G induced G2/M cycle arrest of HepG2 cells.
LUT7G induced apoptosis by JNK pathway
MAPKs are activated by various extracellular stimuli, and mediate the signal transduction cascades that play an important role in cell cycle arrest and cell apoptosis (23,24). Therefore, we next examined the effects of LUT7G on MAPK signaling. The expression of MAPK proteins such as JNK, ERK, and p38 were measured by western blotting. As shown in Fig. 4A, the quantification of band intensity showed that JNK was decreased after LUT7G treatment, but the expression levels of ERK and p38 were not affected. The phosphorylation level of JNK was increased by LUT7G treatment. Next, we investigated the possible roles of MAPKs in LUT7G-induced apoptosis. Cell viability was measured in the presence of specific MAPK inhibitors by intracellular ATP content. The cell viability was similar in the presence of Z-VAD-FMK, PD98059, and SB203580, while it was increased due to SP600125 treatment (Fig. 4B). These results indicate that LUT7G-induced apoptosis may be associated with the upregulation of the JNK pathways.
DISCUSSION
The proliferation inhibition and apoptotic induction of tumor cells are effective to prevent tumor growth and to eliminate cancers. Although numerous compounds possess antitumor activities, their applications as antitumor agents are greatly restricted by an unknown mechanism (25)(26)(27). In the present study, our results demonstrate that LUT7G inhibited growth and induced apoptosis in HepG2 liver carcinoma cells through a caspase-independent pathway, and LUT7G-induced apoptosis appears to be mediated via the induction of G2/M cell cycle arrest and ROS generation. Additionally, this form of cell death requires activation of the JNK pathway.
In the present study, we found that LUT7G was cytotoxic to the human hepatocarcinoma cell line (HepG2). We confirmed the effect on the morphologic features of HepG2 cells. HepG2 cells showed apoptotic body formation and DNA fragmentation with LUT7G treatment, which indicated apoptosis.
One of the main pathways of apoptosis is the caspase-dependent pathway. The caspases are cysteine proteases that play key roles in the execution phase of apoptosis. Among the family of caspases, caspase-3 has been reported to be the most frequently activated caspase protease in apoptotic cells, indicating its crucial role in the cell death process (28). In the present study, our data show that LUT7G-induced apoptosis was not inhibited by the broad-spectrum caspase inhibitor Z-VAD-FMK, with no activation of caspase-3, suggesting a caspase-independent signal transduction pathway.
Because increasing evidence has associated apoptosis with ROS generation and cell cycle arrest, we examined the apoptotic effect of LUT7G on these features. ROS generation was effectively enhanced by LUT7G in HepG2 cells. Additionally, HepG2 cells treated with LUT7G were significantly arrested at the G2/M stage before apoptosis. Therefore, ROS generation and G2/M cell-cycle arrest probably contribute to LUT7G-in-duced HepG2 cell apoptosis.
The examination of several signaling molecules may help explain the mechanism of the apoptosis process caused by LUT7G in HepG2 cells. We found that LUT7G triggered phosphorylation of JNK in HepG2 cells.
The mitogen-activated protein kinase (MAPK) family is one of the signal pathways that has been implicated in the oxidative stress-induced cell death cascade (29,30). Among the MAPK family, the activation of JNK is commonly linked to promoting cell apoptosis and cell death, and it is thus also called a stress-activated protein kinase (SAPK) (31,32). For mammalian cells, researchers have reported that the accumulation of H2O2 can activate JNK pathways (33). Also, prolonged JNK activity promotes apoptosis and can lead to accumulated ROS (34). In addition, JNK may be involved in cell cycle regulation. JNK is associated with the cell cycle through its target molecule, the c-Jun protein, which was reported to be involved in G1 phase progression (35). Previous studies reported an association between the JNK signaling pathway and the suppression of cell-cycle progression via the activation of cell cycle inhibitor proteins, including p21 and p27 (36)(37)(38).
The results of the present study significantly advance our understanding of the molecular actions of LUT7G. In particular, the novel findings reported here are that JNK activation underlies the anti-proliferative effects of LUT7G, such as G2/M cell-cycle arrest. Although the present data demonstrate the importance of JNK activation in LUT7G-induced growth inhibition in liver cancer cells, the mechanisms by which JNK regulates apoptotic factors in LUT7G treatment remain to be identified.
In conclusion, we demonstrated that LUT7G reduces the viability of human carcinoma cells. LUT7G induced apoptosis in HepG2 cells in a dose-dependent manner through caspase-independent pathways; during apoptosis, ROS accumulated and cells were arrested in G2/M, and both ROS accumulation and G2/M cell cycle arrest contributed to the apoptotic process through the JNK pathway. These results provide further insight into LUT7G-induced apoptosis and into the molecular mechanisms underlying its potential use for cancer intervention.
Cell viability
Cells (1×10⁵ cells/well) were added to duplicate 12-well plates and incubated for 4 h, then treated with various concentrations of LUT7G for 24 h. Cell viability was measured with CellTiter-Glo (Promega, Madison, WI, USA) and is presented as a percentage of the untreated control.
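As an illustration, a luminescence (ATP) endpoint of this kind is typically normalized to the untreated control; the following is a minimal Python sketch with invented readings (the well values and doses are placeholders, not data from this study).

```python
import numpy as np

def percent_viability(treated_rlu, control_rlu):
    """Viability of a treated condition as % of the untreated control."""
    return 100.0 * np.mean(treated_rlu) / np.mean(control_rlu)

control = [52000, 49800]                                   # duplicate untreated wells (RLU)
lut7g = {25: [41000, 43500], 50: [30200, 28800], 100: [15500, 17100]}

for dose_um, wells in lut7g.items():                       # hypothetical dose series
    print(f"{dose_um} uM LUT7G: {percent_viability(wells, control):.1f}% viable")
```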
Nuclear staining with Hoechst 33342
The nuclear morphology of the cells was studied using the cell-permeable DNA-specific dye Hoechst 33342. Approximately 2×10⁵ HepG2 cells/well were treated with LUT7G at various concentrations for 24 h. Then, Hoechst 33342 was added to the culture medium at a final concentration of 1 μg/ml, and the plate was incubated for another 10 min at 37°C. The stained cells were then observed under a fluorescence microscope (Carl Zeiss, Oberkochen, Germany) equipped with a SPOT digital camera to examine the degree of nuclear condensation.
TUNEL assay and DNA fragmentation analysis
Cells treated with LUT7G for 24 h were fixed with 4% formaldehyde for 20 min. Cells with fragmented nuclear DNA were detected according to the manufacturer's instructions. All nuclei were counterstained with PI at room temperature. The cells were analyzed using a fluorescence microscope. DNA laddering detection was performed according to the manufacturer's instructions (QIAGEN, Hilden, Germany).
Cell cycle analysis
Cells treated with LUT7G were harvested by centrifugation at 1,500 rpm for 10 min and washed with ice-cold PBS. The cell pellet was suspended in 70% ethanol at −20°C overnight, washed, and then incubated with PI/RNase A for 30 min in the dark at room temperature. Flow cytometry was used for detection (FACSCalibur, BD Biosciences).
ROS generation analysis
For the microscopic detection of ROS formation, cells treated with LUT7G for 24 h were incubated with DCF-DA (25 μM) for 30 min at 37°C in the dark. After several washes with PBS, cells were observed with a fluorescence microscope.
Western blot analysis
Cells were lysed in RIPA buffer (150 mM sodium chloride, 1% Triton X-100, 1% sodium deoxycholate, 0.1% SDS, 50 mM Tris-HCl, pH 7.5, and 2 mM EDTA) on ice for 30 min. After centrifugation at 4°C for 20 min (12,000 × g), the supernatant was collected. Protein concentrations were determined by BCA assay (GenDEPOT, Barker, TX, USA). Equal amounts of cell extracts were separated by SDS-PAGE and transferred to polyvinylidene difluoride (PVDF) membranes (Bio-Rad, Hercules, CA, USA). The membranes were blotted with antibodies, and detection was performed with an ECL system (Pierce, Rockford, IL, USA) according to the manufacturer's instructions.
Statistical analysis
Statistical analyses were performed with SPSS statistical software (version 12.0). Data represent the means ± SEM from three independent experiments, except where indicated. Comparisons were made by Student's t-test, with significance set at P < 0.05.
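A minimal sketch of this comparison (means ± SEM over three independent experiments, Student's t-test at P < 0.05) in Python; the numbers below are placeholders, not values from the study.

```python
import numpy as np
from scipy import stats

control = np.array([100.0, 97.5, 102.1])   # % of control, 3 independent experiments
treated = np.array([61.2, 58.9, 64.0])     # hypothetical LUT7G-treated values

t_stat, p_value = stats.ttest_ind(control, treated)
print(f"treated mean ± SEM: {treated.mean():.1f} ± {stats.sem(treated):.1f}")
print(f"P = {p_value:.4f} ->", "significant" if p_value < 0.05 else "not significant")
```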
The relationship between aberrant methylation and survival in non-small-cell lung cancers
The present study examined the relationship between methylation of five genes (p16INK4a, RASSF1A, APC, RARβ and CDH13) and patient survival in 351 cases of surgically resected lung cancers. While there was no relationship between the other genes and survival, p16INK4a methylation was significantly related to unfavourable prognosis in lung adenocarcinomas.
Lung cancer is the leading cause of cancer deaths in the world. Human lung cancers are classified into two major histologic types, small-cell lung cancer (SCLC) and non-small-cell lung cancer (NSCLC), the latter consisting of several subtypes. Previously, squamous cell carcinoma was the predominant form of NSCLC, but in the last few decades it has been replaced by adenocarcinoma. Moreover, adenocarcinoma is the most common type of lung cancer in women, never smokers and young subjects.
Aberrant methylation of CpG islands in the promoter region of tumour suppressor genes (TSGs) and tumour-related genes has become established as a major mechanism of gene silencing (Baylin et al, 1998). Inactivation of TSGs by DNA methylation is regarded as one of the fundamental processes in the development of human malignant tumours, including lung cancers. We studied the methylation status of five genes, p16INK4a, RASSF1A, APC, RARβ and CDH13, that are frequently methylated in lung cancers and are considered to play an important role in their molecular pathogenesis (Toyooka et al, 2001b). We previously reported smoke exposure-, histologic type- and geography-related differences in the methylation profiles of NSCLC (Toyooka et al, 2003). Some reports have described methylation of specific TSGs as a negative prognostic factor for lung cancer (Tang et al, 2000; Burbee et al, 2001). In this study, we collected the survival data of 351 cases of NSCLC and correlated the methylation status of the five genes (p16INK4a, RASSF1A, APC, RARβ and CDH13) with clinicopathological factors, to investigate the effect of methylation on the survival of patients undergoing curative-intent surgical resection for lung cancer.
MATERIALS AND METHODS
We studied frozen specimens of 351 tumours stored at −80°C, obtained from Japanese patients with primary NSCLC treated by curative-intent surgical resection between 1993 and 2000 at our institutions. The patients consisted of 234 males and 117 females, with a median age of 65 years. Most of the tumours (325, 93%) were adenocarcinomas (199, 57%) or squamous cell carcinomas (126, 36%), while the remainder consisted of 19 large-cell carcinomas, six adenosquamous cell carcinomas and one unclassified type. In all, 169 patients had stage I disease, 60 stage II and 122 stage III. A total of 252 patients (72%) were ever smokers, with a median smoke exposure of 45.9 pack-years; the remaining 99 were never smokers. Institutional Review Board permission and informed consent were obtained at each collection site.
Genomic DNA was isolated from frozen tumour tissue by SDS/proteinase K (Life Technologies Inc., Rockville, MD, USA) digestion, phenol–chloroform extraction and ethanol precipitation. The methylation status of five genes reported to be frequently methylated and silenced in tumour but not in nonmalignant lung tissues (p16INK4a, RASSF1A, APC, RARβ and CDH13) (Toyooka et al, 2003, 2004) was determined by methylation-specific PCR (polymerase chain reaction) (MSP) assay using gene-specific primers (Herman et al, 1996; Cote et al, 1998; Tsuchiya et al, 2000; Burbee et al, 2001; Toyooka et al, 2001a). Briefly, 1 μg of genomic DNA was modified by sodium bisulphite, which converts all unmethylated cytosines to uracil residues while methylated cytosines remain unchanged. PCR amplification was performed with the sodium bisulphite-treated DNA as template, as described previously (Herman et al, 1996). The MSP assays were sensitive enough to detect one methylated allele in the presence of 10³–10⁴ unmethylated alleles (Toyooka et al, 2003). DNA from peripheral blood lymphocytes and buccal mucosa brushes, each from 10 nonsmoking healthy subjects, along with water blanks, were used as negative controls for the methylated genes. DNA from the lymphocytes of a healthy volunteer, artificially methylated by treatment with SssI (New England BioLabs, Beverly, MA, USA) and then subjected to bisulphite treatment, was used as a positive control for methylated alleles. PCR products were visualised on 2% agarose gels stained with ethidium bromide. Results were confirmed by repeat MSP assays after an independently performed bisulphite treatment.
The overall survival was calculated from the date of surgery until the date of death or the last follow-up. Univariate analysis of overall survival was carried out by the Kaplan–Meier method using the log-rank test. Multivariate overall survival analysis was carried out with Cox's proportional-hazards model. A stepwise procedure with backward elimination was used to select independent variables, with P-values of 0.10 for entry and 0.12 for rejection. All data were analysed with StatView for Windows (SAS Institute Inc., Cary, NC, USA).
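The paper used StatView; for readers working in Python, the same workflow (Kaplan–Meier curves, log-rank comparison, Cox proportional hazards) can be sketched with the lifelines library. The data frame below is an invented toy cohort, and the column names are assumptions for illustration only.

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Toy cohort: months from surgery, death event flag, p16INK4a methylation status.
df = pd.DataFrame({
    "months":   [12, 30, 45,  8, 60, 22, 18, 50],
    "death":    [ 1,  0,  1,  1,  0,  1,  0,  0],
    "p16_meth": [ 1,  0,  1,  1,  0,  0,  1,  0],
})
meth = df["p16_meth"] == 1

km = KaplanMeierFitter().fit(df.loc[meth, "months"], df.loc[meth, "death"],
                             label="p16INK4a methylated")
print(f"median survival (methylated): {km.median_survival_time_} months")

res = logrank_test(df.loc[meth, "months"], df.loc[~meth, "months"],
                   df.loc[meth, "death"], df.loc[~meth, "death"])
print(f"log-rank P = {res.p_value:.3f}")

# Multivariate Cox model over the remaining covariates (toy data; a real
# analysis would use the full cohort and all clinicopathological factors).
CoxPHFitter().fit(df, duration_col="months", event_col="death").print_summary()
```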
RESULTS
The rates of methylation of the five genes were determined in 351 cases of NSCLC. Aberrant methylation was detected in 86 of 351 cases (25%) for p16INK4a, 120 cases (34%) for RASSF1A, 131 cases (37%) for APC, 98 cases (28%) for RARβ and 104 cases (30%) for CDH13. We correlated the methylation status of the five genes and clinicopathological factors, including gender, age, smoking status, histological differentiation and TNM stage, with patient prognosis. The analysis was performed on all 351 cases and on the major histological subtypes, adenocarcinomas and squamous cell carcinomas. As adenocarcinomas are the predominant form of lung cancer in never smokers, the percentage of adenocarcinoma patients who smoked (55%) is lower than the figure for the overall patient population (72%). With regard to patient prognosis, no gene was correlated with survival in all cases or in squamous cell carcinomas. However, methylation of one gene, p16INK4a, was associated with poor survival by Kaplan–Meier survival analysis (P = 0.020) in adenocarcinomas (Table 1a and Figure 1). The results of univariate and multivariate survival analysis for all variables are shown in Table 1a. Besides p16INK4a methylation, T stage (P = 0.0002), lymph node status (P < 0.0001) and disease stage (P < 0.0001) were correlated with poor prognosis on univariate analysis (Table 1a). On multivariate analysis, p16INK4a methylation (RR = 1.82, 95% CI = 1.10–3.00, P = 0.019), T stage (RR = 1.87, 95% CI = 1.20–2.92, P = 0.006) and lymph node status (RR = 4.60, 95% CI = 3.00–7.05, P < 0.0001) were independently associated with adverse prognosis. Since a previous study indicated that RASSF1A methylation was significantly related to poor prognosis in stage I adenocarcinoma (Tomizawa et al, 2002), we analysed our 105 cases of stage I adenocarcinoma (Table 1b). Of the variables, only p16INK4a methylation was significantly related to unfavourable survival by univariate and multivariate analysis in this cohort (RR = 2.57, 95% CI = 1.15–5.76, P = 0.022). There was no relationship between RASSF1A methylation and survival in stage I adenocarcinoma (P = 0.22) or in all cases of adenocarcinomas (P = 0.16) (Figure 1).
DISCUSSION
Using newer molecular biology technologies, many genetic abnormalities of lung cancers have been investigated for their role in cancer pathogenesis and their effects on clinical outcome. Among the frequent genetic changes in lung cancer, abnormalities of p53 and K-ras have been extensively studied (Fukuyama et al, 1997; Mitsudomi et al, 2000), including their association with prognosis. Studies on the relationship between epigenetic changes and patient outcome are of more recent origin. In our analysis, of the five genes frequently methylated in lung cancer, only p16INK4a methylation was significantly correlated with poor prognosis in patients with lung adenocarcinoma. Furthermore, p16INK4a methylation, along with lymph node status and T stage, was an independent prognostic factor by multivariate analysis. The p16INK4a protein controls the transition from the G1 phase to the S phase of the cell cycle by inhibiting the phosphorylation of the retinoblastoma protein (Weinberg, 1995). Loss of p16INK4a expression is frequently observed in lung cancers (Kratzke et al, 1996), and while inactivation may occur by other mechanisms such as point mutations or homozygous deletions, aberrant methylation is the most frequent mechanism. Our results are consistent with other reports that loss of p16INK4a expression is correlated with poor prognosis (Kratzke et al, 1996; Kawabuchi et al, 1999; Niklinski et al, 2001). In addition, Kim et al (2001) reported that p16INK4a methylation was a risk factor predicting shorter survival after surgery. Reports with regard to RASSF1A are more contradictory (Burbee et al, 2001; Tomizawa et al, 2002; Endoh et al, 2003). We previously reported an association between RASSF1A methylation and poor survival in resected Australian NSCLC cases (Burbee et al, 2001). In the present, much larger study of Japanese cases, we were unable to demonstrate a relationship between RASSF1A methylation and survival despite analysis of the various subgroups. With regard to this issue, Tomizawa et al (2002) pointed out that RASSF1A methylation was correlated with poor prognosis in NSCLC patients at stage I. On the other hand, Endoh et al (2003) reported that there was no correlation. Some of these differences may be due to geographic, stage, subtype or other variables.
Our study is the largest to date correlating methylation with prognosis in lung cancer, and it confirms and extends previous reports that p16INK4a inactivation is a negative prognostic factor for NSCLC.
Phase I vaccination trial of SYT-SSX junction peptide in patients with disseminated synovial sarcoma
Background Synovial sarcoma is a high-grade malignant tumor of soft tissue, characterized by the specific chromosomal translocation t(X;18), and its resultant SYT-SSX fusion gene. Despite intensive multimodality therapy, the majority of metastatic or relapsed diseases still remain incurable, thus suggesting a need for new therapeutic options. We previously demonstrated the antigenicity of SYT-SSX gene-derived peptides by in vitro analyses. The present study was designed to evaluate in vivo immunological property of a SYT-SSX junction peptide in selected patients with synovial sarcoma. Methods A 9-mer peptide (SYT-SSX B: GYDQIMPKK) spanning the SYT-SSX fusion region was synthesized. Eligible patients were those (i) who have histologically and genetically confirmed, unresectable synovial sarcoma (SYT-SSX1 or SYT-SSX2 positive), (ii) HLA-A*2402 positive, (iii) between 20 and 70 years old, (iv) ECOG performance status between 0 and 3, and (v) who gave informed consent. Vaccinations with SYT-SSX B peptide (0.1 mg or 1.0 mg) were given subcutaneously six times at 14-day intervals. These patients were evaluated for DTH skin test, adverse events, tumor size, tetramer staining, and peptide-specific CTL induction. Results A total of 16 vaccinations were carried out in six patients. The results were (i) no serious adverse effects or DTH reactions, (ii) suppression of tumor progression in one patient, (iii) increases in the frequency of peptide-specific CTLs in three patients and a decrease in one patient, and (iv) successful induction of peptide-specific CTLs from four patients. Conclusions Our findings indicate the safety of the SYT-SSX junction peptide in the use of vaccination and also give support to the property of the peptide to evoke in vivo immunological responses. Modification of both the peptide itself and the related protocol is required to further improve the therapeutic efficacy.
Background
Synovial sarcoma is a relatively rare, high-grade malignant tumor of soft tissue, characterized by biphasic or monophasic histology, the specific chromosomal translocation t(X;18), and its resultant SYT-SSX fusion gene [1]. This tumor affects mostly adolescents and young adults. The 5-year survival rates of patients with localized disease have ranged from 66% to 80% in the current literature [2-5]. However, the majority of metastatic or relapsed diseases still remain incurable despite intensive multimodality therapy. Therefore there is a need for new therapeutic options in addition to conventional surgery, radiotherapy, and chemotherapy.
Vaccination with tumor antigenic peptides is a commonly accepted method in anti-cancer immunotherapy [6,7]. It is based on the rationale that T cells recognize antigenic peptides in the context of MHC molecules on the tumor cell or on antigen-presenting cells through the T cell receptor, which elicits subsequent anti-tumor immune responses. Identification of antigenic peptides recognized by T cells has enabled vaccination trials in a variety of tumors, with the exception of bone and soft tissue sarcomas [8].
Tumor-specific chromosomal translocations have been defined in leukemias, lymphomas, and sarcomas [9,10]. The fusion regions of translocation products are specifically expressed by the corresponding tumors, thereby serving as targets of great potential for tumor-specific therapies, including immunotherapy [11,12]. We [13,14] found that SYT-SSX fusion gene-derived peptides can be recognized by circulating CD8+ T cells in patients with synovial sarcoma and elicit HLA-restricted, tumor-specific cytotoxic responses upon in vitro stimulation. In this study, we conducted a phase I pilot trial of vaccination with a SYT-SSX-derived junction peptide in selected synovial sarcoma patients.
Peptide
A 9-mer peptide (SYT-SSX B: GYDQIMPKK) spanning the SYT-SSX fusion region was synthesized under good manufacturing practice (GMP) conditions by Multiple Peptide Systems (San Diego, CA). The identity of the peptide was confirmed by mass spectral analysis, and its purity exceeded 98% when assessed by high pressure liquid chromatography. The peptide was delivered to us in the form of a freeze-dried, sterile white powder. It was dissolved in 1.0 ml of physiological saline (Otsuka Pharmaceutical Co., Ltd., Tokyo, Japan) and stored at −80°C until just before use. The affinity of the peptide for HLA-A24 molecules and its antigenicity were determined in previous studies [13,14].
Eligibility
The study protocol was approved by the Clinical Institutional Ethical Review Board of the Medical Institute of Bioregulation, Sapporo Medical University, Japan. Eligible patients were those (i) with histologically and genetically confirmed, unresectable synovial sarcoma (SYT-SSX1 or SYT-SSX2 positive), (ii) HLA-A*2402 positive, (iii) between 20 and 70 years old, (iv) with an ECOG performance status between 0 and 3, and (v) who gave informed consent. Exclusion criteria included (i) prior chemotherapy, steroid therapy, or other immunotherapy within the past 4 weeks, (ii) presence of other cancers that might influence the prognosis, (iii) immunodeficiency or a history of splenectomy, (iv) severe cardiac insufficiency, acute infection, or hematopoietic failure, (v) ongoing breast-feeding, and (vi) unsuitability for the trial based on the clinical judgment of the doctors involved. This study was carried out at the Department of Orthopaedic Surgery, Sapporo Medical University Hospital, from June 2003 until the end of September 2004.
Vaccination schedule
Vaccinations with SYT-SSX B peptide were administered subcutaneously into the upper arm six times at 14-day intervals. For the dose-escalation design, the patients were separated into two groups of three patients each: group 1 received 0.1 mg and group 2 received 1.0 mg per injection.
Delayed-type hypersensitivity (DTH) skin test
A delayed-type hypersensitivity (DTH) skin test was performed at each vaccination. Peptide solution (10 µg in 0.1 ml physiological saline) or physiological saline alone (0.1 ml) was injected intradermally into the forearm at separate sites. A positive reaction was defined as erythema more than 4 mm in diameter 48 h after the injection.
Toxicity evaluation
Patients were examined closely for signs of toxicity during and after vaccination. Adverse events were recorded using the National Cancer Institute Common Toxicity Criteria (NCI-CTC).
Clinical response evaluation
Physical and hematological examinations were monitored before and after each vaccination. Tumor size was evaluated by computed tomography (CT) scans before treatment, after three vaccinations, and at the end of the study period. A complete response (CR) was defined as the complete disappearance of all measurable disease. A partial response (PR) was defined as a ≥50% decrease from baseline in the size of all measurable lesions (sum of products of maximal perpendicular diameters) lasting for at least 4 weeks. Progressive disease (PD) was defined as an increase in the sum of the bi-dimensional measurements of all known disease sites by at least 25%, or the appearance of new lesions. No change (NC) was defined as not meeting the criteria for CR, PR, or PD.
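For concreteness, the rules above translate directly into code; the sketch below is a hypothetical encoding in Python (the ≥4-week duration requirement for PR is assumed to be verified separately).

```python
# 'lesions' are sums of products of maximal perpendicular diameters.
def classify_response(baseline_sum, followup_sum, new_lesions=False):
    if new_lesions or followup_sum >= 1.25 * baseline_sum:
        return "PD"   # >= 25% increase or appearance of new lesions
    if followup_sum == 0:
        return "CR"   # complete disappearance of measurable disease
    if followup_sum <= 0.5 * baseline_sum:
        return "PR"   # >= 50% decrease from baseline
    return "NC"       # none of the above criteria met

print(classify_response(40.0, 12.0))   # PR
print(classify_response(40.0, 55.0))   # PD
print(classify_response(40.0, 38.0))   # NC
```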
Tetramer staining
HLA-A24/peptide tetramers (HLA-A24/B, HLA-A24/R49.2, and HLA-A24/HIV) were constructed previously [13-15]. Flow cytometric analysis was performed on peripheral blood mononuclear cells (PBMCs) from patients. PBMCs were taken before vaccination and again one week after the 1st, 3rd, and 6th vaccinations. Cells were stained with PE-labeled tetramers at 37°C for 20 min and with a FITC-conjugated anti-CD8 mAb (Becton Dickinson) at 4°C for 30 min. Stained PBMCs were analyzed using a FACScan (Becton Dickinson) and CellQuest software (Becton Dickinson). The frequency of CTL precursors was calculated as the number of tetramer-positive cells divided by the number of CD8+ cells.
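The precursor-frequency calculation above is a simple ratio; for concreteness (counts hypothetical):

```python
def ctl_precursor_frequency(tetramer_pos_cd8, total_cd8):
    """Fraction of CD8+ cells that bind the HLA-A24/peptide tetramer."""
    return tetramer_pos_cd8 / total_cd8

# 42 tetramer-positive events among 25,000 CD8+ events -> 0.168%,
# above the <0.1% background level described in the Results.
print(f"{100 * ctl_precursor_frequency(42, 25000):.3f}% of CD8+ cells")
```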
CTL induction
Cytotoxic T lymphocytes (CTLs) were induced from the PBMCs of patients using SYT-SSX B peptide according to the method described previously [13,14]. Cytotoxic activity was evaluated by a 6-h 51Cr release assay [13]. As target cells, synovial sarcoma cell lines (Fuji, HS-SY-II, and SW982), an erythroleukemia cell line (K562), and a T–B lymphoblast hybrid transfected with HLA-A*2402 (T2-A*2402) were used. Fuji and HS-SY-II were both HLA-A24 and SYT-SSX positive lines. SW982 and K562 were both HLA-A24 and SYT-SSX negative lines used as controls. T2-A*2402 cells were used to determine peptide-specific cytotoxicity by pulsing with SYT-SSX B or HIV peptide before labeling. The stimulated CD8+ T cells were mixed with the labeled target cells. After a 6-h incubation at 37°C, the release of the 51Cr label was measured by collecting the supernatant, followed by quantification in an automated gamma counter. The percentage of specific cytotoxicity was calculated as the percentage of specific 51Cr release: [(experimental 51Cr release − spontaneous 51Cr release) / (maximum 51Cr release − spontaneous 51Cr release)] × 100. Maximum 51Cr release was measured by incubating the labeled target cells with 2% NP-40 instead of the stimulated CD8+ T cells. CTL induction was considered successful when a specific cytotoxicity of 10% or more was achieved against Fuji, HS-SY-II, and SYT-SSX B peptide-pulsed T2-A*2402 cells.
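The % specific lysis formula above translates directly into code; a small Python sketch with made-up gamma-counter values (the counts and E/T ratios are illustrative):

```python
def percent_specific_lysis(experimental, spontaneous, maximum):
    """[(exp - spont) / (max - spont)] x 100, as defined in the assay."""
    return 100.0 * (experimental - spontaneous) / (maximum - spontaneous)

spontaneous = 310.0   # cpm: labeled targets alone
maximum = 2450.0      # cpm: targets lysed with 2% NP-40

for et_ratio, cpm in [(40, 1180), (20, 860), (10, 590)]:
    lysis = percent_specific_lysis(cpm, spontaneous, maximum)
    flag = "successful" if lysis >= 10 else "below threshold"
    print(f"E/T {et_ratio}:1 -> {lysis:.1f}% specific lysis ({flag})")
```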
Patient profiles
Six patients were enrolled in the study (Table 1). There were four men and two women, with an average age of 34.7 years (range, 21–69 years). All patients had multiple metastatic lesions of the lung. The six-dose vaccination schedule was completed in three patients, while the remaining three discontinued the regimen because of rapid disease progression. None of the treatment interruptions was due to adverse effects of the vaccination.
Clinical response
Disease progression was recognized in five of the six patients during the vaccination period (Table 1, Fig. 1). In contrast, one patient (case 5) showed no such rapid progression (Table 1, Fig. 2). All patients except case 1 had received systemic multidrug chemotherapy one to four months before enrolling in this study.
Tetramer analysis and CTL induction
Peptide-specific immunological responses were evaluated in five patients by HLA-A24/peptide tetramer analysis and in vitro CTL induction. As determined by flow cytometric analysis using HLA/peptide tetramers (Table 2), frequencies of CTLs specific for SYT-SSX B peptide were at background levels (less than 0.1%) in three patients prior to vaccination. Those frequencies increased after the first (cases 4 and 6) and the third vaccination (case 2) (Fig. 3). In the remaining two patients (cases 3 and 5), SYT-SSX B peptide-specific CTLs existed above background levels before vaccination. Of these, B peptide-specific CTL frequencies increased slightly in case 3 over the course of vaccinations. On the contrary, the CTL frequencies in peripheral blood decreased to the background level after the third vaccination in case 5, whose metastatic disease remained stable during the vaccination period. For comparison, tetramers with irrelevant peptides were constructed as internal controls and used in four patients. Notably, CTL frequencies reacting to those irrelevant tetramers remained below the background level during the course of vaccinations in all patients. Table 3 depicts the results of CTL induction by in vitro stimulation with SYT-SSX B peptide. Before vaccination, CTLs specific for SYT-SSX B peptide were successfully induced from one patient (case 2), who showed a high frequency of CTL precursors. After the first or third vaccination, CTLs were induced from four of five patients. Figure 4 shows the results of the cytotoxicity assay. As shown, CTLs induced from the case 4 patient exhibited cytotoxic activity against T2-A*2402 cells pulsed with SYT-SSX B peptide and against synovial sarcoma cell lines expressing HLA-A24 and SYT-SSX (Fuji and HS-SY-II) at the various effector/target ratios examined. In contrast, the cytotoxicity was less than 10% against T2-A*2402 cells without peptide pulsing, those pulsed with irrelevant HIV peptide, and tumor cells lacking HLA-A24 and SYT-SSX (SW982 and K562). These findings suggest the induction of peptide-specific immune responses in patients with synovial sarcoma who received the SYT-SSX junction peptide vaccine.
[Fig. 1: CT scan images of the lung of the case 3 patient (panels A and B).]
Discussion
The present study was designed to evaluate the in vivo immunological properties of a 9-mer SYT-SSX junction peptide in patients with disseminated synovial sarcoma. A total of 16 vaccinations of the peptide in six patients revealed (i) no serious adverse effects in any case, (ii) suppression of tumor progression in one patient, (iii) increases in the frequency of peptide-specific CTLs in three patients and a decrease in one patient, and (iv) successful induction of peptide-specific CTLs from four patients.
[Fig. 2: CT scan image of the lung of the case 5 patient. Fig. 3: Frequency of CTLs analyzed by HLA-A24/peptide tetramers in the case 4 patient; frequencies for each analysis are given in Table 2. Fig. 4: Cytotoxicity (% specific lysis) of CTLs induced from the case 4 patient.]
Vaccination trials of fusion gene-derived peptides have been reported with BCR-ABL in 12 patients with chronic myelogenous leukemia [16], EWS-FLI1 in 12 patients with Ewing's sarcoma [17], and PAX3-FKHR in four patients with alveolar rhabdomyosarcoma [17]. In addition, Matsuzaki et al. [18] reported a case of a synovial sarcoma patient who was treated with autologous dendritic cells pulsed with a mixture of SYT-SSX junction peptides (8-16 mer). In these studies, tumor remission was noted only in one patient with Ewing's sarcoma in whom IL-2 was concomitantly administered. These findings, together with the results of our present trial, indicate the limited therapeutic efficacy of natural junction peptides. In this regard, we discovered improved in vitro immunogenicity of the SYT-SSX junction peptide upon substitution of an HLA-A24 anchor residue (position 9) [14]. A clinical study of this anchor-substituted peptide is currently underway. Besides modification of the peptide itself, concurrent use of adjuvants and cytokines, and adoptive T cell and/or dendritic cell transfer, should further improve the therapeutic efficacy [8,19]. It is also important to determine the appropriate timing of vaccination and proper endpoints for clinical studies.
To monitor immunological responses, we used HLA/peptide tetramer analysis and in vitro CTL induction. Other monitoring procedures, such as the ELISPOT assay, could provide further information. Nevertheless, the use of an internal control tetramer with HIV peptide added strength to the present comparative analysis. As shown in Table 2 and Fig. 4, reactivity of circulating T cells to the HIV peptide tetramer remained below the background level throughout the vaccination period.
Due to the rarity and highly malignant nature of synovial sarcoma, three of the six patients failed to complete the six-dose vaccination regimen. Similar difficulty in continuing vaccination has been reported in a trial of patients with Ewing's sarcoma and alveolar rhabdomyosarcoma [17]. Another limitation of the current study is the lack of analysis of the immunological significance of the SYT-SSX variants (SYT-SSX1 and SYT-SSX2). This question should be addressed in a larger-scale analysis.
Conclusions
This is the first clinical trial of a SYT-SSX fusion gene-derived peptide in patients with synovial sarcoma. The present trial demonstrated the safety and immunogenic properties of the peptide. Modification of both the peptide itself and the related protocol is required to further improve the therapeutic efficacy.
Bioelectrical domain walls in homogeneous tissues
Electrical signaling in biology is typically associated with action potentials, transient spikes in membrane voltage that return to baseline. Hodgkin-Huxley and related conductance-based models of electrophysiology belong to a more general class of reaction-diffusion equations which could, in principle, support spontaneous emergence of patterns of membrane voltage which are stable in time but structured in space. Here we show theoretically and experimentally that homogeneous or nearly homogeneous tissues can undergo spontaneous spatial symmetry breaking through a purely electrophysiological mechanism, leading to formation of domains with different resting potentials separated by stable bioelectrical domain walls. Transitions from one resting potential to another can occur through long-range migration of these domain walls. We map bioelectrical domain wall motion using all-optical electrophysiology in an engineered cell line and in human induced pluripotent stem cell (iPSC)-derived myoblasts. Bioelectrical domain wall migration may occur during embryonic development and during physiological signaling processes in polarized tissues. These results demonstrate that nominally homogeneous tissues can undergo spontaneous bioelectrical symmetry breaking.
Introduction
In 1952, Hodgkin and Huxley introduced a mathematical model of action potential propagation in the squid giant axon, based on nonlinear dynamics of electrically coupled ion channels. 1 In the same year, Alan Turing proposed a model for biological pattern formation, based on diffusion and nonlinear reaction dynamics of chemical morphogens. 2 These two seemingly unrelated models have an underlying mathematical kinship: both are nonlinear reaction-diffusion equations, first order in time and second order in space. Thus, from a mathematical perspective, one expects parallel classes of solutions. These solutions can be organized by whether they are uniform or patterned in space, and stable or varying in time ( Fig. 1). All four combinations of spatial and temporal structure have been observed in chemical reaction-diffusion systems 3 , but only three of the four have been reported in nominally homogeneous systems governed by Hodgkin-Huxley-like conductance-based models. We thus sought to observe the fourth class of electrophysiological dynamics: spontaneous spatial symmetry breaking in a nominally homogeneous tissue to create patterns of membrane voltage that are static in time but that vary in space.
Spatial symmetry breaking might emerge during slow transitions in membrane potential, such as occur during embryonic development and in signaling processes in peripheral organs. While spontaneous pattern-forming processes in electrophysiology have been contemplated, 4-8 unambiguous observations with clear mechanistic interpretations have been lacking. Part of the experimental challenge comes from the difficulty of spatially mapping membrane voltage. Patch clamp measurements of membrane potential probe the voltage at only discrete points in space, and are thus ill-suited to mapping spatial structure. Recent advances in voltage imaging facilitate spatially resolved measurements 9,10 , and optogenetic stimulation offers the prospect to tune the electrophysiological state of a tissue and perhaps to drive it into a regime of spontaneous symmetry breaking.
Here, we explore these ideas experimentally in engineered cells expressing the inward-rectifying potassium channel Kir2.1 and the channelrhodopsin CheRiff (Fig. 2A,B). While this two-component cellular model is so simple as to appear almost trivial, we find that coupled ensembles of these cells show richly diverse transitional behaviors, including electrical bistability, bioelectrical domain walls, and noise-induced breakup into discrete electrical domains. We further show that similar dynamics occur in human induced pluripotent stem cell (iPSC)-derived myocytes during differentiation. Our results demonstrate bioelectrical pattern formation and domain wall motion as generic mechanisms by which tissues can switch from one membrane voltage to another.
Bistable membrane voltages
The lipid bilayer cell membrane behaves, electrically, as a parallel plate capacitor. Transmembrane protein channels can pass ionic currents which alter the intracellular charge, and hence the membrane voltage. In a single cell or a small isolated patch of tissue, the membrane voltage follows

C_m dV/dt = −I,    (Eq. 1)

where C_m is the membrane capacitance and I is the current through all ion channels (outward positive). Channel gating dynamics can impose a nonlinear and history-dependent relation between I and V, which causes complex dynamics in excitable cells.
The resting potential (dV/dt = 0) of most polarized cells is set by an inward-rectifier potassium channel, Kir. The current through the Kir channel is I_Kir = g_K·x_∞(V)·(V − E_K), where g_K is the conductance (proportional to the number of channels in the membrane) and x_∞(V) captures the voltage-dependent gating of the channel (shut at depolarized voltages, open at polarized voltages). 11 The term (V − E_K), where E_K ~ −90 mV is the potassium Nernst potential, accounts for the electrochemical driving force for ions to cross the membrane. The function I_Kir(V) crosses the x-axis at the potassium reversal potential. Inward rectification implies a drop in Kir current at more positive potentials. Together these attributes give Kir channels a non-monotonic I-V relationship (Fig. 2C). 4,12 To a good approximation, the Kir conductance depends on the present voltage only, not on history.
Cells typically have one or more leak conductances. We consider the simplest case: an Ohmic leak with reversal potential 0 mV and conductance g_l, leading to a straight-line I-V relation, I_leak = g_l·V. Leak conductances can be gated by external variables, e.g. by chemical ligands or mechanical forces. Below we use a non-selective cation-conducting channelrhodopsin, CheRiff, as a leak conductance whose value g_l is readily tunable via blue light. 9
The total current is the sum of the Kir and leak currents (Fig. 2C). When g_l dominates, one has a single depolarized fixed point (I = 0) near 0 mV (P_D). When g_K dominates, one has a single polarized fixed point near −90 mV (P_P). When g_l and g_K are approximately balanced, one has an N-shaped I-V curve which crosses the x-axis three times. This situation implies coexistence of stable fixed points P_D and P_P with an unstable fixed point (P_U) in between, leading to overall bistability. 13,14 From a dynamical systems perspective, this situation is analogous to the bistability observed in the famous E. coli Lac operon system. 15,16

We genetically engineered a HEK293 cell line that stably expressed Kir2.1 and CheRiff (Methods). 17,18 We call these bistable-HEK cells (bi-HEKs). Patch clamp measurements on small clusters (~50 μm diameter) of bi-HEKs revealed a non-monotonic I-V curve, which could be driven through two saddle-node bifurcations by light (Supplementary Fig. 1). We performed numerical simulations of a cell governed by Eq. 1, using the measured I-V curve (Methods). Under continuous variation in blue light, the simulated membrane potential underwent sudden jumps at saddle-node bifurcations where P_U annihilated either P_D or P_P.
The jumps occurred at different values of blue light in the polarizing and depolarizing directions, leading to hysteresis (Fig. 2D), i.e. within the hysteretic region, the membrane voltage was bistable.
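To illustrate the bistability and hysteresis described above, here is a minimal Python sketch of a single cell governed by Eq. 1 under a slow up-and-down ramp of the leak conductance. The sigmoidal Kir gating function and all parameter values are illustrative assumptions, not the measured I-V curve or the authors' simulation code.

```python
import numpy as np

E_K, g_K, C, dt = -90.0, 1.0, 1.0, 0.05        # arbitrary illustrative units

def x_inf(v):                                   # assumed sigmoidal Kir gate
    return 1.0 / (1.0 + np.exp((v + 60.0) / 10.0))

def step(v, g_l):
    i_total = g_K * x_inf(v) * (v - E_K) + g_l * v   # I_Kir + Ohmic leak
    return v - dt * i_total / C                       # Eq. 1: dV/dt = -I/C

up = np.linspace(0.0, 0.5, 4000)                # slow "light" ramp up...
v, v_up, v_down = -90.0, [], []
for g_l in up:
    v = step(v, g_l); v_up.append(v)
for g_l in up[::-1]:                            # ...and back down
    v = step(v, g_l); v_down.append(v)

i = int(np.argmin(np.abs(up - 0.15)))           # probe inside the bistable window
print(f"up-ramp   V at g_l = 0.15: {v_up[i]:6.1f} mV")          # polarized branch
print(f"down-ramp V at g_l = 0.15: {v_down[::-1][i]:6.1f} mV")  # depolarized branch
```

At the same leak conductance, the voltage depends on the direction of the ramp, which is the hysteresis loop described in the text.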
We used a far-red voltage-sensitive dye, BeRST1, 19 to report the membrane voltage in small clusters of bi-HEKs. Homogeneous blue illumination was slowly increased (0 to 10 mW/cm² over 75 s) and then decreased using a piecewise-continuous waveform comprising linear ramps alternating with 10 s intervals of constant intensity (Fig. 2E, top). The 10 s periods of constant intensity were ~10³-fold longer than the membrane electrical time constant (~10 ms), sufficient to ensure that the membrane voltage reached steady state. The optically recorded membrane voltage showed abrupt transitions and hysteresis, in close agreement with the numerical simulations (Fig. 2E, bottom). Furthermore, we did not detect drift in the membrane voltage during the periods of constant optogenetic drive, confirming that the dynamics were quasi-static. Thus, cells expressing Kir + leak exhibited a form of non-genetic electrophysiological memory: the steady-state membrane voltage was not uniquely specified by the ion channels alone. Rather, in the hysteretic regime the steady-state voltage depended on the history of ionic currents, which could in turn depend on the history of stimuli to the cell or, in principle, on the history of gene expression (e.g. whether the leak or the Kir channel was expressed first). 4
Bioelectrical domains in extended tissues
In an extended tissue, neighboring cells can be coupled by gap junctions. When the voltage on a cell deviates from the mean of its neighbors, ionic currents flow through the gap junctions to minimize this deviation. The dynamics then become

C_m ∂V/∂t = −I + G_cxn∇²V,    (Eq. 2)

where G_cxn is the sheet conductance due to gap junction channels. When the membrane potential is bistable (i.e. the ratio g_l/g_K is in the hysteretic portion of Fig. 2E), different regions of the tissue may sit at different resting potentials, P_P and P_D. 20 A domain wall then emerges at the interface between these regions (Fig. 3A). In a homogeneous tissue, the domain wall is stationary only when the Kir and leak conductances are perfectly balanced, i.e. when the areas of the positive and negative portions of the I-V curve between P_P and P_D are equal. 21 Otherwise the domain wall migrates to expand the territory of the stronger conductance (Supplementary Fig. 2A,B). Differential equations of this form appear in many contexts, perhaps most famously to describe the dynamics of a physical pendulum (with time replacing position as the independent variable). 22 The domain wall profile is described by the separatrix solution, which delineates the oscillatory from the rotatory solutions. The domain wall width scales as λ ∼ √(G_cxn·E_K/2πA). The parameter A is a measure of the strength of the (non-gap-junction) ionic currents, i.e. g_K and g_l in the more detailed biophysical model. Dual patch clamp measurements have shown that G_cxn is maximal at zero voltage difference between adjoining cells and decreases when the intercellular voltage exceeds approximately ±40 mV. 23 Our simulations showed that within the domain walls the maximal nearest-neighbor voltage difference was < 1 mV, implying that the voltage dependence of G_cxn could safely be neglected. This effect may need to be included when the width of the domain wall approaches the size of a cell.
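The equal-area stationarity condition quoted above follows from the pendulum analogy; a hedged sketch of the argument, under the form of Eq. 2 reconstructed here:

```latex
% A static profile V(x) of Eq. 2 obeys
\[
  G_{\mathrm{cxn}} \frac{d^{2}V}{dx^{2}} = I(V).
\]
% Multiplying by dV/dx and integrating (the pendulum "energy" trick) gives
\[
  \frac{G_{\mathrm{cxn}}}{2}\left(\frac{dV}{dx}\right)^{2}
    = \int_{P_P}^{V} I(V')\, dV' ,
\]
% so a wall connecting the two resting potentials is stationary only if
\[
  \int_{P_P}^{P_D} I(V')\, dV' = 0 ,
\]
% i.e. the positive and negative areas of the I-V curve between the fixed
% points cancel; this is the equal-area condition stated in the text.
```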
In two dimensions, simulations predicted that bioelectrical domains nucleated at defects (e.g. cells that expressed only leak or only Kir) and spread through the tissue (Fig. 3B, Supplementary Movie S1). In a tissue that was homogeneous but for an arbitrarily low density of nucleation points, the hysteresis vanished and the transition between depolarized and polarized states was abrupt (Fig. 3C). Thus the collective nature of the transition in an extended tissue was predicted to convert a gradual change in ionic currents into a highly sensitive, phase change-like switch in membrane potential.
HEK cells endogenously express connexins 43 and 45, which mediate nearest-neighbor electrical coupling, 24,25 so we reasoned that confluent monolayers of bi-HEK cells might support bioelectrical domain walls. We performed optogenetic stimulation and voltage imaging experiments in confluent islands of bi-HEKs with dimensions ~2 × 2 mm, corresponding to ~4×10⁴ cells (Fig. 3D). Initially (in the absence of optogenetic stimulation) the tissue was homogeneously polarized. Illumination with dim blue light led to nucleation of depolarized domains near the tissue boundaries. We extracted the mean fluorescence intensity profile across the domain wall. When scaled to match the voltage axis, the profile agreed closely with the predictions of both the numerical simulation and the analytical approximation (Fig. 3A).
As the blue light further increased, the domain walls migrated across the tissue, until the whole tissue was depolarized (Fig. 3E, Supplementary Movies S2 and S3). The fluorescence intensity of most regions in the island showed step-like depolarization with little hysteresis, consistent with the theoretical predictions. Domain wall formation and migration were observed in 8 of 8 independently prepared and measured islands (Supplementary Fig. 3), though in some cases defects prevented depolarization of the entire island.
To test the stability of the domain walls, we applied a piecewise-continuous blue light waveform comprising linear ramps alternating with 10 s intervals of constant intensity, as in Fig. 2E. During each period of constant illumination the domain walls remained stationary. During each period of increasing or decreasing illumination, the domain walls advanced or retreated, respectively (Supplementary Fig. 4). The 10 s periods of domain wall stability were ~10³-fold longer than the membrane relaxation time constant of ~10 ms, demonstrating the quasi-static nature of the electrical patterns. These experiments confirmed the existence of stable domain walls in a nominally homogeneous tissue, a hallmark of spontaneous spatial symmetry breaking.
We confirmed the role of gap junctions in domain wall migration by adding the gap junction blocker 2-aminoethyl diphenylborinate (2-APB, 50 μM) to island cultures of bi-HEKs. Before adding the gap junction blocker, a ramp of blue light caused the island to depolarize over a narrow range of blue light levels via domain wall migration, and the membrane potential showed little hysteresis. After adding the blocker, individual cells showed discrete hysteretic switching, each with its own transition points set by the cell-specific expression levels of Kir2.1 and CheRiff (Supplementary Fig. 5). Thus gap junctional coupling was necessary for the transition from zero-dimensional to two-dimensional behavior.
No tissue is perfectly uniform, so we explored via simulation tissues with cell-to-cell variations in expression of Kir or leak (Methods). Noisy ion channel expression introduced an effective friction for domain wall motion, stabilizing droplet-like domains of high or low voltage and broadening the transition in tissue-average voltage under a ramp in g_l (Supplementary Fig. 6). Sufficiently strong heterogeneity led to stick/slip saltatory domain wall motion. The tissue-average voltage then showed Barkhausen-like fine-structure noise (Supplementary Fig. 7). Tissue heterogeneity also restored some degree of hysteresis in the tissue-average voltage, and, when strong enough, broke the tissue into discrete domains that switched independently. The predicted voltage dynamics of coupled cells expressing leak + Kir thus exhibited many of the features found in the magnetization of a disordered soft ferromagnet. 26

Signatures of noisy ion channel expression were observable in our experiments on island cultures. Due to regional variations in the depolarizing transition point, the voltage averaged over the whole island showed a broader transition than did local measurements (Supplementary Fig. 7). The island-average voltage showed Barkhausen-like fine-structure noise, and a small amount of hysteresis, indicative of stick-slip domain motion. Numerical simulations of islands with noisy gene expression recapitulated these effects (Supplementary Fig. 7). In cultures where transient transfection of Kir2.1 led to enhanced cell-to-cell variability in g_K (Methods), we observed breakup into regions in which the voltage showed hysteretic zero-dimension-like behavior and regions which showed smooth and reversible depolarization, in concordance with simulations (Supplementary Fig. 7, Supplementary Movies S4 and S5).
Electrical bistability and hysteresis during myogenesis
Early embryonic tissue has a membrane voltage V_m ~ 0 mV. 27 During myogenesis, myoblast precursors polarize electrically, exit the cell cycle, and fuse into myocytes whose resting potential is ~−85 mV. 28 Expression of Kir2.1 initiates this hyperpolarization. 29 In mammals, myoblast precursors couple transiently via gap junctions during differentiation and prior to fusion. 30 We thus hypothesized that bistability and bioelectric domain wall motion might occur during myogenesis.
We performed all-optical electrophysiology experiments in human induced pluripotent stem cell (hiPSC) derived myoblasts as they differentiated into myocyte fibers in vitro (Fig. 4A).
HiPSC myoblasts were seeded at low density, lentivirally transduced to express CheRiff, and then allowed to proliferate to form a confluent monolayer (Methods; Fig. 4A). The cells were then differentiated into myocytes. After one week of differentiation, cells stained positive for myogenin, PAX7, and myosin heavy chain, and adopted an elongated fiber-like morphology, indicative of differentiation toward mature myocytes (Fig. 4B). RNA-seq measurements on matched samples showed a significant increase in Kir2.1 expression during the differentiation process (5.6-fold, p < 0.001) and high expression of the gap junction proteins Cx43 and Cx45 throughout (Methods). We performed voltage imaging under ramped wide-field optogenetic stimulation at two time points during differentiation to test for signatures of electrical bistability in isolated cells and domain wall motion in confluent cultures.
In myoblast precursors that had not yet reached confluence (day 3), we observed heterogeneous responses to ramped optogenetic stimulation: cells showed either a smooth response with saturation-like behavior and little hysteresis (67%, 34 of 51, Fig. 4C left) or a step-wise depolarization which did not reverse upon cessation of the optogenetic stimulus (33%, 17 of 51, Fig. 4C middle). In immature myocytes mechanically dissociated from a confluent culture (day 6, 3 days after the start of differentiation), we observed sub-populations with behavior similar to day 3 (smooth depolarization with no hysteresis: 47%, 42 of 89; step-wise, irreversible depolarization: 29%, 26 of 89). We also observed a new subpopulation comprising cells with closed hysteresis loops that resembled the bi-HEKs (24%, 21 of 89, Fig. 4C right, Supplementary Fig. 8, Methods).
These three seemingly disparate behaviors could all be explained by a simple model containing a leak, a channelrhodopsin, and Kir expression which increased on average between day 3 and day 6 (Fig. 4D, Supplementary Fig. 8). At the lowest Kir level, the I-V curve was monotonically increasing, so channelrhodopsin activation shifted a single stable voltage fixed point along the I = 0 axis. This led to a continuous and reversible change in voltage (Fig. 4E). At intermediate Kir level, the I-V curve was N-shaped and crossed the I = 0 axis three times in the absence of channelrhodopsin activation. Blue light drove step-wise depolarization via a saddle-node bifurcation. The depolarized state remained stable in the absence of optogenetic drive, leading to non-recovering depolarization. At the highest Kir level, the hysteresis curve shifted to the right and the cells repolarized in the absence of optogenetic drive. Thus, a simple model with one tuning parameter captured the three qualitatively distinct single-cell responses to channelrhodopsin activation.
In confluent monolayers at day 6, we used patterned optogenetic stimulation to excite a portion of the tissue. The evoked action potentials propagated beyond the stimulated region, confirming the presence of gap junctional electrical coupling (Supplementary Fig. 9). Under spatially homogeneous ramped optogenetic stimulation, we observed optogenetically induced domain wall propagation (Fig. 4F, Supplementary Movie S6). The presence of domain wall propagation was surprising, considering that only a minority (24%) of the isolated cells were bistable. Simulations showed that, due to strong electrotonic coupling, the global behavior of a tissue could be dominated by a minority of cells expressing Kir2.1 (Supplementary Fig. 6). As in the bi-HEKs, the whole-tissue average voltage showed little hysteresis as a function of optogenetic drive (Fig. 4G), consistent with depolarization via domain wall migration. These observations show that differentiating myoblasts exhibit electrical bistability when isolated, and collective domain wall migration during an essential step of myogenesis.
In contrast to the bi-HEK cells, the myoblasts also supported propagation of regenerative action potential waves. These waves manifested as spikes in the whole-tissue fluorescence during a gradual optogenetic depolarization (Fig. 4G). The additional depolarizing drive associated with these spikes caused the waves to propagate rapidly across the tissue, without disruption from the defects which could pin the motion of domain walls.
Discussion
Quasi-static spatial variations in membrane potential are well known to arise in development (of animals, [31-33] plants 34 and protists 35 ) and in wound healing, 36 and to persist in some mature tissues. 37 In most cases the ion channels responsible for these potentials are not known, but it is generally assumed that the membrane voltage in each patch of tissue is set by the locally expressed ion channels, i.e. that the bioelectrical patterns are 'baked in' to the tissue via conventional morphogen signaling pathways which govern ion channel gene expression. Our work shows that this need not be the case. Electrical instabilities can amplify minute (possibly undetectable) variations in ionic currents in an otherwise homogeneous tissue. Thus the conventional view that patterns of gene expression drive patterns of electrophysiology might, in some cases, be reversed. The pathways by which patterns of membrane voltage could affect patterns of gene expression remain a topic of much current research. 33,38

In vitro, muscle cells must be aggregated to differentiate, a phenomenon called the "community effect". 39 Our results show that electrical coupling can mediate community effects, i.e. that the collective electrical dynamics of coupled cells can be strikingly different from those of the individual cells, even if all cells are identical. Domain wall migration mediates polarization in extended tissues, whereas isolated cells or small clumps must polarize all at once. Consequently, under ramped Kir2.1 expression, an electrically coupled, extended tissue will polarize before an isolated cell or small patch, even if all other conditions are identical. Kir2.1 expression is required for the expression of the myogenic transcription factors Myogenin (MyoG) and Myocyte Enhancer Factor-2 (MEF2). 29 Disruption of gap junction coupling is sufficient to disrupt myogenesis. 40 Together, these observations suggest that bioelectric domain walls might play a functionally important role in myogenesis. This prediction merits further mechanistic tests in cultured myocytes and in vivo. It will be interesting to relate the bioelectrical response properties of developing muscle to the shifts that occur as myocytes fuse and gap junctional coupling diminishes during maturation.
Many combinations of ion channels can produce N-shaped I-V curves and thereby mediate electrical bistability. For instance, the combination of a K+-selective leak current and the steady-state window current of T-type CaV channels mediates plateau potentials in thalamocortical neurons. 41,42 In this case, one would expect electrical bistability to be accompanied by bistability in intracellular Ca²⁺ concentration, which might then couple to downstream biochemical or genetic signaling pathways. The combination of a persistent voltage-gated sodium current and a K+-selective leak current can also drive bistability. Persistent NaV currents have been observed in striated cardiac and skeletal muscle and in many types of mammalian neurons. 43 Persistent sodium and persistent L-type calcium currents contribute to sustained activation in a spinal cord injury model. 44 Finally, persistent sodium currents are thought to play a role in propagating and amplifying the influence of distal synaptic inputs during dendritic integration. 45 In principle, any of these bistable scenarios could produce bioelectrical domain walls in electrotonically extended systems, though we are not aware of any such direct observations.

Gap junctions are necessary for proper formation of many tissues during development, including heart, liver, skin, hair, cartilage, bone, and kidney, 46,47 though the physiological roles of these gap junctions remain unclear. Our work suggests that gap junction-mediated bioelectrical domain wall motion may be an important feature in some of these systems. For instance, chondrocytes express Kir channels, 48 gap junctions, 49 and the ionotropic serotonin receptor 5-HT3A, 50 a nonselective cation channel electrically similar to channelrhodopsin. These conditions suggest that the ingredients are present for regulation of membrane potential via domain wall migration. Wounding in endothelial monolayers has been shown to induce slowly migrating zones of depolarization, 51 suggesting that these cells might also support bioelectrical domain walls. The phenomenon of spreading cortical depression may also reflect a form of electrical bistability, though the underlying mechanism is likely far more complex than the phenomena studied here. 52

The presence of long-lived electrical bistability in tissues could provide a means to couple bioelectric patterns to biochemical and genetic signaling networks. 4-6 The shape of the I-V curve in our experiments is qualitatively captured by a cubic (FitzHugh-Nagumo-type) nonlinearity. 53,54 Models of this sort have been applied to describe similar dynamics (zero-dimensional hysteresis, domain nucleation, growth and disorder-driven breakup) in magnetic domain reversals in ferromagnets, 55 in the spread of forest fires, 56 in phase transitions, 21 and in expanding species ranges with a strong Allee effect. 57 Spontaneous spatial symmetry breaking and pattern formation are well established in neural field theories 58 and in models of cardiac arrhythmias. 59,60 In these systems, however, the membrane voltage varies with time, i.e. the systems are described by the lower right quadrant of Fig. 1B. Our work shows that the reaction-diffusion formalism can be applied to purely spatial symmetry breaking in electrophysiology.
In the Turing model, formation of quasi-periodic patterns requires interaction of two or more morphogens, often described as an activator and an inhibitor. Conductance-based models describe the dynamics of voltage, a scalar quantity. The spatial symmetry breaking studied in this report does not constitute a classical Turing pattern, in that voltage is only a one-dimensional state variable. As a result, the patterns of membrane voltage did not have a characteristic finite spatial frequency. To achieve a classical Turing-like pattern would require coupling of voltage to another diffusible species, e.g. Ca2+. It is not known whether classical Turing-like patterns of membrane voltage can be created via purely electrophysiological means.
Numerical modeling of electrically bistable cells and tissues
In the conductance-based model, the voltage dynamics are governed by the equation:

$$C_m \frac{\partial V}{\partial t} = -\sum_k I_k(V) + G_{\mathrm{Cxn}} \nabla^2 V$$

The gap junction sheet conductance is defined as G_Cxn = g_Cxn × l², where g_Cxn is the gap junction conductance between adjacent cells, and l is the linear dimension of a cell. If g_Cxn is measured as an areal conductance (S/m²), then the units of G_Cxn are S (i.e. sheet conductance, which has no spatial dimension). If C_m is measured as an areal capacitance (F/m²), then the ratio G_Cxn/C_m has units of a diffusion coefficient (m²/s), making explicit the connection between the conductance-based model and the reaction-diffusion equation.
For convenience in simulations, units of space were 10 μm (corresponding to linear size of one cell), units of time were ms, and units of voltage were mV. We assumed the capacitance of a cell was 10 pF. Conductances were measured in nS/pF, ionic currents in pA/pF, and voltage in mV. Parameters used in all simulations are given in the Tables in the Model parameters section of the Methods.
Simulations were run in MATLAB using custom software. Single-cell voltages in 0D were determined by finding fixed points of cell-autonomous current-voltage curves. Extended tissues were numerically simulated as two-dimensional grids of 100 × 100 cells with periodic boundary conditions and one grid-point per cell. Simulations and experiments thus had similar spatial scales. For simulations of nucleation events in homogenous tissues (Fig. 3B), 300 × 300 cell grids with no-flux boundary conditions were implemented to match experimental conditions. The discrete Laplacian was implemented using the MATLAB del2 function (with default spacing) and solutions were time-integrated using Euler's method with 10 kHz sampling.
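The published simulations used custom MATLAB code; the following is a minimal Python sketch of the same Euler time-stepping scheme, with a periodic 5-point stencil standing in for del2. The cubic reaction term is only an illustrative stand-in for the Kir2.1-plus-leak currents (the Discussion notes the I-V curve is qualitatively cubic); all parameter values here are placeholders, not the paper's.

```python
import numpy as np

def laplacian(V):
    """5-point discrete Laplacian with periodic boundary conditions."""
    return (np.roll(V, 1, 0) + np.roll(V, -1, 0) +
            np.roll(V, 1, 1) + np.roll(V, -1, 1) - 4.0 * V)

def i_ion(V):
    """Illustrative cubic (bistable) I-V curve, a stand-in for the
    Kir2.1 + leak currents; stable fixed points near -80 and -10 mV."""
    return 1e-3 * (V + 80.0) * (V + 40.0) * (V + 10.0)

N, dt, steps = 100, 0.1, 10_000   # 100 x 100 cells, 0.1 ms step (10 kHz sampling)
D = 1.0                           # G_Cxn / C_m, in units of cell^2 per ms
rng = np.random.default_rng(0)
V = np.where(rng.random((N, N)) < 0.5, -80.0, -10.0)  # bimodal initial condition

for _ in range(steps):
    # Euler step of C dV/dt = -I_ion(V) + G * Laplacian(V), scaled by C
    V = V + dt * (-i_ion(V) + D * laplacian(V))
```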
The inward rectifying potassium current from Kir2.1 was based on a model from ten Tusscher et al. 11 as:

$$I_K = g_K\, x_{K\infty}(V)\,(V - E_K)$$

with reversal potential at E_K = −90 mV. The parameter x_K∞(V) is a time-independent rectification factor that depends on voltage, following the functional form of the ten Tusscher model. The scaling factor x_0 = 100 was introduced to make x_K∞ of order unity between −90 and −60 mV. In our simulations the conductance magnitude g_K was the only parameter varied to mimic changes in expression of Kir2.1. The variable leak was modeled as an Ohmic conductance with reversal potential 0 mV. For homogeneous tissues all cells had identical Kir2.1 and leak conductances.
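Since the explicit expression for x_K∞(V) is not reproduced above, the following sketch illustrates the 0D fixed-point analysis with a hypothetical sigmoidal rectification factor; the functional form of x_Kinf and the conductance values are assumptions for illustration only, not the paper's parameters.

```python
import numpy as np

E_K = -90.0               # Kir2.1 reversal potential (mV)
g_K, g_leak = 1.0, 0.2    # illustrative conductances

def x_Kinf(V):
    """Hypothetical rectification factor: of order unity between -90 and
    -60 mV, decaying at depolarized potentials. NOT the paper's exact form."""
    return 1.0 / (1.0 + np.exp((V + 60.0) / 10.0))

def i_total(V):
    i_K = g_K * x_Kinf(V) * (V - E_K)   # inward-rectifier current
    i_leak = g_leak * (V - 0.0)         # Ohmic leak, reversal at 0 mV
    return i_K + i_leak

# Fixed points of the cell-autonomous dynamics are zero crossings of I(V);
# an N-shaped curve yields three (two stable, one unstable).
V = np.linspace(-100.0, 20.0, 2401)
I = i_total(V)
crossings = V[:-1][np.sign(I[:-1]) != np.sign(I[1:])]
print("approximate fixed points (mV):", crossings)
```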
To introduce disorder into the tissue, a fraction n_K of the cells were randomly assigned to express Kir2.1 (all with conductance g_K) while the remaining cells had no Kir2.1 expression (Supplementary Figs. 6, 7). Spatial correlations in Kir2.1 expression were introduced by assigning a random number to each cell, independently sampled from a uniform distribution on [0, 1]. The values were then smoothed with a two-dimensional Gaussian kernel of width d. A threshold was selected so that a fraction n_K of the cells were above threshold. These cells were assigned to express Kir2.1 with conductance g_K and cells below threshold did not express. The extent of the spatial correlations was tuned by varying d. In the simulation for Supplementary Figs. 7G,H, Kir2.1 and CheRiff expression were heterogeneous, and independent of each other. The distribution of CheRiff expression was calculated following the same procedure as for Kir2.1, using the same smoothing parameter, d, but different thresholds n_K and n_ChR.
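A minimal sketch of this disorder-generation procedure, assuming SciPy's gaussian_filter for the two-dimensional Gaussian smoothing (the function and argument names here are ours):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def disordered_expression(shape, n_K, d, seed=0):
    """Binary Kir2.1 expression map: uniform random values per cell,
    smoothed with a Gaussian kernel of width d, then thresholded so that
    a fraction n_K of cells expresses."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(size=shape)
    smooth = gaussian_filter(noise, sigma=d, mode='wrap')
    thresh = np.quantile(smooth, 1.0 - n_K)   # top n_K fraction expresses
    return smooth > thresh

expressing = disordered_expression((100, 100), n_K=0.5, d=3.0)
g_K_map = 1.0 * expressing   # conductance g_K where expressing, 0 elsewhere
```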
Cells in simulated tissues were first initialized to their cell-autonomous resting potential, i.e. the resting potential in the absence of influences from the neighbors. Bistable cells were initialized to the fixed point which had the greater area under the curve between it and the unstable fixed point. Tissues were then time-evolved to generate an initial steady state voltage profile. To simulate bioelectrical dynamics under changes in parameters, conductances were gradually changed over timescales slower than any of the internal relaxation dynamics. For simulations of domain boundary velocities in homogeneous tissues with bistable I-V curves ( Supplementary Fig. 2), the left half of the tissue was initialized in a depolarized state and the right half in a hyperpolarized state.
Bi-HEK cell generation and culture
Genetic constructs encoding the inward rectifying potassium channel Kir2.1 and the blue-shifted channelrhodopsin CheRiff were separately cloned into lentiviral expression backbones (FCK-CMV) and then co-expressed in HEK 293T cells along with the lentiviral packaging plasmid PsPAX2 (Addgene) and the envelope plasmid VsVg (Addgene) via polyethylenimine transfection (Sigma). Lentiviral particles were harvested at 36 and 72 hours post-transfection, and then concentrated 20-fold using the Lenti-X concentrator system (Takara).
For experiments where nominally homogeneous expression was the goal (Figs. 2, 3), HEK cells were incubated with both Kir2.1 and CheRiff lentiviral vectors for 48 hours prior to measurement, and then passaged and replated onto poly-D-lysine coated glass-bottom tissue culture dishes (MatTek). For patch clamp measurements (Supplementary Fig. 1), bi-HEKs were plated sparsely onto Matrigel coated dishes. For wide-field measurements (Fig. 3), adhesive islands were prepared by manually spotting poly-D-lysine onto MatTek plates. Bi-HEKs were plated onto these plates to create confluent patches of cells approximately 2 mm in diameter.
For experiments where disordered expression was the goal (Supplementary Fig. 7), Kir2.1 and CheRiff constructs were transiently expressed (using Mirus 293T) in HEK cells which were then grown to confluence for an additional 72 hours prior to measurement. High-disorder samples were not replated prior to measurement.
Immunostaining and imaging
Human iPSC-derived myocyte cultures were fixed for 20 minutes in 4% formaldehyde.
Cultures were rinsed three times in phosphate-buffered saline (PBS), followed by blocking buffer composed of PBS supplemented with 10% fetal bovine serum (FBS) and 0.1% Triton X-100. Primary antibodies were then diluted in blocking buffer and incubated overnight at 4 °C. Cultures were then washed three times with PBST (PBS supplemented with 0.5% Tween-20) and incubated with secondary antibodies conjugated with an AlexaFluor dye (Molecular Probes) and DAPI (5 μg/mL) in blocking buffer for 2 h at room temperature. Cultures were washed with PBST followed by PBS, followed by imaging. Antibodies were: anti-PAX7 (Developmental Studies Hybridoma Bank, DSHB), anti-Myogenin (Santa Cruz, SC-576X) and embryonic anti-MyHC (DSHB, F1.652).
Transcriptomic profiling: Library preparation and sequencing
RNA was extracted from cells using Trizol (Invitrogen) or with the RNeasy Mini Kit (Qiagen). Libraries were prepared using Roche Kapa mRNA HyperPrep sample preparation kits from 100 ng of purified total RNA according to the manufacturer's protocol. The finished dsDNA libraries were quantified by Qubit fluorometer, Agilent TapeStation 2200, and RT-qPCR using the Kapa Biosystems library quantification kit according to manufacturer's protocols. Uniquely indexed libraries were pooled in equimolar ratios and sequenced on two Illumina NextSeq500 runs with single-end 75bp reads by the Dana-Farber Cancer Institute Molecular Biology Core Facilities.
Patch clamp and all-optical electrophysiology
All electrophysiological measurements were performed in Tyrode's solution, containing (in mM) 125 NaCl, 2 KCl, 2 CaCl2, 1 MgCl2, 10 HEPES, 30 glucose. The pH was adjusted to 7.3 with NaOH and the osmolality was adjusted to 305-310 mOsm with sucrose. Prior to measurements, 35-mm dishes were washed twice with 1 mL phosphate-buffered saline (PBS) to remove residual culture media, then filled with 2 mL Tyrode's solution. Spatially resolved optical electrophysiology measurements were performed using a homebuilt upright ultra-widefield microscope. 66 Prior to measurement, cells were incubated with 1 μM BeRST1 dye in phosphate-buffered saline for 30 minutes in a tissue culture incubator. Samples were then washed and prepared in Tyrode's solution immediately before imaging.
Data Analysis and Image Processing
Optical recordings of voltage-sensitive BeRST1 fluorescence were acquired for isolated cells, small clusters of cells (4-12 cells), and extended tissues (>2 mm linear size). Recordings were processed using custom MATLAB software. Briefly, to minimize uncorrelated shot-noise, movies were subjected to 4×4 binning, followed by pixel-by-pixel median filtering in the time domain (9 frame kernel). A background signal was calculated from a cell-free region of the field of view and subtracted from the region containing the cells. Mean sample images were generated by measuring the average fluorescence of the tissue prior to optogenetic stimulation. Functional recordings were divided pixel-wise by this baseline to generate movies of ΔF/F. Plots of voltage-dependent fluorescence were generated by averaging the time-lapse movies over the relevant region of interest (e.g. small clusters for 0D data; localized spots within extended cell culture islands for 2D local measurements; and over entire cell culture islands for 2D mean measurements).
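As a sketch of this processing pipeline (the function name, argument names, and mask-based background estimate are ours, not the original software's):

```python
import numpy as np
from scipy.ndimage import median_filter

def movie_to_dff(movie, bg_mask, n_baseline=100):
    """movie: (T, H, W) raw fluorescence; bg_mask: boolean (H//4, W//4)
    image marking a cell-free region on the binned grid; n_baseline:
    number of pre-stimulation frames used for the baseline image."""
    T, H, W = movie.shape
    Hc, Wc = (H // 4) * 4, (W // 4) * 4
    # 4x4 spatial binning to suppress uncorrelated shot noise
    binned = movie[:, :Hc, :Wc].reshape(T, Hc // 4, 4, Wc // 4, 4).mean(axis=(2, 4))
    # pixel-by-pixel median filtering in the time domain (9-frame kernel)
    filtered = median_filter(binned, size=(9, 1, 1))
    # subtract the background estimated from a cell-free region
    background = filtered[:, bg_mask].mean(axis=1)
    corrected = filtered - background[:, None, None]
    # divide by the pre-stimulation mean image to obtain Delta-F/F
    baseline = corrected[:n_baseline].mean(axis=0)
    return (corrected - baseline) / baseline
```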
Statistical Analysis
Statistical analysis was performed on optical electrophysiology recordings from immature myocyte precursors to assess the significance of population-level differences in electrophysiological phenotypes at different time points. Measurements were performed on populations of isolated cells taken at 3 and 6 days in culture (see hiPSC myoblast and myocyte differentiation methods above). Fluorescent recordings of voltage were acquired for 100 ms prior to illumination, during a 40 s ramp of blue light (increasing from 0 to 10 mW/cm² for 20 s, then decreasing for 20 s), and then for 5 s after the blue light was turned off. Raw acquisitions were converted to ΔF/F for further analysis.
Electrophysiological phenotypes were parameterized by calculating the total endpoint hysteresis and the mean integral hysteresis for each identified cell ( Supplementary Fig. 8). Endpoint hysteresis was defined as the difference in ΔF/F between the 5 s at the end of the acquisition and the 100 ms at the start of the acquisition. Integral hysteresis was defined as the difference between the mean ΔF/F over the decreasing phase of the blue light ramp and the mean ΔF/F during the increasing phase of the blue light ramp (each averaged over the corresponding full 20 s ramp).
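A minimal sketch of these two metrics, assuming a single-cell ΔF/F trace sampled at a known frame rate and the ramp timing described above:

```python
import numpy as np

def hysteresis_metrics(dff, fps, pre_s=0.1, ramp_s=20.0, post_s=5.0):
    """Endpoint and integral hysteresis for one cell's Delta-F/F trace.
    Timing follows the protocol above: a pre-stimulus window, a 20 s
    up-ramp, a 20 s down-ramp, then a 5 s post-stimulus window."""
    pre = int(pre_s * fps)
    up = slice(pre, pre + int(ramp_s * fps))              # increasing blue light
    down = slice(up.stop, up.stop + int(ramp_s * fps))    # decreasing blue light
    post = slice(down.stop, down.stop + int(post_s * fps))
    # endpoint: end-of-acquisition mean minus pre-stimulus mean
    endpoint = dff[post].mean() - dff[:pre].mean()
    # integral: mean over the down-ramp minus mean over the up-ramp
    integral = dff[down].mean() - dff[up].mean()
    return endpoint, integral
```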
In each sample, individual cells which responded to CheRiff stimulation were first manually identified via an overall mean ΔF/F image. A total of 51 and 89 cells were identified in the day-3 and day-6 measurements, respectively. Endpoint hysteresis and mean hysteresis were then calculated for each identified cell, and a cell was sorted as hysteresis-positive if it demonstrated a value greater than 0.3 for either measure (Supplementary Fig. 8). Cells which showed endpoint hysteresis > 0.3 were sorted into the endpoint hysteresis cluster, regardless of their integral hysteresis value. Cells with integral hysteresis > 0.3 and endpoint hysteresis < 0.3 were sorted into the integral hysteresis cluster.
Model parameters
Default model parameters were as follows:
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.

Figure 1. A) In chemical Turing patterns, a nonlinear chemical reaction coupled to diffusion, ∂q/∂t = R(q) + D∇²q, leads to spontaneous formation of stable concentration patterns from homogeneous initial conditions, similar to ones seen in Nature. Here q = (q_1, q_2) is the vector of reagent concentrations, R(q) is the nonlinear relation between concentration and reaction rate, and D is the vector of diffusion coefficients. B) Conductance-based models, C ∂V/∂t = −Σ_k I_k + G_cx ∇²V, have the same structure as the Turing equation. Here V is the membrane voltage, C is the membrane capacitance, I_k is the current through the k-th ion channel, and G_cx is the connexin conductivity. The chart shows possible solutions to an initially homogeneous conductance-based model, classified by variation in space and time. Spontaneous patterns that vary in space but not in time are a little-explored possibility in electrophysiology. Images of natural and simulated patterns adapted from Wikipedia and Kondo et al. 3
Modeling and Learning Constraints for Creative Tool Use
Improvisation is a hallmark of human creativity and serves a functional purpose in completing everyday tasks with novel resources. This is particularly exhibited in tool-using tasks: When the expected tool for a task is unavailable, humans often are able to replace the expected tool with an atypical one. As robots become more commonplace in human society, we will also expect them to become more skilled at using tools in order to accommodate unexpected variations of tool-using tasks. In order for robots to creatively adapt their use of tools to task variations in a manner similar to humans, they must identify tools that fulfill a set of task constraints that are essential to completing the task successfully yet are initially unknown to the robot. In this paper, we present a high-level process for tool improvisation (tool identification, evaluation, and adaptation), highlight the importance of tooltips in considering tool-task pairings, and describe a method of learning by correction in which the robot learns the constraints from feedback from a human teacher. We demonstrate the efficacy of the learning by correction method for both within-task and across-task transfer on a physical robot.
INTRODUCTION
The abundant use of tools for a large range of tasks is a hallmark of human cognition (Vaesen, 2012). Design of new tools for accomplishing novel tasks, as well as improvisation in the absence of typical tools and use of tools in novel ways, are characteristics of human creativity. Consider for example, the design of a paperweight to hold a sheaf of papers, or the use of a paperweight to hammer in a nail if an actual hammer is not available. Both require reasoning about complex relationships that characterizes human cognition and creativity (Penn et al., 2008): The latter task, for instance, requires reasoning about the relationships among the force required to hammer in a nail, the surface of the nail's head, the surface of the paperweight bottom, the weight of the paperweight, and so on.
A robot situated in human society will also encounter environments and tasks suited for human capabilities, and thus it is important for a robot to be able to use human tools for human tasks (Kemp et al., 2007). While a robot may learn to complete a new task with a new tool via demonstrations by a human teacher (Argall et al., 2009;Rozo et al., 2013), the demonstration(s) provided for that tool cannot prepare the robot for all variations of that tool it is likely to encounter. These variations can range from different tool dimensions (e.g., different sized spoons, hammers, and screwdrivers) to tool replacements when a typical tool is not available (e.g., using a measuring cup instead of a ladle, or a rock instead of a hammer). An additional challenge is that tools are often used to manipulate other objects in the robot's environment. Given that the shape of a tool alters its effect on its environment (Sinapov and Stoytchev, 2008), a tool replacement may necessitate a change in the manipulation of that tool in order to achieve the same task goal (Brown and Sammut, 2012).
One aim of developing creative robots is to enable robots to exhibit creative reasoning in a similar manner as humans in order to enhance human-robot collaboration. Recently, Gubenko et al. (2021) have called for an interdisciplinary approach that synthesizes conceptual frameworks from diverse disciplines such as psychology, design, and robotics to better understand both human and robot creativity. In human cognition, creative reasoning is exemplified by improvised tool use; particularly, our ability to use analogical reasoning to identify replacement tools or methods that may be used to achieve the original goal, as well as reason over the differences between the original and replacement approaches in order to adapt the replacement to the task (Goel et al., 2020). In design, for example, there is the notion of intrinsic functions and ascribed functions (Houkes and Vermaas, 2010): In the latter, the user can use the object or tool for an ascribed function. Our goals for creative robots are similar: to be able to reason over the suitability of possible tool replacements when the original tool is unavailable, and reason over how the robot's execution of the task must be adapted for the replacement tool.
There are several key challenges in enabling robots to creatively use new tools. First, the robot must explore novel tool replacements that support the task constraints. Second, the robot must be able to evaluate a novel tool's suitability for a particular task, which involves learning a model of the interactions between the robot's gripper, the tool, objects in the robot's environment that are manipulated by that tool, and how those interactions affect the completion of the task goals. Finally, the robot must adapt its task model to the novel tool in order to fulfill these constraints. Prior work has addressed these first two challenges by constructing or identifying creative tool replacements (Choi et al., 2018; Sarathy and Scheutz, 2018; Nair and Chernova, 2020). In this paper, we identify and model the tooltip constraints that play a role in all three of these challenges. In particular, we focus on the third challenge of adapting a robot's task model to a novel tool. The contributions of this paper are as follows:

1) An exploratory analysis of the manipulation constraints that must be fulfilled when using a tool to complete three tasks in simulation.
2) Two models that represent the relationship between the orientation and position constraints when manipulating a tool.
3) An algorithm for training these models using interaction corrections provided by a human teacher, first proposed in Fitzgerald et al. (2019).
4) A discussion of the generalizability of these models when applied to new tools and/or tasks.
We organize the rest of this paper as follows. Section 2 presents a summary of related work in cognitive science, computational creativity, and robotic tool use. Section 3 defines the tool transfer problem in terms of constraints on the tooltip pose, which we then explore in Section 4 via an extensive evaluation of the effect of tooltip perturbations on task performance in simulation. In Section 5, we discuss how a robot may learn these constraints through corrections provided via interaction with a human teacher. Finally, we summarize this paper in Section 6.
Defining Creative Reasoning
What does it mean for a robot to be "creative"? Prior work in creative robotics has often fallen under one of two categories of creativity: 1) Producing a creative output involving creative domains such as music (Gopinath and Weinberg, 2016) and painting (Schubert and Mombaur, 2013), or 2) Invoking a creative reasoning process. Within the latter category, several criteria for creative reasoning have been proposed, such as autonomy and self-novelty (Bird and Stokes, 2006), in which the robot's creative output is novel to itself but not necessarily to an outside observer. Another definition of a creative reasoning process is one that emphasizes both the variation of potential solutions considered by the agent, as well as the process used to consider and select from those options (Vigorito and Barto, 2008).
Creative reasoning may also be defined in an interactive setting. Co-creativity is a process for creative reasoning in which an agent interacts with a human to iteratively improve upon a shared creative concept. In doing so, co-creativity fosters creative reasoning and may improve the quality of the resulting output (Yannakakis et al., 2014). In prior work, we have defined co-creative reasoning in the context of a robot that collaborates with a human teacher to produce novel motion trajectories, while also aiming to maximize its own, partial-autonomy (Fitzgerald et al., 2017). In the context of a robot reasoning over how it may execute a task in a new environment, this co-creative process allows the robot to obtain the contextual knowledge needed to adapt its task model to meet the constraints of the novel environment.
Creative reasoning has been defined in other relevant domains, such as design creativity. Analogical reasoning is said to be a fundamental process of creativity in design (Goel, 1997). In design by analogy, a new design is created by abstracting and transferring design patterns from a familiar design to a new design problem, where the design patterns may capture relationships among the abstract function, behavior, structure, and geometry of designs. Design also entails discovery of problem constraints (Dym and Brown, 2012) including making implicit constraints in a design problem more explicit (Dabbeeru and Mukerjee, 2011). Fauconnier and Turner (2008) introduced conceptual blending as another process for creative reasoning. This approach addresses analogical reasoning and creativity problems by obtaining a creative result from merging two or more concepts to produce a new solution to a problem. Abstraction is enabled by mapping the merged concepts to a generic space, which is then grounded in the blend space by selecting aspects of either input solution to address each part of the problem. Applied to a robotic agent that uses this creative process to approach a new transfer problem, the robot may combine aspects of several learned tasks to produce a new behavior.
Overall, these methods for creative reasoning highlight two important components of creative reasoning: The exploration of novel solutions to a problem, and an evaluation of each candidate solution's effectiveness. Prior work in creative reasoning (e.g., analogical reasoning, interactive co-creativity, and conceptual blending) have addressed these challenges, but not yet in the context of creative tool use by an embodied robot. This domain requires additional considerations, in that it is grounded in a robot's action and perception (Fitzgerald et al., 2017). First, the robot has imperfect perception of its environment and/or tools, and thus may not have a complete model of the tool(s) it may use. Second, its solution must be in the form of a motion trajectory that utilizes the tool to achieve the task goals. As a result, not only is the choice of tool a creative one, but the usage of that tool is creative as well. We now review relevant literature that addresses these challenges within the robotic tool use domain.
Identifying Novel Tool Candidates
Existing work typically focuses on identifying the affordances of potential tool candidates. Affordances represent the "action possibilities" that result from the relationship between an object and its environment (Gibson, 1979). Once the affordances of candidate tools have been identified, a robot can reason over the most suitable tool for a particular task and integrate it into its motion plan (Agostini et al., 2015;Choi et al., 2018). However, identifying tool affordances is a non-trivial challenge. Recent work in computer vision has applied deep neural networks to this problem in order to visually predict the affordances for a particular tool (Do et al., 2018). The UMD Part Affordance Dataset (Myers et al., 2015) is intended to support further work on visual affordance detection. This dataset contains RGB-D images for 105 tools, grouped into 17 object categories. Each tool is photographed at roughly 75 orientations, each of which corresponds to a pixel-wise labeling according to 7 possible affordances (e.g., cutting, grasping, pounding). Other, physics-based features such as the dimensions or material of an object may also be used to judge their effectiveness as potential tools, such as when identifying a pipe as a makeshift lever to pry open a door (Levihn and Stilman, 2014). Prior work has shown that, in addition to using demonstrations to learn a task, a robot may also use demonstrations to learn to recognize the affordance-bearing subparts of a tool such that it can identify them on novel objects (Kroemer et al., 2012).
When a suitable tool replacement is not already available in the robot's environment, it may be necessary to assemble one (Sarathy and Scheutz, 2018). Choi et al. (2018) extends the ICARUS cognitive architecture to assemble virtual tools from blocks. Nair et al. (2019) describes a method for tool construction by pairing candidate tool parts and then evaluating each pair by the suitability of the shape and attachability of the two parts. Later work (Nair and Chernova, 2020) integrates this process into a planning framework such that the task plan includes both the construction and use of the required tool.
While candidate tool identification is not the focus of this article, it is an essential step in our eventual goal of creative tool use. Overall, prior work on this topic demonstrates the taskspecific requirements for identifying novel tool candidates, and the importance of identifying the salient features of a tool within the context of the current task. We now consider how these features affect the tool's suitability when evaluating them for a particular task.
Evaluating Novel Tool Candidates
The shape of a tool alters its effect on its environment (Sinapov and Stoytchev, 2008), and thus a tool replacement may necessitate a change in the manipulation of that tool in order to achieve the same task goal (Brown and Sammut, 2012). For tasks involving the use of a rigid tool, the static relationship between the robot's hand and the tooltip is sufficient for controlling the tool to complete a task (Kemp and Edsinger, 2006;Hoffmann et al., 2014). These methods assume a single tooltip for each tool, and that this tooltip is detected via visual or tactile means. For tasks involving multiple surfaces of the tool, the task model can be explicitly defined with respect to those segments of the tool, and repeated with tools consisting of similar segments (Gajewski et al., 2018). However, this assumes a hand-defined model that represents the task with respect to pre-defined object segments, and that these object segments are shared across tools. Given enough training examples of a task, a robot can learn a success classifier that can later be used to self-supervise learning task-oriented tool grasps and manipulation policies for unseen tools (Fang et al., 2018). We similarly aim to situate a new tool in the context of a known task, but eliminate the assumptions that 1) the new tool is within the scope of the training examples (which would exclude creative tool replacements) and 2) that the tool features relevant to the task are observable and recorded by the robot.
Adapting Task Models to Novel Tools
The aim of transfer learning for reinforcement learning domains is typically to use feedback obtained during exploration of a new environment in order to enable reuse of a previously learned model (Taylor and Stone, 2009). In previous work, we have shown how interaction can be used to transfer the high-level ordering of task steps to a series of new objects in a target domain (Fitzgerald et al., 2018). Similarly, the aim of one-shot learning is to quickly learn a new task, often improving learning from a single demonstration by adapting previous task knowledge. Prior work in this space focuses on learning a latent space for the task in order to account for new robot dynamics (Srinivas et al., 2018) or new task dynamics (Fu et al., 2016;Killian et al., 2017). "Metalearning" approaches have succeeded at reusing visuomotor task policies learned from one demonstration (Chelsea et al., 2017) and using a new goal state to condition a learned task network such that it can be reused with additional task objects (Duan et al., 2017). We address the problem of a robot that has not yet been able to explore these relationships, aiming to enable rapid adaptation of a task model for unseen task/parameter relationships. The tool transform models learned by our approach are not specific to any task learning algorithm or representation, and thus can compliment or bootstrap methods for reinforcement, one-shot, and meta learning.
Summary of Related Work
Through prior work, we have identified three key steps for creative tool use: Exploring novel tools, evaluating novel tools, and adapting task models to novel tools. These stages are not entirely separable from each other, as evaluating reflects how well the robot anticipates being able to adapt its task model for a particular tool, and exploration results in a set of tools that meet some criteria such that they may be evaluated in the context of the task. A common theme through all three steps is the importance of constraints (e.g., tool shape, segments, or visual features) that dictate how a task model may be adapted to a particular tool, and as a result, play a role in the exploration and evaluation steps as well.
In the rest of this paper, we focus on this challenge of identifying and modeling constraints, and demonstrate how these constraints may be used in the evaluating and adapting steps of creative tool use. While we do not explicitly address creative tool exploration, we aim for this work to support future research on identifying these constraints visually to enable this exploration.
TOOLTIPS AS CONSTRAINTS
Suppose that a robot has learned a trajectory $T_a = [p_a^{(0)}, p_a^{(1)}, \ldots, p_a^{(n)}]$ consisting of end-effector poses $p_a^{(i)}$ for a particular task using tool a, and now must complete the same task using a different tool b. Our goal is to transform each pose individually for tool b. Representing an original pose for tool a in terms of its 3 × 1 translational vector t_a and 4 × 1 rotational vector r_a, we transform it into a pose p_b for tool b as follows:

$$p_b = \left(\, t_a + \hat{t},\;\; r_a \cdot \hat{r} \,\right)$$

Here, $r_a \cdot \hat{r}$ refers to the Hamilton product between the two quaternions. This definition relies on a known transform between tools a and b, which requires knowledge of the appropriate "reference" point for both tools such that their transform can be computed. Neither reference point is initially known by the robot, however, nor can it be extracted from the trajectory, which is represented according to the robot's end-effector and not according to any point on the tool itself.
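A minimal sketch of this per-pose transform, assuming a (w, x, y, z) quaternion convention (the convention and function names are ours); the Hamilton product is written out explicitly:

```python
import numpy as np

def hamilton(q1, q2):
    """Hamilton product of two quaternions in (w, x, y, z) convention."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def transform_pose(t_a, r_a, t_hat, r_hat):
    """Map a pose for tool a to a pose for tool b: translate by t_hat and
    compose the rotation with r_hat via the Hamilton product."""
    return t_a + t_hat, hamilton(r_a, r_hat)
```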
Identifying the "reference point" for a tool is non-trivial. While prior work has addressed the problem of identifying affordance regions of a tool, these regions are too broad to characterize the transform between two tools. Figure 1 illustrates examples of these labeled affordance regions based on the UMD Part Affordance Dataset (Myers et al., 2015). While this dataset is relevant to identifying similar regions on two separate tools, it does not address the problem of specifying the equivalent points of a tool that may be used to transform the trajectory for a particular task from one tool to another. For example, the full blade of a knife may be labeled as enabling the "cutting" affordance ( Figure 1), even though a cutting task is likely to be performed with respect to only the edge of the blade. Furthermore, since affordance data is presented in the form of pixel-wise image labels, it does not provide any data concerning the kinematic implications of using this tool. Since the tool is observed and labeled from a static, overhead perspective, affordance data is only available along a single 2D plane, and thus does not indicate the orientation at which each affordance is or is not valid. This is essential for manipulating the tool properly; even if the robot were to determine that the relevant surface of a knife is located along the edge of its blade, the blade must still be oriented carefully with respect to the cutting target for the task to be completed successfully. We refer to the acting surface of the tool (e.g., a singular point along the edge of the knife blade, or a singular point on a mallet's pounding surface) as a tooltip that is defined by a pose containing both the position and orientation of that tooltip. In summary, we expect that successful task completion relies on the robot having a model of the composite transform between 1) the end-effector, 2) its grasp of the tool (highlighted in red in Figure 1), and 3) the tooltip position and orientation.
FIGURE 1 | Affordance regions may be broad, spanning multiple possible tooltips. As a result, predicting the affordance region is not sufficient to plan with respect to that tool's tooltip. For example, the full blade surfaces of the saw and knife are labeled as enabling the "cutting" affordance (highlighted in green) and the "grasping" affordance (highlighted in red); however, cutting is only performed using the edge of the blade, and requires that the blade be oriented toward the cutting target. Similarly, different points of a hammer head may enable different tasks (e.g., pounding versus prying), and thus detecting a task-independent affordance region (highlighted in purple) is not sufficient to plan a task trajectory.
While we may mathematically represent a tooltip as a singular pose, practically, however, there are likely many possible tooltips that may lead to successful task execution. Additionally, the constraint over the tooltip may also differ depending on the context in which it is used: The orientation of a hammer is constrained along two axes when hammering a nail, but the hammer may still be rotated around the nail (e.g., its "yaw" rotation) without affecting task performance. This example supports the notion of a one-to-many relationship between 1) a tooltip and 2) the tool poses that enable that tooltip to be used.
In the remainder of this paper, we explore this one-to-many relationship. In Section 4, we demonstrate how a single tooltip can be expanded into a set of effective tool poses, thus highlighting the challenges of learning tooltip constraints. In Section 5, we consider this relationship in the opposite direction, and present two models for deriving a single tooltip from a set of valid poses demonstrated by a human teacher.
CHARACTERIZING TOOL CONSTRAINTS
We first explore the effect of tooltip constraints by expanding a single tooltip into a set of tool poses that result in successful task execution. To do so, we transform a trajectory that results in successful task execution (and thus the tooltip is implicitly defined) such that the tooltip's trajectory is perturbed slightly. In doing so, we can evaluate the effect of that perturbation on task performance, and ultimately model the constraints that dictate which poses result in successful use of the tooltip.
In this section, we address two key research questions: 1) How do changes in tool pose affect task performance? 2) How do the constraints on tool pose differ across tools and/or tasks?
Evaluating Tool-Task Constraints in Simulation
We address these research questions by evaluating the performance of a large set of trajectory perturbations using a simulated 7-DOF Kinova Gen3 robot arm situated on a round table in a Gazebo simulated environment. We evaluated the effect of trajectory perturbations on three tools: a hammer, a mug, and a spatula (Figure 2). We fixed the robot's grasp as a static transform between the robot's gripper and the tool, and thus did not evaluate the effects of the robot's grasp strength or stability on tool use. For each tool, we provided a demonstration of three tasks: hooking (Figure 3A), lifting (Figure 3B), and sweeping (Figure 3C). Each demonstration was provided in a Gazebo simulator as a set of end-effector keyframes. Depending on the tool being demonstrated, this resulted in 5-7 keyframes for hooking, 4-6 for lifting, and 13-18 for sweeping. These end-effector keyframes were then converted to keyframe trajectories represented in the robot's joint-space. We used the MoveIt (Coleman et al., 2014) implementation of the RRTConnect planner to plan between joint poses during trajectory execution.

We simulated a trajectory perturbation by altering the rigid transform between the robot's gripper and the tool itself, according to a pre-determined set of position and orientation alterations that are consistent across all tools and tasks. As a result, each trajectory perturbation is identical with respect to the robot's end-effector, but differs with respect to the trajectory of the tool itself. This allowed us to use the same joint-space trajectory for all perturbations of a single tool-task pairing, thus reducing the likelihood of planning errors across all perturbations and also minimizing any changes in the robot's joint motion that might affect task performance. Despite the same trajectory being executed across all perturbations of a single tool-task pairing, planning errors may still occur when a perturbation results in the tool colliding with its environment, thus preventing the rest of the trajectory from being executed. Each perturbation resulted from a unique permutation of changes applied to the tool's demonstrated position along the x, y, and z axes and demonstrated orientation along the roll, pitch, and yaw axes. The tool's x, y, and z positions were each configured at one of three distances from the demonstrated tool position: [−0.01, 0, 0.01] meters. The tool's roll, pitch, and yaw rotations were each configured at one of three angles from the demonstrated tool orientation: [−π/16, 0, π/16] radians. These position and orientation perturbations were empirically chosen such that, when combined, their effect on task performance can be observed on a spectrum. We observed that larger ranges of pose or orientation changes would be less likely to result in completion of any aspect of the task, whereas smaller ranges may not fully explore the range of successful perturbations. However, as we note later in Section 4.3, we observe that different tools vary in their sensitivity to these perturbations, and thus a more fine-grained set of perturbations should be explored in future work.
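For concreteness, the perturbation grid described above can be enumerated directly; this minimal sketch reproduces the counting (the offsets are taken from the text):

```python
import itertools
import numpy as np

dpos = [-0.01, 0.0, 0.01]               # meters, applied along x, y, z
drot = [-np.pi / 16, 0.0, np.pi / 16]   # radians, applied to roll, pitch, yaw

# Every permutation of the three position and three orientation offsets:
perturbations = list(itertools.product(dpos, dpos, dpos, drot, drot, drot))
assert len(perturbations) == 3 ** 6     # 729 perturbations per tool-task pair
```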
Overall, the permutation of these configurations resulted in a total of 3^6 = 729 perturbations for each tool-task pairing. We executed each perturbation twice in simulation (to account for the non-deterministic effects of the simulator dynamics) and recorded the average performance of the two trials, with performance being measured according to task-specific measures. All performance metrics were scaled to a 0-1 range. In the hooking task, performance was measured as the distance (in meters) between the box and the robot's base, with less distance correlating to higher performance. The initial and goal states of this task are shown in Figure 3A. In the lifting task, the robot's performance was measured as the green bar's height above the table (in meters). A small number of trials resulted in the bar being removed from the support structure entirely. In these cases, we recorded the performance as that of the task's initial state (i.e., a failure case). Figure 3B shows the initial and goal states of this task. In the sweeping task, performance is measured as the number of spheres that were swept off the table, with maximum performance being 16 spheres. The initial and goal states of this task are shown in Figure 3C.
Results
Our evaluation measured how sensitive each tool-task pairing is to perturbations of the tooltip's trajectory: The more sensitive the tool-task pairing is to perturbations, the more likely that a perturbation will lead to a task failure. Low task performance may be caused by the tooltip no longer contacting any relevant objects in the task (and thus leaving the task in its initial state), or by collisions between the tool's new configuration and its environment that prevent the robot from executing the full trajectory. We set a threshold performance of 0.05 (on a 0-1 scale), and report the percentage of perturbations that fail to exceed this threshold in Figure 4.

FIGURE 5 | Performance distributions over all tool-task pairings, with all trials with performance ≤ 0.05 excluded. X- and Y-axes are consistent across all graphs.
We include only the set of perturbations that exceed this threshold in the histograms in Figure 5, which illustrate the performance distributions over that set. Since the original, unperturbed pose is already known to achieve near-optimal task performance, these graphs illustrate how many perturbations of that original pose still fulfill the tooltip constraints and result in high performance (i.e., the perturbations resulting in the peak observed near x ≈ 1.0 on each graph). We report the mean and variance over these performance results in Figure 4B. Figure 6 shows the distribution over the mean performance over all three tasks; that is, the performance metric for each perturbation is the average of its performance on the sweeping, hooking, and lifting tasks. We again only consider datapoints above a performance threshold > 0.05 in order to focus on the set of valid tooltip constraints for each tool.
Discussion
Research Question #1: How do changes in tool pose affect task performance? The relationship between performance and tool pose may be non-linear. If this relationship were linear, we would expect Figure 5 to primarily contain Gaussian-like performance distributions, such that as the robot evaluates trajectory perturbations further from the original trajectory, its performance resulting from those perturbations decreases proportionally. While this is the case in some tool-task pairings (e.g., all tools used for the sweeping task, and the lifting task using the hammer), other performance distributions appear to be bimodal in nature (e.g., using the hammer in the hooking task or using the spatula for lifting) or contain several peaks (e.g., using the mug for hooking). This suggests that there is a non-linear relationship between changes in the tool pose, and its resulting effects on task performance. Note that in our evaluation, we applied trajectory perturbations according to the single tooltip that was demonstrated for each tool-task pairing. An opportunity for future research is the identification of alternate tooltips based on the tool's shape or structure.
Research Question #2: How do the constraints on tool pose differ across tools and/or tasks? Tools differed in their sensitivity to pose changes. For example, using the spatula tool resulted in the highest percentage of failed trials (35.11-35.8%) across all three tasks, while the mug resulted in the lowest (3.29-4.25%) across all three tasks. One hypothesis for this performance difference is that since the mug was the smallest tool, changes in the tool pose had a smaller effect on its tooltip pose in comparison to the taller tools (spatula and hammer). We observed widely varying failure rates when using the hammer, ranging from 9.19 to 10.01% on the hooking and sweeping tasks, respectively, and 45.27% on the lifting task. One reason for this performance difference may be that a different tooltip was used for the lifting task compared to the hooking and sweeping tasks. In the former, the robot uses a "corner" of the hammer to lift the bar ( Figure 3B), whereas the hooking and sweeping tasks use a wider surface area of the hammer as a tooltip. This may provide more tolerance to pose perturbations. Overall, this suggests that the sensitivity of tooltip constraints depends on the surface of the tool being used. Figure 6 also supports this hypothesis. These distribution graphs reflect the consistency in tooltip constraints across tasks. While the geometry of the tool itself remains constant across tasks, the same tooltip is not necessarily used across tasks (e.g., using separate surfaces of the hammer for sweeping vs lifting). The reduced performance shown in these graphs (in comparison to Figure 5) indicates that the tooltip constraints applied to one task may not be generalizable to other tasks using the same tool.
We now consider the challenge of how a robot may quickly learn these constraints in the context of a new tool, and whether we can model the instances in which a robot can reuse a learned tooltip model in the context of another task. While a robot can learn to use a tool through demonstrations, the one-to-many mapping between tooltip constraints and the set of tool poses that meet those constraints means that there are many possible demonstrations that a robot may receive for a tool/task pairing. Learning the underlying tool constraint is therefore a challenge, as the teacher is providing demonstrations that sample from an unknown, underlying relationship between the end-FIGURE 6 | Mean performance distributions using each tool for all tasks, with all trials with mean performance ≤ 0.05 excluded. X-and Y-axes are consistent across all graphs.
Frontiers in Robotics and AI | www.frontiersin.org November 2021 | Volume 8 | Article 674292 8 effector and the tooltip. In the next section, we explore how a robot can utilize corrections in order to model and learn the underlying tooltip constraint.
LEARNING CONSTRAINTS FROM INTERACTIVE CORRECTIONS
In the previous section, we evaluated the one-to-many mapping between tooltip constraints and end-effector poses that meet those constraints. In order to adapt the robot's task model to a novel tool, however, we also need to analyze this mapping in the reverse direction: inferring the underlying tooltip constraint that has resulted in a set of corresponding end-effector poses.
We address this challenge in the context of a robot that learns from demonstrations by a human teacher who is familiar with the task and tool that the robot aims to use. By comparing two trajectories, each using a separate tool to complete the same task, we aim to model the relationship between the two tooltip constraints such that it can be reused in the context of another task.
While a robot can quickly receive demonstrations (Argall et al., 2009; Chernova and Thomaz, 2014) using a new tool, these demonstrations may not be sufficient to learn the underlying tooltip constraints. Due to the unstructured nature of task demonstrations, the two demonstrations (each provided using a different tool) may vary in ways that do not reflect how the task should be adapted based on which tool is used. For example, the teacher may choose a different strategy for completing the task with the second tool, or the robot may be starting from a new arm configuration when the teacher demonstrates the task with the second tool. For these reasons, we utilize corrections of the robot's behavior, which have been shown to be an effective interface for adapting a previously-learned task model (Argall et al., 2010; Sauser et al., 2012; Bajcsy et al., 2018). Rather than have the teacher provide a new demonstration using the new tool, the robot attempts to complete the task on its own and is interrupted and corrected by the teacher throughout its motion. As a result, this interaction results in a series of correction pairs, where each pair represents the robot's originally-intended end-effector pose and its corresponding, corrected pose that was indicated by the teacher.
Our research questions are as follows: 1) How can we model a tooltip constraint using data provided via sparse, noisy corrections? 2) Under what conditions can the tooltip constraints learned from corrections on one task be used to adapt other task models to the same replacement tool? What characteristics of the tool and task predict whether a previously-learned tooltip constraint can be applied?
In the following sections, we address these research questions using the Transfer by Correction algorithm, which we first described in Fitzgerald et al. (2019).
Problem Definition
We assume that each demonstration consists of a series of keyframes (Akgun et al., 2012). The robot receives corrections by executing a trajectory planned using the original task model, pausing after a time interval defined by the keyframe timings set during the original demonstration. The teacher then moves the robot's gripper to the correct position, after which the robot resumes task execution for the next time interval, repeating the correction process until the entire task is complete. Each resulting correction at interval i consists of the original pose $C_a^i$ (using tool a) and the corrected pose $C_b^i$ (using new tool b) at keyframe i. A collection of K corrections (one for each of K keyframes) results in a K × 2 correction matrix:

$$C = \begin{bmatrix} C_a^{1} & C_b^{1} \\ \vdots & \vdots \\ C_a^{K} & C_b^{K} \end{bmatrix}$$

Each corrected pose $C_b^i$ provides a sample of the transfer function value with the original pose $C_a^i$ at keyframe i as input, plus some amount of error from the optimal correction pose:

$$C_b^i = \phi(C_a^i) + \epsilon$$

We assume ϵ is sampled from a Gaussian noise model for each axis n ∈ [1...6] of the 6D end-effector pose. Our aim is to learn a transfer function ϕ that optimally reflects the tooltip constraints, using a correction matrix C.
Approach: Transfer by Correction
Given a task trajectory T for tool a consisting of a series of t poses in task space such that $T = [p_0, p_1, \ldots, p_t]$, we transform each pose individually for tool b. Representing an original pose for tool a in terms of its 3 × 1 translational vector t_a and 4 × 1 rotational vector r_a, we transform it into a pose p_b for tool b as follows:

$$p_b = \left(\, t_a + \hat{t},\;\; r_a \cdot \hat{r} \,\right)$$

Here, $r_a \cdot \hat{r}$ refers to the Hamilton product between the two quaternions. The goal is now to estimate the optimal rotational $\hat{r}$ and translational $\hat{t}$ transformation components from the corrections matrix C, and then apply these transformations to the trajectory T. Our approach addresses this goal by 1) modeling C, particularly the relationship between each correction's translational and rotational components, 2) sampling a typical translational transformation $\hat{t}$ and rotational transformation $\hat{r}$ from this transform model, and 3) applying $\hat{t}$ and $\hat{r}$ to transform each pose in the task trajectory according to the transform above.
Task Constraints
We observe that corrections indicate constraints of the tooltip's position and/or orientation, and that these constraints are reflected in the relationship between the translation and rotation components of each correction. Broadly, each correction may primarily indicate:

• An unconstrained point in the trajectory, which should therefore be omitted from the tool transform model.
• An orientation constraint, where the rotation of the tooltip (and thus the end effector) is constrained more than its position (e.g., hooking a box is constrained more by the orientation of the hook than its position, as in the left of Figure 7).
• A center-of-rotation constraint, where the position of the tooltip is constrained more than its rotation (e.g., sweeping a surface with a brush). Note that the tooltip position is the center of this constraint rather than the end-effector itself, and thus the range of valid end-effector poses forms an arc around the tooltip, and its orientation remains angled toward the tooltip (e.g., Figure 7B).
We define two tool transform models, first presented in Fitzgerald et al. (2019), each reflecting either orientation or center-of-rotation constraints. We fit the corrections matrix to each tool transform model, using RANSAC (Fischler and Bolles, 1981) to iteratively estimate the parameters of each model while discarding outlier and unconstrained correction data points. Each iteration involves 1) Fitting parameter values to a sample of n datapoints, 2) Identifying a set of inlier points that also fit those model parameters within an error bound of ϵ, and 3) Storing the parameter values if the inlier set represents a ratio of the dataset > d. The RANSAC algorithm relies on a method for fitting parameters to the sample data, and a distance metric for a datapoint based on the model parameters. These are not defined by the RANSAC algorithm, and so we specify the parameterization and distance metric according to the tool transform model used, which we describe more in the following sections. We define an additional method to convert the best-fitting parameters following RANSAC completion into a typical transform that can be applied to poses.
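A generic sketch of this RANSAC loop, with the model-specific parameter-fitting and error functions passed in as callables (the names and structure here are ours, not the original implementation's):

```python
import numpy as np

def ransac(data, fit, error, n, eps, d, k, seed=0):
    """Generic RANSAC as described above. fit(sample) -> params;
    error(point, params) -> scalar. Returns the best params and inliers."""
    rng = np.random.default_rng(seed)
    best_params, best_inliers = None, []
    for _ in range(k):
        idx = rng.choice(len(data), size=n, replace=False)
        params = fit([data[i] for i in idx])                   # 1) fit the sample
        inliers = [p for p in data if error(p, params) < eps]  # 2) collect inliers
        # 3) store the parameters if the inlier ratio exceeds d
        if len(inliers) / len(data) > d and len(inliers) > len(best_inliers):
            best_params, best_inliers = params, inliers
    return best_params, best_inliers
```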
Linear Tool Transform Model
Based on the orientation constraint type, we first consider a linear model for correction data, where corrections fitting this model share a linear relationship between the translational components of the corrections, while maintaining a constant relationship between the rotational components of corrections ( Figure 8A). We model this linear relationship as a series of coefficients obtained by applying PCA to reduce the 3D position corrections to a 1D space.
RANSAC Algorithm Parameters
The RANSAC algorithm is performed for k iterations, where k is set using the standard estimate k = log(1 − p)/log(1 − wⁿ) with desired confidence p = 0.99 and estimated inlier ratio w = 0.5. Additional parameters are as follows: n = 2 is the number of data points sampled at each RANSAC iteration, ϵ = 0.01 is the error threshold used to determine whether a data point fits the model, and d = 0.5 is the minimum ratio between inlier and outlier data points in order for the model to be retained.
Model Parameter Fitting
Model fitting during each iteration of RANSAC consists of reducing the datapoints to a 1D model using PCA, returning the mean translational correction and the coefficients for the first principal component of the sample S:

$$\theta = (\theta_\mu,\, \theta_u), \qquad \theta_\mu = \frac{1}{|S|}\sum_{p \in S} p_t$$

where $p_t$ is the 3 × 1 translational difference indicated by the correction p, S is the subset of the corrections matrix C sampled during one iteration of RANSAC such that S ⊂ C, and $\theta_u$ is the eigenvector corresponding to the largest eigenvalue of the covariance matrix $\Sigma = \frac{1}{|S|} S_t^T S_t$.
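A sketch of this fitting step, using PCA via an eigendecomposition of the sample covariance. Whether the original implementation centered the translations before forming $S_t^T S_t$ is not stated; centering is the standard choice and is assumed here:

```python
import numpy as np

def fit_linear_model(sample_translations):
    """Fit the linear transform model: the mean translational correction
    and the first principal component of the sampled 3D corrections."""
    S_t = np.asarray(sample_translations)          # shape (|S|, 3)
    mu = S_t.mean(axis=0)                          # mean translational correction
    cov = (S_t - mu).T @ (S_t - mu) / len(S_t)     # 3x3 sample covariance
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    theta_u = eigvecs[:, -1]                       # eigenvector of largest eigenvalue
    return mu, theta_u
```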
Error Function
Each iteration of RANSAC calculates the total error over all data points fitting that iteration's model parameters. We define the error of a single correction datapoint p as the sum of its reconstruction error and difference from the average orientation correction, given the current model parameters θ:

$$E(p \mid \theta) = \left\lVert (p_t - \theta_\mu) - \theta_u\,\theta_u^{+}(p_t - \theta_\mu) \right\rVert + c\,\lVert p_n - q_n \rVert$$

where $x^{+}$ indicates the Moore-Penrose pseudo-inverse of a vector, $p_n$ is the unit vector representing the orientation difference indicated by the correction p, $q_n$ is a unit vector in the direction of the average rotation sampled from the model (defined in the next section), and c is the weight assigned to rotational error (c = 1 in our evaluations).
Sampling Function
After RANSAC returns the optimal model parameters and corresponding set of inlier points Î ⊂ C, the rotation and translation components of the transformation are sampled from the model. We define the sampling function according to the estimated "average" rotation q:

$$\hat{q} = \arg\max_{\lVert q \rVert = 1} q^T M q, \qquad M = \sum_{p \in \hat{I}} q_p\, q_p^T$$

where $q_p$ is the quaternion rotation of correction p. The solution to q for this maximization problem is the eigenvector corresponding to the largest eigenvalue of M (Markley et al., 2007). The sample translation t is the 3D offset corresponding to the mean value $\bar{z}$ from the 1D projection space:

$$t = \theta_\mu + \bar{z}\,\theta_u$$
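A sketch of the Markley-style quaternion averaging used for the rotation sample, where M is accumulated as the sum of outer products of the inlier unit quaternions:

```python
import numpy as np

def average_quaternion(quats):
    """Markley et al. (2007)-style quaternion averaging: the average
    rotation is the principal eigenvector of M = sum_i q_i q_i^T."""
    Q = np.asarray(quats)              # shape (N, 4), each row a unit quaternion
    M = Q.T @ Q                        # 4x4 accumulation matrix
    eigvals, eigvecs = np.linalg.eigh(M)
    q = eigvecs[:, -1]                 # eigenvector of the largest eigenvalue
    return q / np.linalg.norm(q)
```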
Rotational Tool Transform Model
We now consider a model for corrections reflecting a center-of-rotation constraint, in which we make the assumption that corrections indicate a constraint over the tool tip's position.
Since the tool tip is offset from the end-effector, the position and rotation of the end-effector are constrained by each other such that the end-effector revolves around the tool tip ( Figure 8B). We model this relationship by identifying a center-of-rotation (and corresponding rotation radius) for the tool tip, from which we can sample a valid end-effector position and rotation.
RANSAC Algorithm Parameters
We use the same parameters for k, w, d as in the linear model. We sample n = 3 points at each iteration, and use the error threshold ϵ = 0.25. We define functions for model parameterization, error metrics, sampling, and variance in the following sections.
Model Parameter Fitting
We define the optimal model parameters for each iteration of RANSAC as the center-of-rotation θ_c (and corresponding rotation radius θ_r) of that iteration's samples S, where θ_c is the position of the center-of-rotation that minimizes its distance from the intersection of the lines produced from the position and orientation of each correction sample, and a_i and n_i are the position and unit direction vectors, respectively, for sample i in S, computed from that sample's pose. Here, q_1 · q_2 refers to the Hamilton product between two quaternions, and q′ is the inverse of the quaternion q.

We solve for the center-of-rotation by adapting a method for identifying the least-squares intersection of lines (Traa, 2013). We consider each sample i to be a ray originating at the point a_i and pointing in the direction of n_i. The center-of-rotation of a set of these rays is thus the point that minimizes the distance between itself and each ray. We define this distance as the piecewise function that evaluates to ||c − (a + d n)|| when d ≥ 0 and to ||c − a|| otherwise, where d = (c − a) · n is the distance between a and the projection of the candidate centerpoint c on the ray.

We solve for θ_c using the SciPy implementation of the Levenberg-Marquardt method for non-linear least-squares optimization, supplying Equation 14 as the cost function. We then solve for the radius θ_r corresponding to θ_c as the mean distance between θ_c and the sample positions a_i.
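A sketch of this center-of-rotation fit, assuming the piecewise point-to-ray distance reconstructed above and treating the radius as the mean center-to-sample distance; applying Levenberg-Marquardt directly to a non-smooth piecewise cost is a simplification of whatever safeguards the original implementation may use:

```python
import numpy as np
from scipy.optimize import least_squares

def point_to_ray_distance(c, a, n):
    """Piecewise distance from a candidate center c to the ray (a, n)."""
    d = np.dot(c - a, n)  # projection of c onto the ray direction
    return np.linalg.norm(c - (a + d * n)) if d >= 0 else np.linalg.norm(c - a)

def fit_center_of_rotation(origins, directions):
    """Least-squares center-of-rotation of a set of rays, after Traa (2013)."""
    residuals = lambda c: [point_to_ray_distance(c, a, n)
                           for a, n in zip(origins, directions)]
    c0 = np.mean(origins, axis=0)                   # initial guess
    theta_c = least_squares(residuals, c0, method="lm").x
    theta_r = np.mean([np.linalg.norm(a - theta_c) for a in origins])
    return theta_c, theta_r
```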
Error Function
We define the error of a single data point p as its distance from the current iteration's center-of-rotation estimate, err(p) = d_p(θ_c), where d_p is defined in Equation 15.
Sampling Function
After RANSAC returns the optimal model parameters and corresponding set of inlier points Î ⊂ C, the rotation component of the transformation is first sampled using the "average" rotation q_c from θ̂_c to all inlier points, where r_p is the quaternion rotation between θ̂_c and the position of p, defined by normalizing the quaternion consisting of the corresponding scalar and vector parts, and M accumulates the outer products r_p r_p^T over the inliers. The optimal q_c is the eigenvector corresponding to the largest eigenvalue of M; this represents the sampled rotation from θ̂_c.
We then sample t by projecting the point at distance θ̂_r from θ̂_c in the direction of q_c, where x_{1..3} indicates the 3 × 1 vector obtained by omitting the first element of a 4 × 1 vector x. Finally, we return the sample consisting of the translation t and the normalized rotation q between t and θ̂_c.
Best-Fit Model Selection
The linear and rotational tool transform models represent two different relationships between the translational and rotational components of corrections. We now define a metric for selecting between these two models based on how well they fit the correction data, where Î_l, θ̂_l, Î_r, θ̂_r represent the optimal inlier points and parameter values from the linear and rotational models, respectively. The fit of the linear model is calculated as its range of values z projected in the model's 1D space. The fit of the rotational model is calculated as the range of unit vectors in the direction of each inlier point as measured from the center-of-rotation, where r_p is defined in Equation 19.
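Since Equation 23 itself is not reproduced here, the following sketch only illustrates one plausible reading of the two fit measures (projected range for the linear model, angular spread for the rotational model); the exact range measure and the comparison rule used for selection are assumptions:

```python
import numpy as np

def linear_fit_range(inlier_t, mu, theta_u):
    """Spread of the inliers projected into the linear model's 1-D space."""
    z = (np.asarray(inlier_t) - mu) @ theta_u
    return float(z.max() - z.min())

def rotational_fit_range(inlier_t, theta_c):
    """Spread of unit directions from the center-of-rotation to each inlier."""
    dirs = np.asarray(inlier_t) - theta_c
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    # one simple spread measure: the largest pairwise angular distance
    cosines = dirs @ dirs.T
    return float(np.arccos(np.clip(cosines, -1.0, 1.0)).max())
```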
Evaluation
We evaluated the Transfer by Correction algorithm on a 7-DOF Jaco2 arm equipped with a two-fingered Robotiq 85 gripper and mounted vertically on a table-top surface (Figure 9D). Each evaluation configuration consisted of one task that was 1) demonstrated using the original, "source" tool, and 2) corrected to accommodate a novel, replacement tool. We describe data collection for each of these steps in the following sections.
Demonstrations
Three tasks (Figure 9) were demonstrated using three prototypical, "source" tools (Figures 10A-C), resulting in a total of nine demonstrations. Demonstrations began with the arm positioned in an initial configuration, and with the gripper already grasping the tool. Each tool's grasp remained consistent across all three tasks. Objects in the robot's workspace were reset to the same initial position before every demonstration. We provided demonstrations by indicating keyframes (Akgun et al., 2012) along the trajectory, each of which was reached by moving the robot's arm to the intermediate pose. At each keyframe, the 7D end-effector pose was recorded; note that this is the pose of the joint holding the tool, and not the pose of the tooltip itself (since the tooltip is unknown to the robot). We provided one keyframe demonstration for each combination of tasks and source tools in this manner, with each demonstration consisting of 7-12 keyframes (depending on the source tool used) for the sweeping task, 10-11 keyframes (depending on the source tool used) for the hooking task, and 7 keyframes for the hammering task. We represented each demonstration using a Dynamic Movement Primitive (DMP) (Schaal, 2006; Pastor et al., 2009). A DMP is trained over a demonstration by perturbing a linear spring-damper system according to the velocity and acceleration of the robot's end-effector at each time step. By integrating over the DMP, a trajectory can then be generated that begins at the end-effector's initial position and ends at a specified end point location. Thus, after training a DMP, the only parameter required to execute the skill is the desired end point location. By parameterizing the end point location of each DMP skill model according to object locations, the overall task can be generalized to accommodate new object configurations. We re-recorded the demonstration if the trained DMP failed to repeat the demonstration task with the source tool.
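For intuition, here is a minimal 1-D sketch of DMP rollout in one common formulation; the gains, canonical-system dynamics, and interface are illustrative rather than the exact formulation of Schaal (2006) or Pastor et al. (2009), and training of the forcing term is omitted:

```python
import numpy as np

def dmp_rollout(forcing, y0, goal, tau=1.0, dt=0.01, alpha=25.0, beta=6.25):
    """Integrate a 1-D DMP: a spring-damper system pulled toward `goal`,
    perturbed by the learned forcing term evaluated along the canonical
    system x (which decays from 1 to 0, phasing the forcing term out)."""
    y, yd, x = y0, 0.0, 1.0
    traj = [y0]
    while x > 1e-3:
        ydd = (alpha * (beta * (goal - y) - tau * yd) + forcing(x) * x) / tau**2
        yd += ydd * dt
        y += yd * dt
        x += (-2.0 * x / tau) * dt  # canonical system decays exponentially
        traj.append(y)
    return np.array(traj)

# With forcing = lambda x: 0.0 this reduces to a plain spring-damper that
# converges to `goal`; a trained forcing term reshapes the path in between.
```

The key property used above is that, once the forcing term is trained, only `goal` needs to change to generalize the skill to new object locations.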
Corrections
Following training, the arm was reset to its initial configuration, with the gripper already grasping a new tool (Figures 10D,E). Note that these replacement objects have several surfaces that could be utilized as a tooltip (depending on the task). For example, any point along the rim of the mug (Figure 10D) would serve as the prototypical tooltip during a scooping or pouring task. In the context of the hooking and hammering tasks used in our evaluation, however, the bottom of the mug serves as a tooltip. Alternatively, the side of the mug provides a broad surface to perform the sweeping task. This range of potential tooltips on a single object highlights the benefit of using corrections to learn task-specific tooltips, rather than assume that a prototypical tooltip is appropriate for all tasks.
Objects on the robot's workspace were reset to the same initial position as in the demonstrations; this allowed us to ensure that any corrections were made as a result of the change in tool, rather than changes in object positions. The learned model was then used to plan a trajectory in task-space, which was then converted into a joint-space trajectory using TracIK (Beeson and Ames, 2015) and executed, pausing at intervals defined by the keyframe timing used in the original demonstration. When execution was paused, it remained paused until the arm pose was confirmed. If no correction was necessary, the pose was confirmed immediately; otherwise, the arm pose was first corrected by moving the arm to the correct position. Note that this form of corrections assumes that each keyframe constitutes a statically stable state. For tasks involving unstable states, another form of interaction may be used to provide post-hoc corrections, such as critiques (Cui and Niekum, 2018).
Two poses were recorded for each correction: 1) the original end-effector pose the arm attempted to reach (regardless of whether the goal pose was reachable with the new tool), and 2) the end-effector pose following confirmation (regardless of whether a correction was given). Trajectory execution then resumed from the arm's current pose, following the original task-space trajectory so that pose corrections were not propagated to the rest of the trajectory. This process continued until all keyframes were corrected and executed, resulting in the correction matrix C (Equation 2).
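A sketch of how each such pose pair could be turned into one row of the corrections matrix C; the pose layout and the use of SciPy rotations are our assumptions, not the authors' data format:

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def correction_row(planned, confirmed):
    """One row of the corrections matrix C: the translational difference and
    the relative rotation from the planned pose to the confirmed pose.
    Each pose is an (xyz, quaternion-xyzw) pair; this layout is assumed."""
    d_t = np.asarray(confirmed[0]) - np.asarray(planned[0])
    d_q = (R.from_quat(confirmed[1]) * R.from_quat(planned[1]).inv()).as_quat()
    return np.concatenate([d_t, d_q])

# e.g., one keyframe where the teacher nudged the arm 5 cm along y:
pairs = [(([0.3, 0.0, 0.2], [0, 0, 0, 1]), ([0.3, 0.05, 0.2], [0, 0, 0, 1]))]
C = np.array([correction_row(p, c) for p, c in pairs])
```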
Measures
For each transfer execution, we measured performance according to a metric specific to the task:

• Sweeping: The number of pom-poms swept off the surface of the yellow box.
• Hooking: The final distance between the box's target position and the closest edge of the box (measured in centimeters).
• Hammering: A binary metric of whether the peg was pressed any lower from its initial position.
Results
We highlight two categories of results: within-task and across-task performance.
Within-Task Transfer
Within-task performance measures the algorithm's ability to model the corrections and perform the corrected task successfully. Transfer was performed using the transform model learned from corrections on that same tool-task pairing. For example, for the sweeping task model learned using the hammer, corrections were provided on the replacement tool (e.g., a mug) and then used to perform the sweeping task using that same mug. For each source tool, we evaluated performance on all three tasks using each of the two replacement objects, resulting in 18 sets of corrections (one for each combination of task, source tool, and replacement tool) per tool transform model (linear and rotational).

FIGURE 9 | (A) hooking task, (B) sweeping task, (C) hammering task, and (D) the experimental setting.

FIGURE 10 | Tools (A-C) were used to demonstrate the three tasks shown in Figure 9, later transferred to use tools (D,E). These tools exhibit a wide range of grasps, orientations, dimensions, and tooltip surfaces.
Using the better-performing model resulted in ≥ 85% of maximum task performance in 83% of cases. The better-performing model was selected using the best-fit metric in 72% of cases. Figure 11 lists the percentage of transfer executions (using the best-fit model) that achieve multiple performance thresholds, where best-fit results were recorded as the performance of the model returned by Equation 23.
We scaled the result of each transfer execution between 0 and 1, with 0 representing the initial state of the task and 1 representing maximum performance according to the metrics in Section 5.10. Figure 12 reports the performance distribution aggregated over all tasks, transferred from each of the three source tools to either the scrub-brush (Figure 10E, results in Figure 12A) or mug (pictured in Figure 10D, results in Figure 12B) as the replacement tool. The mean performance results are reported in Figure 13A, with darker cells indicating better performance. Overall, the transform returned using the best-fit metric resulted in average performance of 6.9x and 5.9x that of the untransformed trajectory when using the scrub-brush and mug, respectively, as replacement tools.
Across-Task Transfer
Across-task transfer performance measures the generalizability of corrections learned on one task when applied to a different task using the same tool, without having received any corrections on that tool-task pairing. For example, the hooking task was learned using the hammer, and transferred to the mug using corrections obtained on the sweeping task. We evaluated 36 total transfer executions (one per combination of demonstration task, source tool, correction task (distinct from the demonstration task), and replacement tool) per tool transform model (linear and rotational). Figure 14 reports the performance distribution aggregated over all tasks, transferred from each of the three source tools to either the scrub-brush (Figure 14A) or mug (Figure 14B) as the replacement tool. The mean performance results are reported in Figure 13B, with darker cells indicating better performance. Overall, the transform returned using the best-fit metric resulted in average performance of 1.6x and 0.94x that of the untransformed trajectory when using the scrub-brush and mug, respectively, as replacement tools. The performance distribution is improved when using the transform learned from corrections, resulting in 2.25x as many task executions achieving ≥ 25% of maximum task performance.
In order to understand the conditions under which a transform can be reused successfully in the context of another task, we also report the mean performance results for a subset of the across-task executions (Figure 13C). This subset consists of only the task executions where the relative orientation is the same between 1) the source tool's tooltips used for the source and target tasks and 2) the replacement tool's tooltips used for the same two tasks. This subset consisted of 10 executions for the scrub-brush, and 12 for the mug. Overall, for this subset of executions, the transform returned using the best-fit metric resulted in average performance of 12.6x and 1.7x that of the untransformed trajectory when using the scrub-brush and mug, respectively, as replacement tools.

FIGURE 11 | Percentage of within-task transfer executions (selected by best-fit model) and untransformed trajectories achieving various performance thresholds (defined as the % of maximum performance metric for that task, described in Section 5.10). Our proposed models result in a higher percentage of transfer executions that complete the task to a high performance threshold (e.g., sweeping ≥ 85% of the objects off the table). Furthermore, while the untransformed baseline produces all-or-nothing performance behavior, our models degrade gracefully, resulting in partial task completion (represented by lower % performance thresholds) even when the learned transform is non-optimal.

FIGURE 12 | Aggregate performance results for within-task transfer using the scrub-brush (A) and mug (B) as the replacement tool. Performance was measured for each task according to the metrics in Section 5.10, and results are scaled between 0-1. These results highlight the need for multiple tool transform models; while both models greatly outperform the baseline task performance (when no transform is used), note that neither model results in the best performance over all tasks and replacement tools. Using the best-fit metric to select the more appropriate model for each tool-task pairing resulted in the best overall performance.
Discussion
Our within-task transfer evaluation tested whether we can model the transform between two tools in the context of the same task (represented by the solid blue arrow in Figure 15) using corrections. Our results indicate that one round of corrections typically is sufficient to indicate this relationship between tools; collectively, the linear and rotational models achieved ≥ 85% of maximum task performance in 83% of cases. Individually, the models selected by the best-fit metric achieved this performance threshold in 72% of cases. This indicates that, in general, the fit of the model itself can be used to indicate the relationship between end-effector position and orientation for a given tool/task combination. Aside from analyzing high task performance, we are also interested in whether our approach enables graceful degradation; even if the robot is unable to complete the task fully with a new tool, ideally it will still have learned a transform that enables partial completion of the task. The results shown in Figure 11 demonstrate that Transfer by Correction offers robust behavior such that even when it results in sub-optimal performance, it still meets lower performance thresholds in nearly 90% of cases. In contrast, the untransformed baseline does not meet lower performance thresholds, and thus produces all-or-nothing results that lack robustness.
The primary benefit of modeling corrections (as opposed to re-learning the task for the new tool) is two-fold. First, the robot learns a transformation that reflects how the task has changed in response to the new tool, which is potentially generalizable to other tasks (as we discuss next). We hypothesize that in future work, this learned transform could be parameterized by features of the tool (after corrections on multiple tools). Second, since we apply the learned transform to the resulting trajectory rather than modifying the task model itself, the underlying task model is left unchanged. We expect that this efficiency benefit would be most evident when transferring a more complex task model trained over many demonstrations; rather than requiring more demonstrations with the new tool in order to re-train the task model, the transform would be applied to the output of the already-trained model.
We have also explored how well this transform generalizes to other tasks. Different tooltips on the same tool may be used to achieve different tasks, such as how the end and base of the paintbrush are used to perform the sweeping and hammering tasks, respectively, in Figure 15. While we do not explicitly model the relationship between tooltips on the same tool (represented by the top grey arrow in Figure 15), they are inherent to the learned task models. A similar relationship exists for the replacement tool (represented by the bottom grey arrow in Figure 15). Our across-task evaluation seeks to answer whether the relationship between tools in the context of the first task (solid blue arrow) can be reused for a second task (represented by the dashed blue arrow) without having received any corrections on that tool/task combination (tool 2 and task 2). While we see lower performance in across-task evaluations compared to the within-task evaluations, it does improve transfer in 27.8% of across-task transfer executions (in comparison to the untransformed trajectory).
In the general case, our results also indicate that we cannot necessarily reuse the learned transformation on additional tasks, as average performance in across-task transfer is slightly worse than that of the untransformed trajectory when the mug is used as a replacement tool. This presents the question: Given a transform between two tools in the context of one task, under what conditions can that transform be reused in the context of another task without additional corrections or training? We do see that across-task performance is best when considering only the subset of cases where the relationship between the tooltips used in either task is similar for the source and replacement tools (in our evaluation, this is 10 of 18 executions using the brush, and 12 of 18 executions using the mug). Within this subset, across-task transfer improves performance in 41% of transfer executions. From this we draw two conclusions: 1) the transform applied to a tool is contextually dependent on the source task, target task, and tooltips of the source and replacement tool, and 2) a transform can be reused when the relationship between the tooltips used in either task is similar for the source and replacement tools.
Overall, our evaluation resulted in the following key findings: Insight #1: Corrections provide a sample of the constrained transform between the tooltip and the robot's end-effector. This underlying constraint is task-dependent; our best-fit model results indicate that multiple constraint types should be modeled and evaluated for each task, with the best-fitting model used to produce the final transform output.
Insight #2: While the tooltip transform is task-specific, it can be applied to additional tasks under certain conditions. This is dependent on a second transform: the transform between multiple tooltips on the same tool. A tooltip transform can be reused for an additional task when the transform between the tooltips used to complete 1) the corrected task and 2) the additional task is similar for the two tools.

FIGURE 15 | Corrections indicate the transform from tool 1 to tool 2 for the same task (indicated by the solid blue arrow). Our within-task transfer evaluation tested whether we can use corrections to sufficiently model this relationship. Different tasks may use different tooltips from the same tool (such as the different tooltips used to complete tasks 1 and 2). Our across-task evaluation tests whether the transform learned from corrections (solid blue arrow) can be reused as the transform between the two tools for another task (indicated by the dashed blue arrow).
CONCLUSION
Tool use is a hallmark of human cognition and tool improvisation is a characteristic of human creativity. As robots enter human society, we expect human-like tool improvisation from robots as well. This paper makes three contributions to robot creativity in using novel tools to accomplish everyday tasks. First, it presents a high-level decomposition of the task of tool improvisation into a process of tool exploration, tool evaluation, and adaptation of task models to the novel tool. Second, it demonstrates the importance of tooltip constraints in guiding successful tool use throughout this process. Third, it describes a method of learning by correction: repeating a known task with an unknown tool in order to record a human teacher's corrections of the robot's motion. We focused on how the relationship between the robot's gripper and the tooltip dictates how the robot's action model should be adapted to the new tool. A challenge in identifying this relationship is that 1) there are many candidate tooltips on each tool, and 2) for each tooltip, there exists a one-to-many relationship between the tooltip and end-effector poses that fulfill the tooltip constraint.
In this paper, we validated this one-to-many mapping through a simulated experiment in which we demonstrated a relationship between pose variations and task performance. Our experimental results indicate that the sensitivity of tooltip constraints depends on the surface of the tool being used, and that as the tool pose deviates from these constraints, the resulting effect on task performance is nonlinear.
We then examined the opposite mapping: a many-to-one mapping between pose feedback provided by a human teacher and the optimal, underlying tooltip constraint. We developed the Learning by Correction algorithm, and demonstrated that a human teacher can indicate the tooltip constraints for a specific tool-task pairing by correcting the robot's motion when using the new tool. We modeled the underlying tooltip constraint in two ways, using a linear and a rotational model, and also presented a metric for choosing the better-fitting model for a set of corrections. We demonstrated how this model of the tooltip constraint can then be used to successfully plan and execute the task using that tool, with high task performance in 83% of task executions. We also explored how this tooltip constraint model can be generalized to additional tasks using the same novel tool, without requiring any additional training data.
Overall, we expect that a focus on identifying novel tools, evaluating novel tools, and adapting task models to novel tools in accordance to tooltip constraints is essential for enabling creative tool use. Our results indicate that successful task adaptation for a new tool is dependent on the tool's usage within that task, and that the transform model learned from interactive corrections can be generalized to other tasks providing a similar context for the new tool. Put together, these results provide a process account of robot creativity in tool use (tool identification, evaluation and adaptation), a content account (highlighting the importance of tooltips), as well as an algorithmic account of learning by correction.
Open Questions
In this paper, we have presented a corrections-based approach to sampling and modeling the transform resulting from a tool replacement. In doing so, we model a single, static transform for a particular tool/task pairing. We have evaluated how well this model transfers to other tasks using the same tool replacement. An extension of this work would consider transfer across tools.
We envision that a robot could not only model the transform samples obtained by interactive corrections, but also learn to generalize that model to other, similar tools. For example, after receiving corrections for one ladle for a scooping task, the robot would ideally be able to model those corrections such that it would apply to ladles of different shapes or proportions as well. We anticipate that a robot could learn an underlying relationship between visual object features (such as dimensions or concavity) and the resulting transform for that tool.
Meta-learning has been successfully applied to learning problems in computer vision domains and fully-simulated reinforcement learning problems (Duan et al., 2017; Chelsea et al., 2017). When applied to the domain of tool transfer, meta-learning would ideally enable a robot to use extensive background training to learn the common relationships between visual features and tooltips that are shared by tools within their respective categories (e.g., cups, knives, scoops). When presented with a novel category of tools, the robot would then only need demonstrations using a small number of tools within the new category in order to learn the relationship between visual features and tooltips within that category. However, as demonstrated in this paper, tooltips are task-specific; within a single tool, the tooltip used to complete one task (e.g., the surface of a hammer used to hammer a nail) is not necessarily the same as the tooltip used to complete another task (e.g., the side of the hammer may be used to sweep objects off a surface, or the claw-end of the hammer may be used to remove a nail). This lack of task-specific training data presents a challenge for future work, as relying on a dataset containing a single, canonical tooltip for each tool would fail to capture the task-contextual nature of tool use.
Finally, this paper has explored one method of interaction to enable a human teacher to provide corrections to the robot. However, in human-in-the-loop learning problems, the ideal interaction type is dependent on the teacher's role in the learning system, and the context in which the robot is used (Cui et al., 2021). For example, the teacher may not have time to correct every step of the robot's action, or may instead prefer to provide corrections only after the robot has tried and failed to complete a task. We anticipate that future work may enable a robot to obtain correction data from a broader set of interaction types.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusion of this article will be made available by the authors, without undue reservation.
Angiogenesis and Breast Cancer
Angiogenesis is an essential step for breast cancer progression and dissemination. The development of new blood vessels in the cancer setting (angiogenesis) is driven by numerous physiological and pathological stimuli, of which the main stimulus is hypoxia. The knowledge of the different molecular pathways regulating angiogenesis is constantly growing. An increasingly detailed and complex picture of angiogenesis is nowadays available in breast cancer specifically, and it permits not only an understanding of most of the important phases of neoplastic growth but also offers an exciting perspective for new therapeutic proposals based on blocking the sprouting of new blood vessels. This review focuses on the historical and recent understanding of the occurrence of angiogenesis in breast cancer.
Introduction
The association of angiogenesis and cancer has been credited to the visionary pioneer Judah Folkman (1933-2008), who first stated that tumour growth is directly dependent on the development of a blood vessel network [1]. The discovery of angiogenic molecules in the early 1970s promptly stimulated several studies addressing a number of questions related to cancer development and its regulation by blood vessel vascularisation.
Angiogenesis is a central part of many normal homeostatic processes and non-neoplastic diseases. Regarding malignant neoplasia, it is now evident that tumours have a very limited capacity to grow without vascular support; therefore, the formation of blood vasculature is an obligatory step to sustain the influx of essential nutrients to the cancer mass. Blood neovascularisation is a complex phenomenon that involves several molecular players and cells. The interaction between stromal and epithelial components is markedly enhanced, and most of the events observed in wound repair are maintained [2].
Some earlier historical observations credited to Folkman and colleagues had already figured out the crucial role of angiogenesis in the cancer setting [1]. The observation that tumour growth largely depends on angiogenic sprouting has indeed been studied for more than six decades in several in vivo models [3], and 1 to 2 mm was recognized as the maximum size for neoplastic expansion without the formation of new blood vessels [1].
The molecular players of angiogenesis have been characterized since the early years of angiogenic studies, and one of the most prominent stimulating growth factors is certainly the vascular endothelial growth factor family. The most prominent member of this family, vascular endothelial growth factor (VEGF, VEGF-A), is the foremost controller of physiological and pathological angiogenesis. Accordingly, numerous VEGF inhibitors have been approved by the North American Food and Drug Administration (FDA) for the treatment of advanced cancer and of neovascularisation related to macular degeneration [4].
There are several molecules and signalling pathways that drive the formation and assembly of new blood vessels. Beyond the well-known angiogenic factors and their receptors, such as VEGF and its receptors (VEGFR), Angiopoietin-Tie, Ephrin-EphRs, and Delta-Notch, which drive the major regulatory processes of angiogenesis in humans [5], there are also many other molecules directly or indirectly related to new vessel sprouting, including Fibroblast Growth Factor (FGF) and Thrombin receptors, among others [6]. The consequence of so many physiological and pathological routes to blood vessel sprouting is the obvious possibility of creating a plethora of antagonists able to block angiogenic growth, an approach that has received enthusiastic support from oncologists for the treatment of breast cancer [7]. This is important because angiogenic activity has been shown to be crucial to breast cancer progression. Therefore, the blockade of VEGF action is supposed to be a very promising therapeutic alternative, mainly if associated with ordinary chemotherapy. Nevertheless, all results reported to date are, indeed, incipient, which maintains the motivation for further investigation towards a more comprehensive understanding of the accurate role of anti-VEGF therapy [7]. Figure 1 summarizes the role of the principal molecular players involved in breast cancer progression. Blocking the pathways that drive this molecular signalling is the rational basis for anti-angiogenic therapies. Anti-angiogenic therapy is a very exciting topic of modern oncology because most of the angiogenic ligands and receptors are functionally active in tumour mass progression and can share some combined actions with lymphatic vessel growth. Consequently, the rationale for anti-angiogenic therapy can also favour the obstruction of lymphatic vessel development, which potentially hampers the metastatic spreading of tumours [8].
Due to the complexity of the neovascularisation phenomena in the cancer scenario, this paper will highlight the main lines of angiogenesis studies related to breast cancer and describe the principal findings in four areas: experimental studies of VEGF expression in breast cancer (which include fundamental information about the importance of the VEGF family in breast cancer development); the meaning of blood microvessel density, which embraces the diagnostic/prognostic parameters of angiogenesis; the role of angiopoietins and the Tie-2 receptor in breast cancer angiogenesis; and the clinical approach of anti-angiogenic therapies. Experimental studies of VEGF expression in breast cancer development will introduce the theme, in order to report important statements that support angiogenesis studies in cancer. Secondly, the value of blood vessel density (BVD) assessment will be discussed, because there is a significant correlation between high BVD and worse prognosis in many, but not all, cancers, and there are also disputed data on the meaning of BVD for breast cancer behaviour. The final part will be dedicated to the role of angiopoietins and the Tie-2 receptor in breast cancer angiogenesis, because there is preclinical evidence that these receptors can directly influence blood vessel sprouting in breast cancer and may also be implicated as a potential therapeutic target. This section is a connection linking the two previous themes and the final section, which will explore the clinical evidence for anti-angiogenic therapies.
Experimental Studies with VEGF Expression in Breast Cancer
The role of VEGF has been intensely tested in pre-clinical conditions to support the introduction of anti-angiogenic drugs in the clinical setting. VEGF and its receptors have been intensively studied in cultured cells in order to establish the algorithms to be tested in breast cancer therapy. Recent experimental results have endorsed the premise that angiogenic sprouting in solid tumours, particularly in breast carcinomas, is regulated not only by VEGFR-2, a VEGF receptor, but also by VEGFR-3, a VEGF-C lymphangiogenic receptor that is importantly expressed in the tumour mass, mediating blood vessel proliferation [9]. These pre-clinical assays have shown that the overexpression of VEGFR-2 and VEGFR-3 is found in both blood and lymphatic conduits, which implies that the major clinical mechanism of action of VEGF signalling inhibitors probably occurs more importantly in tumour vessels rather than tumour cells. Additionally, the upregulation of VEGFR-3 observed in cancer blood vessels points out the possibility of adding dual VEGFR-2/VEGFR-3 targeting to anti-angiogenic therapy [9].
In clinical settings, for example, the anti-VEGF antibody bevacizumab has been tested as adjuvant therapy, maintenance therapy, or in combination with both chemotherapy and other targeted agents such as the epidermal growth factor receptor kinase inhibitor erlotinib. Moreover, ramucirumab and IMC-18F1, monoclonal antibodies that target the VEGF receptors VEGFR-2 and VEGFR-1, have also been tested, as has aflibercept, a peptide-antibody fusion targeting the VEGF ligand. Presently, it is recognized that targeting other angiogenic signalling pathways, such as platelet-derived growth factor-C (PDGF-C), bombina variegata peptide 8 (Bv8, also known as prokineticin-2), and VEGFR-3, is essential and might enhance the therapeutic response in anti-VEGF-resistant tumours [8].
A very recent and challenging novelty concerns the current knowledge about the VEGF family. Nowadays it is known that VEGF-A has alternatively spliced isoforms that inhibit neovascularisation and tumour growth [10]. Interestingly, changes in the acidic microenvironment are responsible for this alternative splicing of VEGF.
Furthermore, it was also reported that a splice variant of the gene encoding vascular endothelial growth factor receptor-2 (VEGFR-2) encodes a soluble protein, named soluble VEGFR-2 (sVEGFR-2), which inhibits lymphangiogenesis, but not angiogenesis, by blocking VEGF-C function. Thus, the modulation of VEGFR-2 might have therapeutic effects in treating tumour lymphangiogenesis, among other diseases related to lymphatic proliferation [12]. Additionally, an in vitro model using MCF-7 cells has shown that VEGFR2 repression is supposed to be also related to 17 β-Estradiol (E2) activity [13]. Indeed, recent evidence has shown that VEGFR2 expression in MDA-MB-231 and MCF-7 breast cancer cells is low, whilst VEGFR1 expression is constantly abundant and NRP1 expression is variable. VEGFR1 expression knockdown by siRNA (siVEGFR1) significantly decreased the survival of breast cancer cells through downregulation of protein kinase B (AKT) phosphorylation, although VEGFR2 or NRP1 knockdown had no effect on the survival of these cancer cells [14].
VEGF expression in normal glandular structures is assumed to be consistently lower than in breast lesions, with the highest expression in ductal tumours when compared with lobular lesions. However, there is no clear evidence that VEGF expression correlates with microvascular density [15]. This is interesting because no significant differences have been reported between the vascular densities of the two types of invasive carcinoma, although VEGF protein and VEGF mRNA expression are significantly higher in invasive ductal than in invasive lobular carcinoma, which suggests that VEGF is important in the angiogenesis of invasive ductal carcinoma, but that other angiogenic factors are essential in invasive lobular carcinoma angiogenesis [16].
Recent experimental data suggest an intrinsic relationship between the hormonal status of breast tumour cells and angiogenesis. VEGF released by activated stroma, for example, increases the growth of ER-positive malignant epithelial cells and of the adjacent normal epithelium. Interestingly, the alteration of the phenotype of breast cancers from oestrogen-dependent to oestrogen-independent growth is associated with the failure of antiestrogenic tumour therapies [17]. Furthermore, the overexpression of VEGF by oestrogen-dependent MCF-7 cultured breast cancer cells could hamper estrogen-dependent tumour growth in mice subjected to ovarian ablation [18]. Finally, mutations in BRCA1, via their interaction with ER-α, promote carcinogenesis through the hormonal regulation of mammary epithelial cell proliferation and affect the regulation of VEGF function, which may lead to cancer growth and angiogenesis [19].
The Meaning of Blood Microvessel Density
The real importance of blood microvessel density (MVD) is still controversial. Most of the available data show some degree of discrepancy regarding the significant correlation between high MVD and poor breast cancer prognosis [20]. Impressively promising data emerged with Folkman's findings, suggesting MVD assessment as an independent predictor of metastatic disease either in axillary lymph nodes or at distant sites (or even both). Therefore, the evaluation of breast cancer MVD was assumed to be able to select patients with early breast carcinoma for aggressive therapy [21].
The data that emerged from studies highlighting blood MVD as a prognostic factor for breast cancer were initially accepted as a powerful parameter to identify the more aggressive phenotypes of breast cancer [21]. However, these initial results were not confirmed, and different findings obliged a revision of the primary concepts [20]. The new vessels developed in the tumour setting are not adequately assembled, and these fragile conduits have been demonstrated to collapse within the intratumour mass. Accordingly, the newly formed intratumour blood vessels are faint or even non-functional [6].
Presently, the assessment of MVD by blood and lymphatic markers is credited as a significant unfavourable prognostic factor for long-term survival in breast cancer [6,22], besides being a likely therapeutic target for anti-angiogenic therapy. However, there is no robust evidence yet to ascertain how important it is to block lymphangiogenic activity to prevent lymphatic spread [23]. The most intriguing finding recently reported is the participation of the most powerful angiogenic growth factor, VEGF-A, in neoplastic lymphangiogenesis as well. Also, the growth of the lymphatic vasculature in the sentinel lymph node is initiated before cancer cells arrive at these loci, suggesting that VEGF-A (and also VEGF-C, a specific lymphangiogenic factor) secreted by the tumour cells is drained to the lymph nodes, inducing lymphangiogenesis there [23].
Important changes occur in the neoplastic microenvironment during the different morphological alterations of hyperplastic and pre-invasive breast lesions. Interestingly, angiogenesis is observed before any significant alteration in the tumour microenvironment in pre-invasive breast lesions. A phenotype combination characterized by highly expressed VEGF in epithelial cancer cells and a smooth muscle actin-positive/CD34-negative reaction in stromal cells is predominantly identified in intermediate- and high-grade ductal carcinoma in situ (DCIS). Altogether, these findings are supposed to predict the progression of DCIS to invasive carcinoma and are also helpful for planning therapeutic strategies using both anti-angiogenic factors and factors that selectively target the components of the tumour stroma [24]. This is important because endothelin (ET)-1, a vasoactive peptide primarily produced in endothelial, vascular smooth muscle, and epithelial cells, has been demonstrated to be significantly increased in numerous human malignancies, including breast cancer. Increased expression of ET-1 and its receptors (ETAR and ETBR) is associated with increased VEGF expression and higher MVD in breast carcinomas, suggesting that ET-1 and its receptors are involved in the regulation of breast cancer angiogenesis [25]. In addition, increased expression of ETAR in breast carcinomas is associated with resistance to chemotherapy, which indicates that the determination of ETAR status could be used as a predictive marker for identifying patients less likely to respond to conventional chemotherapy [26]. The tumour microenvironment is also the scenario for the enhanced infiltration of tumour-associated macrophages (TAMs), which is significantly associated with both high VEGF expression and high MVD, supporting a prognostic relationship between TAM infiltration and tumour angiogenesis [27,28]. Both MVD and VEGF expression are significantly correlated with tumour grade and lymph-node invasion, and TAM infiltration correlates with the mitotic activity index in ductal breast carcinoma [24]. However, augmented expression of VEGF and high MVD were not found to be associated with lobular carcinoma prognosis [29].
Pigment epithelium-derived factor (PEDF) is a secreted glycoprotein recognized to be important for angiogenesis inhibition. The specific mechanism by which PEDF acts is still obscure, but PEDF is currently considered a candidate antitumour agent. Decreased intratumour expression of PEDF is associated with a higher microvessel density (MVD) and poorer clinical outcome. Low PEDF expression significantly correlates with higher MVD and is regarded as an independent prognostic factor [30].
The Role of Angiopoietins and Tie-2 Receptor in Breast Cancer Angiogenesis
Angiogenesis development also involves the participation of the angiopoietins (Ang-1 and Ang-2), endothelial growth factors found to be ligands for the endothelium-specific tyrosine kinase receptor Tie-2. Ang-1 is recognized to play an essential role in maintaining and stabilizing mature vessels by promoting the interaction between endothelial cells and the surrounding support cells, whereas Ang-2 is thought to antagonize the stabilizing action of Ang-1. In malignancies, Ang-1 and Ang-2 expression are both elevated in tumour cells, although Ang-2 expression is more commonly upregulated than Ang-1 or Tie-2 expression. The ratio of Ang-1 to Ang-2 expression clearly favours Ang-2, which appears to be significantly associated with angiogenesis in the tumour tissues [31]. Not surprisingly, Ang-2 expression was demonstrated to be closely correlated with VEGF expression and MVD in breast cancer as well; high MVD has frequently been found in invasive ductal carcinoma of the breast with high expression of VEGF and Ang-2.
Altogether, these markers are also supposed to show a major correlation with poor survival rates and therefore carry a strong prognostic impact in breast cancer [32].
A new function of Tie-2 in osteoclastogenesis and the osteolytic bone invasion of breast cancer was recently reported. The Tie-2 receptor participates critically in breast cancer development, particularly in the bone metastasis frequently associated with breast cancer progression. The expression of Tie-2 is considerably increased in human breast cancer tissues as compared with normal tissue and benign breast tumours, and it is also present in hematopoietic stem/precursor cells. Evidence has emerged that genetic deletion (or neutralization) of Tie-2 importantly impaired osteoclastogenesis in an embryonic stem cell model. Conversely, deletion of Tie-2 has no effect on osteoblastogenesis. Neutralization of Tie-2 activity in vivo significantly inhibited osteolytic bone invasion and tumour development in a mammary tumour model, which correlates with a reduction of osteoclasts and tumour angiogenesis. Importantly, Tie-2 was also identified as a therapeutic target for controlling tumour angiogenesis as well as the osteolytic bone metastasis of breast cancer [33].
Clinical Approach of Antiangiogenic Therapies
Breast cancer is paramount in terms of specific targets for cancer therapy. A number of novelties have emerged since the promising introduction of trastuzumab, an anti-human epidermal growth factor receptor 2 (HER2) antibody used worldwide. Currently, there are different options for specific therapies; these include monoclonal antibodies such as pertuzumab that bind to receptors on the cell surface, and tyrosine kinase inhibitors such as lapatinib, which target intracellular pathways such as that of the epidermal growth factor receptor. Combinations of different targets have become common. These comprise the monoclonal antibody bevacizumab, which blocks the activity of VEGF, and multitargeted tyrosine kinase inhibitors with anti-angiogenic and antiproliferative activities, for instance sunitinib. The combination of antibodies targeting both the HER family and angiogenic pathways (e.g., trastuzumab plus bevacizumab) is also valuable in the clinical setting [34]. Despite hopeful evidence, a judicious evaluation before indicating a specific treatment should be made, because most breast cancers have a particular phenotype that can preclude the efficiency of monoclonal therapy. Bevacizumab, for example, is highly recommended for triple-negative, highly proliferative tumours with enhanced angiogenesis that supports rapid growth and early metastases, which have been found to have high levels of VEGF [35]. However, breast cancer MVD assessed by endoglin (CD105) does not help to indicate the histopathologic phenotype prone to be referred for anti-angiogenic therapy [36]. Interestingly, endoglin (CD105), a co-receptor in the TGF-beta receptor complex, is believed to be a useful target for anti-angiogenic therapy. An endoglin vaccine activates antigen-presenting dendritic cells, mediated by CD8+ T cells, against endoglin-positive target cells. A curative vaccine may contribute to breast cancer therapy [35]. Similarly, a DNA vaccine against the murine transcription factor Fos-related antigen 1, which is overexpressed in the aggressively proliferating D2F2 murine breast carcinoma, has been used against breast cancer development and metastatic progression, combining the action of immune effector cells with suppression of tumour angiogenesis [37]. The use of minigene vaccines has grown and is competing with monoclonal antibody therapies. Recently, a VEGFR-2 vaccine was successfully tested in an animal model. An oral minigene DNA vaccine against murine vascular endothelial growth factor receptor-2 (FLK-1), the most important receptor in angiogenesis, protected against tumours of different origins in syngeneic BALB/c mice. Notably, the minigene vaccine has similar efficacy to a vaccine encoding the whole FLK-1 gene [38].
The translational approach between experimental and clinical treatment based on some innovative anticancer therapies requires robust evidence of cancer reduction and metastasis prevention without major drug toxicities. Basically, breast cancer is a heterogeneous disease with different molecular players of the cell-matrix cross-talk that regulate growth, survival and, consequently, response to therapy. Therefore, the management of this tumour involves an ample comprehension of breast cancer heterogeneity and of the biological nature of any given tumour, as well as the existence of better personalized management options [38] that take into account mechanisms of inherent/acquired resistance to cancer treatment.
Potentially, all promoters of angiogenesis could be blocked by a specific anti-angiogenic therapy. Hypoxia-inducible factor-1 production, for example, which leads to augmented VEGF transcription, is a primordial target for avoiding angiogenesis. However, the blockade of promoter factors involves complex and redundant mechanisms of angiogenesis that are difficult to fully understand. Moreover, hypoxia can also induce the overexpression of other pro-angiogenic molecules such as nitric oxide synthase, platelet-derived growth factor (PDGF), transforming growth factors alpha and beta, basic fibroblast growth factor (bFGF), and a class of protein growth factors called the angiopoietins [11]. Blockade could also be considered for specific receptors of angiogenesis stimulation. Most of the recognized receptors are receptor tyrosine kinases (RTKs) present on the surface of different types of cells. These include the PDGF receptors (PDGFRα and PDGFRβ), the VEGF receptors (VEGFR1, VEGFR2 and VEGFR3), the stem cell factor receptor (KIT), Fms-like tyrosine kinase 3 (FLT3), colony stimulating factor receptor type 1 (CSF-1R) and the glial cell-line-derived neurotrophic factor receptor RET [11]. Table 1 summarizes the principal inhibitors of angiogenesis currently in use.
|
Reputation effect of the moral hazard on contract farming market development: Game theory application on rice farmers in Benin
A good reputation is the basis for rice farmers to survive and gain trust from buyers in a competitive business environment. However, due to the existence of information asymmetry between buyers and rice farmers, the moral hazard problem is the key obstacle that impedes the benefits of related stakeholders and hinders the efficiency of contract farming negotiations. It is crucial to design a control mechanism to avoid the negative impact of the moral hazard. This paper studies the principal-agent relationship between rice farmers and buyers in contract farming negotiation. Because of the influence of information asymmetry, many buyers have in practice suffered from being cheated by rice farmers who fail to comply with the terms of the contract or provide fraudulent products. These frequent cases deteriorate any long-term relationship between rice farmers and buyers. The study focuses on the analysis of the causes of moral risks and the effect of reputation on moral risk, utilizing repeated game theory. The purpose of this paper is to help both rice farmers and buyers effectively avoid moral hazards and achieve a win-win situation in contract farming negotiation. The results show that the rice farmer in contract farming practice has an incentive to maintain his reputation in order to gain more profits in the future. That also accounts for why the rice farmer will invest more to improve the level of customer service, caring about product quality and the comments of customers whose contracts have been completed, in order to keep a longer farmer-buyer relationship. Because the rice farmer in contract farming practice has an incentive to maintain his reputation to gain more profits in the future, contract farming can be developed with great success in Benin.
Customs duties when transporting agricultural products to the market and the payment of market taxes are factors that influence the profitability of production. To address this situation, producers could use contract farming (Arouna et al., 2015). Contract farming is seen as a potential solution to overcome agricultural production constraints for resource-poor farmers (Arouna et al., 2017). Nevertheless, for a long time there has been one serious problem impeding the development of contract farming, that is, the lack of trust between farmers and buyers. There are many factors that influence the relationship between farmers and buyers in contract farming practice. One of them is the moral hazard, which refers to the egoistic behaviors of farmers after making a deal with the buyers. Buyers do not have any assurance that the contract will be honoured. Moreover, the insurance process is not well developed in the agricultural sector in developing countries, particularly in Benin, where buyers depend on farmers and usually forgo the common-sense step of taking precautionary measures.
In contract farming negotiation, buyers and farmers have an incentive to take part in social contracts to build up the volumes exchanged and to lessen the uncertainty that increases transaction costs, which in turn decreases investment in value-added assets (Bezabeh Ali, 2018). This is most obvious among firms providing extension services and farm input supply to farmers (Anim, 2010). Farmers who violate the agreement and deliberately cheat are the root cause of the moral hazard. The underlying reason for the moral hazard is information asymmetry, which means that the rice farmers have more information about the quality and cost of the rice, while buyers know less. In the practice of contract farming, rice farmers will usually exploit their knowledge of product quality, production and transportation costs, and so on, to take advantage of buyers. There are two types of information asymmetry: the first is adverse selection, which occurs before the deal between buyer and farmers is made, whereas the other is the moral hazard, which happens after the deal. This paper will focus on defining the problem of the moral hazard between rice farmers and buyers in contract farming practice and on a potential solution to the problem. One popular way is to introduce the concept of establishing a corporate reputation to track the past behavior of the rice farmers. A corporate reputation is an overall evaluation that reflects the extent to which people see the farm as substantially "good" or "bad" (Dowling, 2004). A good reputation is valuable because it can enhance trust and confidence so that the buyer feels that it is safe to buy products and services from this farmer. This outcome can also benefit the farmers in their markets, and various studies have shown that farmers with good reputations are better able to attain and sustain superior profits over time.
The primary research question in this paper examines the expected profits of the farmers and the buyers, which depend on two factors. One is the type of the farmer, and the other is the reputation of the farmer with the buyer. For example, does the farmer always benefit from cheating or not? To answer this research question, we will examine contract farming practice where the reputation mechanism exists and check its influential mechanism. In this paper, we will set up a reputation model of the farmer in contract farming practice. We first characterize the situation in which the type of the farmer is not common knowledge and then demonstrate that, even though cheating has a direct benefit to the farmer, it can sometimes hurt the farmer, the buyer, or both if the contract continues in the long run. Furthermore, we show the impact of reputation. In addition, we illustrate that the farmer will always choose to be honest when the mechanism of reputation works. In a typical game-theoretic view of the relationship between farmer and buyer, each player acts in order to maximize his own profit (rational player) without taking into account the overall optimal relationship. Thus, an incentive is offered to influence the behavior of the other player. One such incentive is reputation.
LITERATURE REVIEW
In contract farming, the buyer and farmers commit in advance to exchange the product. In addition, the buyer can provide credit, inputs, or monitoring, or be directly involved in part of the production process. Contract farming has been claimed to have a positive impact on local economies by improving the welfare of rural households, but the relationship between farmers and buyer can break down (Arouna et al., 2017).
Without direct observability of possible frauds by farmers, reputation mechanisms and the activation of bilateral sanctions by individual partners have little chance of deterring such abuses (Mazé, 2009). As a potential motivation, reputation could encourage the farmer to improve the quality of his practice during the contract process. Since the time of Adam Smith, reputation has been considered a very important mechanism to ensure the implementation of a business contract, but only recently has it been widely studied in combination with game theory (He and Sommer, 2006). In management practice, motivation by reputation is also very popular and has brought new management thinking to the creation and maintenance of a good reputation. The farmer who cares about his reputation will be responsible for his behavior, even when there is no explicit motivational contract. Farmers will work hard to increase their level of reputation, hoping to gain more in the future.
Some researchers have pointed out the important effect of reputation on incentive mechanisms and have begun to combine the farmer's reputation and incentives into a complete model (Cai and Weng, 2014).
According to Watanabe et al. (2017), agreements in contract farming may be ensured by trust- and reputation-based social norms that provide self-enforcement, leading to the desired behavior. Such research points to the idea that reputation in the agricultural market could be used as a replacement for an explicit contract.
Reputation was first introduced by Fama (1980). Following this, Kreps, Milgrom, Roberts, and Wilson established the KMRW reputation model based on the repeated game (Kreps and Wilson, 1982; Milgrom and Roberts, 1982). When both parties in the game care only about immediate benefits, cooperation does not arise, because it is not beneficial for either party. In the setting of the repeated game, reputation provides implicit motivation for contracts: a player is willing to give up short-term benefits to reach the cooperative equilibrium. Zheng (2013) and Lyu et al. (2016) also prove that, when the payoff of one player is not known by the other, this player has an incentive to build a good reputation in exchange for long-run profits.
Thus, we specifically develop a model to investigate the effect of reputation on the profit of the rice farmers.
CONSTRUCTION OF THE MODEL
Within the context of a repeated game, we consider a market in which both the farmers and the buyers are players, which is quite common in real practice. There are two possible types of farmers: with probability p the farmer has a respectable reputation, and with probability 1 − p his reputation is immoral. The selling price of the rice is Ps and the unit cost is C; the value of the rice to the buyer is denoted as Vb, with Vb > Ps; otherwise, the buyer does not have the incentive to buy the product (rice). Moreover, there are two actions available to the farmer, whatever his type: to deal honestly or to deal dishonestly.
The costs for a rice farmer with a respectable or an immoral reputation to act honestly or dishonestly are designated as follows: CHR and CDR, CHI and CDI. "H" denotes the rice farmer who chooses to be honest while "D" denotes the rice farmer who chooses to be dishonest. "R" denotes the type of rice farmer who is respectable, while "I" denotes the type of rice farmer who is immoral. The rice farmer of low reputation will have more management costs and more future risk; additionally, the rice farmer with an immoral reputation is more familiar with cheating the buyer. Therefore, CHI > CHR and CDI < CDR. The information asymmetry in contract farming is reflected by the fact that the rice farmer knows his own type, while the buyer lacks this knowledge. As shown in Figure 1, if the rice farmer with a respectable reputation chooses to be honest, and the buyer believes that the rice farmer will not cheat him, the buyer decides to make a deal. The revenue of the rice farmer is then Ps − C − CHR, and the revenue of the buyer is Vb − Ps. If the buyer believes that the rice farmer is cheating him and decides not to make a deal, the rice farmer with a respectable reputation suffers the loss −CHR. Similarly, Figure 2 gives the payoffs of the buyer and the rice farmer when the type of the rice farmer is immoral.
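To make the payoff structure concrete, here is a minimal Python sketch; the numerical values, and the assumption that a buyer who deals with a dishonest farmer loses the price Ps, are our own illustration rather than the paper's figures:

```python
# Hypothetical one-shot payoffs in the farmer-buyer contract game.
Ps, C, Vb = 10.0, 4.0, 14.0          # selling price, unit cost, buyer's value
# Action costs per farmer type, chosen so that CHI > CHR and CDI < CDR.
cost = {("R", "H"): 1.0, ("R", "D"): 3.0,    # respectable type
        ("I", "H"): 2.0, ("I", "D"): 1.5}    # immoral type

def payoffs(farmer_type, action, buyer_deals):
    """Return (farmer payoff, buyer payoff) for a single round."""
    c = cost[(farmer_type, action)]
    if not buyer_deals:
        return -c, 0.0                        # farmer still bears his action cost
    farmer = Ps - C - c
    buyer = (Vb - Ps) if action == "H" else -Ps   # assumed loss when cheated
    return farmer, buyer

print(payoffs("R", "H", True))   # (5.0, 4.0): honest deal, both gain
```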
Assumption 2: Suppose the unit value of the product provided by the farmer over the contracting period is T, which is a function of the rice farmer's service level λ, the rice farmer's real strength θ, and the market uncertainty μ, so we have T = λ + θ + μ, where λ is the private information of the rice farmer and T is common knowledge of both the rice farmer and the buyer; θ and μ follow normal distributions with means equal to 0 and variances equal to σθ² and σμ², respectively.
Assumption 3: If the number of contracts the buyer makes with the rice farmer is held at a constant φ, then the profit of the buyer is φ(T − β), where β is the per-contract commission paid to the farmer.
Assumption 4: The sequence is as follows: first, the buyer will decide how many times to contract with this farmer, then the rice farmer will decide the deal level.
The rice farmer profits mainly from the commission on the φ purchases, i.e. his commission income is βφ. The cost of the service provided by the rice farmer is c(λ), with c′(λ) > 0 and c″(λ) > 0; specifically, c(λ) = bλ²/2, so the income of the rice farmer is βφ − c(λ).
MODEL ANALYSIS
The introduction of the deal level of a rice farmer aims to diminish the risk of the buyer, to protect the benefits of the buyer, and to guarantee the efficiency of the contract market; the optimal deal level is the one that maximizes total profits in the contract farming market. Since the buyer first chooses how many contracts to sign with a specific rice farmer, and the rice farmer then decides the deal level, the rice farmer will act as follows. In a one-shot contract, when the farmer knows that the probability of signing another contract with the buyer is low, he will choose dishonesty to maximize his own profit, regardless of whether he is the respectable or the immoral type. Anticipating this, the buyer will not make a deal with the rice farmer, and the contract market unravels. Nevertheless, in the case of repeated contracting, where the rice farmer signs contracts with the same buyer, the buyer decides based on past contract experience. As the repeated game changes the restriction mechanisms, the payoffs of both parties differ from the one-shot case, and a new equilibrium exists.
In the first contracting round, when the buyer believes with probability P1 that the rice farmer has a respectable reputation, the expected payoff of the buyer is E(πb) = P1(Vb − Ps) + (1 − P1)(−Ps) = P1·Vb − Ps, so only when P1 ≥ Ps/Vb will the buyer enter the contract and consider a next contracting round. If the rice farmer has an immoral reputation and cheats the first time, his first-round payoff is as high as Ps − CDI, yet this also lets the buyer identify the type of the rice farmer; if, in the next contracting round, the rice farmer chooses to be honest after considering the behavior of the buyer, his payoff in that round is −CHI, since the buyer no longer deals with him. The total payoff of the rice farmer is then X1 = (Ps − CDI)(1 + Z) + (−CHI), with Z the weight given to future rounds. Consider instead the case where the farmer of immoral reputation first hides his type to gain the credibility of the buyer, in order to garner more profits in the following contracting rounds; the strategy of the buyer is then (Contract deal, Contract deal), and the corresponding total payoff of the rice farmer is denoted X2. For the rice farmer to choose not to cheat at the first contract we need X2 ≥ X1, which yields the threshold value of Z above which a rice farmer with an immoral reputation decides to be honest. The corresponding threshold value for the rice farmer with a respectable reputation is obtained in the same way.
From the assumption that CDR − CHR > CDI − CHI, we can conclude that the honesty threshold of the respectable type is lower than that of the immoral type. Hence, as long as Z is at least the larger of the two thresholds, whatever his type, the rice farmer will choose to be honest in order to gain long-term profit.
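The threshold logic can be illustrated with a simplified two-round sketch in Python; the payoff expressions below are a stylized version of X1 and X2 (the production cost C is omitted, as in the text), and all numbers are hypothetical:

```python
# Immoral-type farmer: cheat once and lose the relationship, or stay honest.
Ps, CDI, CHI = 10.0, 1.5, 2.0   # price, cost of dishonesty, cost of honesty

def cheat_once(Z):
    """Cheat in round 1, then be excluded: no future payoff."""
    return (Ps - CDI) + Z * 0.0

def stay_honest(Z):
    """Honest in both rounds, the second weighted by Z."""
    return (Ps - CHI) * (1 + Z)

# Honesty dominates once Z >= (CHI - CDI) / (Ps - CHI) = 0.0625 here.
for Z in (0.0, 0.05, 0.1, 0.5, 0.9):
    print(Z, stay_honest(Z) >= cheat_once(Z))
```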
DISCUSSION
The study inferred that the rice farmer in contract farming practice has an incentive to maintain his reputation in order to gain more profits in the future. This also explains why the rice farmer will invest more to improve the level of service offered to the customer, caring about the quality of the product and the feedback of customers at the end of a contract, in order to keep a longer farmer-buyer relationship. Whether a farmer gets to continue contractual rice production and marketing relations depends on his attitude and reputation: bad behavior creates a bad reputation and affects the survival of the contractual relationship. These results confirm the study of Bartling et al. (2008). The authors explore how an agent's record, that is, his performance with other principals in the past, affects the actual and optimal design of contracts in one-shot interactions, and show that information about past behavior can have a crucial effect on optimal contract design.
Jackson and Kalai (1998), in the study titled "False reputation in a society of players", conclude that agents can observe the play in all previous periods. This means that behavior in a previous relationship is determinative for future decisions and for the preservation of trust. Kim and Park (2013) concluded in their study that only a good reputation can win the trust of buyers. According to these two authors, trust has significant effects on purchase and word-of-mouth intentions and depends on the reputation of agricultural companies. The rice farmer in contract farming practice thus has an incentive to maintain his reputation to preserve the trust of the buyers.
Conclusion
As long as there is no well-designed evaluation system targeted at the contract farming market, the problem of moral hazard cannot be avoided or resolved. Integrity between trade partners is the basis of contracting, so it is necessary to appeal to all partners participating in contract farming (buyers, rice farmers, and the government) to work methodically to develop an evaluation system based on reputation, to connect the profits of farmers with their reputations, and to increase the cost of irregular actions in the contract farming market. The rice farmer in contract farming practice has an incentive to maintain his reputation in order to gain more profits in the future, and this means that contract farming can be developed in Benin with great success.
Figure 1. The payoffs of the rice farmer with the respectable reputation R.
Figure 2. The payoffs of the rice farmer with the immoral reputation I.
|
2019-05-07T13:41:01.519Z
|
2019-03-28T00:00:00.000
|
{
"year": 2019,
"sha1": "d70009db930222739a2e98af09e92c56841174fe",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/C1E94ED60545.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d70009db930222739a2e98af09e92c56841174fe",
"s2fieldsofstudy": [
"Economics",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
}
|
106303324
|
pes2o/s2orc
|
v3-fos-license
|
Assessing the impact of immersing teeth in fresh orange juice and commercial orange juice on enamel hardness: an in vitro study
The pH of orange juice is below the critical pH of enamel. Lately, consumption of orange juice has increased considerably because of its commercial availability. This study investigates the impact of immersion time and juice type, fresh versus packaged, on enamel hardness. Sixty premolars were immersed in fresh orange juice (n = 30) or commercial juice (n = 30) for 30 and 60 min, and the obtained data were assessed using repeated-measures analysis of variance, the Friedman test, the Mann-Whitney U-test, and the independent t-test. Enamel hardness decreased (P < 0.05) at every assessed time, and immersion in commercial juice decreased enamel hardness significantly more (P < 0.05). This study infers that enamel hardness is adversely affected by the pH, citric acid concentration, and immersion time of the orange juice.
Introduction
Hydroxyapatite is a naturally occurring mineral component in the enamel, dentine, and cementum. Apparently, minerals in the tooth structure are affected by the stability of the oral environment, especially the pH of the mouth. Hydroxyapatite has a critical pH, and any decrease in the pH below 5.5 increases the dissolution rate of hydroxyapatite. In addition, the pH of the mouth below 5.5 results in a progressive interaction between the acid ions and the phosphate group of hydroxyapatite, causing the crystal on the tooth surface to dissolve partially or completely. Demineralization is the loss of tooth mineral, which can be caused by acids that are either affected or not affected by bacteria [1].
Acids that are affected by bacteria are released from easily fermented materials such as monosaccharides and disaccharides. When easily fermented ingredients accumulate and interact with bacteria, such as Streptococcus mutans, in the oral cavity, a fermentation process occurs that produces an organic acid, and this organic acid can cause demineralization of teeth. In contrast, acids that are not affected by bacteria come from acidic food or drinks such as carbonated beverages and fruit juices. In this case bacteria play no direct role in demineralization; the environment on the tooth surface becomes acidic regardless of whether cariogenic bacteria are present. Thus, high consumption of acidic beverages or food increases the concentration and strength of acid ions on tooth surfaces, which can accelerate the demineralization process. Besides food and beverages, acidic substances that are not affected by bacteria can come from stomach acid in the case of digestive disorders that cause vomiting [1]. Reportedly, demineralization adversely affects tooth enamel hardness; in fact, enamel hardness decreases as the demineralization process increases [2]. Lately, consumption of commercial or packaged fruit juices in Indonesia has reached 30% [3], implying an equally higher production of packaged fruit juices. Notably, the label of packaged fruit juices states the citric acid composition of the juice. Exposure of hydroxyapatite to citric acid causes an ion exchange between hydroxyapatite and citric acid, releasing calcium and phosphate ions from hydroxyapatite and thus accounting for demineralization [4].
This study aims to investigate the impact of immersion time and juice type, both fresh and packaged, on enamel hardness. We believe the results of this experiment will be crucial in raising public awareness regarding the consumption of both packaged and fresh fruit juices and their impact on oral health.
Methods
This study was conducted at the Dental Material Laboratory, Faculty of Dentistry, Universitas Indonesia, and the Metal Laboratory of the Bandung Institute of Technology, from October 2013 to November 2013. The study materials and tools included packaged orange juice (Buavita brand), fresh orange juice squeezed from Pontianak oranges (Citrus nobilis var. microcarpa), aquadest (distilled water), 60 adult premolar teeth, decorative resin, hardener, Vaseline, Carborundum disc, micromotor, low-speed handpiece, mold, grinding and polishing machine, sandpaper number 1500, velvet, 1-µ alumina, Knoop hardness tester, preparation glass, plasticine, press tool, digital pH meter, medicine pots, and cotton.
The specimens were prepared from 60 adult premolar teeth with completely formed roots; each tooth was marked at the cervical portion to facilitate separation of the crown and root with a low-speed micromotor and Carborundum disc. Crowns were placed at the base and center of a mold made from a pralon pipe (height, 20 mm; diameter, 15 mm) that was coated with Vaseline; the bottom was covered with a sticker, with the buccal surface facing the base of the mold and attached to the sticker. Then, the mixture of decorative resin and hardener was poured into the mold and left for approximately 30 min to solidify. After hardening, the specimen was removed from the mold, and the part of the specimen containing the tooth was ground in the grinding and polishing machine with sandpaper number 1500 until the tooth was exposed; grinding did not exceed 1 mm. The ground specimens were then polished in the same machine with velvet and 1-µ alumina polishing material for 30 min. All specimens were inspected under the Knoop hardness tester lens and re-polished in case of scratches. Non-scratched specimens were placed into medicine pots, and the polished parts were covered with cotton so that they did not clash with the pot wall when the pot was shaken. Finally, specimens were randomly divided into two equally large groups.
The hardness of all specimens was measured before treatment to obtain initial hardness values. The Knoop hardness tester was set to an indenting load of 100 g for 10 s. Each specimen was fixed onto a preparation glass with plasticine and then pressed on the press tool. Next, the specimen was placed under the lens of the Knoop hardness tester, and the focus was adjusted until the specimen surface was clearly visible. The indent button was pressed so that the specimen was indented with the diamond penetrator for 10 s. After the 10-s indentation, the impression was visible through the lens in the form of a rhombus; Figure 1 shows the black line visible on the slide. By pressing a button on the Knoop hardness tester, we obtained the hardness value as a Knoop hardness number. Indentation was performed three times for each measurement, and the average of the three values was recorded.
A digital pH meter was used to measure the pH and temperature of both types of orange juices used in this study. All specimens were immersed in each orange juice for 30 min, following which specimens were removed from the juice and rinsed with running water and dried with tissue paper. Then, the hardness value was measured. After that, each specimen was again immersed in orange juice for 30 min, and its hardness was measured until it reached the total time of immersion of 60 min.
The data were tested for normality using the Kolmogorov-Smirnov method. The normality of the difference (decline) data was also tested; difference scores were computed to assess the magnitude of enamel degradation at each measurement time. After determining the data distribution, the independent t-test was performed to assess the significance of the difference in initial enamel hardness between the two treatment groups. The decline in enamel hardness after immersion in packaged or fresh juice was tested with repeated-measures analysis of variance (ANOVA) and the Friedman test. Then, the enamel hardness after immersion in packaged orange juice and in fresh orange juice was compared using the independent t-test and the Mann-Whitney U-test. Finally, the significance of the difference in the decrease in enamel hardness between the two treatment groups was assessed using the independent t-test. The p-value was set at 0.05 with a 95% confidence interval.

Results

Table 1 summarizes the mean values obtained before immersion and after 30 and 60 min of immersion, with standard deviations and confidence intervals (Figure 1). The data of all groups were normally distributed, except for the 60-min immersion in fresh orange juice. We subtracted the results of the 30-min immersion from the hardness before immersion and from that of the 60-min immersion to obtain the magnitude of the decline. These decline data, tested for normality, exhibited a normal distribution in all groups. The first analysis, assessing the significance of the initial hardness of the freshly squeezed orange juice group and the packaged orange juice group using the independent t-test, revealed no statistical significance (P = 0.503), implying that the initial hardness of both groups was the same. Next, the significance of enamel hardness discrepancies before, 30 min, and 60 min after immersion was assessed using repeated ANOVA for the packaged orange juice group and the Friedman test for the fresh orange juice group. The results revealed a statistical difference (P < 0.05) in all data groups, suggesting that more prolonged immersion in orange juice significantly decreased the enamel hardness.
In addition, the independent t-test was performed to compare enamel hardness between the groups 30 min after immersion, and the Mann-Whitney U-test 60 min after immersion. The results revealed a significant difference between both groups at all measurement times, with a significantly greater decline in the enamel hardness of specimens immersed in packaged orange juice (P < 0.05) compared with those immersed in fresh orange juice. Furthermore, the independent t-test was performed to assess the significance of the difference in the decrease of enamel hardness between teeth immersed in packaged orange juice and in fresh juice, which revealed a significant difference (P < 0.05) between both groups. Thus, it can be concluded that the decline in enamel hardness is higher with packaged orange juice than with fresh orange juice.
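For concreteness, the reported test battery can be sketched in Python with scipy; the measurements below are simulated placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated Knoop hardness of 30 specimens per group at 0, 30 and 60 min.
fresh = rng.normal([320, 300, 285], 15, size=(30, 3))
packaged = rng.normal([320, 280, 255], 15, size=(30, 3))

# Normality check (the study used Kolmogorov-Smirnov).
z = (fresh[:, 2] - fresh[:, 2].mean()) / fresh[:, 2].std(ddof=1)
print(stats.kstest(z, "norm"))

# Within-group decline over time (non-parametric alternative to repeated ANOVA).
print(stats.friedmanchisquare(fresh[:, 0], fresh[:, 1], fresh[:, 2]))

# Between-group comparisons at fixed times.
print(stats.ttest_ind(fresh[:, 1], packaged[:, 1]))      # 30 min, parametric
print(stats.mannwhitneyu(fresh[:, 2], packaged[:, 2]))   # 60 min, non-parametric
```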
Discussion
This study investigated the impact of packaged and fresh orange juice on the decline in enamel hardness. In this study, all specimens were ground, so the original outer surface of the tooth was no longer exposed; this is acceptable because demineralization can occur beneath the tooth surface in clinical conditions [5]. The 60-min immersion time was chosen because it releases an amount of mineral that can be assessed with a profilometer [6]. Since some previous studies have also selected immersion times of 15-60 min, the 60-min immersion is acceptable [5,7]. In addition, the 60-min immersion time simulates 1 month of juice consumption with a 2-min drinking assumption, per Stephan [8], who asked his participants to rinse their oral cavity with a glucose solution for 2 min.
Apparently, enamel hardness decreases with time. In this study, the first treatment group, which was immersed in fresh orange juice, exhibited a significant difference in enamel hardness and a significant decline in enamel hardness at each measurement time. The results were repeated in the second treatment group as well, implying that the longer the teeth are exposed to acids, the higher the decline in enamel hardness.
A study reported that consuming soft drinks might cause demineralization and decrease enamel hardness, also known as tooth erosion [9], which is the loss of minerals from the tooth because of dissolution by acids [10]. In this study, the pH of fresh orange juice was 4.34 and that of packaged orange juice was 3.78. Thus, the pH of both juices was below the critical pH of 5.5, rendering them capable of causing demineralization [1]. Besides the pH of an acidic substance, several other factors, such as buffer capacity, exposure frequency, duration of exposure, amount, and acidity of the substance, affect demineralization [11]. In contrast, the remineralization process in the oral cavity can be induced by two factors: an increase in the pH of the oral cavity and the availability of Ca2+ and phosphate ions. The increase in pH in the mouth is affected by the salivary buffer capacity; after the pH increases, salivary ions can replace the dissolved acid-soluble minerals [1]. Acids can be classified into extrinsic and intrinsic acids. While stomach acid due to vomiting, bulimia, and anorexia is intrinsic, acidic food and drinks constitute extrinsic acids [1]. Reportedly, acidic drinks might stimulate demineralization, acting as a factor that causes dental caries when demineralization is more dominant than remineralization [12]. Benjakul et al. reported the highest decline in enamel hardness in the enamel immersed for the longest time in an acidic soup from Thailand (Kangsom) [13]. Thus, it can be inferred that the consumption of orange juice in large quantities and over an extended time can cause demineralization and increase the risk of caries.
Both fresh and packaged orange juice contain citric acid in different amounts: fresh orange juice contains 9.6 g/L, while packaged orange juice contains 16.8 g/L [14]. Orange juice is typical of drinks containing citric acid, and several studies have reported the demineralization potential of beverages containing citric acid. Scaramucci et al. [15] reported that enamel immersed in a 1% citric acid solution at pH 3.8 exhibited the highest decline in hardness compared with enamel immersed in artificial juice and some other brands of juice [15]. In addition, Barbour et al. reported that teeth immersed in a solution containing citric acid had a significantly greater decrease in enamel hardness than teeth immersed in a solution without citric acid, showing that citric acid plays an essential role in decreasing tooth enamel hardness [16].
The concentration of citric acid in a solution affects the magnitude of mineral loss from the tooth structures (enamel, dentine, and cementum). Shellis et al. [17] reported significant differences in the solubility rate of enamel between specimens immersed in 1% and 0.3% citric acid solutions at pH 2.45 and 3.2, where the solubility of enamel specimens immersed in the 1% citric acid solution was higher than that of those immersed in the 0.3% solution. At pH 3.9 they observed an insignificant difference between the 1% and 0.3% solutions, but the enamel solubility was still higher in specimens immersed in the 1% citric acid solution [17]. Furthermore, Misra [4] reported the following ion exchange when hydroxyapatite mineral is immersed in a citric acid solution: HCit2− + 2H+ (from solution to surface) ⇌ H2PO4− + HPO42− + 1.5Ca2+ (from surface to solution). This ion exchange demonstrates that the tooth surface loses phosphate and calcium ions through the interaction with citric acid, and this loss reflects demineralization of the teeth. Thus, enamel hardness decreases as the demineralization process increases [2], suggesting that citric acid can cause a decline in enamel hardness.
As mentioned earlier, higher exposure of the teeth enamel to acidic substances results in higher loss of minerals and, thus, lesser hardness. Thus, teeth immersed in packaged orange juice exhibited a high decline in enamel hardness compared with those immersed in fresh orange juice because packaged orange juice is more acidic and has a higher citric acid content than fresh orange juice.
Conclusion
The decline in tooth enamel hardness increases with the duration of immersion in orange juice. The highest decrease in enamel hardness occurs in tooth specimens immersed in packaged orange juice because it has a lower pH and higher citric acid content. For further research, artificial saliva could be used to approach the clinical state of the oral cavity.
|
2019-04-10T13:12:02.343Z
|
2018-08-01T00:00:00.000
|
{
"year": 2018,
"sha1": "41fb917a18c2a7b3263b381f55732667622ffad4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1073/3/032018",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "ea12b69b82a324e308df4847d9f9d2061afb676f",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
}
|
257353882
|
pes2o/s2orc
|
v3-fos-license
|
A Higher-Order Language for Markov Kernels and Linear Operators
Much work has been done to give semantics to probabilistic programming languages. In recent years, most of the semantics used to reason about probabilistic programs fall into two categories: semantics based on Markov kernels and semantics based on linear operators. Both styles of semantics have found numerous applications in reasoning about probabilistic programs, but each has its strengths and weaknesses. Though it is believed that there is a connection between them, there are no languages that can handle both styles of programming. In this work we address these questions by defining a two-level calculus and its categorical semantics, which make it possible to program with both kinds of semantics. From the logical side of things, we see this language as an alternative resource interpretation of linear logic, where the resource being kept track of is sampling instead of variable use.
Introduction
Probabilistic primitives have been a standard feature of programming languages since the 70s. At first, randomness was mostly used to program so-called randomized algorithms, i.e. algorithms that require access to a source of randomness. Recently, however, with the rise of computational statistics and machine learning, randomness is also used to program statistical models and inference algorithms.
Programming languages researchers have seen this rise in interest as an opportunity to further study the interaction of probability and programming languages, establishing it as an active subfield within the PL community.
One of the main goals of this subfield is giving semantics to programming languages that are expressive both in the regular PL sense and in their ability to program with randomness. One particular difficulty is that the mathematical machinery used for probability theory, i.e. measure theory, does not interact well with higher-order functions [2].
Currently, there are two classes of models of probabilistic programming, in its broad sense, that have found numerous applications: models based on linear logic and models based on Markov kernels. Since each kind of semantics has peculiarities that make it more or less adequate for giving semantics to expressive programming languages, it is an important theoretical question to understand how these classes of models are related.
Linear Logic for Probabilistic Semantics The models of linear logic that have been used to give semantics to probabilistic languages are usually based on categories of vector spaces where programs are denoted by linear operators. We highlight two of them:
- Ehrhard et al. [11,10,9] have defined models of linear logic with probabilistic primitives and have used the translation of intuitionistic logic into linear logic, A → B = !A ⊸ B, where !A is the exponential modality, to give semantics to a stochastic λ-calculus.
- Dahlqvist and Kozen [8] have defined an imperative, higher-order, linear probabilistic language and added a type constructor ! to accommodate non-linear programs.
The main advantage of models based on linear logic is that programs are denoted by linear operators between spaces of distributions, a formalism that has been extensively used to reason about stochastic processes, as illustrated by Dahlqvist and Kozen, who have used results from ergodic theory to reason about a Gibbs sampling algorithm written in their language, and by Clerc et al., who have shown how Bayesian inference can be given semantics using adjoints of linear operators [7].
Unfortunately, these insights are hard to realize in practice, since languages based on linear logic enforce that variables must be used exactly once, making it hard to use it as a programming language. The usual way linear logic deals with this limitation is through the ! modality which allows variables to be reused.
The problem with the exponential modality, when it comes to probabilistic programming, is that it is usually difficult to construct and has no clear interpretation in terms of probability, making the linear operator formalism inapplicable; more operationally, through its connection with call-by-name (CBN) semantics [18], it makes it mathematically hard to reuse sampled values.
Ehrhard et al. have found a way around this problem by introducing a call-by-value (CBV) let operator that allows samples to be reused [11,24]. In the discrete case this operator is elegantly defined by a categorical argument that is not known to scale to the continuous case, which they handle with an ad-hoc construction that may not generalize to other models of linear logic. Therefore, our current understanding of models of linear logic does not provide a uniform way of reusing samples.
The difference between CBV and CBN can be illustrated by the program let x = coin in x + x, where coin is a primitive that outputs 0 or 1 with equal probability. In the CBN semantics each use of x corresponds to a new sample from coin, whereas in the CBV semantics the coin is only sampled once.
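The difference can be made concrete with a small Python sketch, encoding finite distributions as dictionaries (the encoding is ours, for illustration only):

```python
coin = {0: 0.5, 1: 0.5}   # a fair coin as a finite distribution

def bind(dist, k):
    """Sample a value from dist and feed it to the continuation k."""
    out = {}
    for v, p in dist.items():
        for w, q in k(v).items():
            out[w] = out.get(w, 0.0) + p * q
    return out

# Call-by-value: the coin is sampled once and x is reused.
cbv = bind(coin, lambda x: {x + x: 1.0})
print(cbv)   # {0: 0.5, 2: 0.5}

# Call-by-name: each occurrence of x is a fresh sample.
cbn = bind(coin, lambda x: bind(coin, lambda y: {x + y: 1.0}))
print(cbn)   # {0: 0.25, 1: 0.5, 2: 0.25}
```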
A subtler problem of probabilistic models based on linear logic is that they are ill-equipped to program with joint distributions. For instance, the language proposed by Ehrhard et al. can easily be extended with product types which, under their semantics, would make the type R × R be interpreted as MR × MR, where MR is the set of distributions over R; this is isomorphic to the set of independent distributions over R^2. Dahlqvist and Kozen deal with this issue by adding primitive types R^n to their language, interpreted as the set of joint distributions over R^n. However, since these types are not defined using the type constructors provided by the semantic domain, programs of type R^n can only be manipulated by primitives defined outside the language.
Markov Kernel Semantics Markov kernels are a generalization of transition matrices, i.e. functions that map states to probability distributions over them. They are appealing from a programming languages perspective because their programming model is usually captured by monads and Kleisli arrows, a common abstraction in programming languages semantics, and have been extensively used to reason about probabilistic programs [1,22,3]. By being related to monadic programming they differ from their linear operator counterpart by being able to naturally capture a call-by-value semantics which, as we argued above, is the most natural one for probabilistic programming.
Unfortunately, even though these semantics can be generalized to continuous distributions, they are notoriously brittle when it comes to higher-order programming. Only recently, with the introduction of quasi Borel spaces [15] and its probability monad, it is possible to give a kernel-centric semantics to higher-order probabilistic programming with continuous distributions.
However, due to quasi Borel spaces being a different foundation to probability theory, it is unclear which theorems and theories can be generalized to higher-order. For instance, martingale theory has been used in Computer Science to reason about termination of probabilistic programs [6,20,16]. In order to generalize these ideas to higher-order functions it would be necessary to define a quasi Borel version of martingales and prove appropriate versions of the main theorems from martingale theory, a non-trivial task.
Our Work: Combining both Kinds of Semantics Though both styles of semantics provide insights into how to interpret probabilistic programming languages (PPL), it is still too early to claim that we have a "correct" semantics which subsumes all of the existing ones. Both approaches mentioned above have their advantages and drawbacks.
In this work we shed some light into how both semantics relate to one another by showing that it is possible to use both styles of semantics to interpret a linear calculus that has higher-order functions, looser linearity restrictions, a uniform way of dealing with sample reuse and better syntax for programming joint distributions while still being close to their kernel and linear operator counterparts. Interestingly, we identify the joint distribution problem described above to be a consequence of linear logic requiring the non-linear product to be cartesian. In order to tackle this problem we build on categorical semantics of linear logic and on recent work on Markov categories, a suitable categorical generalization of Markov kernels defined using semicartesian products.
We bridge the gap between these semantics by noting that the regular resource interpretation of linear logic, i.e. A ⊸ B being equivalent to "by using one copy of A I get one copy of B" is too restrictive an interpretation for probabilistic programming. Instead, we should think of usage as being equivalent to sampling. Therefore the linear arrow A ⊸ B should be thought of as "by sampling from A once I get B", which is the computational interpretation of Markov kernels.
We realize this interpretation through a multilanguage approach: we have one language that programs Markov kernels, a second language that programs linear operators and add syntax that transports programs from the former language into the latter one. To justify the viability of our categorical framework we show how existing probabilistic semantics are models to our language and show how, under mild conditions, this semantics can be generalized to commutative effects.
Our contributions are:
• We define a multi-language syntax that can program both Markov kernels as well as linear operators. (§3)
• We define its categorical semantics and prove certain interesting equations satisfied by it. (§4)
• We show that our semantics is already present in existing models for discrete and continuous probabilistic programming. (§5)
• We show how our semantics can be generalized to commutative effects. (§6)
Mathematical Preliminaries
We are assuming that the reader is familiar with basic notions from category theory such as categories, functors and monads.
Probability Theory
Transition matrices are one of the simplest abstractions used to model stochastic processes. Given two countable sets A and B, the entry (a, b) of a transition matrix is the probability of ending up in state b ∈ B whenever you start from the initial state a ∈ A and every row adds up to 1. Definition 1. The category CountStoch has countable sets as objects and transition matrices as morphisms. The identity morphism is the identity matrix and composition is given by matrix multiplication.
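A minimal numpy sketch of composition in CountStoch, with two hypothetical transition matrices:

```python
import numpy as np

# Rows index source states, columns target states; each row sums to 1.
f = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # a transition matrix A -> B
g = np.array([[0.5, 0.5],
              [0.0, 1.0]])   # a transition matrix B -> C

h = f @ g                    # composition is matrix multiplication: A -> C
assert np.allclose(h.sum(axis=1), 1.0)   # the composite is again row-stochastic
```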
Though transition matrices are conceptually simple, they can only model discrete probabilistic processes and, in order to generalize them to continuous probability we must use measurable sets and Markov kernels.
Definition 2.
A measurable set is a pair (A, Σ A ), where A is a set and Σ A ⊆ P(A) is a σ-algebra, i.e. it contains the empty set and it is closed under complements and countable unions.
Given two measurable sets (A, ΣA) and (B, ΣB) it is possible to define a σ-algebra over A × B generated by the sets X × Y, where X ∈ ΣA and Y ∈ ΣB, which we denote by ΣA ⊗ ΣB. Furthermore, every pair of distributions µA and µB over A and B, respectively, can be lifted to a product distribution µA ⊗ µB over A × B.

Definition 6. The category Kern has measurable sets as objects and Markov kernels as morphisms. The identity arrow is the kernel idA(a, S) = 1 if a ∈ S and 0 otherwise, and composition is given by (f • g)(a, C) = ∫ f(−, C) d(g(a, −)).
Markov Categories
The field of categorical probability was developed in order to get a more conceptual understanding of Markov kernels. One of its cornerstone definitions is that of a Markov category which are categories where objects are abstract sample spaces, morphisms are abstract Markov kernels and every object has "contraction" and "weakening" morphisms which correspond to duplicating and discarding a sample, respectively, without adding any new randomness.
Definition 7 (Markov category [12]). A Markov category is a semicartesian symmetric monoidal category (C, ⊗, 1) in which every object X comes equipped with a commutative comonoid structure, denoted by copyX : X → X ⊗ X and deleteX : X → 1, where copy satisfies the usual coassociativity, cocommutativity, and counitality equations with respect to delete. The category being semicartesian means that the monoidal product comes equipped with projection morphisms π1 : A ⊗ B → A and π2 : A ⊗ B → B, but it is not cartesian because the equation (π1 • f, π2 • f) = f does not hold in general; intuitively, this corresponds to the fact that joint distributions might be correlated.
Theorem 1 ([12]). CountStoch is a Markov category.
The monoidal product is given by the cartesian product and the monoidal unit is the singleton set. The copyX morphism is the matrix X × (X × X) → [0, 1] which is 1 at the entries (x, (x, x)) and 0 elsewhere, and the deleteX morphism is the constant-1 matrix indexed by X.
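The failure of the cartesian equation mentioned above can be checked numerically; a small sketch with a perfectly correlated joint distribution:

```python
import numpy as np

# A perfectly correlated joint distribution over {0,1} x {0,1}.
joint = np.array([[0.5, 0.0],
                  [0.0, 0.5]])

p1 = joint.sum(axis=1)   # first marginal, i.e. pi_1 applied to the joint
p2 = joint.sum(axis=0)   # second marginal, i.e. pi_2 applied to the joint

# Re-pairing the marginals yields the independent product, which differs
# from the original joint: (pi_1 . f, pi_2 . f) != f in general.
assert not np.allclose(np.outer(p1, p2), joint)
```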
Theorem 2 ([12]). Kern is a Markov category.
This category is the continuous generalization of CountStoch; the monoidal product is the cartesian product equipped with the product σ-algebra, and the monoidal unit is the singleton set { * }. The copyX morphism is the Markov kernel copyX : X → X × X given by copyX(x, S) = 1 if (x, x) ∈ S and 0 otherwise. Its delete morphism is the function that, given any element of X, returns the measure which is 1 on the measurable set { * } and 0 on the empty measurable set.
Linear Logic and Monoidal Categories
We recall the categorical semantics of the multiplicative fragment of linear logic (MLL): a model is a symmetric monoidal closed category. We denote the monoidal product as ⊗ and the space of linear maps between objects X and Y as X ⊸ Y; ev : ((X ⊸ Y) ⊗ X) → Y is the counit of the monoidal closed adjunction and cur : C(X ⊗ Y, Z) → C(X, Y ⊸ Z) is the linear curryfication map. We use the triple (C, ⊗, ⊸) to denote such models.

A functor F between monoidal categories is lax monoidal when it comes equipped with a morphism ǫ : 1 → F1 and a natural transformation µX,Y : FX ⊗ FY → F(X ⊗ Y) making the diagrams in Figure 8 (in Appendix B) commute. If ǫ and µX,Y are isomorphisms we say that F is strong monoidal.
One key observation of this paper is that there are many lax monoidal functors between Markov categories and models of linear logic that can interpret probabilistic processes.
Syntax
In this section we will design a syntax that reflects the fact that linearity corresponds to sampling, not variable usage. We achieve this by making use of a multi-language semantics that enables the programmer to transport programs defined in a Markov kernel-centric language (MK) to a linear, higher-order, language (LL).
Our thesis is that in the context of probabilistic programming, linear logic, through its connection with linear algebra, departs from its usual Computer Science applications of enforcing syntactic invariants and, instead, provides a natural mathematical formalism to express ideas from probability theory, as shown by Dahlqvist and Kozen [8].
Therefore, since many probabilistic programming constructs, such as Bayesian inference and Markov kernels, can be naturally interpreted in linear logic terms, we believe that our calculus allows the user to benefit from the insights linearity provides to PPL while unburdening them from worrying about syntactic restrictions by making it possible to also program using kernels.
We use standard notation from the literature: Γ ⊢ t : τ means that the program t has type τ under context Γ , t{x/u} means substitution of u for x in t and t{ − → x / − → u } is the simultaneous substitution of the term list − → u for a variable list − → x in t.
Both languages will be defined in this section and, for presentation's sake, we are going to use orange to represent MK programs and purple to represent LL programs.
A Markov Kernel Language
We need a language to program Markov kernels. Since we are aiming at generality, we assume the least amount of structure possible. As such we will be working with the internal language of Markov categories, as presented in Figure 1 and Figure 4. Note that we are implicitly assuming a set of primitives for the functions f. By construction, every Markov category can interpret this language, as we show in Figure 6, with the unit type interpreted as the monoidal unit, product types interpreted as the monoidal product, and contexts interpreted using × over the interpretation of the types. However, as it stands, it is not very expressive, since it does not have any probabilistic primitives, nor does it have any interesting types, since 1 × 1 ≅ 1.
When working with concrete models (cf. Section 5) we can extend the language with more expressive types as well as with concrete probabilistic primitives. For instance, in the context of continuous probabilities we could add an R datatype and a uniform distribution primitive · ⊢ uniform : R.
Note that even though this language does not have any explicit sampling operators, sampling is implicitly achieved by the let operator. For instance, the program let x = uniform in (x, x) first samples a value from uniform and then uses the sampled value twice, producing a perfectly correlated pair.
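In the discrete case this reading of let can be sketched in a few lines of Python, again encoding finite distributions as dictionaries (our illustration, not the paper's semantics):

```python
def let(dist, body):
    """Interpret `let x = t in u`: draw x from t once, then run u."""
    out = {}
    for x, p in dist.items():
        for y, q in body(x).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

uniform2 = {0: 0.5, 1: 0.5}
# `let x = uniform2 in (x, x)` samples once and returns a correlated pair.
print(let(uniform2, lambda x: {(x, x): 1.0}))   # {(0, 0): 0.5, (1, 1): 0.5}
```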
A Linear Language
Our second language is a linear simply-typed λ-calculus, with the usual typing rules shown in Figure 5 in Appendix A, which can be interpreted in every symmetric monoidal closed category as shown in Figure 7, also in Appendix A, with tensor types interpreted by ⊗, function types interpreted by ⊸, and contexts interpreted using ⊗ over the interpretation of the types. Once again, we are aiming at generality instead of expressivity. In a concrete setting it would be fairly easy to extend the calculus with a datatype N for natural numbers and probabilistic primitives such as · ⊢ coin : N that flips a fair coin.
The idea behind the particular linear logic models that we are interested in is that, by integration, Markov kernels can be seen as linear operators between vector spaces of probability distributions. As such, an LL program x : N ⊢ LL t : N will be denoted by a linear function between distributions over the natural numbers. Therefore, from a programming point of view, variables are placeholders for probability distributions, i.e. computations, not values, and sampling occurs when variables are used.
Combining Languages
The main drawback of the linear calculus above is that the syntactic linearity restriction makes it hard to program with it, while the main drawback of the Markov language is that it does not have higher-order functions. In this section we will show how we can combine both language so that we get a calculus with looser linearity restrictions while still being higher-order.
As we will show in Section 5, when looking at concrete models for these languages we can see that the semantic interpretations of variables in both languages are completely different: in the MK language variables should be thought of as values, i.e. the values that were sampled from a distribution, whereas in the LL language, variables of ground type are distributions. In order to bridge these languages we must use the observation that Markov kernels -i.e. open MK terms -have a natural resource-aware interpretation of being "sample-once" stochastic processes and, by integration, can be seen as linear maps between measure spaces -i.e. open LL terms. The combined syntax for the language is depicted in Figure 3.
We now have a language design problem: we want to capture the fact that every open MK program is, semantically, also an open LL term. The naive typing rule concludes x1 : Mτ1, · · · , xn : Mτn ⊢LL MK(M) : Mτ from the premise x1 : τ1, · · · , xn : τn ⊢MK M : τ. The problem with this rule is that it breaks substitution: the variables in the premise are MK variables whereas the ones in the conclusion are LL variables.
We solve this problem by making the syntax reflect a common idiom of PPLs: compute distributions (elements of Mτ), sample from them, and then use the results in a non-linear continuation. This is captured by the following syntax: sample t1, · · · , tn as x1, · · · , xn in M. Note that we are sampling from LL programs ti (possibly an empty list), binding the results to MK variables xi in an MK program M. When clear from the context we simply write sample ti as xi in M. Its corresponding typing rule concludes Γ1, · · · , Γn ⊢LL sample t1, · · · , tn as x1, · · · , xn in M : Mτ from the premises Γi ⊢LL ti : Mτi and x1 : τ1, · · · , xn : τn ⊢MK M : τ. As the typing rule suggests, its semantics should be some sort of composition. However, since we are composing programs that are interpreted in different categories, we must have a way of translating MK programs into LL programs; as we will see in Section 4 this translation will be functorial. The operational interpretation of this rule is that we have a set of distributions {ti} defined using the linear language, possibly using higher-order programs; we sample from them and bind the samples to the variables {xi} in the MK program M, where there are no linearity restrictions. Note that the rule above looks very similar to a monadic composition, though they are semantically different (cf. Section 4).
With this new syntax we can finally program in accordance with our new resource interpretation of linear logic, allowing us to write the program sample coin as x in (x = x), which flips a coin once and tests the result for equality with itself, making it equivalent to true.
This combined calculus enjoys the expected syntactic properties 2 .
The following example illustrates how we can use the MK language to duplicate and discard linear variables. Example 1. The program which samples from a distribution t and then returns a perfectly correlated pair is given by · ⊢LL sample t as x in (x, x) : M(τ × τ). Similarly, the program that samples from a distribution t and does not use its sampled value is represented by the term · ⊢LL sample t as x in unit : M1. As we explained in the introduction, Dahlqvist and Kozen must add many primitives to their language to work around their linearity restrictions. For instance, in order to write projection functions R^n → R^m, n > m, they must add projection primitives to the language.
By having compositional type constructors that can represent joint distributions, i.e. M(τ × τ), it is possible to write the program sample t as x in (π1 x, π3 x), which samples from a distribution over triples and returns only the first and third components, using only the product syntax of MK.
Unfortunately there are some aspects of this language that still are restrictive. For instance, imagine that we want to write an LL program that receives two "Markov kernels" MN⊸MN and a distribution over N as inputs, samples from the input distribution, feeds the result to the Markov kernels, samples from them and adds the results. Its type would be
(MN⊸MN)⊸(MN⊸MN)⊸MN⊸MN
Even though the program only requires you to sample once from each distribution, it is still not possible to write it in the linear language.
We will show in Section 4 how the type constructor M actually corresponds to an applicative functor [19], and the limitation above is actually a particular case of a fundamental difference between programming with applicative functors compared to programming with monads. Remark 1. We now have two languages that can interpret probabilistic primitives such as coin. However, every primitive M in the MK language can be easily transported to an LL program by using an empty list of LL programs: sample _ as _ in M . Therefore it makes sense to only add these primitives to the MK language.
Categorical Semantics
As is the case with categorical interpretations of languages/logics, types and contexts are interpreted as objects in a category and every well-typed program/proof gives rise to a morphism.
In our case, MK types τ are interpreted as objects τ in a Markov category (M, ×) and well-typed programs Γ ⊢ MK M : τ are interpreted as an M morphism Γ → τ , as shown in Figure 6. Similarly, LL types τ are interpreted as objects τ in a model of linear logic (C, ⊗, ⊸) and well-typed programs Γ ⊢ LL t : τ are interpreted as a C morphism Γ → τ , as shown in Figure 7.
To give semantics to the combined language is not as straightforward. The sample rule allows the programmer to run LL programs, bind the results to MK variables and use said variables in an MK continuation. The implication of this rule in our formalism is that our semantics should provide a way of translating MK programs into LL programs. In category theory this is usually achieved by a functor M : M → C.
However, we can easily see that functors are not enough to interpret the sample rule. Consider what happens when you apply M to an MK program x : τ1, y : τ2 ⊢MK N : τ: we obtain a morphism M⟦N⟧ : M(⟦τ1⟧ × ⟦τ2⟧) → M⟦τ⟧. To precompose it with two LL programs outputting Mτ1 and Mτ2 we need a mediating morphism µτ1,τ2 : Mτ1 ⊗ Mτ2 → M(τ1 × τ2). Furthermore, if N has three or more free variables, there would be several ways of applying µ. Since from a programming standpoint it should not matter how the LL programs are associated, we require that µτ1,τ2 make the lax monoidality diagrams commute. Therefore, assuming lax monoidality of µ, we can interpret the sample rule as ⟦sample t1, · · · , tn as x1, · · · , xn in M⟧ = (⟦t1⟧ ⊗ · · · ⊗ ⟦tn⟧); µ; M⟦M⟧. In case it only has one MK variable, the semantics is given by ⟦t⟧; M⟦M⟧, and in case it does not have any free variables the semantics is ǫ; M⟦M⟧.
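In the discrete case the two ingredients, µ and the pushforward along an MK program, can be sketched directly in Python (finite distributions as dictionaries; an illustration of CountStoch, not the general semantics):

```python
def mu(d1, d2):
    """mu_{X,Y}: pair two independent distributions into a joint one."""
    return {(a, b): p * q for a, p in d1.items() for b, q in d2.items()}

def lift(kernel, joint):
    """M applied to an MK program: push a joint distribution through a kernel."""
    out = {}
    for xs, p in joint.items():
        for y, q in kernel(*xs).items():
            out[y] = out.get(y, 0.0) + p * q
    return out

coin = {0: 0.5, 1: 0.5}
eq = lambda x, y: {x == y: 1.0}   # an MK kernel testing equality

# `sample coin, coin as x, y in (x = y)`: two independent coins agree half the time.
print(lift(eq, mu(coin, coin)))   # {True: 0.5, False: 0.5}

# `sample coin as x in (x = x)`: a single sample always equals itself.
same = lambda x: {x == x: 1.0}
print(lift(same, {(a,): p for a, p in coin.items()}))   # {True: 1.0}
```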
The equational theory of the LL language is the well-known theory of the simply-typed λ-calculus, and the MK equational theory has been described, in graphical notation, by Fritz [12]. Something which is not obvious is understanding how they interact at their boundary. This is where M being a functor becomes relevant: functoriality guarantees that the translation preserves identities and composition of MK programs, which induces the corresponding program equivalences at the boundary between the two languages. The expected compositionality of the semantics also holds: Theorem 7. Let x1 : τ1, · · · , xn : τn ⊢ t : τ and Γi ⊢ ti : τi be well-typed terms. Then ⟦t{−→x/−→t}⟧ = (⟦t1⟧ ⊗ · · · ⊗ ⟦tn⟧); ⟦t⟧.
Proof. The proof can be found in Appendix D.
From this theorem we can conclude: Corollary 1. The Subst rule shown above is sound with respect to the categorical semantics.
Lax monoidal functors, under the name applicative functors, are widely used in programming languages research [19]. They are often used to define embedded domain-specific languages (eDSL) within a host language. This suggests that from a design perspective the Markov kernel language can be thought of as an eDSL inside a linear language.
We have just shown that M being lax monoidal is sufficient to give semantics to our combined language, but what would happen if it had even more structure? If it were also full, it would be possible to add a reification command turning an LL program MΓ ⊢LL t : Mτ into an MK program over Γ of type τ, where MΓ is notation for every variable in Γ being of the form Mτ′, for some τ′. The semantics for the rule would be given by taking the inverse image of M. As we will show in the next section, there are some concrete models where M is full and some other models where it is not. Computationally, fullness of M can be interpreted as every program of type Mτ ⊸ Mτ′ being equal to a Markov kernel.
A property which is easier to satisfy is faithfulness, which is verified by both models in the next section. In this case the translation of the MK language into the LL language is fully abstract in the following sense: two MK programs are semantically equal if and only if their LL translations are semantically equal.
Concrete Models
In this section we show how existing models for both discrete as well as continuous probabilities fit within our formalism.
Discrete Probability
For the sake of simplicity we will denote the monoidal product of CountStoch as ×.
The probabilistic coherence space model of linear logic has been extensively studied in the context of semantics of discrete probabilistic languages [9].
Definition 10 (Probabilistic Coherence Spaces [9]). A probabilistic coherence space (PCS) is a pair (|X|, P(X)) where |X| is a countable set and P(X) ⊆ |X| → R+ is a set, called the web, such that:
- ∀a ∈ |X| ∃εa > 0 such that εa · δa ∈ P(X), where δa(a′) = 1 iff a = a′ and 0 otherwise, and we use the notation εa = ε(a);
- ∀a ∈ |X| ∃λa such that ∀x ∈ P(X), x(a) ≤ λa;
- P(X)⊥⊥ = P(X).
Here, for P ⊆ |X| → R+, the dual web is P⊥ = {y ∈ |X| → R+ | ∀x ∈ P, Σa x(a)·y(a) ≤ 1}. We can define a category PCoh where objects are probabilistic coherence spaces and morphisms X ⊸ Y are matrices f : |X| × |Y| → R+ such that for every x ∈ P(X), x · f ∈ P(Y), where (x · f)(b) = Σa x(a) f(a, b).

Definition 11. Let (|X|, P(X)) and (|Y|, P(Y)) be PCS; we define their tensor product X ⊗ Y as (|X| × |Y|, {x ⊗ y | x ∈ P(X), y ∈ P(Y)}⊥⊥).

Lemma 1. For every countable set X, the pair (X, M(X)) is a PCS, where M(X) denotes the set of subprobability distributions over X.

Proof. The first two points are obvious, as the Dirac measure is a subprobability measure and every subprobability measure is bounded above by the constant function µ1(x) = 1.
This lemma can be used to give semantics to probabilistic primitives. For instance, a fair coin is interpreted as the function coin : N → [0, 1] which is 0.5 at 0 and at 1 and 0 elsewhere, and it is an element of P(N). Lemma 2. Let f : X → Y be a CountStoch morphism. Then f is also a PCoh morphism (X, M(X)) ⊸ (Y, M(Y)).
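Lemma 2 can be checked numerically in the finite case: a row-stochastic matrix maps subprobability vectors to subprobability vectors. A small sketch with randomly generated data:

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.array([[0.2, 0.5, 0.3],
              [1.0, 0.0, 0.0],
              [0.1, 0.1, 0.8]])   # a CountStoch morphism on a 3-element set

for _ in range(100):
    x = rng.random(3)
    x = x / x.sum() * rng.random()   # a random subprobability vector
    y = x @ f                         # the image under the matrix
    assert y.min() >= 0.0 and y.sum() <= 1.0 + 1e-12
```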
These lemmas assemble into a lax monoidal functor M : CountStoch → PCoh, which sends a countable set X to (X, M(X)) and is the identity on arrows. Proof. The functor is defined using the lemmas above. Functoriality holds because the functor is the identity on arrows. The lax monoidal structure is given by ǫ = id1 and µX,Y = idX×Y. Lemma 3. If µ ∈ {x ⊗ y | x ∈ M(X), y ∈ M(Y)}⊥ then for every x ∈ X and y ∈ Y, µ(x, y) ≤ 1.
Lemma 4. Let X and Y be two countable sets; then M(X × Y) ⊆ {x ⊗ y | x ∈ M(X), y ∈ M(Y)}⊥⊥. Proof. By the lemma above it follows that if we have a joint probability distribution μ̃ over X × Y and an element µ ∈ {x ⊗ y | x ∈ M(X), y ∈ M(Y)}⊥, then Σx,y µ(x, y)·μ̃(x, y) ≤ Σx,y μ̃(x, y) ≤ 1.
Theorem 10. The functor M is strong monoidal.

Proof. Since ǫ is the identity morphism, it is trivially an isomorphism. That the morphisms µ_{X,Y} are isomorphisms is a direct consequence of the lemmas above.
Theorem 11. The functor M is full.
Both results above can be directly used to enhance the syntax of the combined language. From Theorem 10 we can conclude that elements of type M(τ_1 × τ_2) can, by projecting their marginal distributions, be manipulated as if they had type Mτ_1 ⊗ Mτ_2. Note, however, that this marginalization process loses any correlations between the elements of the pair.
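The loss of correlations can be seen concretely in a few lines of Python; the sketch below, with helper names of our own choosing, compares a perfectly correlated joint distribution with the product of its marginals.

```python
# Marginalizing a perfectly correlated pair of bits and re-pairing the
# marginals changes the distribution; helper names are ours.
from collections import defaultdict

joint = {(0, 0): 0.5, (1, 1): 0.5}            # perfectly correlated bits


def marginal(dist, index):
    out = defaultdict(float)
    for pair, p in dist.items():
        out[pair[index]] += p
    return dict(out)


mx, my = marginal(joint, 0), marginal(joint, 1)
product = {(a, b): mx[a] * my[b] for a in mx for b in my}
print(joint.get((0, 1), 0.0))   # 0.0 under the joint
print(product[(0, 1)])          # 0.25 under the re-paired marginals
```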
Continuous Probability
In order to accommodate continuous distributions we can use regularly ordered Banach spaces, whose detailed definition goes beyond the scope of this paper.
Definition 12 ([8]). The category RoBan has regularly ordered Banach spaces as objects and regular linear functions as morphisms.
Theorem 12. M extends to a lax monoidal functor from measurable spaces and Markov kernels to RoBan.

Proof. The functor acts on objects by sending a measurable space to the set of signed measures over it, which can be equipped with a RoBan structure. On morphisms it sends a Markov kernel f to the linear function M(f)(µ) = ∫ f dµ.
The monoidal structure of RoBan satisfies the universal property of tensor products and, therefore, we can define the natural transformation µ_{X,Y} : M(X) ⊗ M(Y) → M(X × Y) as the linear function generated by the bilinear map M(X) × M(Y) → M(X × Y) which sends a pair of distributions to its product measure. The map ǫ is, once again, the identity function.
The commutativity of the lax monoidality diagrams follows from the universal property of the tensor product: it suffices to verify it on elements of the form µ_A ⊗ µ_B ⊗ µ_C.
In RoBan the uniform distribution over the interval [0, 1] is an element of MR, meaning that it can soundly interpret a primitive ⊢_LL uniform : MR.
Even though M looks very similar to the discrete case, it follows from a well-known theorem of functional analysis that the functor is not strong monoidal: there are joint probability distributions (elements of M(A × B)) that cannot be represented as elements of the tensor product M(A) ⊗ M(B) and, as such, programs of type M(A × B) must be manipulated in the MK language, as shown in Example 3.
Beyond Probability
We have seen that this new resource interpretation is present in different linear logic models for probabilistic programming. In this section we show that it generalizes to commutative effects, i.e. effects for which the program equation Commutativity below holds. Categorically, these effects are captured by monoidal monads. Due to space constraints we will not fully spell out the definition of monoidal monads; we refer the interested reader to Seal [23]. For probability monads the transformation κ corresponds to forming the product probability distribution and, more generally, it can be thought of as a program that runs both of its (effectful) inputs and pairs the outputs.
Every monad gives rise to two categories of interest, C_T and C^T, which are, respectively, the Kleisli category and the Eilenberg-Moore category. The objects of C_T are those of C, and morphisms between A and B are C morphisms A → T B, with the identity morphism equal to the unit η of the monad and composition given by f ; g = f ; T g ; µ.
The objects of the category C^T are pairs (X, x), where X is a C object and x : T X → X is a C morphism such that µ ; x = T x ; x and η ; x = id_X; morphisms between objects (X, x) and (Y, y) are C morphisms f : X → Y such that x ; f = T f ; y.
For every monad T there is a canonical inclusion functor ι : C_T → C^T which maps X to (T X, µ) and f : X → Y to T f ; µ_Y.

Theorem 13 ([5]). The functor ι is full and faithful.
As we explain in Appendix C, assuming enough structure on the category C, we can show that the triple (C_T, C^T, ι) is a model of the MK+LL language, bringing our new resource interpretation of linear logic to other commutative effects.
An illustrative example is the powerset monad P : Set → Set, which is monoidal; since Set has the necessary structure, the triple (Set_P, Set^P, ι) is a model of our language and can be used to give semantics to non-deterministic computation.
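As an illustration, the Kleisli structure of the powerset monad can be sketched directly in Python; the function names below (unit, kleisli_compose, kappa, choice, fail) are ours, and the snippet is only a sketch of the semantics, not part of the paper.

```python
# Kleisli category of the powerset monad: morphisms are functions A -> P(B),
# composed as f ; g = f ; P(g) ; mu, where the multiplication mu is set union.
def unit(x):
    return {x}                                  # the unit eta: a singleton


def kleisli_compose(f, g):
    # (f ; g)(x) = union over y in f(x) of g(y)
    return lambda x: set().union(*(g(y) for y in f(x)))


def kappa(s, t):
    # The monoidal map: runs both inputs and pairs the outputs.
    return {(a, b) for a in s for b in t}


def choice(x):
    return {x, -x}                              # non-deterministic choice


def fail(x):
    return set()                                # failure: the empty set


def shift(x):
    return {x + 1}


print(sorted(kleisli_compose(choice, shift)(3)))   # [-2, 4]
print(kappa({0, 1}, {'a'}))                        # {(0, 'a'), (1, 'a')}
```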
In the context of commutative effects other than randomness, the syntax sample t as x in M does not read as naturally, in which case we can use the syntax observe t_i as x_i in M instead. Once again, operationally, the programs t_i are fully executed and their values are bound to x_i in M, which is then executed.
Furthermore, other effects come with their own relevant effectful operations and, therefore, we can assume that there is a set of operations in the MK language that are interpreted in the Kleisli category and can be transported to LL using observe, similarly to the probabilistic case.
For the non-deterministic case we can assume the existence of typing rules for non-deterministic choice and failure, satisfying the expected equations and interpreted using set-theoretic union and the empty set, respectively.
A similar connection between linear logic and monoidal monads was made by Benton and Wadler [4], who relate Moggi's monadic λ-calculus with linear logic by showing that if a monad is monoidal and the category has equalizers and coequalizers, then the Eilenberg-Moore category is a model of linear logic.
Related Work
Semantics of Probabilistic Programming. Ehrhard et al. [11,10] have defined a model of linear logic, CLin, which can be used to interpret a higher-order probabilistic programming language. They used the call-by-name translation of intuitionistic logic into linear logic, A → B = !A ⊸ B, to give semantics to their language. The authors extend their language with a call-by-value let syntax which makes it possible to reuse sampled values; in order to give semantics to this new operator they introduce a new category CLin_m, at the cost of complicating their model.
Because there is an analogous proof of Theorem 12 with the category CLin replacing RoBan, we can use their original, simpler, model to interpret our language, while not needing to use the linear logic exponential to interpret nonlinear programs.
Dahlqvist and Kozen [8] have defined a category of partially ordered Banach spaces and shown that it is a model of intuitionistic linear logic. An important difference between their approach and the one mentioned above is that they embrace variable linearity as part of their syntax. As we argued in this paper, we believe that the syntactic restriction of linearity they use is not adequate for the purposes of probabilistic programming. They deal with this limitation by adding primitives to their language which, by the results of Section 5, could instead be programmed in the MK language.
Quasi-Borel spaces [15] are a conservative extension of Meas whose category is Cartesian closed and has a commutative probability monad. The drawback of this model is that it is not yet as well understood as its measure-theoretic counterpart, and there are theorems from probability theory, used to reason about programs, that may not hold in the category QBS of quasi-Borel spaces.
Recently, Geoffroy [13] has made progress in connecting linear logic and quasi-Borel spaces by showing that a certain subcategory of the Eilenberg-Moore category for the probability monad in QBS is a model of classical linear logic, which we see as an instance of our model in which the MK language can have higher-order functions as well.
Call-by-Push-Value. The idea of having two distinct type systems connected by a functorial layer is reminiscent of Call-by-Push-Value (CBPV) [17], which has a type system for values and a type system for computations connected by an adjunction. In recent work, Ehrhard and Tasson [24] use the Eilenberg-Moore adjunction of the linear logic exponential ! to give semantics to a calculus that can interpret lazy and eager probabilistic computation, allowing for an eager let operator which is operationally similar to our sample construct. However, the existence of this let operator depends on properties of ! that are not known to hold for continuous distributions, whereas our semantics deals naturally with continuous distributions, as we have shown in Section 5.
Furthermore, the exponential at the center of their approach is semantically hard to work with and has no clear connection to probability theory, making it unlikely that their semantics can serve as a bridge between the Markov and linear semantics, as the models presented in Section 5 do.
Goubault-Larrecq [14] has defined a CBPV domain semantics for a language that mixes probability and non-determinism, a long-standing challenge in the theory of programming languages. His focus is on understanding how to make probability interact with non-determinism in a sound way. He studies the full abstraction of his semantics but does not deal with connections to linear logic.

LetTensor:
  Γ_1 ⊢ t : τ_1 ⊗ τ_2    Γ_2, x : τ_1, y : τ_2 ⊢ u : τ
  -----------------------------------------------------
  Γ_1, Γ_2 ⊢ let x ⊗ y = t in u : τ
C Monoidal Monads and Their Algebras
An important theorem from the categorical probability literature is that Markov categories are an abstraction of programming in the Kleisli category of monoidal affine monads, where affinity means that T 1 ∼ = 1.
The monoidal product of C_T is × with unit 1; the copy operation is given by ∆_X followed by the unit η, yielding X → T(X × X), and the deletion operation by T1 ≅ 1 together with 1 being terminal.
Furthermore, under certain conditions, the Eilenberg-Moore category C^T of a monoidal monad is symmetric monoidal closed. The monoidal unit is T I, the monoidal product is given by the coequalizer depicted in Figure 9, and the closed structure by the equalizer depicted in Figure 10.
Theorem 15. Let C be a symmetric monoidal closed category with equalizers and reflexive coequalizers, and T : C → C a monoidal monad. Then the category C^T is also symmetric monoidal closed. Even though, in general, defining the monoidal product requires a coequalizer, for our purposes we are only interested in products of the form T A ⊗_T T B which, luckily, are easier to characterize, since the equality T X ⊗_T T Y = T(X ⊗ Y) holds [23].
In this case the lax monoidal transformations µ_{X,Y} : T X ⊗_T T Y → T(X ⊗ Y) and ǫ : T I → T I are simply identity morphisms. Besides, by using the universal properties of coequalizers it is possible to show the equality α̃_{TX,TY,TZ} = α_{X,Y,Z}, where α̃ is the associator for the monoidal product ⊗_T.
Theorem 16. Let C be a symmetric monoidal category with reflexive coequalizers and T : C → C a monoidal monad. Then the triple (ι, µ, ǫ) is a lax monoidal functor.
Proof. The proof follows by unfolding the definitions.
- Application: (t_1 t_2){x/u} = t_1{x/u} t_2{x/u}. Since the language LL is linear, only one of t_1 or t_2 has x as a free variable. By symmetry we can assume that it is t_1, and we can prove Γ, ∆ ⊢ t_1{x/u} t_2 : τ by applying the rule Application and the induction hypothesis.
Sinomenine Sensitizes Multidrug-Resistant Colon Cancer Cells (Caco-2) to Doxorubicin by Downregulation of MDR-1 Expression
Chemoresistance in multidrug-resistant (MDR) cells overexpressing P-glycoprotein (P-gp), encoded by the MDR1 gene, is a major obstacle to successful chemotherapy for colorectal cancer. Previous studies have indicated that sinomenine can enhance the absorption of various P-gp substrates. In the present study, we investigated the effect of sinomenine on chemoresistance in colon cancer cells and explored the underlying mechanism. We developed multidrug-resistant Caco-2 (MDR-Caco-2) cells by exposure of Caco-2 cells to increasing concentrations of doxorubicin. We identified overexpression of the COX-2 and MDR-1 genes as well as activation of the NF-κB signaling pathway in MDR-Caco-2 cells. Importantly, we found that sinomenine enhances the sensitivity of MDR-Caco-2 cells towards doxorubicin by downregulating MDR-1 and COX-2 expression through inhibition of the NF-κB signaling pathway. These findings provide a new potential strategy for the reversal of P-gp-mediated anticancer drug resistance.
Introduction
Colorectal cancer is one of the most common malignant tumors of the gastrointestinal tract. In recent years, the incidence of colorectal cancer has significantly increased in China [1]. Surgical resection is the optimal treatment for this kind of cancer, while chemotherapy serves as one of the important adjuvant therapies for its treatment. Currently, the development of multidrug resistance (MDR), a phenotype in which cancer cells become resistant to a broad spectrum of chemotherapeutics [2], is a major obstacle in colorectal cancer chemotherapy. It has been shown that the emergence of MDR in cancer cells is significantly correlated with the overexpression of membrane pump proteins, including P-glycoprotein (P-gp) [3]. P-gp, encoded by the MDR-1 gene, is a member of the large ATP-binding cassette protein superfamily [4]. P-gp is able to pump a great number of compounds from intracellular to extracellular sites. When cancer cells encounter chemotherapeutic drugs, liposoluble drugs enter cells via the concentration gradient effect. After binding to P-gp, liposoluble drugs are constantly pumped out of the cell by a process powered by ATP hydrolysis, inducing a continuous decline in intracellular drug levels [5]. Consequently, the drug toxicity to cancer cells is gradually weakened, thereby losing efficacy and, finally, generating drug resistance in cancer cells. Sinomenine (7,8-didehydro-4-hydroxy-3,7-dimethoxy-17-methylmorphinan-6-one) is one of several alkaloids extracted from the stem of Sinomenium acutum Rehder & Wilson (Menispermaceae), which has been used traditionally in China and Japan to treat various rheumatic and arthritic diseases [6]. It is worth noting that sinomenine is capable of increasing the absorptive transport of digoxin (a prototypical substrate of P-glycoprotein) and decreasing its secretory transport [7]. Some studies indicate that sinomenine can block the activation of NF-κB [8]. The underlying mechanism of these phenomena remains unclear.
Cyclooxygenase (COX) is a rate-limiting enzyme that catalyzes the biosynthesis of prostaglandins (PGs) from the substrate arachidonic acid (AA) and participates in multiple physiological and pathological events. There are two isoforms of COX: COX-1 and COX-2. In most tissues, COX-1 is expressed constitutively, whereas COX-2 is induced by growth factors, cytokines, and carcinogens [9]. COX-2 is commonly detected in many types of tumor tissue, including esophagus, stomach, colon, liver, biliary system, pancreas, breast, lung and bladder cancers [10]. Recent findings have shown that COX-2 expression is positively correlated with P-gp expression in tumor tissue [11]. Relevant studies have demonstrated that COX-2 inhibitors increase the sensitivity of cancer cells to chemotherapeutics by regulating the activity of P-gp [12,13]. It has been found that celecoxib, a selective COX-2 inhibitor, may downregulate P-gp expression in cancer cells by suppressing the expression of transcription factors such as NF-κB [14,15]. Several studies have indicated that the MDR-1 gene may contain DNA binding sites for the transcription factor NF-κB [16,17].
Some studies indicate that sinomenine inhibits the maturation of monocyte-derived dendritic cells through blocking the activation of NF-κB [8]. In the current study, we tested the hypothesis that sinomenine may enhance the sensitivity of cancer cells towards antitumor drugs and investigated the potential molecular mechanisms of this effect by directly assessing the effect of the COX-2 and NF-κB pathways on P-gp expression.
Cell Culture
The Caco-2 cell line employed in this study was purchased from the Chinese Academy of Medical Sciences. Caco-2 cells were cultured in high-glucose Dulbecco's modified Eagle's medium (DMEM, Gibco, Bethesda, MD, USA) containing 10% fetal calf serum at 37°C with 5% CO2. MDR-Caco-2 cells were developed by exposure of Caco-2 cells to increasing concentrations of doxorubicin (from 0.1 µM to 1.6 µM over 7 days). MDR-Caco-2 cells were then incubated without doxorubicin for a week before experiments.
MTT Colorimetric Assay
The working concentrations of sinomenine, celecoxib and PGE2, and the ability of sinomenine to sensitize colon cancer cells towards doxorubicin, were evaluated using the MTT colorimetric assay. Caco-2 cells and MDR-Caco-2 cells in the logarithmic phase were collected, seeded in a 96-well plate at 2×10^4 cells per well and cultured for 24 h in DMEM supplemented with 10% FCS. After the cells attached, DMEM (without FCS) containing sinomenine (0, 50, 100, 300, 400, 500, 1000, 2000 µM), celecoxib (0, 5, 10, 15, 20, 25, 30, 35 µM) or PGE2 (0, 10^-5, 10^-4, 10^-3, 10^-2, 10^-1, 1, 10 µM) was added at a final volume of 200 µL/well for 48 h. After treatment, the medium was removed and the cells were washed twice with DMEM. Then 200 µL DMEM supplemented with 10% FBS and 10% MTT (5 mg/mL) was added. After incubation for another 4 h, the reduced intracellular formazan product was dissolved by replacing 150 µL of the medium with the same volume of DMSO. The optical density (OD) was read at a wavelength of 490 nm with a microplate reader (Bio-Rad 680, CA, USA). Four replicate wells were used for each condition, and the mean value was calculated from three repeats. The cell growth inhibition rate was calculated from the following formula: inhibition rate = (1 − OD_treated/OD_control) × 100%.
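As a small worked instance of the inhibition-rate formula, the following Python sketch computes the percentage from treated and control OD readings; the OD values are invented for illustration.

```python
# A worked instance of the inhibition-rate formula; OD values are invented.
def inhibition_rate(od_treated, od_control):
    """Cell growth inhibition rate = (1 - OD_treated / OD_control) * 100%."""
    return (1.0 - od_treated / od_control) * 100.0


print(round(inhibition_rate(od_treated=0.42, od_control=0.60), 1))  # 30.0 (%)
```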
WST-1 Cell Proliferation Assay
Caco-2 cells and MDR-Caco-2 cells in the logarithmic phase were collected, seeded in a 96-well plate at 2×10^4 cells per well and cultured for 24 h in DMEM supplemented with 10% FCS. After the cells attached, DMEM (without FCS) containing sinomenine (500 µM), celecoxib (25 µM), or sinomenine (500 µM) plus PGE2 (1 µM), the latter added at a 2-h interval, was applied. After incubation for 48 h, the medium was replaced with FCS-free DMEM containing doxorubicin (1.6, 2.0, 2.4, 2.8, 3.2, 3.6, 4.0, 5.0, 6.0 µM) for 24 h. Then 10 µL of the WST-1 reagent (Roche Applied Science, Vilvoorde, Belgium) was added and incubated for 2 h at 37°C. The optical density was read at 450 nm with a Labnet microplate reader (Celbio, Milan, Italy). The WST-1 data are presented as the mean (± S.D.) of triplicate experiments.
PGE2 Estimation
MDR-Caco-2 and Caco-2 cells were seeded at a density of 5×10^6 in 90-mm culture dishes and incubated with or without sinomenine (500 µM) for 48 h. At the end of the treatment period, the culture medium was collected to determine the amount of PGE2 released, using ELISA.
Immunocytochemistry
The distribution of P-gp in the cell membrane and the nuclear translocation of NF-κB p65 were analyzed by immunocytochemistry according to standard procedures. Briefly, Caco-2 and MDR-Caco-2 cells were treated with sinomenine (500 µM) or control medium (without sinomenine) for 48 h and fixed with 4% paraformaldehyde. The cells were incubated with a mouse anti-human P-glycoprotein (P-gp) monoclonal antibody (1:200 dilution) or a rabbit anti-human NF-κB p65 polyclonal antibody (1:200 dilution) for 1 h, followed by incubation with FITC-labelled goat anti-mouse IgG (1:200 dilution) or FITC-labelled goat anti-rabbit IgG (1:200 dilution) for 1 h, respectively. Finally, cells were examined under a fluorescence microscope (Carl Zeiss, Thornwood, NY, USA).
Real-time Relative Quantitative Reverse Transcriptase Polymerase Chain Reaction (PCR) Assay
To investigate the effect of sinomenine and celecoxib on P-gp and COX-2 expression, real-time relative quantitative PCR was performed. Cells were plated in 6-well plates with DMEM supplemented with 10% FCS for 24 h. Caco-2 and MDR-Caco-2 cells were then treated with sinomenine (500 µM) or celecoxib (25 µM) for 48 h.
Total RNA was isolated with TRIzol reagent (Keygen Biotech Co., Ltd, Nanjing, China) according to the manufacturer's protocol. The isolated RNA was quantified by spectrophotometry (optical density 260/280 nm). The mRNA was then reverse-transcribed into cDNA using the PrimeScript RT Master Mix (Perfect Real Time) purchased from Takara Bio Inc. (Dalian, China).
Real-time relative quantitative PCR was performed on the Applied Biosystems 7500 Fast Real-Time PCR System with the SYBR Premix Ex Taq (Tli RNaseH Plus) Master Mix purchased from Takara Bio Inc. (Dalian, China), in triplicate for each sample and each gene. PCRs were carried out using the oligonucleotide primers listed in Table 1, which also gives the sizes of the expected fragments. PCR conditions were: denaturation at 95°C for 30 s, followed by 40 cycles of 95°C for 5 s (denaturation), 60°C for 30 s (annealing) and 72°C for 30 s (elongation). The results are expressed as the ratio of the Ct value for the target mRNA to that of the β-actin mRNA (Ct_sample/Ct_β-actin).
Western Blot Analysis
Western blots were performed according to standard procedures. Briefly, harvested cells were washed twice with cold PBS (pH 7.4). Nuclear extracts were isolated using the Nuclear/Cytosol Fractionation Kit (Keygen Biotech Co., Ltd, Nanjing, China) according to the manufacturer's recommendations. Total protein was extracted following the instructions of the test kit from Nanjing KeyGEN Biotech Co., Ltd (China). After determining the protein concentration of the samples using the bicinchoninic acid (BCA) protein assay, equal amounts of protein (30 µg) were separated on SDS-polyacrylamide gels (8% for P-gp, 15% for COX-2, 12% for NF-κB p65, p-IκB-α, IκB-α and β-actin).
Statistical Analyses
Data are presented as means ± SE. A preliminary analysis was carried out to determine whether the datasets followed a normal distribution, and homogeneity of variance was tested using Bartlett's test. Means among samples were compared by ANOVA, and multiple comparisons among groups were conducted using the least-significant difference (LSD) method. If the F values were significant (P<0.05), Dunnett's method was employed to evaluate individual differences between means; P<0.05 was considered significant. All data were statistically analyzed using SPSS 11.5 for Windows.
Effect of Sinomenine, Celecoxib and PGE2 on Caco-2 Viability
Experiments in which Caco-2 cells were incubated for up to 48 h with increasing concentrations of sinomenine revealed that this compound does not influence Caco-2 cell viability at concentrations of 500 µM or less (Fig. 1A). A concentration of 500 µM was therefore selected as the working concentration.
Dose-response and time-course studies demonstrated that celecoxib, a COX-2-specific inhibitor, does not affect Caco-2 cell proliferation at doses ranging from 0 to 25 µM (Fig. 1B). Previous studies indicate that celecoxib regulates MDR1 expression by inhibiting COX-2 enzyme activity at a concentration of 25 µM, so this dose was selected for our experiments [18].
To evaluate whether PGE2 could influence the effects of sinomenine, Caco-2 cells were incubated with or without increasing concentrations (0 to 10 µM) of PGE2, a COX-2 end product; this compound did not influence Caco-2 cell viability at any concentration tested (Fig. 1C). Studies have shown that PGE2 regulates MDR1 expression at a concentration of 1 µM [12,18,19], with the Akt pathway implicated in the mechanism. We therefore chose the dose of 1 µM for our experiments.
Sinomenine and Celecoxib Enhanced Doxorubicin-induced Cytotoxicity both in Caco-2 and MDR-Caco-2 Cells
To evaluate whether sinomenine and celecoxib might sensitize Caco-2 and MDR-Caco-2 cells to the cytotoxic effects of doxorubicin, Caco-2 and MDR-Caco-2 cells were treated with doxorubicin (10^-5 to 10 µM) in the absence or presence of sinomenine (500 µM), celecoxib (25 µM), or sinomenine (500 µM) plus PGE2 (1 µM) for 48 h. Cell proliferation was determined by MTT assay (Fig. 2 A and B) and WST-1 assay (Fig. 2 C and D). Doxorubicin decreased cell viability dose-dependently in both Caco-2 and MDR-Caco-2 cells, with IC50 values of approximately 2.41±0.15 µM and 4.67±0.12 µM (Fig. 2 A), respectively. In the MTT assay, cotreatment of Caco-2 cells with sinomenine or celecoxib sensitized them to the cytotoxic effects of doxorubicin, with a decrease in IC50 values from 2.41±0.15 µM to 1.91±0.16 µM and 1.85±0.2 µM (Fig. 2 A), respectively. Nevertheless, cotreatment with sinomenine plus PGE2 had no effect on the sensitivity of Caco-2 cells towards doxorubicin.
Sinomenine and celecoxib also enhanced the cytotoxic action of doxorubicin in MDR-Caco-2 cells, decreasing the IC50 value from 4.67±0.12 µM to 2.45±0.14 µM and 2.56±0.11 µM (Fig. 2 B), respectively. Surprisingly, cotreatment with sinomenine plus PGE2 had a negative effect on the sensitivity of MDR-Caco-2 cells towards doxorubicin, increasing the IC50 value from 4.67±0.12 µM to 5.35±0.13 µM.
In the WST-1 assay, the IC50 value of Caco-2 cells decreased from 2.33±0.14 µM to 1.85±0.13 µM and 1.88±0.21 µM (Fig. 2 C). However, cotreatment with sinomenine plus PGE2 weakened the sensitivity of Caco-2 cells towards doxorubicin, with an increase in IC50 from 2.33±0.14 µM to 2.55±0.17 µM (Fig. 2 C). Sinomenine and celecoxib also enhanced the cytotoxic action of doxorubicin in MDR-Caco-2 cells, decreasing the IC50 value from 4.55±0.19 µM to 2.55±0.25 µM and 2.52±0.18 µM (Fig. 2 D), respectively. Again, cotreatment with sinomenine plus PGE2 had a negative effect on the sensitivity of MDR-Caco-2 cells towards doxorubicin, increasing the IC50 value from 4.55±0.19 µM to 5.15±0.14 µM (Fig. 2 D).
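The IC50 values above come from fitting dose-response curves. The Python sketch below shows one standard way to obtain such an estimate with a four-parameter logistic fit, assuming SciPy is available; the data points are invented for illustration, do not reproduce the reported values, and the paper does not state that this exact procedure was used.

```python
# A sketch of IC50 estimation via a four-parameter logistic fit (SciPy).
import numpy as np
from scipy.optimize import curve_fit


def four_pl(dose, bottom, top, ic50, hill):
    # Viability falls from `top` to `bottom` as the dose increases;
    # at dose == ic50 the curve sits halfway between the two plateaus.
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** hill)


doses = np.array([0.3, 1.0, 2.0, 3.0, 4.0, 6.0])            # uM (invented)
viability = np.array([95.0, 80.0, 58.0, 40.0, 28.0, 12.0])  # percent

params, _ = curve_fit(
    four_pl, doses, viability, p0=[0.0, 100.0, 2.5, 1.0],
    bounds=([0.0, 50.0, 0.01, 0.1], [30.0, 120.0, 50.0, 10.0]))
print("estimated IC50 (uM): %.2f" % params[2])
```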
Sinomenine Decreased PGE2 Release
To examine more closely the involvement of COX-2, the PGE2 (a COX-2 end product) released from Caco-2 and MDR-Caco-2 cells was determined by ELISA. The results clearly show a significant increase in PGE2 levels in MDR-Caco-2 cells compared with Caco-2 cells, and a significant decline in PGE2 levels in MDR-Caco-2 cells treated with sinomenine (Fig. 3). To understand the mechanism of resistance developed in MDR-Caco-2 cells, and the mechanism by which sinomenine and celecoxib sensitize MDR-Caco-2 cells towards doxorubicin, immunofluorescence cytochemistry, quantitative real-time PCR and western blotting were performed. The results showed that the overexpression of MDR1 mRNA and protein was significantly decreased in the presence of sinomenine and celecoxib (Fig. 4).
Sinomenine Downregulated the Expression of COX-2 in MDR-Caco-2 Cells
To understand the role of COX-2 in the development of resistance, and the effect of sinomenine on COX-2 expression, Caco-2 and MDR-Caco-2 cells treated with or without sinomenine or celecoxib (a COX-2-specific inhibitor) were examined by quantitative real-time PCR and western blotting. The results revealed that COX-2 is overexpressed in MDR-Caco-2 cells and that sinomenine suppressed COX-2 expression (Fig. 5). In contrast, celecoxib had no effect on COX-2 expression; we infer that celecoxib, as a COX-2-specific inhibitor, inhibits the function of COX-2 rather than regulating its expression.
Sinomenine and Celecoxib Decreased NF-κB Activation

P-gp expression has been clearly correlated with NF-κB activation [17,20,21], which is mediated by the phosphorylation of IκB-α. Subsequently, the activated NF-κB p65 subunit translocates to the nucleus and binds to its DNA site, eventually activating transcription of MDR-1 [22]. To understand the mechanism by which sinomenine and celecoxib enhance the sensitivity of MDR-Caco-2 cells towards doxorubicin, immunofluorescence cytochemistry, quantitative real-time PCR and western blotting were performed to detect the p65 subunit in the nucleus and p-IκB-α and IκB-α in the cytoplasm. The results showed that the NF-κB pathway was activated in MDR-Caco-2 cells, while sinomenine and celecoxib suppressed its activation (Fig. 6).
Discussion
Chemotherapy serves as one of the important treatments for colorectal cancer, but long-term chemotherapy unavoidably leads to drug resistance, which has become a major challenge to its success. The emergence of drug resistance may correlate with an increase in efflux pump activity, a decrease in drug absorption, the activation of detoxification enzymes, alterations in drug targets and a reduction in cell apoptosis [23]. Previous studies on the efflux pump have shown that P-gp, encoded by the MDR-1 gene, plays an important part, as it pumps drug substances out of the cell to reduce the cytotoxicity experienced by cancer cells and enhances the resistance of carcinoma to chemotherapeutics. However, the drug resistance of cancer cells can be effectively reversed by suppressing P-gp expression and function [24,25,26].
Sinomenine, a bioactive alkaloid derived from Sinomenium acutum, is used to treat rheumatic and arthritic diseases in China and has a variety of functions, including anti-inflammation and immunosuppression [27,28]. Previous studies have indicated that sinomenine decreases the efflux of prototypical P-gp substrates such as digoxin and paeoniflorin [6,7], and sinomenine itself may be a substrate of P-gp [29]; thus, how sinomenine regulates P-gp remained unknown. Our results showed that sinomenine downregulated P-gp expression in MDR-Caco-2 cells (Fig. 4) and enhanced their sensitivity towards doxorubicin (Fig. 2). Some studies have indicated that sinomenine inhibits the expression of COX-2 [30,31]. Consistent with these reports, our findings showed that sinomenine downregulated COX-2 expression in MDR-Caco-2 cells (Fig. 4) and decreased the PGE2, an end product of COX-2, released from MDR-Caco-2 cells (Fig. 3).
COX-2, one of the rate-limiting enzymes in the metabolism of arachidonic acid to prostaglandins, is overexpressed in a large number of human primary and metastatic neoplasms [32]. Whether COX-2 is involved in the development of drug resistance characterized by P-gp overexpression is controversial. Many studies have shown that COX-2 expression is correlated with P-gp expression [33,34]. It has been reported that adenoviral transfection of the COX-2 gene upregulated MDR-1 gene expression in rat glomerular cells and attenuated the toxicity of adriamycin against renal cells, whereas in the presence of the COX-2 inhibitor NS-398, MDR-1 gene expression levels were significantly reduced and the cytotoxicity of adriamycin was enhanced [35]. In line with these findings, we found that the expression of both COX-2 and P-gp is significantly enhanced in MDR-Caco-2 cells. Celecoxib, a COX-2-specific inhibitor, downregulated P-gp expression in MDR-Caco-2 cells and sensitized them towards doxorubicin. As stated above, sinomenine inhibited the expression of COX-2 and P-gp. Additionally, when MDR-Caco-2 cells were treated with sinomenine plus PGE2, sinomenine failed to enhance the toxicity of doxorubicin towards MDR-Caco-2 cells (Fig. 2).
Previous studies have shown that the MDR-1 gene contains binding sites for NF-κB, which might correlate with MDR-1 gene expression [16,17].
NF-κB generally exists as a heterodimer of the p50 and p65 polypeptides, bound in the cytoplasm by the inhibitor protein IκB [36,37]. Following cellular stimulation by a series of cytokines or pathogens, IκB is phosphorylated by the IκB kinase (IKK) complex at serines 32 and 36 and then degraded by the 26S proteasome. Subsequently, NF-κB translocates to the nucleus, where it binds to regulatory elements within the promoter regions of target genes. There is evidence that NF-κB is downstream of COX-2 [38]; conversely, studies have indicated that downregulation of COX-2 expression can inhibit NF-κB [39,40]. In the present study, we found that sinomenine and celecoxib suppressed the activation of the NF-κB pathway in MDR-Caco-2 cells (Fig. 6).
In conclusion, we developed a multidrug-resistant Caco-2 (MDR-Caco-2) cell line, which overexpresses both P-gp and COX-2, by exposing Caco-2 cells to increasing concentrations of doxorubicin. Sinomenine downregulated the expression of MDR1 mRNA and protein via the NF-κB pathway and inhibited the expression of COX-2, which was correlated with P-gp expression. Our findings therefore provide new insights into the regulation of P-gp expression in multidrug-resistant cells and suggest new potential strategies for the reversal of P-gp-mediated anticancer drug resistance. However, other signaling molecules may also participate in the regulation of MDR-Caco-2 cells and thus contribute to the development of multidrug resistance. Further studies are needed to explore how COX-2, NF-κB and other signaling molecules interact in the development of P-gp-mediated multidrug resistance in cancer cells.
A novel structure-based control method for analyzing nonlinear dynamics in biological networks
Exploring complex biological systems requires adequate knowledge of the system's underlying wiring diagram but not its specific functional forms. Thus, exploration actually requires the concepts and approaches delivered by structure-based network control, which investigates the controllability of complex networks through a minimum set of input nodes. Traditional structure-based control methods focus on the structure of complex systems with linear dynamics and may not match the meaning of control well in some biological systems. Here we take into consideration the nonlinear dynamics of some biological networks and formalize the nonlinear control problem of undirected dynamical networks (NCU). We then design and implement a novel and general graph-theoretic algorithm (NCUA), from the perspective of the feedback vertex set, to discover the possible minimum sets of input nodes for controlling the network state. We applied our NCUA to both synthetic and real-world networks to investigate how network parameters, such as the scaling exponent and the degree heterogeneity, affect the control characteristics of networks with nonlinear dynamics. The NCUA was also applied to analyze the patient-specific molecular networks corresponding to patients across multiple datasets from The Cancer Genome Atlas (TCGA), demonstrating the advantages of the nonlinear control method over other state-of-the-art linear control methods in characterizing and quantifying patient-state changes. Thus, our model opens a new way to control undesired transitions of cancer states and provides a powerful tool for theoretical research on network control, especially in biological fields.

Author summary: Complex biological systems usually have nonlinear dynamics, as in biological gene (protein) interaction networks and gene co-expression networks. However, most structure-based network control methods focus on the structure of complex systems with linear dynamics, so the ultimate purpose of controlling biological networks is still too complicated to be directly solved by such methods; we currently lack a framework to control biological networks with nonlinear and undirected dynamics theoretically and computationally. Here, we discuss the concept of the nonlinear control problem of undirected dynamical networks (NCU) and present a novel graph-theoretic algorithm, from the perspective of a feedback vertex set, for identifying the possible sets with minimum input nodes for controlling the networks. The NCUA searches for the minimum set of input nodes needed to drive the network from an undesired attractor to a desired attractor, which differs from conventional linear network control such as the Maximum Matching Sets (MMS) and Minimum Dominating Sets (MDS) algorithms. In this work, we evaluated the NCUA on multiple synthetic scale-free networks and real complex networks with nonlinear dynamics and found novel control characteristics of undirected scale-free networks. We used the NCUA to thoroughly investigate the sample-specific networks, and their nonlinear controllability, corresponding to cancer samples from TCGA, which are enriched with known driver genes and known drug targets as controls of pathologic phenotype transitions. We found that our NCUA control method has better predictive performance for indicating and quantifying patient biological system changes than the state-of-the-art linear control methods.
Our approach provides a powerful tool for theoretical research on network control, especially in a range of biological fields.
Introduction
Numerous biological systems can be represented as networks, and several approaches have been developed to construct reliable biological networks [1,2]. Since the control process is dominated by the intrinsic structure and dynamic propagation within the system, the concepts and approaches of structure-based network control are urgently required to investigate the controllability of complex networks through a minimum set of input nodes [3-13]. The analysis of biological systems from the structure-based control viewpoint provides a deeper understanding of the dynamics of complex large-scale biological systems [14-16]. So far, studies exploiting the structure-based control of complex networks can be divided into two main categories according to the style of network: approaches focusing on directed networks [3-6,10-13,17] and methods focusing on undirected networks [7-9]. For directed networks, many researchers have developed linear structural control tools to identify the minimum number of input nodes that need to be controlled by external signals for the system to achieve the desired control objectives [5,6,13]. Although those linear control tools have many applications to biomolecular systems, such as the detection of driver metabolites in the human liver metabolic network [14] and driver gene discovery in pan-cancer datasets [15], they may give only an incomplete view of the network control properties of a system with nonlinear dynamics [17]. Recently, an analytical tool called feedback vertex set control (FC) has been shown to study the control of large directed networks in a reliable and nonlinear manner, where the network structure is known a priori and the functional form of the governing equations is not specified but must satisfy some properties [12,18]. This formalism identifies the feedback vertex set (FVS) of a network, which uniquely determines the long-term dynamics of the entire network. With such a scheme, the source nodes can converge to a unique state (or trajectory) without independent control [12,18]. Zañudo et al. showed that both the states of the source nodes and of the FVS can change the dynamic attractors available to the network; they identified the source nodes and the FVS as the input nodes for controlling directed networks with nonlinear dynamics [17]. The above approaches focus only on the linear or nonlinear dynamics of directed networks. There are few approaches for investigating linear dynamics on undirected networks: the exact controllability framework [7] offers an analytical tool to treat the structural controllability of undirected networks, and the Minimum Dominating Sets (MDS) approach [8] is an alternative way to investigate the controllability of undirected linear networks, although it relies on the strong assumption that each controller can control its outgoing links independently. Therefore, there is still a need for efficient tools to analyze the structural controllability of undirected networks with nonlinear dynamics.
In this paper, we first formalize the nonlinear control problem of undirected networks (NCU): how to choose proper input nodes to drive the network from one attractor to a desired attractor in networks with nonlinear and undirected dynamics. We developed a novel graph-theoretic algorithm (NCUA) to measure the controllability of undirected networks based on feedback vertex sets. Specifically, (i) we assume that each edge in the network is bidirectional; (ii) we construct a bipartite graph from the original undirected network, in which the top-side nodes are the nodes of the original graph and the bottom-side nodes are the edges of the original graph (Figure 1(b)); (iii) we adopt an equivalent optimization procedure, determining a minimum dominating set of top-side nodes that covers the bottom-side nodes of the bipartite graph and can thus control the whole network; and (iv) we apply random Markov chain sampling to obtain the distribution of input-node sets and uncover the possible sets of input nodes that control the undirected network.
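A minimal sketch of step (ii), the node-edge bipartite construction, is given below in Python; the helper name to_bipartite is ours, and plain dictionaries stand in for a graph library. Note that a set of top-side nodes covering all bottom-side (edge) nodes is exactly a vertex cover of the original graph.

```python
# A minimal sketch of step (ii): build the node-edge bipartite graph.
def to_bipartite(edges):
    """Top side: original nodes; bottom side: one node per original edge.

    A set of top-side nodes covering every bottom-side node is exactly a
    vertex cover of the original undirected graph.
    """
    top = set()
    bottom = {}                               # edge-node -> its two endpoints
    for u, v in edges:
        top.update((u, v))
        bottom[frozenset((u, v))] = (u, v)
    return top, bottom


edges = [(1, 2), (2, 3), (3, 1), (3, 4)]      # a toy undirected graph
top, bottom = to_bipartite(edges)
print(len(top), len(bottom))                  # 4 top-side nodes, 4 edge-nodes
```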
Since most real-world networks have a statistically significant power-law degree distribution, we defined the control characteristic as the fraction of identified minimum input nodes and applied the NCUA to multiple synthetic scale-free (SF) networks and real-world networks, obtaining several counterintuitive findings: (i) for fixed average degree, the fraction of input nodes increases as the degree exponent increases, indicating that the control characteristic is affected by degree heterogeneity; (ii) with a newly defined degree heterogeneity, the fraction of input nodes decreases monotonically as the degree heterogeneity grows for fixed average degree; together, the degree heterogeneity and the average degree determine the minimum number of control input nodes; (iii) the set of input nodes tends to target highly connected nodes, whereas previous linear control studies suggested that driver nodes tend to avoid high-degree nodes [9-12].
We also investigated the network transition between the disease state and the normal state, identified with stable network states (dynamical attractors), in personalized patient networks. For each sample of each cancer patient from 10 cancer sites in TCGA, we constructed a personalized differential network between the normal and disease states and applied the NCUA to find the key control genes of pathologic phenotype transitions. We found that (i) although most cancer samples have similar nonlinear controllability, the determining control genes still differ between cancer samples; (ii) analyzing the controllability of the reconstructed individual networks for single samples across the 10 cancer datasets, the high-confidence cancer-specific key genes were significantly enriched in the Cancer Gene Census (CGC) set and the FDA-approved drug target gene (DTG) set. Compared with traditional linear network control models (exact control and Liu's linear control) [7,8], our results imply that a single-patient system in cancer may be more controllable than predicted by linear dynamical networks, owing to the ubiquity of nonlinear features in biological networks. In contrast to another model of undirected network control, the MDS [8], our NCUA also showed higher performance in identifying key genes in the CGC and DTG sets, which were underestimated by the MDS. In conclusion, our model provides a new powerful tool for the theoretical and empirical study of network controllability, especially in biological and biomedical fields.
Formulation of the NCU
Network dynamics are commonly nonlinear, especially at the level of nodes or small groups of nodes in the network [19]. In past decades, the focus of network control research has shifted from linear dynamics to nonlinear dynamics [12,20-24].
One of these methods, namely the feedback vertex set control (FC) [12,18], can be reliably applied to large complex networks in which the structure is well known and the functional form of the governing equations is not specified but must satisfy some properties. Although Zañudo et al. applied the FC to dynamic models of directed networks to predict nodes for the control of various technical, social, and biological networks [17], we still lack a framework to solve the nonlinear control problem on undirected networks [8]. Here, we focus on the nonlinear control problem of undirected networks. Given an undirected network G(V, E), we consider the broader class of models of [23], written as Equation (1). We then formalize the concept of nonlinear control of undirected networks: choosing, at minimum cost, the set of input nodes that are injected with input signals u so as to drive Equation (1) from an initial attractor to a desired attractor. In Figure 1 we give a diagrammatic illustration of the NCU with a simple example.
Algorithm for the Nonlinear Control of an Undirected Network (NCUA)
In many complex biological systems, there is adequate knowledge of the underlying wiring diagram, but not of the specific functional forms [17]. Analyzing such complicated systems requires the concepts and approaches of structure-based control, which investigates the controllability of complex networks through a minimum set of input nodes. Traditional structure-based control methods focus on linear dynamics; for nonlinear dynamics, the attractors of a directed network are determined by its cycle structure and its source nodes [17]. However, that work focuses on the structural control of directed networks with nonlinear dynamics; we still lack an analytical framework for the feedback control of undirected networks.
Therefore, to solve the proposed NCU problem, we developed a novel algorithm, the NCUA, based on the assumption that the edges of the undirected network are modeled as bi-directed edges. For a system with nonlinear dynamics represented by an undirected network, we choose the set of input nodes with minimum cost to control the system from the initial attractor to the desired attractor. In the example of Figure 1, by controlling the three minimum feedback vertex nodes {v1, v4, v9}, whose removal leaves the graph without cycles, the system is guaranteed to be controllable from the initial attractor to the desired attractor. Figure 1 also illustrates the process by which the NCUA discovers the possible input nodes; the details of the NCUA are introduced below.
I. Constructing a bipartite graph from the original undirected network
For a given undirected network G(V, E), we assume that each edge is bi-directed. We then construct a bipartite graph whose top-side nodes are the nodes V of the original graph and whose bottom-side nodes V⊥ are the edges E; each bottom-side node is connected to the two top-side nodes on which the corresponding edge is incident.
II. Obtaining the cover set with minimum cost by using Integer Linear Programming (ILP)
We formulate an optimization problem for determining the nodes that control the whole network: how to select a proper node set S such that each bottom-side node in V⊥ has at least one of its endpoint nodes in S. This can be solved with the following ILP model, in which x_i = 1 when node i belongs to the cover set and the objective is to obtain the minimum number of nodes covering the set V⊥:

  minimize Σ_{i ∈ V} x_i
  subject to x_u + x_v ≥ 1 for every edge (u, v) ∈ V⊥,
             x_i ∈ {0, 1} for every i ∈ V.

Although this is an NP-hard problem [8], the optimal solution can be obtained efficiently for graphs of moderate size, with up to a few tens of thousands of nodes, by an algorithm that uses the classic LP-based branch-and-bound method [26,27].
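The cover problem above is the classic minimum vertex cover ILP. The following Python sketch, assuming the PuLP library (not mentioned in the paper), illustrates the formulation on a toy graph.

```python
# A sketch of the ILP in step II on a toy graph, assuming PuLP;
# x_i = 1 exactly when node i enters the cover set S.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

nodes = [1, 2, 3, 4]
edges = [(1, 2), (2, 3), (3, 1), (3, 4)]

prob = LpProblem("ncua_cover", LpMinimize)
x = {i: LpVariable("x_%d" % i, cat=LpBinary) for i in nodes}
prob += lpSum(x[i] for i in nodes)            # minimise |S|
for u, v in edges:
    prob += x[u] + x[v] >= 1                  # every edge-node is covered

prob.solve()
cover = sorted(i for i in nodes if x[i].value() == 1)
print(cover)                                  # a minimum cover, e.g. [2, 3]
```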
III. Obtaining different input nodes by using random Markov chain sampling.
Here, we take the minimum dominating node set of the bipartite graph obtained in step II as the initial state M_1 of a Markov chain over cover sets.

Iteration: For t = 1, 2, …, obtain M_{t+1} from M_t as follows. Choose a node w uniformly at random in M_t; then delete node w and add a new node which can cover the edges connected to w in the bipartite graph, until a new cover set has been obtained.
Accept the new state M_{t+1} at random.
We terminate the MC sampling when the absolute percentage error between successive estimates of the input-node frequencies falls below a preset threshold.
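The sampling step can be sketched as follows. This Python sketch is our loose reading of the procedure (the exact acceptance rule and stopping criterion in the paper may differ), with invented helper names; the sampling frequency of each node across the visited minimum covers is then used to rank candidate input nodes.

```python
# A rough sketch of step III: random swaps between minimum cover sets.
import random
from collections import Counter


def mc_sample(edges, cover, steps=1000, seed=0):
    rng = random.Random(seed)
    freq, current = Counter(), set(cover)
    for _ in range(steps):
        w = rng.choice(sorted(current))
        trial = current - {w}
        for u, v in edges:                    # greedily re-cover all edges
            if u not in trial and v not in trial:
                trial.add(u if rng.random() < 0.5 else v)
        if len(trial) <= len(current):        # keep only minimum-size covers
            current = trial
        freq.update(current)
    return {i: freq[i] / steps for i in sorted(freq)}


edges = [(1, 2), (2, 3), (3, 1), (3, 4)]
print(mc_sample(edges, {2, 3}))   # node 3 lies in every minimum cover
```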
Controllability of the SF network revealed by the NCU in synthetic networks
To evaluate the control characteristics of the NCU, we applied our NCUA to synthetic SF networks generated by the static model [13,29], estimating the minimum number of input nodes needed to control networks with nonlinear dynamics. For a given degree exponent γ and average degree <k>, 100 networks of 10,000 nodes were constructed, and the results of the NCUA were averaged over all realizations. The numerical results for the synthetic networks are shown in Figure 2.
We plotted the fraction of input nodes as a function of the degree exponent and the average degree (Figure 2(a-c)). These results differ from those of linear structural controllability [13], but they are in agreement with the results of the MDS control scheme [8]. Note that diagrams of EC control, Liu's linear control, MDS control, and our NCUA are shown in Figure S2.
Counterintuitive findings of the controllability from the NCU on real-world networks
We collected 17 networks in 11 categories, chosen for their diversity in application and scope (Additional File 2). By calculating the P-value of the Kolmogorov-Smirnov goodness-of-fit statistic [31] (Table S3), we found that these networks significantly follow a power-law distribution; detailed results are given in Supplementary Note 5 of Additional File 1. In Figure 3(a), we show that the number of input nodes tends to increase as the exponent and the average degree increase.
Furthermore, from Figure 3(a), we can approximately estimate the scaling exponent of a real network by matching its control characteristic to that of the synthetic networks.
We list the number of input nodes as a function of the average degree and the new converted degree-heterogeneity measure in Figure 3(b). As shown there, networks with a lower average degree and higher degree heterogeneity are easier to control than those with a large average degree; the control characteristics of networks can be fully discriminated by the new converted degree heterogeneity together with the average degree. We also find that the set of input nodes tends to target highly connected nodes, whereas previous linear control studies suggested that driver nodes tend to avoid high-degree nodes (Figure 3(c)) [7]. We observe that most types of biological networks (e.g., gene regulatory, PPI, and genetic networks) require the control of a smaller fraction of nodes than social networks (trust and social communication networks): the fraction of input nodes is between 10% and 30% in biological networks vs. more than 40% in social networks.
These predictions match well with recent experimental results in cellular reprogramming and large-scale social network experiments [32,33]. Note that this prediction stands in contrast with those of linear control [7] on the same types of networks and, to some extent, addresses the initial arguments on network controllability [34,35].
To ensure that our NCU is physically meaningful, we then examined the control energy and control time required to achieve control in networks with nonlinear dynamics. We applied a 3-dimensional stable nonlinear Lorenz oscillator system [36,37] on the real-world networks to drive the networked system to the desired attractor (Figure 4). Note that in Figure 4, the energy cost and time cost of a given network are the averages over different input-node sets.
Finally, we evaluated the differences between closed-loop controllers and linear feedback controllers in nonlinear network control, adopting the local feedback controllers of [36,37] and the closed-loop controllers of [23] on the real-world networks. Figure 4 shows that closed-loop controllers demand a greater number of determining nodes but require less control time and control energy than the traditional linear feedback controllers.

For each patient, we constructed a sample-specific network whose nodes are differentially expressed genes and whose edges exist in both the gene-gene interaction network and the differential expression network (Figure S1 in Additional File 1).
We first used KS statistics to confirm that these networks have scale-free properties.
We then computed the linear regression between the log10-transformed frequency of degree d, log10(f(d)), and the log10-transformed degree, log10(d). We found that the scale-free exponents of the single-sample networks in different cancer types are less than 2 (Additional File 3). We also found that, under the NCU, controllability of the single-sample network in different cancer types, that is, the ability to move between the normal state and the tumor state, is much easier to achieve than controllability under linear network dynamics, including the EC control scheme and Liu's control scheme (Figures 5(a) and S5 of Additional File 1). This result reveals that, for a cancer patient, only a small fraction of genes is needed to move the network between stable states, whereas controlling the biological network from an initial state to an arbitrary state, as required under linear dynamics, is much harder. This observation is in agreement with previous biological conclusions [34,35].
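The exponent estimate described here can be reproduced in a few lines. The sketch below, with an invented toy degree sequence, regresses log10 f(d) on log10 d using NumPy.

```python
# A sketch of the exponent estimate: regress log10 f(d) on log10 d.
# The toy degree sequence is invented for illustration.
import numpy as np
from collections import Counter

degrees = [1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 3, 6]
counts = Counter(degrees)
d = np.array(sorted(counts))
f = np.array([counts[k] for k in d]) / float(len(degrees))

slope, intercept = np.polyfit(np.log10(d), np.log10(f), 1)
print("estimated scale-free exponent: %.2f" % -slope)
```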
The key control genes in the patient-specific networks were further investigated using the NCUA method, which ranks nodes as input control nodes according to their sampling frequency in the random Markov chain sampling, in decreasing order. We first defined the personalized key control genes as the genes that appear as key control nodes with high frequency (f > 0.6) in the patient-specific network. Then, we calculated the frequency of the personalized key control genes for the different cancer datasets, defining high-confidence cancer-specific key control genes (f > 0.6), middle-confidence key control genes (0.3 < f ≤ 0.6), and low-confidence key control genes (f ≤ 0.3). The computational results for the 10 cancer datasets are listed in Figure 5(b1). Finally, we computed the P-value for the high-confidence key control genes being enriched in the Cancer Gene Census set [42] or the FDA-approved drug target gene set [43] using the hypergeometric test [44]; a P-value below 0.05 was regarded as significant enrichment. Figure 5(b2) shows that the high-confidence key control genes for the different cancer datasets are significantly enriched in the Cancer Gene Census set and the FDA-approved DTG set.
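The enrichment computation is a standard one-sided hypergeometric test. The Python sketch below, assuming SciPy is available and with invented counts, shows the calculation.

```python
# A sketch of the hypergeometric enrichment test; counts are invented.
from scipy.stats import hypergeom

M = 20000   # background genes
n = 700     # genes in the Cancer Gene Census set
N = 50      # high-confidence key control genes drawn
k = 12      # overlap between the two sets

p_value = hypergeom.sf(k - 1, M, n, N)        # P(X >= k)
print("enrichment P-value: %.3g" % p_value)   # below 0.05 -> enriched
```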
Furthermore, we find that the set of input nodes tends to target highly connected nodes, as shown in Figure S6 of Additional File 1. These results are in agreement with previous biological observations [45,46].
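A quick way to check this hub bias on any network is to compare the mean degree of the input set with the network-wide mean degree. The ratio below is an illustrative diagnostic, not the exact statistic behind Figure S6.

```python
import numpy as np
import networkx as nx

def hub_bias(G, input_nodes):
    """Ratio of the mean degree of the selected input nodes to the mean
    degree of the whole network; values well above 1 indicate that the
    input set is biased toward highly connected nodes."""
    deg = dict(G.degree())
    mean_inputs = np.mean([deg[v] for v in input_nodes])
    mean_all = np.mean(list(deg.values()))
    return mean_inputs / mean_all
```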
Discussion
Nacher and Akutsu introduced the MDS to study the controllability of undirected networks by assuming that each node in the MDS can control all of its outgoing edges separately [8]. However, the MDS-based model assumes that a more powerful form of control is possible (because each driver node can control its outgoing links independently), which has the drawback of requiring higher costs and may not be feasible in many kinds of networks. Even if such powerful controllers exist, the MDS-based model still neglects the nonlinear control of complex systems (networks). Despite its success and widespread application in searching for important genes in protein interaction networks [16,47-50], the MDS-based model may give an incomplete view of the control properties of undirected networks. In the case of a network with nonlinear dynamics, the definition of control used by the MDS-based model (full control: from any initial state to any final state) does not always match the meaning of control in biological, technological, and social systems, where control tends to involve only naturally occurring system states.
In this work, our control model NCU drives the whole networked system from its initial state toward a desired dynamical attractor (e.g., a steady state or limit cycle) by steering the input nodes to that attractor. Our NCU algorithm (NCUA) predicts the input nodes whose override (by an external controller or drive signals) can steer a network's dynamics toward its desired long-term dynamic behaviors (its desired dynamical attractors). Furthermore, we applied the NCU control model to biological, technological, and social networks, and we identified the topological characteristics underlying the predicted node overrides. We also found that networks with a low average degree are easier to control than those with a large average degree, which is the opposite of the previous observation from MDS theory, as shown in Figure S7. We summarize the differences between the MDS-based method and the NCU method in Table S2 of Additional File 1.
The NCU and MDS methods are very different, so one should be careful about extending their predictions beyond their realm of applicability. In fact, for a network satisfying the MDS assumption, the key nodes identified using our NCU control model provide sufficient conditions to control the system from any initial state to any desired final steady state. For example, in Figure 1, the key nodes {v1, v4, v9} identified using our NCU control model dominate the nodes of the whole network under the MDS model, but the key nodes {v1, v4} identified using the MDS model cannot cover the edges of the whole network under our NCU control model. To further emphasize the advantage of the NCU method over the MDS method, we provide the enrichment results in the CGC set and the DTG set for the input nodes that belong to the NCU set but not to the MDS set for the individual (paired) samples in the 10 cancer datasets. Figure 5(c) shows that the NCUA can identify key genes in the CGC set and the FDA-approved DTG set that are missed by the MDS method. The NCU model thus provides a more complete insight into the control of network-based systems.
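The contrast drawn in this example, covering edges (the NCU-style requirement) versus dominating nodes (the MDS-style requirement), can be made concrete with two greedy heuristics. The small graph and the greedy strategies below are illustrative assumptions, not the NCUA or MDS implementations used in the paper.

```python
import networkx as nx

def greedy_vertex_cover(G):
    """Greedy vertex cover: repeatedly take the node covering the most
    uncovered edges. Under the NCU-style criterion, every edge must
    touch a chosen node."""
    H = G.copy()
    cover = set()
    while H.number_of_edges() > 0:
        v = max(H.degree(), key=lambda x: x[1])[0]
        cover.add(v)
        H.remove_node(v)
    return cover

def greedy_dominating_set(G):
    """Greedy dominating set (MDS-style criterion): every node must be
    chosen or adjacent to a chosen node."""
    uncovered = set(G.nodes())
    dom = set()
    while uncovered:
        v = max(G.nodes(), key=lambda u: len(({u} | set(G[u])) & uncovered))
        dom.add(v)
        uncovered -= {v} | set(G[v])
    return dom

# A hub (node 0) with three neighbors, plus one edge between two leaves.
G = nx.Graph([(0, 1), (0, 2), (0, 3), (2, 3)])
print(greedy_vertex_cover(G))    # e.g. {0, 2}: touches all four edges
print(greedy_dominating_set(G))  # {0}: dominates all nodes...
# ...but {0} leaves the edge (2, 3) uncovered, mirroring the {v1, v4}
# example above: an MDS-style set need not satisfy the NCU criterion.
```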
Conclusions
Generally, complex biological networks, for which data are limited, can be mapped less accurately than engineered networks such as power grids. Recently, several control principles have been developed for complex networks, but controlling complex biological networks is still hindered by limited network data [24]. In biological networks, we usually use undirected networks to model protein interactions. Controlling the network dynamics by regulating a few key nodes in an undirected network to achieve optimal performance is still a major challenge. The two conventional control frameworks for undirected network dynamics, exact controllability and the MDS-based model, focus on linear dynamics in undirected networks. A theoretical control framework is therefore urgently required to solve the nonlinear control problem in undirected networks. Instead of focusing on how to obtain state transitions of an undirected network with linear dynamics, a new concept, the nonlinear control of undirected networks (NCU), is introduced to understand how a proper set of input nodes can achieve control from an initial attractor to a desired attractor in undirected networks. To solve this problem, an NCUA based on feedback vertex sets was designed and implemented. The NCUA has been evaluated on multiple synthetic SF networks and real complex networks, and it has exhibited novel control characteristics of undirected SF networks with nonlinear dynamics. The NCUA has also been applied to investigate the networks of cancer samples from TCGA and their nonlinear controllability, screening known driver genes and known drug targets as controls of their phenotype transitions, and providing meaningful predictions with biological significance. Interestingly, we find that the control performance of our nonlinear control method on single-patient systems in cancer is much better than that of traditional linear control methods, which are limited to a canonical linear time-invariant approximation. The key control genes for the individual cancer samples show significant enrichment in both the CGC set and the FDA-approved DTG set. Furthermore, it is worth exploring how to solve the NCU model under more constrained conditions (such as target control and constrained target control [5,6,13]) and how to extend our method to edge dynamics [10] to create new avenues for tackling complex systems. Note that although the NCUA is applied here to the analysis of undirected networks, we believe that, in the future, it can be extended to directed or semi-directed networks by applying a module processing technique with a network community detection algorithm from the microcosmic perspective [47,51].
|
2019-04-03T13:09:04.789Z
|
2018-12-21T00:00:00.000
|
{
"year": 2018,
"sha1": "d8c67bc63e3798ff6ae1480e380ce3a885208722",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1007520&type=printable",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "90abf67bb8e7e7749718b522873288e610645fa0",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
241901232
|
pes2o/s2orc
|
v3-fos-license
|
GM-CSF and HMGB1 levels were associated with the clinical characteristics and prognosis of childhood refractory Mycoplasma pneumoniae pneumonia
Background: To analyze the relationship between granulocyte-macrophage colony-stimulating factor (GM-CSF) and high mobility group box 1 (HMGB1) levels in alveolar lavage fluid and the clinical characteristics and prognosis of children with refractory Mycoplasma pneumoniae pneumonia (MPP), so as to provide reliable targets for clinical diagnosis and treatment. Methods: A total of 106 children diagnosed with MPP and scheduled for bronchoalveolar lavage therapy were selected in this study and divided into 2 groups according to clinical diagnosis: those showing clinical and radiological deterioration despite appropriate antibiotic therapy for ≥7 days were classified into the refractory MPP group (n=47), while the others were classified into the non-refractory MPP group (n=59). Data on physical examination, treatment and outcome were collected. In addition, the GM-CSF and HMGB1 levels in alveolar lavage fluid at each bronchoalveolar lavage therapy were measured by ELISA kits. Results: There was no significant difference in age, sex, course of fever, highest temperature, WBC, L, PLT, ALT, AST, CK-MB, D-D, CK, IgG, IgA, IgM, C3, and C4 between the refractory MPP group and the non-refractory MPP group on admission (P>0.05). The levels of N, CRP, PCT, and LDH in the refractory MPP group were higher than those in the non-refractory MPP group, and the differences were statistically significant (P<0.05). Both GM-CSF and HMGB1 levels were positively correlated with the traditional indicators N, CRP, PCT and LDH (r=0.611-0.785, P<0.05). ROC analysis showed that CRP, GM-CSF and HMGB1 had predictive value for refractory MPP attack (AUC=0.636, 0.657, 0.651, P<0.05). Logistic regression analysis showed that GM-CSF and HMGB1 were independent factors for refractory MPP (OR>1.0, P<0.05). ROC analysis also showed that the levels at the 2nd bronchoalveolar lavage therapy had predictive value for long hospital stay and poor prognosis of refractory MPP.
Mycoplasma pneumoniae (MP) is one of the most common causes of community-acquired pneumonia (CAP) in children and young adults, with the ability to cause local epidemics [1]. In general, Mycoplasma pneumoniae pneumonia (MPP) is mild and self-limiting and can be rapidly improved by macrolide treatment. However, in recent years, the incidence of refractory MPP, characterized by rapid development, a long disease course and severe pulmonary lesions, has been increasing year by year [2]. In addition to improving treatment methods, how to improve the early diagnosis rate and seize the best treatment opportunity is also a focus of current research [3]. Several mechanisms, including direct cytotoxicity, intracellular colonization, an adsorption effect on respiratory epithelial cells, and the induction of immune dysfunction, have been reported so far, among which immune dysfunction, especially adaptive immune dysfunction, is considered the most important factor promoting the development of refractory MPP [4].
Despite being only a subsystem within the more basic and all-encompassing system maintaining integrity, the adaptive immune system plays a key role in recognizing pathogens and distinguishing dead cells, thus clearing harmful substances and blocking secondary damage [5]. It is reported that pathogen-associated molecular pattern molecules (PAMPs) and damage-associated molecular pattern molecules (DAMPs) are the most important adjuvants for the adaptive immune system [6]. Numerous PAMPs have been identified, whereas only two typical DAMPs are known so far: high mobility group box 1 (HMGB1) protein and adenosine triphosphate (ATP) [7]. HMGB1 is a highly conserved non-histone, nuclear DNA-binding protein widely found in eukaryotic cells. In addition to its classical roles, such as DNA recombination and cell differentiation, HMGB1 also serves as an inflammatory factor participating in anti-pathogen activities.
Granulocyte-macrophage colony-stimulating factor (GM-CSF) was first detected in lung tissue from mice cultured in vitro after LPS stimulation. In recent years, a few studies have focused on GM-CSF expression in children with MPP and found that GM-CSF plays an important role in the neutrophil-mediated inflammatory response in acute and severe MPP [8]. Liu Y, et al. [9] found that the GM-CSF level of children with MPP was associated with the duration of fever and might be a good marker for prognosis. It has been reported that the bronchial epithelium can produce GM-CSF upon MP stimulation, acting as a proinflammatory cytokine that recruits and activates neutrophils. Meanwhile, GM-CSF can also modulate oxidative stress activity by priming the human neutrophil respiratory burst. Moreover, we noted that both HMGB1 and GM-CSF have important effects on Toll-like receptor 2, the main receptor recognizing mycoplasma. In this study, we analyzed their relationship with patients' clinical characteristics and prognosis and found that both GM-CSF and HMGB1 have prognostic value in determining the disease status of refractory MPP.
Prognostic assessment methods
Most children in the non-refractory MPP group had a good prognosis and were discharged from hospital within 7 to 15 days, so we mainly assessed outcomes in the refractory MPP group. The length of hospital stay and pulmonary function on discharge were assessed. Children with respiratory dysfunction such as pulmonary fibrosis, unilateral hyperlucent lung syndrome or asthma were classified as having a poor prognosis.
Statistical analysis
Statistical analyses were performed using SPSS software (version 22.0). All normally distributed quantitative variables are expressed as means ± SD; the remainder are expressed as median (IQR) values. ANOVA and LSD t-tests were performed to compare differences among groups for normally distributed variables, and Mann-Whitney U tests were performed for variables not normally distributed. Categorical data are expressed as percentages, and chi-square tests were performed to compare differences between groups. The Kruskal-Wallis H test was performed for comparisons between groups with ranked data. In addition, ROC curve analysis was used to ascertain the predictive value of BALF HMGB1 and GM-CSF for the occurrence of refractory MPP and for poor prognosis of refractory MPP.
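As an illustration of the ROC step, the sketch below computes an AUC and a Youden-index cut-off for a single BALF marker using scikit-learn. The simulated GM-CSF values are placeholders, not the study's measurements; only the group sizes (47 refractory, 59 non-refractory) are taken from the text.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Illustrative data: y = 1 for refractory MPP, 0 otherwise;
# `gmcsf` holds a simulated BALF GM-CSF level for each child.
rng = np.random.default_rng(0)
y = np.r_[np.ones(47), np.zeros(59)]
gmcsf = np.r_[rng.normal(60, 15, 47), rng.normal(50, 15, 59)]

auc = roc_auc_score(y, gmcsf)
fpr, tpr, thresholds = roc_curve(y, gmcsf)
cutoff = thresholds[np.argmax(tpr - fpr)]  # optimal cut-off by Youden index
print(f"AUC = {auc:.3f}, cut-off = {cutoff:.1f}")
```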
Results
Clinical characteristics of children in the refractory MPP group and non-refractory MPP group before treatment

The clinical characteristics of the children before treatment are shown in Table 1. There was no significant difference in age, sex, course of fever, highest temperature, WBC, L, PLT, ALT, AST, CK-MB, D-D, CK, IgG, IgA, IgM, C3, and C4 between the refractory MPP group and the non-refractory MPP group.
The levels of N, CRP, PCT, and LDH in the refractory MPP group were all higher than those in the non-refractory MPP group, and the differences between groups were statistically significant (P < 0.05).

Table 1. Comparison of baseline data between children with and without refractory MPP.
The relationship of BALF HMGB1 and GM-CSF with refractory MPP attack
We analyzed the diagnostic efficiency of N, CRP, PCT, LDH, HMGB1, and GM-CSF for refractory MPP.
ROC analysis results (Fig. 2 and Table 2) showed that CRP, GM-CSF and HMGB1 had predictive value for refractory MPP attack (AUC = 0.636, 0.657, 0.651, P < 0.05). Logistic regression analysis (Table 3) showed that GM-CSF and HMGB1 were independent influencing factors for refractory MPP (OR > 1.0, P < 0.05).

The relationship of BALF HMGB1 and GM-CSF with the prognosis of refractory MPP

The hospital stay of the refractory MPP group ranged from 16 to 37 days. There were 7 cases of poor prognosis in all, including 1 pulmonary fibrosis, 1 cryptogenic organizing pneumonia, 1 unilateral hyperlucent lung syndrome, and 1 asthma. The BALF HMGB1 and GM-CSF levels decreased with treatment, and the levels at the 2nd bronchoalveolar lavage therapy had predictive value for long hospital stay (>28 d) and poor prognosis of refractory MPP (Fig. 3).
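The logistic-regression step reported in Table 3 can be reproduced along the following lines with statsmodels. The data frame here is synthetic and the variable names are placeholders, so only the workflow carries over: fit the model, exponentiate the coefficients to odds ratios, and read off confidence intervals and p-values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Illustrative frame only; the real analysis used the measured BALF levels.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "refractory": np.r_[np.ones(47), np.zeros(59)],
    "GM_CSF": rng.normal(55, 15, 106),
    "HMGB1": rng.normal(40, 10, 106),
})

X = sm.add_constant(df[["GM_CSF", "HMGB1"]])
fit = sm.Logit(df["refractory"], X).fit(disp=0)
odds_ratios = np.exp(fit.params)   # OR > 1: higher level raises risk
ci = np.exp(fit.conf_int())        # 95% CI on the OR scale
print(odds_ratios, ci, fit.pvalues, sep="\n")
```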
Discussion
Serum-specific antibody testing is the most common clinical method for the diagnosis of MP infection, but MP antibodies usually appear a week after disease onset, and children's MP antibodies appear even later because their immune systems are not fully developed [11]. The delay in clinical diagnosis contributes greatly to disease progression and the onset of refractory MPP.
With the progression of refractory MPP, some cases might develop long-term respiratory dysfunction or life-threatening complications [12,13]. We measured BALF HMGB1 and GM-CSF in this study for two reasons: the BALF sample derives directly from the lower respiratory tract, which means less chance of contamination and closer proximity to the lesion. Several studies have pointed out that BALF culture has a higher pathogen detection rate than sputum culture [16]. We also considered their anti-pathogen activities. As immune cells, neutrophils play an important role in anti-pathogen defense [18]. Increased neutrophils in peripheral blood and alveolar lavage fluid are important clinical features of mycoplasma pneumonia [19]. CRP and PCT are both inflammatory markers commonly used in the clinic. A retrospective study involving 119 children with community-acquired pneumonia revealed that PCT on admission correlates with the degree of inflammatory response, while CRP on admission is a predictor of lobar consolidation [20]. Neeser et al. [21] found that the CRP/PCT ratio could provide reliable information to help discriminate MPP from Streptococcus pneumoniae pneumonia. Several studies have claimed that PCT concentrations in children hospitalized with CAP can distinguish typical bacteria (e.g., Streptococcus pneumoniae and Staphylococcus aureus) from atypical bacteria (Mycoplasma pneumoniae and Chlamydophila pneumoniae) [22-24]. LDH appears in the blood when tissues and organs are damaged, so it is commonly used as a biomarker to evaluate disease severity of MPP in the clinic [25]. There is evidence that LDH is an easily accessible biological marker associated with several pulmonary disorders [26,27]. A prospective cohort study of 300 children with refractory pneumonia and 353 with general pneumonia showed that serum LDH with a cutoff of 379 U/L could be used to predict refractory MPP at an early stage of hospitalization [28]. In brief, the traditional markers N, CRP, PCT, and LDH have good predictive value for the occurrence and development of MPP, and the correlation of BALF HMGB1 and GM-CSF with them indicates a potential value of HMGB1 and GM-CSF in MPP evaluation.
After logistic regression and ROC analyses, we found that CRP, GM-CSF and HMGB1 had predictive value for refractory MPP attack, and that GM-CSF and HMGB1 were independent factors for refractory MPP. These results further confirm their important diagnostic value.
In addition, we found that the BALF HMGB1 and GM-CSF levels decreased with treatment. Considering that the number of cases decreased greatly by the 3rd bronchoalveolar lavage therapy, we analyzed the prognostic value of the BALF HMGB1 and GM-CSF levels from the first two treatments. The results showed that the levels at the 2nd bronchoalveolar lavage therapy had predictive value for long hospital stay (>28 d) and poor prognosis of refractory MPP. A few studies have shown that HMGB1 might be a reliable biochemical marker for the prognostic evaluation of MPP and community-acquired pneumonia [29,30], which is consistent with our study. Clinical studies of BALF GM-CSF in MPP are limited in number.
Previous studies have shown that GM-CSF plays a vital role in neutrophil-mediated inflammation in MP infection. Appropriate GM-CSF secretion is beneficial for lung protection, while excessive GM-CSF in the lung can cause alveolar macrophage accumulation, contributing to lung parenchymal injury from an excessive immune inflammatory response [31]. This may be one of the reasons why a high GM-CSF level leads to poor prognosis.
Conclusions
In conclusion, the levels of GM-CSF and HMGB1 in alveolar lavage fluid are closely related to the occurrence and development of refractory MPP. They can be used as auxiliary indicators for clinical diagnosis and prognosis evaluation and have guiding significance for clinical treatment.
Consent for publication
Approval.
Availability of data and materials
The data used to support the findings of this study are available from the corresponding author upon request.
Competing interests
None of the authors of the present manuscript have a commercial or other association that might pose a conflict of interest.
Funding
None.
Authors' contributions
LC and YW conceived the study. HL acquired data. HY performed the statistical analysis. LC drafted the manuscript. All authors read and approved the manuscript prior to submission.
|
2020-04-09T09:27:44.879Z
|
2020-03-24T00:00:00.000
|
{
"year": 2020,
"sha1": "987157df1323b87162d0f1d121bfafc86523317e",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-18708/v1.pdf",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "75303eebe2ef7cee357963e447b823f8dfa0e2cb",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
6982020
|
pes2o/s2orc
|
v3-fos-license
|
p120 catenin associates with microtubules: inverse relationship between microtubule binding and Rho GTPase regulation.
p120 catenin (p120ctn), an armadillo protein and component of the cadherin adhesion complex, has been found recently to induce a dendritic morphology by regulating Rho family GTPases. We have identified specific serines within the Arm repeat domain that, when mutated to alanine, promote p120ctn association with interphase microtubules, leading to microtubule reorganization and stabilization. The mutant p120ctn also localized to the mitotic spindle and centrosomes. In contrast to wild-type p120ctn, the microtubule-associated p120ctn mutant did not activate Rac1 and did not induce a dendritic morphology. In addition, we show that a basic motif within the p120ctn Arm repeat domain known to be required for the inhibition of RhoA is also required for binding to microtubules. We therefore propose that binding of p120ctn to microtubules is inversely related to its ability to regulate Rho GTPases.
Introduction

p120 catenin (p120ctn) is a member of the armadillo family of proteins and localizes to intercellular junctions and the nucleus (1). In epithelial cells p120ctn localizes to adherens junctions by binding to the juxtamembrane domain of E-cadherin (2-6). In contrast to β- and γ-catenin, which act via α-catenin to link the E-cadherin complex to actin filaments, p120ctn does not appear to be involved in anchoring the E-cadherin complex to the actin cytoskeleton. Recently, p120ctn has been shown to have a supporting (7) but not essential role in Drosophila E-cadherin-mediated adhesion (7,8). It is nevertheless believed that p120ctn is important for proper E-cadherin-mediated adhesion in mammalian cells, although whether it acts positively (9,10) or negatively (11,12) to regulate E-cadherin clustering is not clear. p120ctn binding may also have a stabilizing effect on the E-cadherin protein (13), possibly by preventing its ubiquitination and proteasomal degradation (14). p120ctn was originally identified as a Src tyrosine kinase substrate (15) and the Src phosphorylation sites have recently been mapped (16). Tyrosine phosphorylation has been suggested to increase the affinity of p120ctn for cadherins (17,18), but the exact influence of tyrosine phosphorylation on p120ctn is unknown. p120ctn is also phosphorylated on a number of serine/threonine residues (12, 19-22). The serine/threonine phosphorylation state of p120ctn correlates with its intracellular localization. Membrane-associated, E-cadherin-bound p120ctn is highly phosphorylated, whereas cytoplasmic p120ctn shows much reduced phosphorylation levels (10), suggesting that p120ctn is phosphorylated by membrane-associated kinases. Serine phosphorylation events within the N-terminus of p120ctn have been suggested to regulate E-cadherin clustering (12).
In E-cadherin-deficient cancer cell lines, p120ctn is frequently observed in the nucleus (23) where it may interact with the transcription factor Kaiso (24).
Matrilysin/MMP7, a matrix metalloproteinase, has been implicated as a target gene for Kaiso (25), but the significance of the p120ctn/Kaiso interaction remains to be established.
In addition to its role in cadherin-mediated adhesion, p120ctn has recently been found to be a regulator of Rho family GTPases (26), key players in coordinating a variety of cellular processes, including cell polarity, cell migration and cell adhesion (27).
Overexpression of p120ctn in fibroblasts induces a so-called "dendritic morphology", characterized by the extension of branching dendrite-like protrusions (5). The induction of a dendritic morphology by p120ctn involves inhibition of RhoA (28) and activation of Rac and Cdc42 (29,30). p120ctn might inhibit RhoA via a direct interaction (28), whereas the activation of Rac and Cdc42 has been suggested to occur via binding of the guanine nucleotide exchange factor Vav2 (30). The Arm repeat domain of p120ctn is required for the interaction with E-cadherin (4). There is evidence that the Arm repeat domain is also involved in the inhibition of RhoA (28), and cadherin-binding and Rho GTPase regulation have been proposed to be mutually exclusive events (30).
We report here that p120ctn can associate with microtubules and that this requires a basic motif in the Arm repeat domain. Our results with different p120ctn mutants suggest that binding of p120ctn to microtubules is inversely related to its ability to regulate Rho GTPases.
Specific serine to alanine point mutations in the Arm domain promote association of p120ctn with interphase microtubules
We have recently identified several novel serine phosphorylation sites in p120ctn ((32) and unpublished data). As part of these studies, a number of candidate serine residues in the N- and C-terminus and in the Arm repeat domain of p120ctn were mutated to alanine residues. We did not detect any serine phosphorylation sites within the Arm repeat domain of p120ctn. However, expression of the constructs in Cos-7 cells showed that two p120ctn mutants carrying serine to alanine mutations within Arm repeat domains 4 and 5 (SS538/539AA and S587A) localized along interphase microtubules in approximately 20% of transfected cells, whereas wild-type p120ctn did not localize to microtubules in these cells (Figure 1A and 1C). When the two mutations (SS538/539AA and S587A) were combined in one protein (SSS538/539/587AAA or AAA-p120ctn) the percentage of transfected Cos-7 cells showing p120ctn/microtubule colocalization increased to approximately 30% (Figure 1A and 1C). Proteins containing the single mutations S538A or S539A only rarely localized along microtubules (data not shown).
Localization of AAA-p120ctn, but not wild-type p120ctn, along microtubules was also observed in the fibroblast cell lines, Swiss 3T3 cells (data not shown) and NIH 3T3 cells (see Figure 4A). In these cells, AAA-p120ctn was associated with microtubules in more than 80% of transfected cells. The association of p120ctn with microtubules did not appear to depend primarily on the expression levels of the mutants. Cells expressing similar amounts of AAA-p120ctn (as judged by comparing fluorescence intensities of neighbouring cells) showed either near-complete p120ctn/microtubule colocalization or diffuse cytoplasmic and sometimes nuclear distribution of p120ctn. In some Cos-7 cells showing a low level of AAA-p120ctn expression, the staining for p120ctn along microtubules was not evenly distributed but was concentrated in distinctive puncta (Figure 1B). The association of AAA-p120ctn with microtubules is likely to be indirect, as it did not co-immunoprecipitate with α- or β-tubulin nor was it enriched in a detergent-insoluble microtubule-containing cell fraction (data not shown).
AAA-p120ctn and endogenous p120ctn localize to centrosomes and the mitotic spindle during mitosis
Association of AAA-p120ctn with tubulin-containing structures was not restricted to interphase microtubules. In Cos-7 cells, AAA-p120ctn localized to both the mitotic spindle and to centrosomes during mitosis (Figure 2A). Most of the exogenous p120ctn appeared to be associated with the mitotic spindle but a fraction of mutant p120ctn showed a punctate localization around the circumference of the cell. In contrast to the clear localization to the two mitotic centrosomes, AAA-p120ctn could never be detected at the single interphase centrosome, suggesting that the association of p120ctn with centrosomes occurs exclusively during mitosis. Consistent with this observation, a fraction of endogenous p120ctn was detected at centrosomes or the pericentrosomal region in the breast cancer cell line MDA-MB-231 cells during mitosis ( Figure 2B). The recruitment of endogenous p120ctn to the centrosomes in these cells suggests a physiological role for the interaction of p120ctn with tubulin-containing structures during cell division. During interphase, no clearly defined centrosomes could be identified in these cells by γ-tubulin staining. It was therefore not possible to establish whether the p120ctn/centrosome association also occurs in interphase. Localization of endogenous p120ctn to centrosomes was not observed in Cos-7 or NIH 3T3 cells, possibly because the expression level of p120ctn in these cells is too low to allow detection of a centrosome-associated pool.
Association of p120ctn with microtubules leads to their stabilization

p120ctn association with microtubules changed their morphology: they formed long, thick and curly bundles and were in some cases arranged in circles around the nucleus (Figure 1A). This is in contrast with typical interphase microtubules in fibroblasts, which originate from the MTOC and extend their (+) ends towards the plasma membrane.
Microtubule bundling into large multi-filament structures has been linked to microtubule stabilization (33). During their maturation from unstable, highly dynamic to stabilized microtubules, the α-tubulin subunit of the tubulin dimer undergoes a series of modifications (34,35), such as acetylation of a C-terminal lysine. An anti-acetylated α-tubulin antibody was used to confirm that p120ctn binding to microtubules had a strong stabilizing effect on microtubules in MDA-MB-231 cells (Figure 3). Only a low level of acetylated tubulin was detected in untransfected cells or in cells transfected with GFP alone (Figure 3A'), whereas the thick microtubule cables in the AAA-p120ctn-GFP expressing cells stained very strongly for acetylated tubulin (Figure 3B'). Expression of wild-type p120ctn did not increase the level of acetylated α-tubulin in the body of the cell. Microtubules within the p120ctn-driven cellular extensions, however, showed strong staining for acetylated tubulin (Figure 3C'), suggesting that wild-type p120ctn may stabilize microtubules exclusively in these extensions.
AAA-p120ctn does not induce a dendritic morphology or activate Rac1
Overexpression of wild-type p120ctn is known to induce extension of protrusions (dendritic morphology) (5). In contrast, AAA-p120ctn did not stimulate branching in Cos-7 cells ( Figure 1A). Rather, cells expressing AAA-p120ctn appeared to be more spread than untransfected cells. The p120ctn-induced dendritic morphology is especially prominent in NIH 3T3 fibroblasts (5), and this cell line was therefore chosen to quantitate the ability of wild-type and AAA-p120ctn to induce dendritic extensions. In preliminary experiments VSV-tagged p120ctn was found to be more potent in eliciting a dendritic morphology than p120ctn-GFP, although p120ctn-VSV and p120ctn-GFP showed the same intracellular distribution (data not shown). Consequently, wild-type p120ctn-VSV and AAA-p120ctn-VSV constructs were used to quantitate morphological responses.
Overexpression of wild-type p120ctn-VSV induced a dendritic morphology in more than 70% of transfected cells (Figure 4B). The cell body was highly constricted around the nucleus and cells showed a number of long protrusions, some of which extended for more than twenty times the length of the cell body. The ends of the extensions often showed extensive arborisation reminiscent of dendritic spines. In contrast, AAA-p120ctn-VSV caused branching in less than 10% of transfected cells, those extensions that formed were short, and the cells appeared spread out (Figure 4A).
Active Rac1 and Cdc42 are required for the p120ctn-induced branching: coexpressing dominant-negative versions of Rac1 or Cdc42 with p120ctn efficiently blocks branching (29,30). To determine whether the reason for the failure of AAA-p120ctn to induce branching is the loss of its ability to activate Rac1, Rac pulldown experiments were performed (36). NIH 3T3 cells were transiently transfected with either wild-type p120ctn-VSV, AAA-p120ctn-VSV or with empty vector as a control. In a parallel transfection of p120ctn-GFP, the transfection efficiency was estimated to be between 50 and 70%. The pulldown experiments showed that wild-type p120ctn increased the levels of active Rac1 in the cells more than 2-fold ( Figure 5A and B). AAA-p120ctn, however, did not raise active Rac1 levels significantly above the levels obtained when vector alone was transfected. Because of the level of transfection efficiency, the true level of Rac1 activation in response to wild-type p120ctn expression is underestimated in the pulldown experiments, which measure active Rac1 levels averaged over both transfected and untransfected cells. However, the difference in Rac1 activity between wild-type p120ctn and AAA-p120ctn-transfected cells was statistically highly significant (P<0.01). These results suggest that mutant, microtubule-associated p120ctn fails to induce branching because it is unable to activate Rac1.
Deletion of a basic motif in the Arm repeat domain of AAA-p120ctn prevents microtubule association and induction of the dendritic morphology
Association of AAA-p120ctn with microtubules coincided with the inability of this mutant to induce a dendritic morphology. To clarify further the relationship between microtubule binding and the dendritic morphology, we attempted to identify a region in p120ctn required for both functions. A cluster of lysines between p120ctn amino acids 622 to 628 (KKGKGKK), situated on a looped-out structure within Arm repeat 6 (1), has been shown to be required for p120ctn to induce a dendritic morphology in NIH 3T3 cells (28). In agreement with this finding, wild-type p120ctn-GFP missing the basic motif between amino acids 622 to 628 (∆K-p120ctn-GFP) did not induce a dendritic morphology in Cos-7 cells (Figure 6). To test if this basic motif is also involved in microtubule binding, amino acids 622 to 628 were deleted in the microtubule-targeted protein AAA-p120ctn-GFP (∆K-AAA-p120ctn-GFP). Deletion of the basic motif completely prevented localization of AAA-p120ctn to microtubules in Cos-7 cells ( Figure 6). Substitution of lysines 622/623 with isoleucines also blocked microtubule association, whereas p120ctn carrying isoleucines at positions 627/628 associated with microtubules to the same degree as AAA-p120ctn, showing that lysines 627/628 are not required for microtubule binding (data not shown). Therefore, parts of the basic motif are involved in both the interaction of p120ctn with microtubules and the induction of the Rho GTPase-dependent dendritic morphology.
Discussion
p120ctn is known to associate with adherens junctions, regulate the activity of Rho GTPases and under some conditions translocate to the nucleus, but its association with microtubules has not previously been reported. We report here that p120ctn localization to microtubules is promoted by mutating three serines to alanines in the Arm repeat domain, and is also dependent on a lysine-rich motif. Binding of p120ctn to microtubules and its ability to activate Rac and induce a dendritic morphology are mutually exclusive.
The serines 538/539 and 587 mutated to alanines in AAA-p120ctn are highly conserved in p120ctn family members. In other Arm repeats of p120ctn, the consensus residue at the equivalent position is either serine or alanine (1), indicating that alanine residues at these positions are compatible with Arm repeat folding. The increased association of AAA-p120ctn with microtubules may reflect a structural change in the Arm repeat domain so that a microtubule-binding motif, such as the KKGKGKK motif between amino acids 622 and 628, is now more exposed. Alternatively, it is possible that the serines are phosphorylation sites, and that their phosphorylation reduces microtubule binding. Introduction of alanines at these sites would then prevent phosphorylation and microtubule dissociation. However, so far we have no evidence that these serines are phosphorylated, either from mass spectrometry (32) or from use of phospho-specific antibodies (unpublished data).
The observations that AAA-p120ctn localizes to the mitotic spindle and centrosomes and that endogenous p120ctn can localize to the pericentrosomal region during mitosis suggests that it might regulate microtubule organization around the centrosome. Drosophila melanogaster APC2 and armadillo have been shown to be required for anchoring the mitotic spindle to cortical actin (37,38). AAA-p120ctn showed some punctate localization around the cell cortex during mitosis and could similarly be involved in anchoring astral spindle microtubules to the cortex.
Interestingly, p120ctn is the only known component of the E-cadherin complex to undergo a change in its phosphorylation state during mitosis (39), consistent with it playing a role in spindle organization.
AAA-p120ctn occasionally showed a punctate localization along microtubules, similar to that observed for proteins or vesicles that interact with microtubules via motor proteins such as kinesins or dyneins. It is therefore possible that p120ctn is transported actively along microtubules. In agreement with our results, p120ctn has recently been reported to localize to perinuclear microtubules in cadherin-deficient cell lines and this has been suggested to involve association with kinesin (31). Other Arm repeat proteins have been shown to bind to motor proteins: β-catenin interacts with the motor protein dynein (40), whereas APC travels along microtubules by binding to kinesin superfamily proteins (41).
The inability of AAA-p120ctn to induce a dendritic morphology correlated with its lack of effect on Rac1 activity, suggesting that microtubule association and Rac1 activation are inversely related functions of p120ctn. p120ctn has been proposed to regulate Rho GTPases in at least two different ways: it inhibits RhoA via a direct interaction (28) and it activates Rac and Cdc42 via the exchange factor Vav2 (30). Both the activation of Rac/Cdc42 and the inhibition of RhoA are required for p120ctn to induce the dendritic morphology (28-30). However, expression of constitutively active Rac1 or Cdc42 alone is not sufficient to induce the dendritic morphology, possibly because they need to cycle between active and inactive forms to stimulate extension (42).
In contrast, inhibition of RhoA or its target ROCK is sufficient to induce neurite outgrowth and branching (43,44). Given that the activity levels of Rac and RhoA are usually inversely related, that increased Rac activity leads to decreased RhoA activity (45,46), and inhibition of RhoA signalling can induce an increase in Rac activity (47), p120ctn could affect the activities of both Rho and Rac by directly affecting the activity of only one. Interestingly, the basic motif KKGKGKK between amino acids 622 and 628, which we found to be required for p120ctn to associate with microtubules, is also required for the inhibitory effect of p120ctn on LPA-induced RhoA activity (28).
Because p120ctn uses the same motif to bind to microtubules and to inhibit RhoA, microtubule association could lead to RhoA displacement and termination of the inhibitory effect on RhoA, and consequently to a decrease in Rac1 activity.
Alternatively, microtubule binding could prevent Vav2 interaction and Rac activation and thereby indirectly prevent the decrease in RhoA activity.
A number of microtubule-associated proteins (MAPs), such as MAP-1B (48) and the yeast protein CBF5 (49), interact with microtubules via tubulin-binding motifs containing characteristic repeats of double lysines, which resemble the KKGKGKK motif of p120ctn. This basic motif is highly conserved among the p120ctn family members (1) and has been proposed to be a nuclear localization signal (50). We observed nuclear localization of ∆K-mutants in some transfected cells, though the nuclear localization of ∆K-mutants was much reduced compared to wild-type or AAA-p120ctn (unpublished data). The association of p120ctn with the microtubule network has recently been suggested to counteract nuclear import of p120ctn (31). In contrast, our data indicates that p120ctn mutants which are unable to associate with microtubules show decreased nuclear localization. The precise functional link between microtubule-binding and nuclear translocation of p120ctn remains to be determined.
The association of p120ctn with microtubules leads to their remodelling into thick, curly bundles and to their stabilization, as indicated by the increase in α-tubulin acetylation. Similarly, microtubules within p120ctn-driven extensions show a high degree of acetylation compared to microtubules in the cell body. It is therefore possible that p120ctn normally interacts with and stabilizes microtubules exclusively in protrusions. Interestingly, extension of neurites, like the p120ctn-induced dendritic morphology, requires Rac and Cdc42 activation and RhoA inhibition as well as microtubule polymerisation (51,52). In addition, microtubules become stabilized during neurite outgrowth, correlating with an increase in their acetylation (53-55). The combined abilities of p120ctn to stabilize microtubules, regulate Rho GTPases and induce a dendritic phenotype make it a prime candidate for playing an important role in promoting neurite outgrowth. In support of this idea, p120ctn localizes to growth cones and to dendritic spines in cultured hippocampal neurons, its expression is increased during rat brain development (56), and it localizes to neurites containing acetylated tubulin in NGF-treated PC12 neuronal cells (our unpublished data). NPRAP/δ-catenin, a closely related neuronal member of the p120ctn family, has recently been shown to enhance dendritic morphogenesis in primary hippocampal neurons and to induce dendrite-like processes in 3T3 fibroblasts, activities which required the stabilization of microtubules (57,58).
Similarly, overexpression of p120ctn induces extensive dendritic extensions in PC12 cells (our unpublished data). The dendritic morphology observed in p120ctn-overexpressing fibroblasts could therefore reflect a physiological function for p120ctn in neurons, where it would promote neurite outgrowth by stabilizing microtubules and regulating Rho GTPases.
|
2014-10-01T00:00:00.000Z
|
2004-02-20T00:00:00.000
|
{
"year": 2004,
"sha1": "2d089258708a2df6611a237929920cfe304bd2ff",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/279/8/6588.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "CiteSeerX",
"pdf_hash": "2d089258708a2df6611a237929920cfe304bd2ff",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
218770462
|
pes2o/s2orc
|
v3-fos-license
|
SADDLE PLANTING SYSTEM, A NEW WHEAT SOWING METHOD UNDER SOUTHEASTERN ANATOLIA ENVIRONMENTAL CONDITIONS
Especially under irrigated conditions, where the groundwater level is high, or when grain crops are grown in rotation with cotton, the saddle planting system is useful because it reduces costs in the Southeastern Anatolia Region near Syria and Iraq. This planting system is becoming common under farm conditions in our region. A total of 8 durum wheat varieties were compared using the traditional planting and saddle planting methods under Diyarbakir ecological conditions in the 2010-2011 production season. According to the combined analysis of the data from the different planting methods, significant differences at the 1% and 5% levels were determined for sowing method, genotype and genotype x planting method interactions in terms of grain yield, test weight and thousand-grain weight. Across planting methods, grain yield ranged from 7430 to 7950 kg ha-1, test weight from 80.9 to 81.1 kg/hl, and thousand-grain weight from 44.7 to 47.1 g. Grain yield and test weight were higher under the conventional planting method. With the saddle planting system, irrigation, weed control, disease and pest management, and harvesting operations can be carried out more easily. According to the results of this study, depending on the conditions (rotation with cotton, irrigated areas, or a high seed price), the saddle planting system can be applied successfully in wheat cultivation. The research was received on 25/2/2012 and accepted on 25/6/2012.
INTRODUCTION
Wheat is an important cereal crop and a staple food in many countries of the world, and it has the widest distribution among cereal crops in Turkey. Durum wheat is a traditionally important crop in the Southeastern Anatolia Region of Turkey near Syria, Iraq and Iran. Its importance continues owing to its production and export potential. Therefore, studies at the GAP International Agricultural Research and Training Center/Diyarbakir focus on both breeding and cultivation techniques. In these studies, highly efficient, high-quality durum wheat varieties have so far been developed that are suited to the conditions of the region. However, studies focusing on cultivation packages to obtain high yields from these varieties would be even more useful. Therefore, the saddle planting method has been developed to save irrigation water and seed, to facilitate cultural operations, and to allow late planting after the cotton harvest.
Wheat is planted by broadcasting over a large area after cotton is harvested in Southeastern Anatolia. Broadcasting not only requires a higher seed rate but also results in an untidy plant population. On the other hand, the drill sowing method is recommended because of its uniform seed distribution and planting at the desired depth, which usually results in higher germination and a uniform stand (Kiliç and Gürsoy, 2010). Seeding rate is one of the important production factors. Higher wheat grain yield with better quality requires an appropriate seeding rate for each cultivar. Increasing the seed rate above the optimum level may only raise production cost without any increase in grain yield (Rafique et al., 2010). The optimum seed rate for wheat varies with variety, location and planting method. Larson and Watson (2010) reported that more and more producers are growing wheat and other small grains in no-tillage cropping systems because no-till systems produce major ecological and economic benefits. If growers can achieve adequate stands in no-till systems, grain yields are usually similar to conventional wheat systems. The ridge planting method primarily saves irrigation water and seed. In addition, by regulating field traffic, it makes it easier to rogue off-types in seed production, increases efficiency and reduces soil erosion, so it appears preferable as a system. Especially in the GAP region, where new areas have been opened for irrigation, it can be considered an application for efficient water use. With a conventional seed drill modified for the purpose, it can be adopted without additional cost to the farmer (Kılıç, 2005).
MATERIALS AND METHODS
The experiments were conducted in 2010-2011 at the GAP International Agricultural Research and Training Center in Diyarbakir, Turkey (latitude 37°56'36"N, longitude 43°15'13"E, at an elevation of 602 m above sea level). The soil of the experimental area is silty loam and slightly alkaline (pH 7.83), low in organic matter (1.45%), medium in available P (4.3 kg/da) and high in K (95 kg/da). The weather conditions during the crop cycles are presented in Table 2. There was higher rainfall and lower average temperatures after planting in 2010-2011 compared with the long-term averages. In the saddle sowing method a single irrigation is normally important during the production season, but precipitation was high, so the saddle planting plots were not irrigated. The experiment was conducted as a randomized complete block design with three replications using a split-plot treatment arrangement. The cultivars were randomized in the main plots and the seed rate in the sub-plots. The net plot size was 2.8 × 5 m. The seeding rate was 300 seeds m-2. According to Kılıç (2005), suitable seeding rates under Southeastern Anatolia conditions are 250 seeds m-2 for the saddle planting method and 400 seeds m-2 for the traditional planting method, so a rate between the two systems was used. The trial was sown on 25 October. The cultivars used were Altıntoprak, Artuklu, Eyyubi, Fırat 93, Güneyyıldızı, Sarıçanak, Şahinbey and Zühre, which are widely grown in the Southeastern Anatolia region (Table 1). Wheat was grown in rotation following cotton. Cotton as a summer crop was planted in May and harvested in October. Wheat as a winter crop was planted in the optimum period of late November to early December and harvested in late June and early July. New raised beds were prepared for cotton and, after the cotton harvest, wheat was grown in the winter season under zero tillage following the required repair of the beds (Figures 1 and 2). Planting was carried out with a planter modified for planting two rows of seed on top of a permanent bed. The width of the ridge was 70 cm from furrow bottom to furrow bottom. The space between the rows on a ridge was 15 cm.

Table 1. Name, origin and year of registration of the wheat varieties used in the experiment.

Name of cultivar   Origin      Year of registration
Altıntoprak        GAPUTAEM    1998
Artuklu            GAPUTAEM    2008
Eyyubi             GAPUTAEM    2008
Fırat 93           GAPUTAEM    1993
Güneyyıldızı       GAPUTAEM    2010
Sarıçanak          GAPUTAEM    1998
Şahinbey           GAPUTAEM    2008
Data on growth and yield components were collected using standard procedures and were analyzed statistically by using Fisher's analysis of variance technique. Least significance difference (LSD) tests were performed to determine the significant differences between individual means. All statistical analyses were performed using the SAS program (SAS Institute, 1999).
RESULTS AND DISCUSSION
According to the combined analysis of the data from the different planting methods, genotype, planting method and genotype x planting method interactions were significant at the 1% or 5% level.

Test weight: The planting method effect was significant at the 5% level; the variety and variety x planting method interaction effects were significant at the 1% level (Table 3). The maximum test weight (81.1 kg/hl) was obtained with the conventional planting method, followed by the saddle planting method (80.9 kg/hl). Because the number of spikes per plant and kernels per spike was thought to be lower under conventional planting, test weight was higher under the conventional method than under the saddle planting method. The test weight of the wheat varieties ranged from 79.5 to 82.3 kg/hl. The maximum test weight was obtained from the Şahinbey cultivar and the minimum from the Sarıçanak cultivar. Şahinbey is newer than Sarıçanak (Table 1). When the newer cultivars were registered, the breeders concentrated especially on technological quality criteria (test weight and thousand-grain weight), so these cultivars have high test weights. The variety x planting method interaction had a significant effect on test weight. The maximum test weight (83.1 kg/hl) was obtained with the conventional planting method in Şahinbey, and the minimum (79.3 kg/hl) with the saddle planting method in the Sarıçanak cultivar. According to Atlı et al. (1999) and Sade et al. (1999), test weight changes with planting method, variety, ecological conditions, cultural practices, pests and disease. Kendal et al. (2011) studied ten durum varieties in the same region; their test weights ranged from 77.3 to 81.7 kg/hl, a range similar to that observed here. (** = significant at the 1% level, * = significant at the 5% level, NS = not significant.)

Thousand-grain weight: The sowing method effect was significant at the 5% level and the variety effect at the 1% level; the variety x planting method interaction was not significant (Table 3). The maximum thousand-grain weight (47.1 g) was obtained with the saddle sowing method, followed by the conventional sowing method (44.7 g). Thousand-grain weights were higher under saddle sowing than under conventional sowing for all cultivars except Fırat. Planting method thus had a significant impact on the thousand-grain weight of wheat. These results are in agreement with Khokhar et al. (1985), Hussain et al. (2001) and Kiliç et al. (2010). The thousand-grain weight of the wheat varieties ranged from 42.4 to 51.8 g. The maximum thousand-grain weight was obtained from the Şahinbey cultivar and the minimum from the Güneyyıldızı cultivar. Şahinbey is technologically the best among these varieties; when it was registered, the breeders concentrated especially on technological quality criteria (thousand-grain weight), so Şahinbey has a high thousand-grain weight. The variety x planting method interaction had no significant effect on thousand-grain weight. The maximum thousand-grain weight (54.0 g) was obtained with the saddle planting method in Şahinbey, and the minimum (41.7 g) with the conventional planting method in the Güneyyıldızı cultivar. The saddle planting method gave a high TGW.
This is because, firstly, the saddle planting method produced healthier plants, which in turn synthesized healthier and plumper seed, and secondly it may be due to more favorable environmental conditions. According to Aydın et al. (1999), Kılıç et al. (2010) and Kendal et al. (2011), thousand grain weight changes with planting method, variety, ecological conditions, cultural practices, pests and disease. Kendal et al. (2011) studied ten durum varieties in the same region and found thousand grain weights between 30.0 and 42.8 g. The results differ between the studies because they change with the conditions of the year and the planting methods used.

Grain yield: The effects of planting method, variety and the variety x planting method interaction on grain yield were significant (Table 4). Grain yield varied considerably during the experimental period. The maximum grain yield (7950 kg/ha) was obtained with conventional planting, followed by saddle planting (7430 kg/ha). Because precipitation was high during the experimental period, conventional planting gave a higher grain yield than saddle planting; the comparison between the two methods could change if the saddle plantings were irrigated and precipitation were not high during the season. There were large differences between varieties in grain yield, which ranged from 5764 to 9034 kg/ha. The varieties Şahinbey, Sarıçanak and Zühre were productive cultivars with high grain yields, while the lowest yielding variety, Altıntoprak, is older than the high-productivity cultivars. The variety x planting method interaction had a significant effect on grain yield: the overall maximum grain yield (9585 kg/ha) was obtained with conventional planting in the Şahinbey cultivar, and the minimum (5699 kg/ha) with saddle planting in the Altıntoprak cultivar. Sayre and Moreno Ramos (1997) and Mollah et al. (2009) reported that seed rate did not have a significant effect on the grain yield of wheat under bed planting conditions. However, Sayre and Moreno Ramos (1997) reported that some farmers had been using seed rates as low as 50-75 kg/ha, while Kabakçı (1999) suggested that a 100 kg/ha seeding rate was appropriate for wheat in a bed planting system. Kiliç and Gürsoy (2010) estimated the optimum seeding rate for wheat grown successfully on permanent beds in a cotton-wheat cropping system at 253 seeds per m² (approximately 111.4 kg/ha); these results also support the results of our study. In their study of two varieties to determine the right seeding density, Kiliç and Gürsoy (2010) obtained grain yields between 2746.1 and 5367.6 kg/ha, but precipitation during their study season was lower than in ours. On the other hand, grain yields were high across the whole region during the 2010-2011 season compared with other seasons. We attribute the gap between these two studies to the different productive seasons. According to Jones and Singh (2000), Olesen et al. (2000) and Wheeler et al. (2000), factors like weather conditions and soils are important causes of crop yield variability.
It is concluded that saddle planting gave good growth and seed production compared with the conventional planting system, since it used less seed than conventional planting. We therefore recommend saddle planting for wheat grown on permanent beds after the cotton harvest. In particular, when seed is scarce, the sowing time is late, or the fields are too muddy for conventional sowing, saddle sowing can be implemented successfully.
Mu-tau reflection symmetry with a texture-zero
The $\mu\tau$-reflection symmetry is a simple symmetry capable of predicting all the unknown CP phases of the lepton sector and the atmospheric angle but too simple to predict the absolute neutrino mass scale or the mass ordering. We show that by combining it with a discrete abelian symmetry in a nontrivial way we can additionally enforce a texture-zero and obtain a highly predictive scenario where the lightest neutrino mass is fixed to be in the few meV range for two normal ordering (NO) solutions or in the tens of meV in one inverted ordering (IO) solution. The rate for neutrinoless double beta decay is predicted to be negligible for NO or have effective mass $m_{\beta\beta}\approx 14\text{ -- }29\,{\rm meV}$ for IO, right in the region to be probed in future experiments.
I. INTRODUCTION
After the discovery of the nonzero value of the reactor angle θ₁₃ in 2012 [1], a few unknowns remain in the neutrino sector if neutrinos are Majorana: the ordering of neutrino masses, the absolute scale of neutrino masses and the values of the three CP phases, one of Dirac type and two of Majorana type. Major experimental efforts on neutrino oscillations are now focused on determining the Dirac CP phase and the mass ordering.
One of the simplest symmetries that can predict all the CP phases and yet allow CP violation is the symmetry known as µτ-reflection symmetry or CPµτ symmetry, where the neutrino sector is invariant under exchange of the muon neutrino with the tau antineutrino [2]. This symmetry ensures that the Dirac CP phase is maximal (δ = ±90°) while the Majorana phases are trivial, allowing discrete choices of the CP parities. Additionally, the atmospheric angle θ₂₃ is predicted to be maximal (45°) while θ₁₃ is permitted to be nonzero. These values for the neutrino parameters are still allowed by current global fits, and in fact there are hints that δ ∼ −90° [3,4]. The fixed values for the CP phases also lead to characteristic bands for the possible effective mass of neutrinoless double beta decay, while still allowing leptogenesis to occur if flavor effects are taken into account [5].
It was shown in Ref. [5] that the simplest way of implementing CPµτ while guaranteeing diagonal charged lepton masses is to combine CPµτ with the combination Lµ − Lτ of family lepton numbers. This combination is trivial in the sense that the two symmetries commute, a feature that allows us to avoid the vev alignment problem requiring special treatment in many models with discrete nonabelian flavor symmetries [6]. In fact, CPµτ can be successfully implemented, sometimes accidentally, in models with discrete flavor symmetries [7]. More recently, it was shown that maximal θ₂₃ and maximal Dirac CP phase can be obtained without the explicit imposition of a CP symmetry [8], at the expense of requiring vev alignment and losing the predictions for the Majorana phases. See Ref. [9] for a review on CPµτ and also on the µτ interchange symmetry [10].
Our main goal here is to show that one can have CPµτ symmetry along with a discrete abelian symmetry that ensures a one-zero texture. This reduces the number of free parameters in the neutrino mass matrix from five to four, matching the four observables Δm²₂₁, Δm²₃₂, θ₁₂, θ₁₃ (the rest are fixed by symmetry), and we obtain a highly predictive scenario where the absolute neutrino mass is fixed and further correlations between parameters appear.
Our approach is a combination of two very different approaches to lepton flavor: (a) texture-zeros that increase predictivity and relate mixing angles with masses [11,12] and (b) symmetries that fully or partly determine the mixing structure independently of the masses [6,13]. The former usually requires abelian symmetries [14] while the latter requires nonabelian discrete symmetries [6]. The most general cases of texture-zeros have been analyzed recently in Ref. [15], where generic texture-zeros are required for the mass matrices of both charged leptons and neutrinos. These cases include the well-studied parallel structures where both mass matrices have the same texture-zeros [16]. We refer to Ref. [17] for a review. In contrast, residual CP symmetries have also been considered to determine the mixing angles (and CP phases) [18,19,23].
If we give up symmetries that predict mixing angles, it is also possible to use nonabelian flavor symmetries to obtain texture-zeros together with equal elements in the neutrino mass matrix, a scenario known as hybrid texture [20], recently generalized in Ref. [21]. Nonabelian groups are generally required because one needs noncommuting symmetries. The combination of two noncommuting symmetries cannot be arbitrary and needs to fulfill some compatibility rules so that the whole group closes. In fact, some consistency conditions are required to combine a CP symmetry with a nonabelian discrete flavor group [23]. For this reason, we choose the simplest setting where we combine an abelian discrete symmetry with CPµτ in a consistent but nontrivial way. As a result, the chosen abelian symmetry will simultaneously be responsible for the diagonal charged lepton masses and for the texture-zero in the neutrino mass matrix.
The outline of this work is as follows: in Sec. II we show the two symmetries that will be combined in a consistent way. A useful parametrization of the neutrino mass matrix is shown in Sec. III and the possible one-zero textures that are compatible with data are presented in Sec. IV. Section V develops an example model and our conclusions can be read in Sec. VI.
II. UNDERLYING SYMMETRY
We say the neutrino mass matrix is invariant under the CPµτ symmetry, or µ−τ reflection [2], when it has the form

$$M_\nu = \begin{pmatrix} a & c & c^* \\ c & d & b \\ c^* & b & d^* \end{pmatrix}, \qquad a, b \in \mathbb{R}. \quad (1)$$

By rephasing we can eliminate either the phase of c or d, so that we have five real continuous parameters in total. This form for the neutrino mass matrix in the flavor basis (diagonal charged lepton masses) is known to predict maximal θ₂₃ = 45° and δ = ±90° while allowing θ₁₃ ≠ 0 [2]. Additionally, the Majorana phases are trivial and four discrete choices for the CP parities are possible [2,5]. These features lead to characteristic predictions for the neutrinoless double beta decay rate and leptogenesis [5]. The mass matrix (1) has five real independent parameters to describe five observables: θ₁₂, θ₁₃, m₁, m₂, m₃. One of them, the absolute neutrino mass scale, is unknown. Given the same number of parameters and observables, there is no sharp prediction for the latter if only CPµτ is present. We will show in the following that one can have CPµτ symmetry along with an abelian symmetry that ensures a one-zero texture. With one less parameter we obtain a definite prediction for the absolute neutrino mass. It is clear that d or c cannot vanish, because the resulting matrix after appropriate rephasing is symmetric under νµ−ντ interchange, which leads to the experimentally excluded value θ₁₃ = 0.
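The displayed invariance condition behind the form above was lost in extraction; the following LaTeX sketch (our notation, reconstructed to be consistent with the surrounding text) spells it out:

```latex
% CP_{\mu\tau} acts on the left-handed neutrinos as
% \nu_{\alpha L} \to X_{\alpha\beta}\,\nu_{\beta L}^{cp},
% so invariance of the Majorana mass term requires
\[
  X M_\nu^* X = M_\nu , \qquad
  X = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}.
\]
% Entry by entry this forces (M_\nu)_{ee} and (M_\nu)_{\mu\tau} to be real and
% (M_\nu)_{e\mu} = (M_\nu)_{e\tau}^*, (M_\nu)_{\mu\mu} = (M_\nu)_{\tau\tau}^*,
% which is exactly the five-parameter form of Eq. (1).
```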
In order to implement CPµτ naturally, it was shown in Ref. [5] that the only way to combine a residual U(1) symmetry in the charged lepton sector and a residual CP symmetry in the neutrino sector with nontrivial CP violation is to consider a U(1) generated by the combination of lepton flavor numbers Lµ − Lτ, with CPµτ as the CP symmetry. In group theoretical terms, other combinations such as Le − Lµ and CPeµ are allowed, but they are not phenomenologically viable.
If we allow the electron flavor to have nontrivial charge and consider Zₙ instead of U(1), other possibilities arise beginning with Z₈ [5]. Here we use such a possibility to ensure CPµτ symmetry with a texture-zero. We assign Z₈ charges to the charged leptons (e, µ, τ) as follows:

$$(L_e, L_\mu, L_\tau) \sim (\omega_8^4,\ \omega_8,\ \omega_8^3), \qquad \omega_8 \equiv e^{2\pi i/8}. \quad (2)$$

This symmetry ensures diagonal charged lepton masses. In contrast, the CPµτ symmetry acts as usual on the left-handed neutrino fields ν_αL, α = e, µ, τ, as

$$\nu_{\alpha L} \to X_{\alpha\beta}\, \nu_{\beta L}^{cp}, \quad (3)$$

where cp denotes the usual CP conjugation and X is νµ–ντ interchange,

$$X = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}. \quad (4)$$

We can think that these two symmetries, Z₈ generated by T = diag(ω₈⁴, ω₈, ω₈³) and CPµτ, initially act on the left-handed lepton doublets (L_e, L_µ, L_τ) before they are spontaneously broken. Then the two symmetries act on the same space and CPµτ induces the following automorphism on Z₈ [23]:

$$T \to X T^* X^{-1} = T^{-3}. \quad (5)$$

We also note that the rephasing transformations that preserve Z₈ in (2) and CPµτ in (3) are of the form

$$\nu_{eL} \to \pm\,\nu_{eL}, \qquad \nu_{\mu L} \to e^{i\alpha}\,\nu_{\mu L}, \qquad \nu_{\tau L} \to e^{-i\alpha}\,\nu_{\tau L}. \quad (6)$$

It is clear that these transformations also preserve the form of the mass matrix in (1) and can be used to make c or d real. Flavor independent rephasing by i also preserves the form of the mass matrix (it flips the sign of a, b) but changes CPµτ by a global sign. Hence, only the relative sign of a and b is significant.
On the other hand, each quadratic combination $\overline{\nu^c_{\alpha L}}\,\nu_{\beta L}$ (Majorana neutrinos) that will give rise to the neutrino mass matrix carries the following Z₈ charges:

$$(ee) \sim 1, \quad (e\mu) \sim \omega_8^5, \quad (e\tau) \sim \omega_8^7, \quad (\mu\mu) \sim \omega_8^2, \quad (\mu\tau) \sim \omega_8^4, \quad (\tau\tau) \sim \omega_8^6. \quad (7)$$

As all entries carry different charges (including the trivial one), we can arrange the appropriate texture-zero in the (ee) or (µτ) entry by making the nonzero entries come from the vacuum expectation values of scalars carrying the desired quantum numbers [14]. We give an explicit construction in Sec. V.
III. PARAMETRIZATION
It was shown in Ref. [2] that the CPµτ symmetric matrix (1) can be diagonalized by a matrix of the form

$$U_0 = \begin{pmatrix} u_1 & u_2 & u_3 \\ w_1 & w_2 & w_3 \\ w_1^* & w_2^* & w_3^* \end{pmatrix}, \quad (8)$$

with u_i real and conventionally positive. The diagonalization performs

$$U_0^T M_\nu U_0 = \mathrm{diag}(\tilde m_1, \tilde m_2, \tilde m_3), \quad (9)$$

where m̃_i = ±m_i, with m_i being the neutrino masses. Therefore, the full diagonalizing matrix can be written as

$$U = U_0 K, \quad (10)$$

where K is a diagonal matrix of entries 1 or i depending on the signs in (9). We can classify the cases according to the signs of m̃_i, or the diagonal entries of K² [5], as

$$(+ + +), \quad (- + +), \quad (+ - +), \quad (+ + -). \quad (11)$$
There is also the freedom to replace U₀ by U₀* in (9), together with M_ν → M_ν*. This replacement flips the sign of the Dirac CP phase and the Jarlskog invariant, leaving the rest of the observables invariant.
Comparing (8) to the standard parametrization of the PMNS matrix and choosing the convention −i w₃ > 0, we arrive at the parametrization

$$U_0 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \tfrac{1}{\sqrt 2} & \tfrac{\pm i}{\sqrt 2} \\ 0 & \tfrac{1}{\sqrt 2} & \tfrac{\mp i}{\sqrt 2} \end{pmatrix} \begin{pmatrix} c_{13} & 0 & s_{13} \\ 0 & 1 & 0 \\ -s_{13} & 0 & c_{13} \end{pmatrix} \begin{pmatrix} c_{12} & s_{12} & 0 \\ -s_{12} & c_{12} & 0 \\ 0 & 0 & 1 \end{pmatrix}. \quad (12)$$

The ± sign coincides with the Dirac CP phase given by e^{iδ} = ±i, and θ₂₃ = 45° is fixed by symmetry. Note that the standard parametrization corresponds to diag(1, 1, −1) U₀ diag(1, 1, ∓i).
If we invert the relation (9) by using (12), we obtain

$$a = \sum_i \tilde m_i u_i^2, \quad b = \sum_i \tilde m_i |w_i|^2, \quad c = \sum_i \tilde m_i u_i w_i^*, \quad d = \sum_i \tilde m_i w_i^{*2}. \quad (13)$$

Choosing the bottom signs in (12) corresponds to taking d → d* and c → c*. The phases of c, d are also convention dependent, as they can be transferred from one to the other by the rephasing transformation (6). A rephasing invariant CP-odd quantity is

$$J \equiv \mathrm{Im}(c^2 d^*), \quad (14)$$

for both signs in (12). This invariant is clearly nonzero for physical values and corresponds to one of the invariants in Ref. [24] adapted to the CPµτ symmetry case.
We can also note that if we perform a change of basis of M_ν using only the first matrix of (12), we obtain a real symmetric matrix which can be diagonalized by a real orthogonal matrix. If we compare the trace of M_ν and M_ν² in this new basis, as well as the determinant, we obtain the following relations:

$$a + 2b = \sum_i \tilde m_i, \qquad a^2 + 2b^2 + 4|c|^2 + 2|d|^2 = \sum_i \tilde m_i^2, \qquad a(b^2 - |d|^2) - 2b|c|^2 + 2\,\mathrm{Re}(c^2 d^*) = \prod_i \tilde m_i. \quad (15)$$

This means that we can trade three among a, b, |c|, |d|, Im(c²d*) for the three neutrino masses m̃_i for each choice of CP parities.
IV. POSSIBLE ONE-ZERO TEXTURES
A texture-zero in the (ee) or (µτ) entry of the CPµτ symmetric neutrino mass matrix (1) is possible depending on the neutrino CP parities and the mass ordering. By using the relations in (13), the texture-zero relation essentially fixes the lightest neutrino mass, up to the uncertainty in the values of the mixing angles and mass differences.
The solutions are summarized in Table I, where we show the possible values for the mass of the lightest neutrino, the effective neutrinoless double beta decay parameter (m_ββ) and the sum of neutrino masses. The mixing angles θ₁₂, θ₁₃ and the mass differences Δm²₁₂, Δm²₂₃ are taken within 3σ of the global fit of Ref. [3], while the values θ₂₃ = 45° and δ = ±π/2 are fixed by symmetry.
We can see that case III is excluded by the Planck power spectrum limit (95% C.L.) [25],

$$\sum_i m_i < 0.23\ \mathrm{eV}. \quad (16)$$

We are left with two cases for the normal ordering (NO) and one case for the inverted ordering (IO). All these cases are also compatible with the latest KamLAND-Zen upper limit on the neutrinoless double beta decay parameter at 90% C.L. [26],

$$m_{\beta\beta} < (61\text{--}165)\ \mathrm{meV}. \quad (17)$$

The variation in the latter comes from the uncertainty in the various evaluations of the nuclear matrix elements. In the future, the KamLAND-Zen and EXO-200 experiments will probe the IO region that includes our case IV. The texture-zero relation a = 0 or b = 0 in (13) also leads to a correlation between mixing angles and the lightest neutrino mass when the parameters are allowed to vary within the experimental uncertainties. For the phenomenologically allowed cases, we show this correlation in Fig. 1 for θ₁₂. We can see that the correlation is strong for θ₁₂, while for θ₁₃ we have checked that it is only mild. It is clear that a more precise determination of θ₁₂ will lead to a more precise prediction for the lightest neutrino mass. Concerning neutrinoless double beta decay rates, this information leads to a testable prediction for m_ββ in case IV, but only to a falsifiable prediction for the other cases (m_ββ = 0).
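For concreteness, the texture-zero condition a = 0 can be solved numerically for the lightest mass. The following Python sketch is our illustration, not code from the paper; the oscillation values are indicative central values, whereas the paper scans the 3σ ranges. The first-row elements of U₀ follow from (12): u₁ = c₁₂c₁₃, u₂ = s₁₂c₁₃, u₃ = s₁₃.

```python
# Sketch: solve a = sum_i m~_i u_i^2 = 0 for the lightest neutrino mass in
# normal ordering with CP parities (+ - +); illustrative values only.
import numpy as np
from scipy.optimize import brentq

dm21, dm31 = 7.5e-5, 2.5e-3                     # eV^2, indicative
th12, th13 = np.radians(33.5), np.radians(8.5)  # degrees -> radians, indicative

u = np.array([np.cos(th12) * np.cos(th13),
              np.sin(th12) * np.cos(th13),
              np.sin(th13)])

def a_entry(m1, parities=(+1, -1, +1)):
    """(M_nu)_ee for NO masses with signs m~_i = parity_i * m_i."""
    m = np.array([m1, np.sqrt(m1**2 + dm21), np.sqrt(m1**2 + dm31)])
    return np.sum(np.array(parities) * m * u**2)

# a(m1) changes sign between ~0 and ~0.1 eV for this parity choice,
# so a simple bracketing root-finder suffices.
m1 = brentq(a_entry, 1e-6, 0.1)
print(f"lightest mass m1 = {m1 * 1e3:.2f} meV")  # lands in the few-meV range
```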
V. MODEL
In order to obtain the CP µτ symmetric mass matrix (1) with vanishing (ee) or (µτ ) entries, it is sufficient to introduce scalars carrying Z 8 charges corresponding to the nonzero entries in (7). The absence of appropriate fields will lead to texture-zeros [14].
Let us introduce SM singlet scalars η_k ∼ ω₈^k, labelled by their Z₈ charges, k ≤ 7. We need to know how η_k transforms under CPµτ. For that, we just need to infer how η₁ transforms. From (3), we can see that the lepton doublet L_µ ∼ ω₈ has the same charge and transforms under CPµτ as

$$L_\mu \to L_\tau^{cp}, \quad (18)$$

where L_τ^cp ∼ ω₈⁻³. So we expect

$$\mathrm{CP}_{\mu\tau}:\ \eta_1 \to \eta_3^*. \quad (19)$$

If we double the charge, we obtain

$$\mathrm{CP}_{\mu\tau}:\ \eta_2 \to \eta_6^* = \eta_2, \quad (20)$$

and the last identification can be made if η₂ carries no other quantum number besides Z₈ [5]. Therefore, the fields η₁, η₃, η₂*, η₂ couple to the appropriate quadratic combinations in (7), giving rise to the (eτ), (eµ), (µµ) and (ττ) entries of M_ν, respectively.
To prevent the (ee) combination in (7) from acquiring a bare coupling (in the case (M_ν)_µτ = 0 we can obtain (M_ν)_ee from the bare coupling, but it then comes from an operator of lower order than the rest), we introduce a Z₄^{B−L} symmetry under which the leptons have charge −i while the scalars η_k ∼ −1 (since this Z₄ charge is real, the identification η₆* = η₂ in (20) remains consistent). If we also introduce the real fields η₀ and η₄, the dimension five Weinberg operator will come from the nonrenormalizable operators

$$\frac{1}{\Lambda^2}\Big[c_{ee}\,\eta_0\,(L_e H)^2 + c_{e\mu}\,\eta_3\,(L_e H)(L_\mu H) + c_{e\tau}\,\eta_1\,(L_e H)(L_\tau H) + c_{\mu\mu}\,\eta_2^*\,(L_\mu H)^2 + c_{\tau\tau}\,\eta_2\,(L_\tau H)^2 + c_{\mu\tau}\,\eta_4\,(L_\mu H)(L_\tau H)\Big] + \mathrm{h.c.} \quad (21)$$

Since CPµτ symmetry ensures c_eτ = c_eµ*, c_ττ = c_µµ* and real c_ee, c_µτ, we obtain the CPµτ symmetric mass matrix (1) if CPµτ is not broken by the η_k, i.e.,

$$\langle\eta_1\rangle = \langle\eta_3\rangle^*, \qquad \langle\eta_2\rangle,\ \langle\eta_0\rangle,\ \langle\eta_4\rangle \in \mathbb{R}. \quad (22)$$

We show in the following that these symmetric vevs are possible. Finally, the texture-zero in the (ee) or (µτ) entry follows if η₀ or η₄ is absent. In this effective case, the texture-zero is not exact, because even if η₄ is absent it can be replaced by e.g. η₀η₂² or η₁*η₃η₂; however, such terms appear only with three η_k fields due to Z₄^{B−L} and give entries in the neutrino mass matrix suppressed by ⟨η_k⟩²/Λ². Possibly, this suppression can be improved in a specific UV complete model. Some examples of UV completions for the case where the abelian symmetry is Lµ − Lτ can be seen in Ref. [5].
The remaining task is to check that the scalar potential involving the η_k can be minimized by values conserving CPµτ, i.e., obeying (22). The potential contains no trilinear terms, and the terms that depend on the phases of the fields are only quartic, with coefficients λ₁, λ₁′, λ₃, λ₄, where λ₁ is real while the rest are complex. This corresponds to the case where η₄ is absent.
If we parametrize η_k = u_k e^{iα_k}, we can see that the only terms depending on the combination α₁₃ ≡ α₁ + α₃ are the terms with coefficients λ₁, λ₁′, λ₃, λ₄. The CPµτ symmetry corresponds to

$$u_1 \leftrightarrow u_3, \qquad \alpha_1 \leftrightarrow -\alpha_3, \quad (24)$$

which flips the sign of α₁₃ while the rest are invariant. For the terms with λ₁ and λ₃, the dependence is through cos 2α₁₃ and cos α₁₃, respectively. The dependence of the terms with λ₁′ and λ₄ is only through (η₁*² + η₃²), which has the form

$$u_1^2 \cos(\alpha_{13} + \varphi) + u_3^2 \cos(\alpha_{13} - \varphi). \quad (25)$$
This expression depends on α₁₃ only through cos α₁₃ if u₁ = u₃. In this case, for all these terms, parameters can be chosen so that α₁₃ = 0 is a minimum.
To check that u₁ = u₃ can be achieved, we can gather the terms that do not depend on α₁₃ and write the quadratic contributions for u₁ and u₃, after taking the minimizing values for the other parameters, as

$$A\,(u_1^2 + u_3^2) + 2B\,u_1 u_3, \quad (26)$$

where B comes from the λ₃, λ₄ terms. If we arrange A + B < 0 and A − B > 0, then u₁ − u₃ = 0 minimizes the quadratic terms in (26), and hence the whole potential, in the direction orthogonal to (u₁, u₃) ∼ (1, 1). This checks that the symmetric minimum (22) is possible. We have also checked numerically that it is easy to obtain the symmetric minimum. If η₄ is present instead of η₁, the terms with coefficients λ₄ and λ₄′ are replaced by analogous terms involving η₄. We arrive at the same result as before: there are parameter regions where CPµτ remains conserved by the vevs.

VI. CONCLUSIONS

We have shown by explicit construction a highly predictive scenario where the neutrino mass matrix is symmetric under CPµτ, or µτ-reflection, and additionally contains one texture-zero in the (ee) or (µτ) entry. Besides the usual predictions of CPµτ (maximal θ₂₃, maximal Dirac CP phase and trivial Majorana phases), we find that only two values for m₁ are possible for normal ordering and only one value for m₃ is possible for the inverted ordering. The NO solutions correspond to m₁ of a few meV, and the IO solution has m₃ of around 20 meV. The specific intervals, when we allow for the uncertainties in the oscillation parameters, can be seen in Table I together with the possible CP parities, the value of the neutrinoless double beta decay parameter and the sum of neutrino masses. The strong correlation that appears between the solar angle θ₁₂ and the lightest neutrino mass is shown in Fig. 1. The IO solution is expected to be tested in the near future by neutrinoless double beta decay experiments such as KamLAND-Zen and EXO-200 as they reach the IO region. For the solutions with NO, we predict a negligible neutrinoless double beta decay rate, m_ββ ≈ 0, which can be falsified but will be impossible to confirm. Finally, the possibility of a neutrino mass matrix with CPµτ symmetry simultaneously with a texture-zero enforced by symmetry was first shown here, and it is only allowed by combining in a non-usual way a discrete abelian symmetry at least as large as Z₈ with CPµτ.
Carbonate chemistry in sediment pore waters of the Rhône River delta driven by early diagenesis (NW Mediterranean)
The Rhône River is the largest source of terrestrial organic and inorganic carbon for the Mediterranean Sea, and a large fraction thereof is buried or mineralized in the sediments close to the river mouth. The mineralization follows aerobic and anaerobic pathways with varying impacts on the carbonate chemistry in the sediment pore waters. This study focused on the production of dissolved inorganic carbon (DIC) and total alkalinity (TA) by early diagenesis at the sediment-water interface, the consequential pH variations and the effect on calcium carbonate precipitation or dissolution. The sediment pore water chemistry was investigated during the DICASE cruise along a transect from the Rhône River outlet to the continental shelf. The concentrations of DIC, TA, SO₄²⁻ and Ca²⁺ were analyzed on bottom waters and extracted pore waters, whereas pH and oxygen concentrations were measured in situ using microelectrodes. The average oxygen penetration depth into the sediment was 1.7 ± 0.4 mm in the proximal domain and 8.2 ± 2.6 mm in the distal domain, indicating intense aerobic respiration rates. Diffusive oxygen fluxes through the sediment-water interface range between 3 and 13 mmol O₂ m⁻² d⁻¹. The DIC and TA concentrations increased with depth in the sediment pore waters, up to 48 mmol L⁻¹ near the river outlet and up to 7 mmol L⁻¹ on the shelf, as a result of aerobic and anaerobic mineralization processes. Due to oxic processes, the pH decreased by 0.6 pH units in the oxic layer of the sediment, accompanied by a decrease of the saturation state with respect to calcium carbonate. In the anoxic part of the sediments, sulfate reduction was the dominant mineralization process and was associated with an increase of the pore water saturation state with respect to calcium carbonate. Ultimately, anoxic mineralization of organic matter caused calcium carbonate precipitation, as shown by the large decrease in Ca²⁺ concentration with depth in the sediment. The saturation state and carbonate precipitation decreased in the offshore direction, together with the carbon turnover and sulfate consumption in the sediments.
The coastal ocean is a net sink of atmospheric CO₂ and plays an important role in the global carbon cycle (Hedges and Keil, 1995; Chen and Borges, 2009; Bauer et al., 2013; Laruelle et al., 2013). It is not only a sink for atmospheric CO₂, but also a location where terrestrial organic and inorganic carbon is buried or recycled (Hedges and Keil, 1995; Cai, 2011). Due to strong pelagic-benthic coupling, a large fraction of organic matter (OM) is mineralized in continental shelf sediments (McKee et al., 2004; Burdige, 2011; Bauer et al., 2013). Estuaries and delta regions are a very dynamic part of shelf regions, characterized by a high carbon turnover (Hedges and Keil, 1995; Cai, 2011). They are the principal link between continents and oceans and receive inputs of terrestrial organic and inorganic carbon in both particulate and dissolved phases (McKee et al., 2004; Cai, 2011; Dai et al., 2012; Bauer et al., 2013). An important fraction of these inputs remains on site and undergoes oxic and anoxic mineralization (Andersson et al., 2005; Aller and Blair; Chen et al., 2012). Despite their importance for the coastal carbon cycle, there is a lack of knowledge about the links between early diagenesis and the carbonate system in river-dominated sediments (McKee et al., 2004). Aerobic and anaerobic reaction pathways contribute to the production of dissolved inorganic carbon (DIC), which creates acidification of the bottom water, and to the production of total alkalinity (TA), which increases the CO₂ buffer capacity of seawater (Thomas et al., 2009). Variations in DIC and TA affect the partial pressure of CO₂ (pCO₂) in seawater and ultimately the CO₂ exchange with the atmosphere (Emerson and Hedges, 2008). The processes by which TA is produced in the sediments are still not well understood: anaerobic respiration (denitrification, sulfate reduction, iron and manganese reduction) seems to play a major role (Thomas et al., 2009; Krumins et al., 2013), but dissolution/precipitation of calcium carbonate can have a large impact on TA concentrations as well (Jahnke et al., 1997). Indeed, changes in sediment pore water composition and pH can lead to over- or under-saturation with respect to calcium carbonate and therefore influence carbonate dissolution and burial in sediments (Mucci et al., 2000). Jahnke et al. (1997) investigated these links using benthic flux measurements in the deep sea. In the sediments of the Skagerrak, OM mineralization can increase pore water pH through proton-consuming reduction of oxidized iron and manganese. These studies pointed at the complexity of the multiple competing reaction pathways in anoxic sediments and observed that the existing theoretical background (Froelich et al., 1979; Berner, 1980) was insufficient to disentangle them. In regions with a high carbon turnover, sulfate reduction is a large contributor to anoxic early diagenesis and can even be the dominant mineralization process for OM (Mucci et al., 2000; Burdige and Komada, 2011; Pastor et al., 2011). Sulfate reduction slightly decreases pH (Soetaert et al., 2007), but it nevertheless tends to enhance carbonate precipitation because of its coupling with the precipitation of sulfide minerals from iron oxides (Gaillard et al., 1989; Mucci et al., 2000; Burdige, 2011). As an example, in sapropelic sediments from a mangrove lake, Mackenzie et al. (1995) reported a stable pH throughout the sulfate-reduction zone and a buildup of supersaturation with respect to carbonate with depth.
These results contrast with the theoretical expectation that sulfate reduction should lead to carbonate dissolution because of the pH decrease. Even today, reproducing measured pore water profiles in the sediments and estimating TA and DIC fluxes across the SWI by modeling remains very challenging (Krumins et al., 2013; Jourabchi et al., 2005). In addition, the magnitudes of DIC and TA fluxes across the SWI are not well constrained and can show important variations between different study sites (Mucci et al., 2000).
In order to improve our understanding of the influence of early diagenesis of organic matter on carbonate dissolution/precipitation, we designed a study in the Rhône River delta in the Mediterranean Sea, which displays a range of biogeochemical characteristics (Lansard et al., 2009; Cathalot et al., 2010; Cathalot et al., 2013). Indeed, the Rhône River delta receives inputs of terrestrial organic and inorganic carbon, in both particulate and dissolved phases, which decrease with the distance to the river mouth. An important fraction of these inputs remains on site and undergoes mineralization in the sediments (Pastor et al., 2011a). Therefore, the sediments display strong spatial gradients in biogeochemical parameters such as nutrients and organic and inorganic carbon, affecting the diagenetic transport-reaction network (Bourgeois et al., 2011; Lansard et al., 2008). High sedimentation rates and resuspension events make this environment very dynamic and heterogeneous (Cathalot et al., 2010). In extreme cases near the river outlet, the downward advection due to high sedimentation rates can compete with the diffusive transport of dissolved species like DIC and TA. We investigated a transect of stations characterized by various biogeochemical conditions (from oxic-dominated to sulfate reduction-dominated sediments). We used a combination of in situ oxygen and pH microelectrode measurements and pore water analyses of DIC, TA, SO₄²⁻ and Ca²⁺ concentrations to cover different vertical scales. We calculated and discussed the calcium carbonate saturation state with regard to the different intensities of biogeochemical processes in these river-dominated sediments.
The Rhône River delta
With a drainage basin of 97,800 km² and a mean water discharge of 1700 m³ s⁻¹, the Rhône River is the largest river of the Mediterranean Sea in terms of freshwater discharge and inputs of sediment and terrestrial organic and inorganic matter (Pont, 1997; Durrieu de Madron et al., 2000; Sempéré et al., 2000). The Rhône River mouth is a wave-dominated delta located in the microtidal Mediterranean environment of the Gulf of Lions (Sempéré et al., 2000). Its river plume is mostly oriented southwestward, due to the Coriolis effect and wind forcing (Estournel et al., 1997). The annual discharge of particulate inorganic carbon (PIC) is estimated at 0.68 ± 0.45 × 10⁹ gC (Sempéré et al., 2009). The total particulate organic carbon (POC) deposition in the Rhône delta system (265 km²) is about 100 ± 31 × 10⁹ gC yr⁻¹, with the deltaic front accounting for nearly 60% of the total POC deposition (Lansard et al., 2009). Off the river mouth, the deposited sediments are of cohesive nature and composed of fine-grained sediments with more than 90% silts and clays (Roussiez et al., 2005; Lansard et al., 2007). Previous studies have shown that the carbonate content in the surface sediments varies between 28 and 38% (Roussiez et al., 2006) and the organic carbon content between 1 and 2% (Roussiez et al., 2005, 2006; Lansard et al., 2008, 2009). The PIC in the sediments is composed of autochthonous and allochthonous carbonates. The most abundant calcifying organisms in this area are foraminifera (Mojtahid et al., 2010).
The seafloor bathymetry shows that the delta is divided into three zones, characterized by different water depths, sedimentation rates and continental slope strengths. Got and Aloisi (1990) defined three major domains, which we call: the proximal domain, within a radius of 2 km from the river outlet with water depths ranging from 10 to 30 m; the prodelta domain, between 2 and 5 km from the river mouth with depths ranging from 30 to 70 m; and the distal domain, with depths between 70 and 80 m beyond 5 km from the river mouth. Annual sedimentation rates reach up to 30-48 cm yr⁻¹ close to the river mouth (Charmasson et al., 1998) and rapidly decrease below 0.1 cm yr⁻¹ on the continental shelf (Miralles et al., 2005). The sea floor in this region is a dynamic environment with important heterogeneity concerning diagenetic activity, sediment pore water profiles and exchange fluxes at the sediment-water interface (Lansard et al., 2009; Cathalot et al., 2010).
Diffusive oxygen fluxes into the sediment show spatial variability, both with the distance from the river mouth (decreasing in the offshore direction) and on the horizontal scale of a few cm² (Lansard et al., 2009; Pastor et al., 2011b). Anoxic mineralization processes play a major role in the prodelta sediments and are dominated by iron and sulfur cycling (Pastor et al., 2011a).
The DICASE cruise
The DICASE oceanographic cruise took place in the Gulf of Lions from 2 to 11 June 2014 on board the RV Tethys II (http://dx.doi.org/10.17600/14007100). Ten stations were sampled along the transect, with repeated deployments in order to investigate spatial variability at two of these stations. During this cruise, a benthic lander was used to measure in situ oxygen and pH micro profiles, and sediment cores were taken for pore water extraction and solid phase analysis.
In situ measurements
To measure in situ oxygen and pH micro profiles at the sediment-water interface, an autonomous lander (Unisense®) was used. This lander is equipped with a high precision motor capable of moving simultaneously five oxygen microelectrodes (Revsbech, 1989), two pH microelectrodes and a resistivity probe (Andrews and Bennet, 1981) with a vertical resolution of 100 µm. The recorded oxygen profiles were calibrated using oxygen concentrations measured in bottom waters (BW) by Winkler titration (Grasshoff et al., 1983) and the zero oxygen measured in the anoxic zone (Cai and Sayles, 1996). The location of the SWI was positioned where the strongest vertical oxygen gradient was situated (Rabouille et al., 2003). The calibration of the pH electrodes was carried out using NBS buffers, allowing the estimation of the slope of the pH response at onboard temperature. The slope was then recalculated at in situ temperature and the electrode signal variation was transformed into pH changes. The pH of bottom waters was determined using the spectrophotometric method with m-cresol purple (Clayton and Byrne, 1993; Dickson et al., 2007), and pore water pH on the total proton scale (pH_t) was recalculated using this BW value and the microelectrode-measured pH variations. At each depth, the profiler waited 20 seconds to stabilize the electrodes before measurements were recorded. Each data point is an average of five measurements carried out at every depth. For all in situ profiles, the signal drift of each microelectrode was checked to be less than 5% between the beginning and the end of the measurements. The slope of the pH electrodes was checked to be at least 95% of the theoretical Nernst slope of -59 mV per pH unit at 25 °C. At each station, five oxygen profiles and two pH profiles were measured simultaneously over a surface of 109 cm².
Calculation of oxygen fluxes across the sediment-water interface
Sediment oxygen uptake rate has been widely used to assess benthic OC mineralization during early diagenesis. The total oxygen uptake (TOU) rate can be split into two parts: (i) oxygen uptake of diffusive nature (DOU), and (ii) advective oxygen uptake. The DOU rates across the SWI were calculated using Fick's first law (Berner, 1980):

DOU = -φ · D_s · (∂[O₂]/∂z)|_{z=0}

with:
D_s: apparent diffusion coefficient adjusted for diffusion in a porous environment, calculated as D_s = D₀ / (1 + 3·(1 - φ)), where D₀ is the diffusion coefficient in free water according to Broecker and Peng (1974);
φ: sediment porosity;
(∂[O₂]/∂z)|_{z=0}: oxygen gradient at the sediment-water interface.
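As an illustration of this calculation, the following Python sketch estimates DOU from an O₂ microprofile. It is ours, not code from the study, and the profile values and free-water diffusion coefficient are invented for illustration.

```python
# Sketch: diffusive oxygen uptake (DOU) from an O2 microprofile via Fick's
# first law, as described above. Numbers are invented for illustration.
import numpy as np

phi = 0.85                           # sediment porosity (dimensionless)
D0 = 1.2e-9                          # free-water O2 diffusivity, m^2 s^-1 (indicative)
Ds = D0 / (1.0 + 3.0 * (1.0 - phi))  # porosity-corrected diffusion coefficient

# Depth (m, positive downward; z = 0 at the SWI) and O2 (mmol m^-3 = umol L^-1).
z = np.array([0.0, 1e-4, 2e-4, 3e-4, 4e-4])
o2 = np.array([250.0, 235.0, 221.0, 208.0, 196.0])

# O2 gradient at the interface from the first two points below the SWI.
dO2_dz = (o2[1] - o2[0]) / (z[1] - z[0])           # mmol m^-4

dou = -phi * Ds * dO2_dz                           # mmol O2 m^-2 s^-1
print(f"DOU = {dou * 86400:.1f} mmol O2 m-2 d-1")  # ~9, within the reported 3-13 range
```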
Bottom water samples were collected with a 12-L Niskin bottle as close as possible to the sea floor at each station. On these samples, temperature was measured using a digital thermometer with a precision of 0.1 °C and salinity was measured with a salinometer with a precision of 0.1. pH and concentrations of DIC, TA and oxygen were measured on board as soon as possible, within one hour for pH and within six hours for DIC and TA. The pH of seawater was measured using a spectrophotometer with m-cresol purple as dye (Clayton and Byrne, 1993; Dickson et al., 2007); the precision of the oxygen measurements was 0.4 µmol L⁻¹. All DIC concentrations (bottom waters and pore waters) were measured on a DIC analyzer (Apollo SciTech®) using a 1 ml sample volume with 4 to 6 replicates. The principle of the method is to acidify the sample with phosphoric acid (10%) to transform all forms of DIC into CO₂. The sample is then outgassed using ultra-pure nitrogen as vector gas. The degassed CO₂ is quantified by a LICOR analyzer containing a non-dispersive infrared detector (NDIR). To calibrate the method, certified reference material (CRM batch #122, provided by A. Dickson, Scripps Institution of Oceanography) was used at least twice a day to confirm the accuracy of the measurements. TA concentrations were measured by potentiometric open-cell titration on a 3 ml sample volume (Dickson et al., 2007). Uncertainties of DIC and TA measurements in the sediment pore waters were below 0.5%.
Sediment cores were sampled using a UWITEC® single corer. After sampling, the cores were rapidly placed in a glove bag under N₂ atmosphere to avoid oxidation, and pore waters were extracted using Rhizons with a pore size of 0.1-0.2 µm (Seeberg-Elverfeldt et al., 2005). The Rhizons had been degassed and stored in an N₂-filled gastight box before use. Pore waters were typically extracted at a 2 cm vertical resolution and split into subsamples for DIC, TA, SO₄²⁻ and Ca²⁺ analysis. Sulfate concentrations were measured in the laboratory using a turbidimetric method (Tabatabai, 1974).
Concentrations of calcium ions were measured using ICP-AES (Ultima 2, Horiba®) by the "Pôle Spectrométrie Océan" in Brest (France) with a relative uncertainty of 0.75%. The calcium concentrations were salinity-corrected by assuming constant Na⁺ concentrations with depth in the pore waters, in order to avoid any evaporation effects due to sample storage.
At each station, additional cores were taken for solid phase analysis. To establish porosity profiles, fresh sediment samples were weighed, dehydrated for one week at 60 °C and weighed again. Knowing the salinity and the densities of seawater and sediment, porosity was calculated from the weight loss after drying. The total carbonate content of the solid phase was analyzed using a manocalcimeter with an uncertainty of 2.5% of CaCO₃. A manocalcimeter consists of a small, gastight container where the sediment can be acidified with HCl to dissolve calcium carbonates. The resulting increase of pressure is measured with a manometer and is directly proportional to the carbonate content of the sediment sample. Sediment samples were also analyzed by X-ray diffraction on an X'Pert Pro diffractometer, using the θ-θ technique with the Cu Kα line, to quantify the calcite/aragonite proportion. The uncertainties of the XRD measurements were below 5% of the aragonite proportion (Nouet and Bassinot, 2007).
Calculation of carbonate speciation, CaCO₃ saturation states and pH in pore waters
According to Orr et al. (2015), the best way to compute the 12 parameters of the carbonate system at in situ conditions is to start with the DIC and TA concentrations. The thermodynamic constants proposed by Lueker et al. (2000) were used to calculate DIC speciation and pore water pH with the program CO2SYS (Lewis and Wallace, 1998). The calcium carbonate saturation state is expressed as the product of the calcium and carbonate ion concentrations divided by their solubility constant k_sp:

Ω = [Ca²⁺][CO₃²⁻] / k_sp

The solubility constant k_sp was calculated for in situ temperature, salinity and pressure following Millero et al. (1979), Mucci (1983) and Millero (1995). The existing numerical tools were developed for the water column, but we used them in the sediments, keeping in mind that pore water concentrations (DIC, TA, nutrients) are much larger than those in the water column. Despite this potential artifact, the calculated outputs (e.g. pH) agree with our measurements.

Table 2 summarizes the main diagenetic reactions (simplified) and their impact on the DIC and TA concentrations. The dissolution and dissociation of CO₂ in seawater leads to the formation of carbonic acid (R1) and the consumption of CO₃²⁻, and ultimately to carbonate dissolution (R2). DIC is always produced by OM mineralization, whereas the TA budget of these reactions, and the resulting pH variation, can be positive or negative. Aerobic mineralization leads to a decrease of pH without TA production (R3) and ultimately decreases Ω. In the sediments, oxygen is also used to reoxidize reduced species, a process that decreases pH even more strongly than aerobic respiration (R4-R6), and thus reoxidation decreases Ω as well. In contrast, anaerobic mineralization causes much weaker pH drops compared to the oxic processes and can even increase pH (R7-R9). The precipitation of sulfur minerals does not affect the amount of pore water DIC, but can have an important influence on pH and TA (R10-R13). The two reactions R14 and R15 deal with the coupling of sulfate reduction and methanogenesis and its impact on DIC.
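To make the Ω calculation concrete, here is a minimal Python sketch. It is our illustration only: the study itself used CO2SYS with the Lueker et al. (2000) constants, whereas this sketch uses the classic approximation that neglects [CO₂] and non-carbonate alkalinity, and an indicative surface-seawater k_sp rather than the in situ value.

```python
# Minimal sketch of the calcite saturation-state calculation (illustrative;
# the study used CO2SYS). Classic approximation, valid when [CO2] and
# non-carbonate alkalinity are small:
#   [CO3^2-] ~ TA - DIC,   [HCO3^-] ~ 2*DIC - TA.
def omega_calcite(dic, ta, ca, ksp):
    """All concentrations in mol/kg; ksp in mol^2 kg^-2 (indicative value)."""
    co3 = ta - dic                  # carbonate ion, rough approximation
    if co3 <= 0:
        raise ValueError("approximation breaks down (TA <= DIC)")
    return ca * co3 / ksp

# Indicative numbers: bottom-water-like DIC/TA, seawater Ca2+, and a calcite
# ksp of order 4.3e-7 mol^2 kg^-2 (S = 35, 25 degC; in situ values differ).
print(omega_calcite(dic=2.3e-3, ta=2.5e-3, ca=1.05e-2, ksp=4.3e-7))  # ~5
```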
In June 2014, the Rhône River water level was low, with a discharge close to 1000 m³ s⁻¹ over the previous two months. Therefore, the spread and thickness of the Rhône River plume were very limited and bottom waters were not influenced by the river outflow, even close to the river mouth. Bottom water temperature, salinity, O₂, DIC and TA concentrations, pH, SO₄²⁻ concentrations and bottom water pCO₂ are given in Table 1. Salinity remained very constant close to the sea floor, whereas temperature decreased with water depth from 16.8 to 14.3 °C. Bottom waters were well oxygenated, and oxygen concentrations also decreased with increasing water depth. DIC and TA concentrations varied only slightly, and the TA/DIC ratio in the bottom waters of all stations was 1.1 ± 0.02. The pH of bottom water showed some local variability, with a general decrease in the offshore direction. SO₄²⁻ concentrations were constant between the stations and showed typical seawater values around 30 mmol L⁻¹. pCO₂ showed oversaturation compared to the atmosphere at all stations. Bioturbation by the benthic macrofauna created small oxygen peaks below the oxygen penetration depth. The diffusive oxygen uptake rates (DOU) calculated from the measured oxygen profiles are shown in Figure 3 as a function of the distance to the river mouth in the direction of the river plume. A positive value signifies an uptake of O₂ into the sediment. DOU decreases exponentially with distance, from 12.3 ± 1.1 mmol O₂ m⁻² d⁻¹ at station A to a minimum flux of 3.8 ± 0.9 mmol O₂ m⁻² d⁻¹ at station F.
The oxic layer
In situ pH micro profiles were measured in the top 4 cm of the sediment at all stations (Fig. 4).
Immediately below the sediment-water interface, the pH drops about 0.6 to 0.7 units in the oxic layer.
Similarly to the oxygen micro profiles, the pH gradient within the OPD is stronger close to the river mouth and weaker in the distal domain. Just below the first drop, pH increases by 0.1-0.2 pH units and tends towards an asymptotic value between 7.4 and 7.6. The pH inflexion point, i.e. where the decrease stops and pH starts increasing, is located deeper in the distal zone than in the proximal zone, just below the OPD. The pH profiles show very high heterogeneity, even within one station. Figure 5 shows the DIC and TA pore water profiles measured during the DICASE cruise. All pore water gradients across the sediment-water interface were strongest close to the river mouth and decreased in the offshore direction. At the SWI of all stations, the DIC gradients were stronger than the TA gradients. Despite spatial heterogeneity in the sediments, the profiles follow the three major areas defined by Got and Aloisi (1990).
Calcium and sulfate concentrations
The calcium pore water profiles are shown in Fig. 6. At all stations, the bottom water Ca²⁺ concentration varies between 10 and 11 mmol L⁻¹. In the proximal domain, the Ca²⁺ concentration decreases just below the SWI to reach a minimum of 2 mmol L⁻¹ at 15-20 cm depth, where DIC and TA concentrations reach a maximum and the sulfate concentration a minimum. In the prodelta domain, the Ca²⁺ concentration remains stable down to 10-15 cm depth, related to the weaker TA and DIC gradients (Fig. 6). Below this depth, where the TA and DIC gradients increase, Ca²⁺ decreases to values around 7 mmol L⁻¹ at the bottom of the cores. The distal domain is characterized by constant Ca²⁺ concentrations, which remain above 10 mmol L⁻¹.
In the extracted sediment pore waters, sulfate concentrations range from 5 to 32 mmol L⁻¹ from the surface down to 30 cm depth. Our measurements indicate strong sulfate consumption rates in the proximal domain (Fig. 8), where the DIC and TA gradients are strong as well. Even in the first centimeters below the SWI in the proximal domain, sulfate concentrations decrease compared to the bottom water. In the prodelta domain, sulfate reduction starts to occur between 10 and 15 cm depth (Fig. 8), the same depth where the TA and DIC gradients increase. In the distal domain, no significant sulfate reduction seems to occur in the first 30 cm, as the sulfate concentration remains constant (Fig. 8).
Solid carbonates
The carbonate content of the solid phase scattered around 35% at all stations, from the surface down to 30 cm. The composition of sedimentary CaCO₃ was dominated by calcite (> 95%), with a small fraction of magnesian calcite (< 5%) and less than 2% of aragonite (data not shown). Taking into account the precision of the XRD measurements of ± 5%, we cannot determine whether both of these minor phases were truly present in the sediments of the study area or whether this reflects measurement error.
Calcium carbonate saturation state
In this study, we only report Ω_calcite, since calcite is dominant and aragonite is insignificant in the sedimentary CaCO₃. The results for the calcite saturation state in pore waters are shown in Fig. 8. The saturation state drops in the oxic layer. In the proximal domain (Fig. 8), the saturation state increases immediately below this first drop to reach very high values of around 5 to 10. In the prodelta domain (Fig. 8), the saturation state remains very close to 1 at depths between 5 and 10 cm before increasing to supersaturation (3 to 4) below 10 to 15 cm depth. In the distal domain (Fig. 8), the saturation state shows no variation below the first drop.
The impact of oxic and suboxic processes on the carbonate system
The upper part of the sediment is defined as the oxic zone, supporting aerobic respiration (R3).
Generally, the oxygen penetration depth (OPD) is related to aerobic respiration rates (Cai and Sayles, 1996). Aerobic respiration consumes O₂ to mineralize organic matter, produces metabolic CO₂ in the sediment pore water, increases the DIC concentration, lowers pH and possibly decreases the CaCO₃ saturation state (R1; Cai et al., 1993, 1995). The OPD and oxygen fluxes are therefore key parameters to assess the effect of aerobic respiration on calcium carbonate in the sediment (Jahnke et al., 1997; Jahnke and Jahnke, 2004). In the Rhône River delta, the OPD increases with water depth and distance from the Rhône River mouth. Very similar in situ OPD were reported for the sediments of the same study area in previous studies (Lansard et al., 2008, 2009; Cathalot et al., 2013). These low values of O₂ penetration depth are classical for river-dominated ocean margins, and they depend mainly on the sedimentation rate and on the flux, age and oxidation state of OM (Lansard et al., 2009; Cathalot et al., 2013). A few in situ O₂ profiles show oxygen peaks at depths below the OPD. These are likely the effect of sediment bioturbation by the benthic macrofauna. As reported by Bonifácio et al. (2014), the macrofauna community is dominated by polychaetes and the highest activity is found in the prodelta domain. Nevertheless, comparisons between TOU and DOU rates have demonstrated that DOU accounts for about 80% of the total oxygen uptake rate into the sediments (Lansard et al., 2008). As a consequence, diffusive transport is dominant compared to advective transport and bioturbation (i.e. bioirrigation and bioventilation). Diffusive O₂ fluxes calculated from in situ 1D micro profiles (Fig. 2) are therefore representative of total oxygen uptake rates. As shown in Fig. 3, the diffusive oxygen fluxes into the sediment decrease exponentially with the distance from the river mouth, from 12.3 ± 1.1 mmol O₂ m⁻² d⁻¹ close to the Rhône River mouth to 3.8 ± 0.9 mmol O₂ m⁻² d⁻¹ offshore. Despite spatial and temporal variability, similar oxygen fluxes have been reported by previous studies in the same area (Lansard et al., 2008, 2009; Cathalot et al., 2010). According to Pastor et al. (2011a), the POC flux in the proximal domain is one order of magnitude higher than in the offshore regions of the Rhône prodelta. Following model estimates, this OM flux, and especially its fast fraction, supports oxygen consumption as it is completely mineralized in the oxic layer (Pastor et al., 2011a).
During aerobic respiration, the ratio of oxygen consumed to OM mineralized is close to 1, conforming to the stoichiometry of equation (R3). As a result, DIC concentrations increase just below the SWI at all stations (Fig. 5). The balance between O₂ flux and carbon oxidation in the sediment is affected by O₂ consumption linked to the oxidation of inorganic species produced via anoxic organic carbon degradation (NH₄⁺, Fe²⁺, Mn²⁺ and HS⁻). The oxidation of reduced diagenetic products has a profound effect on pore water O₂ and pH profiles in O₂-limited sediments (Cai and Reimers, 1993). These reactions (R4 to R6), in addition to aerobic bacterial respiration, consume TA and affect pore water pH and therefore the calcium carbonate saturation state. There is a large contribution of anoxic processes to
Again, the upward flux of reduced species in the sediments is higher in the proximal domain than in the others. Offshore, less OM is available and the diagenetic activity is weaker, providing less reduced species from deeper sediment layers. pH drops below the SWI, caused by all oxic processes, are visible 370 on the in situ pH micro profiles and decreases until the OPD is reached (Fig. 2 and 4). As the OPD are smaller and the oxygen fluxes are higher in the proximal domain, the pH minimums are reached at shallower depth in the sediment than in the other domains. The pH drop is lowering Ω by consuming carbonate ions (Emerson and Hedges, 2008;Jourabchi et al., 2005). The decrease of Ω, due to both aerobic respiration and the oxidation of reduced species, is clearly visible between the first two points 375 located above and below the SWI interface (Fig. 8).
Just below the oxic layer, OM mineralization via MnO 2 and Fe(OH) 3 reduction (R7-8) increases pH and releases large amounts of TA. The first pore water data point sampled in the sediments represents a mixture of oxic and anoxic pore water. Therefore, we potentially over estimate Ω in the oxic layer based on calculations from pore water concentrations (Cai et al., 2010). Different measurements in the deep 380 sea revealed that Ω shows a minimum in the oxic layer (Cai et al., 1993(Cai et al., , 1995(Cai et al., , 1996Hales and Emerson, 1997;). As pH decreased at all stations to the same value, but the TA and DIC gradients at the interface are the strongest in the proximal domain, Ω should show the highest values in the oxic sediments of the proximal domain and decrease in offshore direction. High TA concentrations in the oxic layer resulting from anoxic OM mineralization below, prevent the carbonate saturation state from 385 getting undersaturated. Therefore potential dissolution in the oxic layer would most likely occur in the distal domain, but could be inhibited in the proximal domain. In agreement with current understanding of anoxic diagenesis, the observed pH increase of 0.1 to 0.2 units below the OPD can be attributed to OM mineralization via reduction onf iron and manganese (R7 and R8). These anoxic reactions release TA and increase pH in the oxic-anoxic transition zone 390 (Aguilera et al., 2005;Jourabchi et al., 2005). This pH increase and the release of important quantities of TA create an important increase in the pore water saturation state (Ω). Previous works showed that the turnover of Fe and Mn is important in the sediments close to the river mouth (Pastor et al., 2011a).
Sulfate reduction and its impact on carbonate chemistry
With a sulfate concentration in seawater of around 30 mmol L-1, SO4(2-) reduction can generate large amounts of DIC and TA during organic matter mineralization (R9). Indeed, in organic-rich sediments, sulfate reduction can account for the majority of OM mineralization (Gaillard et al., 1989; Jourabchi et al., 2005; Burdige, 2011; Fenchel et al., 2012). Following equation R9, two units of DIC and TA are produced for one unit of sulfate consumed (Mucci et al., 2000; Krumins et al., 2013). Fig. 9 shows the diffusion-corrected DIC/SO4(2-) ratio in the pore waters of the proximal domain. This ratio compares the difference between the pore water concentration of sulfate or DIC at a given depth and the concentration in the bottom water at the same station, corrected for molecular diffusion following the equation proposed by Berner (1980) and using the diffusion coefficients determined by Li and Gregory (1973). Below 10 cm depth, the observed DIC/SO4(2-) ratio of 1.9 ± 0.3 is statistically indistinguishable from 2, which indicates that sulfate reduction is dominant below this depth. The large standard deviation observed around the mean can be linked to higher oxidation states of organic matter, which lower the SO4(2-) requirement for mineralization, to carbonate precipitation lowering DIC concentrations, or to a coupling of sulfate consumption with methane through the anaerobic oxidation of methane, which increases DIC concentrations (Burdige and Komada, 2011; Antler et al., 2014). These processes may all be acting in the Rhône Delta proximal zone, as a large fraction of the OM mineralized in the proximal domain is of terrestrial origin, aged and already partly oxidized before being deposited on site (Cathalot et al., 2013), as calcium carbonates precipitate (Fig. 7), and as the presence of methane has been reported by Garcia-Garcia et al. (2006). As shown by Burdige (2011) and Burdige and Komada (2011), the interactions of all diagenetic pathways are hard to disentangle and do not provide clear evidence of changes in the DIC/SO4(2-) ratio. Nonetheless, the value of the observed DIC/SO4(2-) ratio (1.9 ± 0.3) points towards the dominance of sulfate reduction in the deeper layers of the sediment (below 10 cm depth). Sulfate reduction is also attested by the co-production of alkalinity and DIC (Fig. 5) and is, according to Krumins et al. (2013), by far the most important alkalinity producer in marine sediments. Sulfate reduction creates a TA/DIC production ratio very close to 1 in the pore waters of the proximal zone sediments. This situation is very similar to Mangrove Lake sediments (Mackenzie et al., 1995), where the depletion of sulfate is almost complete and DIC and TA concentrations build up to 40 mmol L-1 in the sediment pore waters, and to other coastal environments (Burdige, 2011; Antler et al., 2014). No other reaction in the anoxic zone has a TA/DIC production ratio near 1. As pH is buffered, probably by precipitation of FeS and FeS2 (R12), this large increase of alkalinity is accompanied, in the proximal zone, by a large increase of the saturation state of pore waters with respect to calcite (Fig. 8), up to oversaturation values (Ω) of 5 to 10. The effect of sulfate reduction on the carbonate saturation state has been a matter of debate since the early work of Ben-Yaakov (1973). Indeed, sulfate reduction produces large quantities of both alkalinity, which increases Ω, and protons, which decrease Ω.
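A minimal sketch of the diffusion-corrected ratio of Fig. 9 may help fix ideas. One plausible reading of the Berner (1980) correction is assumed here: at steady state, production ratios scale with the product of diffusivity and concentration anomaly, so the differences relative to bottom water are weighted by molecular diffusion coefficients (Li and Gregory, 1973). All numerical values are illustrative placeholders, not data from this study.

```python
# Diffusion-corrected DIC/SO4(2-) ratio, per one reading of Berner (1980).
D_DIC = 6.0e-6  # cm2 s-1, assumed pore water diffusivity for DIC (HCO3- dominated)
D_SO4 = 5.0e-6  # cm2 s-1, assumed pore water diffusivity for SO4(2-)

def dic_so4_ratio(dic_z, so4_z, dic_bw=2.3, so4_bw=30.0):
    """Diffusion-corrected DIC/SO4 ratio at depth z (all in mmol L-1)."""
    delta_dic = dic_z - dic_bw   # DIC accumulated relative to bottom water
    delta_so4 = so4_bw - so4_z   # sulfate consumed relative to bottom water
    return (D_DIC * delta_dic) / (D_SO4 * delta_so4)

# A hypothetical pore water sample from below 10 cm in the proximal domain:
print(round(dic_so4_ratio(dic_z=27.0, so4_z=15.0), 2))  # values near 2 indicate sulfate reduction
```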
This has been summarized in the model of Jourabchi et al. (2005), who estimated that sulfate reduction would lead to a decrease of Ω if it were the only ongoing reaction. The sediments from the proximal area of the Rhône Delta show, on the contrary, that pH stabilizes between 7.2 and 7.6, driven by sulfate reduction, which generates an increase of the saturation state with respect to calcite correlated with the sulfate decrease (Fig. 8). This situation is very similar to those described by Mackenzie et al. (1995) and Mucci et al. (2000), who also showed an increase of Ω when sulfate reduction is significant. Using a closed-system model, Ben-Yaakov (1973) estimated that oxidation of HS- coupled to iron hydroxide reduction with FeS precipitation (as in R11 or R12) would buffer or even increase pH. Charles et al. (2014) suggested that OM mineralization in the prodelta of the Rhône could be coupled to pyritization.
The Rhône River is known to be the most important riverine input of iron into the Mediterranean Sea (Guieu et al., 1991), with an iron content varying between 2 and 4 % in the solid phase discharge. In the proximal zone of the Rhône Delta, dissolved sulfide is absent from the first tens of centimeters of the sediment (Pastor et al., 2011a), which indicates that re-oxidation and/or precipitation of sulfide is occurring in these sediments. Pastor et al. (2011a) estimated that 97 % of the reduced species from the anoxic sediments precipitate before diffusing to the oxic layer and that sulfides are the limiting factor for pyrite precipitation in this environment. With this FeS coupling, pH is stabilized or even tends to increase, and a large oversaturation with respect to calcium carbonate is created by the carbonate ions produced.
In the proximal domain, the large supersaturation with respect to calcite induces calcite precipitation, as evidenced by a large decrease of dissolved calcium in the pore waters (Gaillard et al., 1989; Boudreau et al., 1992). Indeed, the Ca2+ concentration decreases by 9 mmol L-1 between the bottom water and 25 cm depth in proximal sediments. In the prodelta domain (Fig. 7), a similar set of reactions involving sulfate reduction and sulfide re-oxidation and precipitation is also visible, with lower amplitude, as the sulfate depletion is only 5 mmol L-1. Oversaturation with respect to calcite reaches values of only 3-4 below 15 cm, and the Ca2+ decrease is smaller and occurs deeper. In the distal zone, where Ω is around 2 down to 25 cm, no calcium decrease is visible, indicating that precipitation does not occur.
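As a rough illustration of the saturation states discussed here, the sketch below evaluates Ω = [Ca2+][CO3 2-]/Ksp' for placeholder values; the solubility product and carbonate ion concentrations are assumptions, since the study derives Ω from the measured carbonate system (TA, DIC, pH) rather than from crude values like these.

```python
# Toy evaluation of the calcite saturation state Omega = [Ca2+][CO3 2-]/Ksp'
# behind Figs. 7-8. All constants and concentrations are assumed placeholders.

KSP_CALCITE = 4.3e-7  # (mol kg-1)**2, assumed stoichiometric solubility product

def omega_calcite(ca_mol_kg, co3_mol_kg):
    return ca_mol_kg * co3_mol_kg / KSP_CALCITE

# Illustrative pore waters: oversaturation grows as anoxic TA production
# raises the carbonate ion concentration at roughly constant Ca2+.
print(round(omega_calcite(10.5e-3, 2.0e-4), 1))  # moderately oversaturated (~4.9)
print(round(omega_calcite(10.5e-3, 4.0e-4), 1))  # high-TA pore water (~9.8), as in the proximal zone
```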
The large precipitation of calcium carbonate in the proximal zone may have implications for the CO2 source from the sediment. Indeed, calcium carbonate precipitation generates CO2 (R2), which can then be exported to the water column. In addition, calcium carbonate precipitation consumes TA. Thus, pH and Ω are lowered in the bottom waters by these anoxic processes, contributing to the high pCO2 of estuarine environments.
As the majority of the reduced species are precipitated in the anoxic layer, they do not contribute to lowering pH in the oxic layer, and as the produced alkalinity fluxes are high, calcium carbonates could even be preserved in the oxic layer. Therefore, the alkalinity built up below can diffuse across the oxic sediment layer and contribute to buffering bottom waters and to increasing their CO2 storage capacity. Without this TA flux, the pCO2 of the bottom waters in the prodelta of the Rhône would be much higher than observed.
Conclusions
The results of this work indicate the existence of three major domains in the Rhône prodelta characterized by different degrees of organic and inorganic particulate carbon interactions. Close to the river mouth, where the carbon turnover is most intense, the biogeochemical gradients are strongest, resulting in high chemical fluxes across the SWI. This confirms that the biogeochemistry in the prodelta region is driven by the import and degradation of terrestrial organic matter.
The oxic reactions produce CO2, create a pH drop of 0.6 to 0.8 pH units, and reduce Ω. As a consequence, calcium carbonate may dissolve in the oxic layer, but dissolution could not be demonstrated in this study. The majority of oxygen is used for OM mineralization, as most reduced species precipitate in the anoxic sediments and do not contribute to oxygen consumption. The mineralization of OM by Fe and Mn oxides increases pH and Ω over a few millimeters just below the oxic layer.
The strong TA and DIC gradients observed in the sediments of the Rhône prodelta suggest that OM mineralization is dominated by anaerobic processes. Close to the river mouth, where the organic carbon content in the sediments is highest, sulfate reduction is the dominant mineralization process for OM degradation, creating a strong coupling between TA and DIC pore water profiles. Despite its theoretical lowering effect on pH, sulfate reduction is related to an increase of Ω through substantial alkalinity production and through the simultaneous pH buffering by precipitation of iron sulfide minerals. As a result, pore waters are oversaturated with respect to calcite at all sampled stations. Calcium carbonate precipitation occurs in the proximal and in the prodelta domains, depleting the majority of dissolved calcium ions in the proximal domain. This carbonate precipitation represents an additional CO2 source from the sediments to the water column. However, due to the important anoxic TA production, the pCO2 of bottom waters stays relatively low compared to the large release of DIC from OM mineralization.
Acknowledgments
We would like to thank Bruno Bombled and Jean-Pascal Dumoulin for their technical help during the DICASE cruise and in the laboratory. We also thank the captain and crew of the RV Tethys II (INSU) for their excellent work at sea. We are also grateful to SNAP-CO2 for the inter-comparison of DIC and TA concentrations in our seawater samples. We are grateful to Celine Liorzou for the ICP-AES measurements and to Serge Miska for the help with the X-ray diffraction analysis, and we thank Stéphanie Duchamp-Alphonse for putting a manocalcimeter at our disposal. This research was financed by the MERMEX project (http://mermex.pytheas.univ-amu.fr/?page_id=62) and by the MERMEX Rivers project.
Short-term effects of specific humidity and temperature on COVID-19 morbidity in select US cities
Little is known about the environmental conditions that drive the spatiotemporal patterns of SARS-CoV-2. Preliminary research suggests an association with meteorological parameters. However, the relationship with temperature and humidity is not yet apparent for COVID-19 cases in the US cities first impacted. The objective of this study is to evaluate the association between COVID-19 cases and meteorological parameters in select US cities. A case-crossover design with a distributed lag nonlinear model was used to evaluate the contribution of ambient temperature and specific humidity to COVID-19 cases in select US cities. The case-crossover examines each COVID-19 case as its own control at different time periods (before and after transmission occurred). We modeled the effect of temperature and humidity on COVID-19 transmission using a lag period of 7 days. A subset of 8 cities was evaluated for the relationship with meteorological parameters, and 5 cities were evaluated in detail. Short-term exposure to humidity was positively associated with COVID-19 transmission in 4 cities. The associations were small, with 3 out of 4 cities exhibiting higher COVID-19 transmission at specific humidity ranging from 6 to 9 g/kg. Our results suggest that weather should be considered in infectious disease modeling efforts. Future work is needed over a longer time period and across different locations to clearly establish the weather-COVID-19 relationship.
Introduction
Experimental and observational studies demonstrate the influence of meteorological parameters on the seasonal transmission of influenza, human coronavirus (HCoV), and human respiratory syncytial virus (RSV), which are often characterized by distinct increases in incident cases and detection frequency in the winter months (Lowen and Steel, 2014;Tamerius et al., 2013;Tamerius et al., 2011;Midgley et al., 2017;Killerby et al., 2018;Landes et al., 2013;Morikawa et al., 2015). An accumulating evidence-base suggests that seasonal changes in indoor and outdoor environmental factors exert a modifying effect on both the transmission efficiency and viability of the respiratory virus and the host's airway immune defense (Moriyama et al., 2020). These environmental factors are then compounded by human behavior, social interactions, or hygiene practices that enhance viral transmission between individuals who are infected and those who are susceptible.
Like these seasonal viruses, SARS-CoV-2 can be transmitted through aerosols, large respiratory droplets, or direct contact with fomites. SARS-CoV, responsible for the SARS outbreak in 2003, and SARS-CoV-2, responsible for COVID-19, rely on the same receptor, angiotensin-converting enzyme 2 (ACE2), for infecting humans (Sun et al., 2020). Both made their debut in the winter months, giving further credence to the role of the winter environment as an important contributor to transmission, particularly in temperate regions (Li et al., 2020; Paules et al., 2020; Kuiken et al., 2003; Peiris et al., 2003). Scientists conjecture that low humidity and temperature likely promote the viability of SARS-CoV-2 in respiratory droplets, and it is plausible that airborne transmission is highly likely among COVID-19 cases with severe pneumonia. A recent population-based study examining the daily incidence of COVID-19 and daily temperature and relative humidity across Chinese provinces observed that, in addition to dry and cold locations, locations with low absolute humidity also experienced increased virus transmission rates.
Little is known about the environmental conditions that drive the spatiotemporal patterns of SARS-CoV-2/COVID-19, and preliminary research suggests an association with meteorological parameters (Luo et al., 2020; Sajadi et al., 2020). However, the relationship between temperature and humidity is not yet apparent for COVID-19 cases in the US. As the US begins its public health response to COVID-19, the implementation of extensive public health interventions is needed at appropriate time scales to mitigate the health and economic impacts of the COVID-19 pandemic. Research on the seasonality and influence of meteorological parameters on COVID-19, such as temperature and specific humidity, can be used to inform the timing of effective interventions to mitigate SARS-CoV-2/COVID-19 transmission at the local scale and save countless lives and resources.
The objective of this research is to examine the association between meteorological variables and COVID-19 in US cities. Unlike previous studies, we use a high-resolution spatiotemporal meteorological dataset to answer the following: Is there an association between meteorological parameters and COVID-19? If so, which meteorological parameters predict COVID-19 transmission? Is the association stronger after accounting for locally implemented social distancing measures? How does this relationship vary spatially across the US? By answering these questions, the knowledge gained on the contribution of environmental factors like temperature and humidity to transmission can be paired with other nonpharmaceutical interventions, such as behavioral measures (e.g., wearing face masks, washing hands), factors that boost immunity, or the timing of social distancing measures around seasonal spikes in influential environmental parameters, to reduce transmission.
Study design and location
This retrospective case-crossover study examined the nonlinear and delayed association between environmental factors and COVID-19 transmission. We selected the following US locations that exhibited high relative caseloads of COVID-19 in the early stages of the pandemic for their underlying populations: Seattle, WA; New York, NY; Albany, GA; New Orleans, LA; Bridgeport-Stamford-Norwalk, CT; Pittsfield, MA; Detroit, MI; and Chicago, IL. Fig. 1 is a map of the 8 study locations.
COVID-19 cases
The primary health outcome of interest was incident cases. Daily confirmed new cases of COVID-19 for all cities were abstracted from the Johns Hopkins Center for Systems Science and Engineering repository (source: https://github.com/CSSEGISandData/COVID-19). The repository continually assembles global COVID-19 cases from multiple sources, including the World Health Organization, the Centers for Disease Control and Prevention, and the COVID-19 Tracking Project (Dong et al., 2020). We assumed a median incubation period of at least 5.2 days (Lauer et al., 2020). Case counts were log-transformed, and time series were created when cities had ≥2 new daily cases. Because deidentified and anonymized data on case morbidity were obtained from a publicly accessible data portal, this research did not require participant consent, and institutional review was not warranted.
Environmental parameters
Meteorological data were derived from the European Centre for Medium-Range Weather Forecasts (ECMWF) atmospheric reanalysis dataset (ERA-5) (C3S, 2017). ERA-5 provides a suite of hourly weather parameters that may affect local COVID-19 transmission at a 30-km spatial resolution. While not commonly used in environmental health studies, the advantage that ERA-5 data provide over individual weather station data is that spatial heterogeneity is better represented and the health effects of temperature and humidity can be estimated in locations far from weather stations or without any station. Previous research has shown that reanalysis data and weather station data yield similar health risk estimates (Royé et al., 2020). Daily average near-surface air temperature, specific humidity, and solar radiation were extracted from ERA-5 for each study location by a simple spatial average. Because relative humidity (RH) is highly correlated with temperature, we chose instead to include specific humidity (Q) as a predictor variable in the analysis.
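A hedged sketch of why specific humidity is a convenient predictor: Q can be derived from temperature, RH, and pressure, yet unlike RH it does not covary strongly with temperature. The conversion below uses the Bolton (1980) saturation vapour pressure approximation; it is a generic meteorological formula, not the ERA-5 pipeline itself.

```python
import math

def specific_humidity(temp_c, rh_percent, pressure_hpa=1013.25):
    """Specific humidity (g/kg) from air temperature (deg C), RH (%), pressure (hPa)."""
    e_sat = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))  # hPa, Bolton (1980)
    e = (rh_percent / 100.0) * e_sat                             # actual vapour pressure
    q = 0.622 * e / (pressure_hpa - 0.378 * e)                   # kg/kg
    return 1000.0 * q                                            # g/kg

# The 6-9 g/kg band highlighted later corresponds to mild, moist conditions, e.g.:
print(round(specific_humidity(temp_c=10.0, rh_percent=80.0), 1))  # ~6.1 g/kg
```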
Heat mapping
Preliminary studies have suggested that the combination of humidity and air temperature could affect the transmission of local COVID-19 cases (e.g., Sajadi et al., 2020;Lou et al., 2020;Oliveriros et al., 2020). We examined the association between local COVID-19 cases and air temperature and specific humidity using the density heatmap. To construct the density heatmap, the daily confirmed COVID-19 case reports were first separated based on their corresponding daily mean air temperature (every 1°C) and mean specific humidity (every 0.5 g/kg). All daily confirmed case counts were classified into the same air temperature and specific humidity conditions (e.g., 0°C b T air b 1°C and 1 g/kg b Q b 1.5 g/kg) and evaluated together as a density measurement. This explanatory analysis was intended to demonstrate the association of COVID-19 cases with the combined effect of air temperature and specific humidity. The heatmap could identify the range of optimal meteorological conditions for local transmissions. Considering the incubation period of COVID-19, we applied the analysis to local weather data at different lead times (i.e., 0, 2, 5, 7, 10, 14 days).
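The binning step behind such a heatmap is simple to sketch; the DataFrame and column names below are assumptions for illustration, not the authors' code.

```python
# Density-heatmap binning: group daily case counts into 1 degC x 0.5 g/kg bins
# of mean air temperature and specific humidity (synthetic data for illustration).

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "cases": rng.poisson(15, 300),
    "t_air": rng.uniform(0, 25, 300),  # daily mean air temperature, degC
    "q": rng.uniform(1, 12, 300),      # daily mean specific humidity, g/kg
})

df["t_bin"] = (df["t_air"] // 1).astype(int)  # 1 degC bins
df["q_bin"] = (df["q"] // 0.5) * 0.5          # 0.5 g/kg bins

heat = df.pivot_table(index="q_bin", columns="t_bin",
                      values="cases", aggfunc="sum", fill_value=0)
print(heat.iloc[:5, :5])  # each cell: total cases under that T/Q combination
```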
Case-crossover distributed lag non-linear model
We applied a time-stratified case-crossover design that uses each individual COVID-19 case as its own control. A conditional Poisson regression was used in combination with the distributed lag nonlinear model (DLNM). This approach is more flexible than conditional logistic regression (Armstrong et al., 2014) in that it allows for overdispersion. The application of the DLNM to the case-crossover design provides a means to assess the nonlinear and delayed effects, as well as the cumulative exposure-response, between short-term daily average exposure to meteorological parameters and daily counts of COVID-19 cases. We performed separate analyses for our primary health outcome (COVID-19 morbidity) and each meteorological parameter relative to the median and quartiles (i.e., 50th versus 75th). This approach is suitable for studying the effects of time-varying exposures (e.g., intermittent changes in meteorology) on a rare, acute condition (i.e., COVID-19 transmission) (Armstrong et al., 2019; Malig et al., 2016; Guo et al., 2011). We relied on the following equation:

log(E(Y_t)) = α + T_{t,l}·β + Strata_t·λ + SD_t·υ

where t is the day of the observation; Y_t is the observed daily case count on day t; α is the intercept; T_{t,l} is a matrix obtained by applying the DLNM to temperature or humidity; β is a vector of coefficients for T_{t,l}; and l is the lag in days. Strata_t is a categorical variable of day (30-day time period) used to control for trends, and λ is a vector of coefficients. SD_t is a binary variable that is "1" if day t was under a social distancing order, and υ is its coefficient. Our model was adapted from similar work by Guo et al. (2011), who also employed a case-crossover design and DLNM to investigate the effects of temperature on mortality. Given that the incubation period between exposure and symptom occurrence is 2 to 14 days (Linton et al., 2020), we used a maximum 14-day lag period to explore the potential delayed association of temperature and humidity in our model, approximating the pre- and post-infection period for each case.
The DLNM utilizes the "cross-basis" function to flexibly model the lag and exposure components to account for cumulative effects of environmental exposure (Gasparrini et al., 2010). We first examined the association between temperature and humidity individually for our primary outcome. Final models included both temperature and humidity, to examine their joint contribution to COVID-19 transmission in US cities. Our assumption was that temperature would have a predominant effect, followed by humidity, based on emerging literature (e.g., Shi et al., 2020; Araujo et al., 2020; Wang et al., 2020; Oliveiros et al., 2020; Notari et al., 2020), and therefore humidity was included in the cross-basis term.
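A schematic Python stand-in for the model structure may clarify the regression above; the authors used R's "dlnm" package, so this is not their implementation. The flexible cross-basis is replaced here by a crude unconstrained lag matrix of humidity, strata are 30-day blocks, and sd_order flags days under a social-distancing order; the data and column names are synthetic assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "cases": rng.poisson(20, n),
    "q": 6 + 3 * rng.standard_normal(n),       # specific humidity, g/kg
    "sd_order": (np.arange(n) > 60).astype(int),
})
df["stratum"] = np.arange(n) // 30             # 30-day strata (the Strata_t term)

max_lag = 7
for l in range(max_lag + 1):                   # unconstrained lag terms q_lag0..q_lag7
    df[f"q_lag{l}"] = df["q"].shift(l)
df = df.dropna()

lag_terms = " + ".join(f"q_lag{l}" for l in range(max_lag + 1))
model = smf.glm(f"cases ~ {lag_terms} + sd_order + C(stratum)",
                data=df, family=sm.families.Poisson()).fit()
print(model.params.filter(like="q_lag"))       # lag-specific log-RR per g/kg
```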
Sensitivity analysis
A sensitivity analysis was conducted to select degrees of freedom for the lag polynomial (2-8 degrees of freedom) and the response polynomial (2-8 degrees of freedom) for New Orleans, LA (data not shown). In addition, we changed the maximum lag to 14 and 20 days, which gave similar results (data not shown). Prior research has examined 0-day and 3- to 5-day lags for COVID-19 transmission (Ma et al., 2020; Wang et al., 2020), all the way to lag periods extending 7 to 14 days for meteorological parameters. For our initial examination of meteorological parameters independently, we compared model fit using the quasi-Akaike information criterion (qAIC) to determine the optimal degrees of freedom and lag periods. qAIC is a well-established technique for sensitivity analysis and was used to compare DLNM-only models and DLNM + case-crossover models to confirm the final model selection (Guo et al., 2011). Models were also examined for adjustment for trends, such as day of the week. Initially, we examined the influence of month in the strata term for the DLNM + case-crossover models, but the qAIC values demonstrated that the addition of these variables resulted in poor model fit, likely due to the short time series, and thus we selected the most parsimonious model that included only day in the strata. The "dlnm" package was used to create the DLNM model (Gasparrini, 2011) using R statistical software (R Core Team, 2020). We adopted the rare-disease assumption, as our study hypothesis tested the association between weather exposure and a disease (i.e., COVID-19) characterized by low prevalence. Therefore, we assumed the odds ratio to approximate the relative risk. All relative risks (RR) are presented with corresponding 95% confidence intervals (95% CIs).
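For reference, a minimal sketch of the quasi-AIC under one common definition, qAIC = -2·logLik/ĉ + 2k, where ĉ is an overdispersion estimate (e.g., Pearson chi-square divided by residual degrees of freedom); the exact form the authors used is not stated, so treat this as illustrative.

```python
def qaic(loglik, n_params, c_hat):
    """Quasi-AIC; smaller values indicate better fit, as in Table 2."""
    return -2.0 * loglik / c_hat + 2.0 * n_params

print(qaic(loglik=-450.3, n_params=12, c_hat=1.8))  # illustrative numbers only
```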
Attributable burden of COVID-19 transmission due to weather
In epidemiology, measures of potential impact are used to examine the expected impact of changing the distribution of one or more risk factors in a particular population (Kleinbaum et al., 1982; Szklo and Nieto, 2014). For example, the attributable risk, also known as the etiologic fraction, is used to examine the proportion of all new cases in a given time frame that is attributable (or causally associated) to the exposure of interest (Szklo and Nieto, 2014). Because the evidence base linking COVID-19 transmission and weather is new and evolving, it is too early to assume a causal association. Therefore, we relied on the excess fraction (EF) as an analogous but alternative measure to the attributable risk in our analysis, to approximate the excess caseload due to exposure. To examine the attributable burden of transmission for COVID-19 due to weather, we calculated the percent excess fraction for humidity and temperature for individual cities as a function of the relative risk and the point prevalence of COVID-19 for each city. Point prevalence was calculated as the number of cases over the study period divided by the total population in a specific city. We adopted a modified version of this equation based on Gasparrini and Leone (2014) to extend the definition of the excess fraction.
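The exact equation was lost in extraction, so the sketch below shows only one plausible reading of the concept: the classic excess fraction among the exposed, (RR - 1)/RR, scaled by the city-specific point prevalence. It is a sketch of the idea, not the authors' formula.

```python
def excess_fraction_percent(rr, point_prevalence):
    """One plausible excess-fraction formulation (assumption, not the paper's)."""
    ef = (rr - 1.0) / rr                   # excess fraction among exposed days
    return 100.0 * ef * point_prevalence   # scaled contribution, in percent

# e.g., a hypothetical RR of 1.5 within the 6-9 g/kg humidity band:
print(round(excess_fraction_percent(rr=1.5, point_prevalence=0.12), 1))
```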
Results
Our analysis included a total of 266,760 cases and 19,729 deaths across 8 cities (Table 1). The crude rate of COVID-19 per location was highest for New Orleans, LA (374 daily cases per 100,000 people), followed by New York City, NY (51 daily cases per 100,000 people), Albany, GA (42 daily cases per 100,000 people), and Bridgeport, CT (25 daily cases per 100,000 people). The lowest rates of COVID-19 cases were in Seattle, WA (4 daily cases per 100,000 people) and Pittsfield, MA (8 daily cases per 100,000 people). The highest crude death rates were observed in New York City, NY (6 daily deaths per 100,000 people), Albany, GA (3 daily deaths per 100,000 people), New Orleans, LA (2 daily deaths per 100,000 people), and Detroit, MI (2 daily deaths per 100,000 people).
Density heatmaps
The density heatmap (Fig. 2) presents a descriptive exploratory analysis of the combined association of temperature and specific humidity on COVID-19 cases for the selected cities. Based on the heatmap, COVID-19 cases were more common in low specific humidity (2-6 g/kg) and low temperature (2-11 °C) conditions. This association was consistent when we considered different incubation times (lag 0-14 days).
Distributed lag non-linear models
All locations
Table 2 shows the goodness-of-fit (qAIC) values across model types for all locations and parameters, a common validation and sensitivity technique (e.g., Gasparrini et al., 2010; Guo et al., 2011). In general, humidity was the strongest predictor for COVID-19 cases, with better model performance for humidity than temperature across all model types and study locations. Case-crossover models performed better in Seattle, WA, New York City, NY, Chicago, IL, and New Orleans, LA. The variation in the dose-response profile for humidity was negligible before and after adding temperature as a predictor to the model, indicating that humidity exhibited a robust association. Model performance was poor (indicated by high qAIC values) for Detroit, MI, Pittsfield, MA, and Bridgeport, CT. Results for these cities were insignificant and therefore not reported in the final results (Supplemental Figs. 1, 2). Overall, the case-crossover + DLNM model outperformed the DLNM-only model. However, select locations had a marginally better model fit for DLNM only (e.g., Albany, GA). Results are presented for the following cities: New Orleans, LA, Albany, GA, and Seattle, WA, and models were selected based on qAIC values. DLNM and case-crossover models were also constructed for these locations to analyze the effect of solar radiation (W/m2) on COVID-19 incidence rates.
New Orleans, LA
The relative risk for COVID-19 exhibited a U-shaped relationship, with increases in cases at high and low humidity in New Orleans. With reference to the median humidity, relative risk peaked at minimum (5 g/kg, RR: 1.98, CI: 1.07-3.66) and maximum (16 g/kg, RR: 2.18, CI: 1.28-3.72) values. Similarly, temperature exhibited a U-shaped relationship with reference to the median, with a significant relative risk at 16-17 °C (RR: 1.17-1.23; CI: 1.03-2.24) and at the maximum observed temperatures (23 °C; RR: 1.75, CI: 1.13-2.44). Solar radiation exhibited an inverted U-shaped relationship, with a higher relative risk from 5200 to 6300 W/m2 (Fig. 3).
Albany, GA
Temperature and solar radiation were not significant predictors of COVID-19 cases. With reference to the median humidity, a significant relative risk was observed from 6 to 9 g/kg (RR: 1.23-1.47, CI: 1.06-1.94). Due to a lower qAIC value and more robust results, a DLNM-only model, unlike for other cities, was applied to the humidity-COVID-19 relationship for Albany, GA (Fig. 4).
New York City, NY
Temperature exhibited a linear association with COVID-19 incidence that revealed a protective effect from 9 to 10 °C (RR: 0.60-0.69, CI: 0.39-0.95), whereas no relationship was observed between humidity or solar radiation and COVID-19 cases in NYC (Fig. 6).
The excess burden of new COVID-19 cases due to weather
Overall, the attributable burden of excess COVID-19 cases associated with exposure to humidity and temperature was low for each city (Table 3). The excess fraction was highest for New Orleans, with 3.7 to 4.5% of new cases occurring within the humidity range of 5 g/kg to 16 g/kg and 6.8 to 9.1% occurring within the temperature range of 16 °C to 23 °C.
Discussion
In this study, we examined whether daily meteorological patterns in humidity, temperature, and solar radiation were associated with the transmission of COVID-19 in US cities that emerged as early hot spots for infection. We applied the DLNM to a case-crossover design to assess the nonlinear and delayed effects of meteorological parameters on COVID-19 incident cases. To our knowledge, this study is the first to assess the effects of meteorological variables on COVID-19 morbidity using a robust distributed lag nonlinear model and case-crossover design. We observed a weak but statistically significant relationship between COVID-19 and meteorological parameters for select locations, including Albany, GA, New Orleans, LA, New York City, NY, and Chicago, IL, and no relationship for other locations like Pittsfield, MA, Detroit, MI, and Bridgeport, CT. Spatially, we found a weaker or insignificant relationship with meteorological variables in the northeastern US (e.g., Pittsfield, MA, Bridgeport, CT, and New York City, NY). In contrast, all southern cities (e.g., Albany, GA, and New Orleans, LA) exhibited a stronger association with meteorological variables. This difference could in part be due to the time period (March-April), when daily weather fluctuations are more prominent depending on the origin of air masses, resulting in greater temperature and humidity ranges for southern locations. Although this analysis is based on selected cities in the United States only, this result is similar to results derived from selected cities worldwide with community transmission (Sajadi et al., 2020).
Humidity was observed as the best predictor for the coronavirus outbreak, followed by temperature and solar radiation. The majority of cities included in this study demonstrated a nonlinear dose-response relationship between a range of specific humidity conditions and sustained COVID-19 transmission. More specifically, 3 of the 4 cities were characterized by a significant relationship between COVID-19 transmission and humidity (e.g., Albany, GA, New Orleans, LA, and Chicago, IL). Humidity in the range of 6 to 9 g/kg (analogous to an absolute humidity range of 7.56-11.37 g/m3) was a significant predictor of COVID-19 cases and resulted in an up to two-fold increased risk of transmission in some areas. Early research in China and other international locations reported a similar relationship between variability in relatively humid conditions and transmission of COVID-19 (Lou et al., 2020; Shi et al., 2020; Oliveiros et al., 2020; Bukhari et al., 2020; Rahman et al., 2020; Islam et al., 2020). Our results for specific humidity are higher than those reported by Sajadi et al. (2020), who reported optimal transmission at low specific humidity levels (3-6 g/kg) for locations outside of the US.

Fig. 2. The density heatmaps of COVID-19 cases in the selected cities in association with temperature and specific humidity at different time lags. The red histogram above each heatmap is the histogram of COVID-19 cases in relation to temperature, while the blue histogram beside each heatmap is the one in relation to specific humidity.

Temperature and solar radiation did not exhibit a strong association with COVID-19 incidence in our study locations. Our results for New York City, NY support and extend previous research on COVID-19 and meteorological parameters in New York City that found a significant association with temperature using simple correlation coefficients (Bashir et al., 2020). Bashir et al. (2020) observed a direct association, with higher temperatures predicting higher COVID-19 cases. Conversely, our research found a protective effect at higher temperatures, which is corroborated by earlier studies (Qi et al., 2020; Wang et al., 2020) and by work on other respiratory viruses (Moriyama et al., 2020). These mixed findings on the influence of temperature on COVID-19 transmission highlight the need for more analysis across a variety of geographic locations and over a longer time series.
Future studies
The modeling approach used in this research study can be used to expand upon the evidence-base with the addition of social determinants of health (e.g., age, sex, race, and ethnicity, occupation, income status) to examine the joint and independent effects of social and environmental drivers of COVID-19 transmission. The transmission of respiratory viruses, like COVID-19, is likely to be impacted by a number of factors including meteorological conditions, population density, testing capacity, and geographic disparities in access to and quality of medical care (Dalziel et al., 2018). These factors should be considered in future studies to fully understand the contextual influence of meteorological effects on COVID-19 transmission.
Strengths and limitations
The main strength of this study was the case-crossover design. This design is used in observational studies to capture short-term effects of exposures and removes the effects of seasonal and secular trends by allowing each COVID-19 case to serve as its own control (e.g., Armstrong et al., 2019; Malig et al., 2016; Guo et al., 2011). This design was particularly advantageous given the limited information available on cases and the short time series under investigation. While the current evidence base is newly emerging, the majority of published studies to date have only examined the relationship between meteorological factors and transmission using descriptive correlation statistics or simple linear regression. One important advantage of the DLNM method is that it not only allows the model to maintain a detailed time course of the non-linear exposure-response relationship, but it also generates an estimate for the overall effect of an exposure on a health outcome amid changes in the effect over different lagged or delay periods (Gasparrini et al., 2010). Unlike previous studies examining the influence of meteorological factors on COVID-19 transmission, an additional strength of our study is the adjustment for social distancing measures (Sajadi et al., 2020).
Most environmental health research includes a variable for relative humidity (RH), absolute humidity (AH), or both. However, specific humidity, the metric included in our study, is more conservative and less susceptible to changes in pressure and temperature than AH. Further, in addition to its confounding with temperature, RH is typically not useful as a stand-alone humidity variable in environmental health or epidemiological research. Our results are comparable to a few recent studies examining the association between COVID-19 and specific humidity (e.g., Ma et al., 2020; Sajadi et al., 2020).
Recent research has demonstrated the linkage between poor air quality and COVID-19 mortality; however, we did not adjust for background air quality measures as a potential confounding factor in our study. While our modeling strategy did adjust for social distancing measures, our estimates do not account for underreporting of case counts (Lachmann, 2020), demographic data on cases, changes in testing capacity, or the date of onset of COVID-19 symptoms. This study did not include information on the type or amount of testing at each location, as these data were not available at the time of publication. There is currently a void of publicly accessible COVID-19 testing data at the local level. While efforts are underway to capture these data at the state level, there are a number of inconsistencies relating to reliance on multiple data sources, the timing of the release of these data, and changes in the ways in which states are counting negative and positive test results. However, future research studies should consider including daily testing, as well as other contextual social and environmental parameters, as control variables for examining the association between meteorological variables and COVID-19 cases.
Conclusion
Meteorological factors may influence COVID-19 transmission and spread in the US. The influence of meteorological parameters on COVID-19 was modest and not uniform across the study locations. Humidity was the best predictor of COVID-19 transmission, compared to solar radiation and temperature, in the US cities that emerged early in the pandemic. The case-crossover design was an enhancement to the application of the DLNM. Because COVID-19 is an emerging infection, future research is needed to fully understand the impact of environmental conditions on its transmission.
Declaration of competing interest
The authors declare no conflicts of interest.
Pitch Evaluation of Matouqin Chamber Music Performance Based on Artificial Neural Network
In order to study the pitch evaluation of Matouqin chamber music performance based on artificial neural networks, this paper draws on relevant theory from the fields of the human auditory perception system, auditory psychology, music theory, and pattern recognition. This paper extracts the auditory image features of chords and then establishes a sparse representation classifier model for chord recognition and classification. Scale-invariant feature transformation (SIFT) and spatial pyramid matching (SPM) are used to extract the detailed features of chord auditory images. The experimental results show that the highest correct recognition rate of the chord recognition algorithm based on the auditory image proposed in this paper is 76.2%, which is 20.4% higher than that of the MFCC feature based on human auditory characteristics.
Introduction
The exploration of Matouqin chamber music performance technology can be traced back to the 1960s [1]. This was an era of vigorous development of Matouqin playing technology.
Through the improvement of instrument-making technology, Chinese and Mongolian performers have developed the treble Matouqin, alto Matouqin, sub-alto Matouqin, bass Matouqin, and so on, covering the range structure system required by chamber music orchestration. After half a century of continuous innovation and exploration, an independent Matouqin chamber music performance technique (different from the Matouqin solo performance method) has finally formed. Due to the physical effects of sound transmission, the strength of the sound heard by human ears in the high range and low range differs under the same playing strength: the sound played in the high range is weak and the sound played in the low range is strong. This is because the amplitude and frequency of the high range and low range differ in sound transmission: the sound played in the high range has small amplitude and high frequency, and the sound played in the low range has large amplitude and low frequency. When we appreciate symphonic works, we find that the sound of 20 violins is less than that of the timpani [2]. Due to the differences in amplitude and frequency of instruments with different voices, research on the orchestration of instruments with different voices began when western national chamber music originated in the 14th century. Relying on the exploration of the volume structure, a standardized chamber orchestration convention was formed. This standard chamber orchestra was initially applied at court and gradually popularized in the mid-19th century. Its complete teaching and theoretical system of chamber music formed and gradually improved in the 16th century, and a complete application system of performance technology had formed by the 18th century. The exploration of Matouqin chamber music is based on the string ensemble mode of Western chamber music and integrates, by imitation, the singing timbre and volume of the long-key and short-key styles. Therefore, the dynamics-processing technique in the Matouqin chamber music performance method is essentially different from that of the Matouqin solo [3]. With the popularization of multimedia technology in daily life, more and more multimedia information has poured into the Internet, and digital music information, as an important component of digital multimedia information, also shows a rapid growth trend, as shown in Figure 1.
Sabir et al. found that the basis of sound production in the chamber music performance method is the ensemble. In regulated orchestration, a representative instrument of each voice part is generally selected, in which the general structural mode of high, medium, and bass instruments inevitably appears. In this structure, composers often add sub-baritone and bass instruments to the orchestration to enrich the texture of the harmony and polyphony. In this century's Matouqin chamber music, there is also the orchestration structure combining the Matouqin with the woodwind and brass groups of western music [4]. Meftah et al. believe that, because of this, the volume control of different instruments in the performance of Matouqin chamber music refers to the orchestration structure, and the volume effect after the superposition of ensemble volumes needs to be considered in joint performance [5]. Liu et al. first adopted the split-volume processing method: the specific dynamics-processing method of splitting into single parts within the overall harmonic framework [6]. Monroe et al. take the Matouqin chamber music work "Chronicle of the Wolf" as an example, a trio composed of the first Matouqin, the second Matouqin, and the piano. Since the piano has three parts (high, middle, and low) and the first and second Matouqin are both the same part, the Matouqin (which covers the middle and high sound areas across three octaves) plays two parts (the high part and the low part) of the piano in the first to twelfth bars of the introduction of the work [7]. Ahf et al. found that the main melody starts with the triad of the low voice part, transitions to the high voice part in the fourth bar, transitions back to the low voice part in the fifth bar, and is completed by the two voice parts in the eighth bar. In this passage, the composer marks MP (medium weak), and the two parts are separated in volume processing: the main melody is played MP (medium weak) and the accompaniment melody is played P (weak). When the two parts are combined, the auditory effect of medium-weak volume is produced [8]. Wang et al. found that the ensemble structure of Matouqin and piano starts from the 13th bar. The first Matouqin and the second Matouqin carry a parallel main melody of polyphonic structure, and the other part is a rhythmic accompaniment melody. The composer marks the volume prompt P (weak) here. Due to the duet structure of the Matouqin, in the split-volume processing of each sound part, each part can be played P (weak), and the two parts grow louder together until the fifteenth bar reaches f (strong). In the strong position, the two sound parts are split into MF (medium strong) [9]. Carpenter et al. believe that this strong-weak segmentation method stems from the different prominence of strong and weak tones: when two weak tones are superimposed, a weak volume effect is produced in the sound [10]. However, Batool et al. believe that when two medium-strong volumes are combined, a strong sound effect is produced, and strong sound has a greater impact on hearing. Just as in listening to symphonic works, weak sound produces a synesthetic effect of distance, and strong sound produces a sound impact of closeness [11]. Therefore, in the splitting and integration of dynamics in chamber music performance, we need to first consider the constant overall volume of the instrumental ensemble, split each sound part on this basis, reasonably plan the strength of each sound part, and then integrate the parts to form a complete constant-volume structure. In the performance of chamber music, the musical treatment of the theme is the soul of the work. The theme structure is divided into the presentation theme, the dialogue theme, and the imitation performance method in a solo passage. Schneider et al. found that in the presentation-theme passage, the presentation theme generally occupies the highest position of the specified volume within the standard volume. Take the Matouqin chamber music work "Recollection (for Matouqin and Woodwind Group and Brass Group)" as an example [12]. Matveev et al. believe that the theme melody in the work is first played by the clarinet, while the other parts provide long-note accompaniment or rest. The composer's musical expression is marked MF (medium strong) here.
During the performance, the clarinet plays MF (medium strong), while the string group is marked MP (medium weak) but is completed by five parts together. Each part, after the division into five parts, is played PP (very weak). After the ensemble, the superposition of volumes presents the auditory effect of MP (medium weak) [13].
On the basis of this research, this paper proposes a study of intonation evaluation in Matouqin chamber music performance based on artificial neural networks. Scale-invariant feature transformation (SIFT) and spatial pyramid matching (SPM) were performed on the auditory images of different chords to extract detailed features of chord auditory images. The experimental results show that the method has promising prospects. First, the auditory image features of the music chord segment are extracted, converting the one-dimensional music signal into two-dimensional image features; then the local features of the auditory image, i.e., the SIFT feature vectors, are extracted, and the SPM matching method is used to integrate the local feature vectors of the image into a single feature vector representing the complete auditory image, i.e., the chord features of the music. Secondly, the pattern recognition method based on the sparse representation classifier (SRC) has achieved great success in image scene classification, object recognition, and face recognition. Subsequently, SRC has also been introduced into music genre classification, classical music classification, and music chord recognition, where it has likewise achieved good results. Therefore, this paper uses the SRC method to identify chords. Finally, experiments are carried out under the optimal parameter settings, and the experimental results show that the recognition effect of the method using auditory image features and SRC recognition is the best.
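Since the SRC decision rule is central to the pipeline, a compact sketch may help: a test feature vector y (e.g., an SPM-pooled auditory-image descriptor) is coded over a dictionary A whose columns are training chord features, and the chord label with the smallest class-wise reconstruction residual wins. The dimensions, chord labels, and the use of scikit-learn's Lasso for the l1 coding step are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
n_feat, n_per_class, classes = 64, 10, ["C:maj", "C:min", "G:maj"]
A = rng.standard_normal((n_feat, n_per_class * len(classes)))  # training dictionary
labels = np.repeat(classes, n_per_class)
y = A[:, 3] + 0.05 * rng.standard_normal(n_feat)               # noisy "C:maj" sample

coder = Lasso(alpha=0.01, max_iter=10000).fit(A, y)            # x ~ argmin ||Ax-y||^2 + a||x||_1
x = coder.coef_

# Class-wise residuals: reconstruct y using only the coefficients of each class
residuals = {c: np.linalg.norm(y - A[:, labels == c] @ x[labels == c]) for c in classes}
print(min(residuals, key=residuals.get))                       # predicted chord class
```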
Method
Music is the product of the combination of science and art. Music recognition involves different disciplines such as physics, musicology, signal processing, and art [14]. As the smallest component of a music signal, a chord can convey the harmonic content, melody, rhythm, emotion, and other important information of music. As one of the important research topics in the field of music information retrieval, music chord recognition has a wide range of applications, such as music segmentation, similar-music retrieval, and humming retrieval.
This paper introduces the generation mechanism of the human voice, the basic attributes of music, and the human auditory system, so as to deepen research on music signal processing and the human auditory model. Sound is formed by sound waves generated by the regular vibration of an elastic object [15]. The basic physical properties of sound include pitch, timbre, and intensity. These basic characteristics play a major role in the chords, rhythm, and melody at the middle and high levels of music. Therefore, the analysis of music signals requires researchers to know and master basic music theory. With the support of music theory, they can study music signals more deeply and thus develop better recognition algorithms. Pitch, that is, the height of a sound, is produced by the vibration of different objects. Its height is determined by the frequency of the sound wave vibration, and the two are positively related: if the vibration frequency is high, the pitch is high; conversely, if the vibration frequency is low, the pitch is low. The vocal cord vibration frequency of a female singer is higher than that of a male singer, so the female voice is higher than the male voice. Human perception of pitch has a logarithmic relationship with the fundamental frequency, as shown in equation (1):

Mel(f) = 2595 · log10(1 + f/700),    (1)

The unit of pitch is the mel. For example, when the frequency of a sound signal is 1000 Hz, the pitch perceived by human ears is about 1000 mel.
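The mel-scale mapping reconstructed in equation (1) is easy to check against the 1000 Hz example in the text; the constant choice (one common parameterization) is an assumption, as several equivalent forms exist in the literature.

```python
import math

def hz_to_mel(f_hz):
    """Mel pitch from frequency, per one common mel-scale parameterization."""
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

print(round(hz_to_mel(1000.0)))  # ~1000 mel, matching the example in the text
```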
The tuning curve of the piano is shown in Figure 2. Since the maximum gap between bass and treble can reach dozens of cents, if the pitch were determined strictly according to the twelve average law when the piano is actually tuned, the bass area should be tuned down and the treble area tuned up, so as to produce a correct sense of scale [16]. International standard pitch refers to the frequency of the note A above middle C on the piano, i.e., A4 = 440 Hz.
Temperament refers to the absolutely accurate height of all notes in the musical system and the relationships between them. It is a concept formed in the continuous development of music. There are three main categories of temperament, namely, just intonation, the law of fifths, and the twelve average law. Among them, the twelve average law is the most widely used representation of temperament in the world. The twelve average law (hereinafter referred to as the average law) is the most commonly used form of temperament in western music [17]. It divides the adjacent tones within an octave into twelve semitones according to the principle of an equal frequency ratio, in which the semitone represents the minimum pitch distance in the twelve-average-law system. Generally, keyboard instruments adopt the average-law system, that is, the pitch difference between any two adjacent keys is a semitone, and the frequency ratio is equal, as in the following formula:

f_{n+1} / f_n = 2^{1/12},    (2)

Figure 1: Abstraction of biological neuron structure by artificial neuron (nucleus, axon, and axon endings transmitting an impulse signal to more than one cell).
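The semitone ratio 2^(1/12) and the A4 = 440 Hz standard pitch together determine every equal-temperament frequency; the sketch below computes piano key frequencies on this basis (key 49 = A4 on an 88-key piano). Real pianos deviate from these values via stretched tuning, as Figure 2 illustrates.

```python
def piano_key_freq(n):
    """Frequency in Hz of piano key n (1..88) in twelve-tone equal temperament."""
    return 440.0 * 2.0 ** ((n - 49) / 12.0)

print(round(piano_key_freq(40), 2))  # middle C (C4) ~ 261.63 Hz
print(round(piano_key_freq(49), 2))  # A4 = 440.00 Hz
```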
Scale refers to each sound level in the musical system; the degree is the distance unit between tones. In music theory, there are seven basic levels named A, B, C, D, E, F, and G (i.e., note names), and these seven levels are played by the white keys of the piano. The piano has a total of 88 keys, so the above 7 level names are recycled. The pitch of the same level in each group is different, and there is an "octave" difference between the same level in two adjacent cycle groups. According to the twelve average law, the octave is divided into 12 equal parts, each of which is called a semitone. Two semitones form a whole tone; the semitone is the smallest unit of music, and the whole tone and semitone form a 2:1 relationship in width [18]. Interval refers to the pitch distance between two levels, which is expressed in degrees. The number of tones refers to the sum of the numbers of semitones and whole tones contained in an interval. Degree refers to the number of levels contained between the root and crown tones (i.e., the number of lines and spaces on the staff). On the staff, the interval relationship between two tones on the same line or in the same space is called "one degree" or "same degree" (unison). If one tone is on a line and the other is in the adjacent space, the interval is called a "second." The name of an interval is determined by its degree and tone number. Table 1 shows the naming rules of intervals.
A chord is a group of three or more notes with a certain interval relationship. In other words, the simplest chord consists of three notes, and complex chords can consist of five to seven notes. The most basic tone in a chord is called the root. The other tones are called the third, fifth, and seventh according to their distance from the root. There are many kinds of chords. According to the number of notes in the chord, they can be divided into triads, seventh chords, ninth chords, etc. [19]. The triad contains three notes, namely, the root, the third, and the fifth; triads can be divided into major and minor triads and augmented and diminished triads. A seventh is superimposed on the basis of a triad to form a seventh chord. Similarly, seventh chords can also be divided into four types: major and minor seventh chords and augmented and diminished seventh chords. On the basis of seventh chords, a ninth is superimposed to form ninth chords. By analogy, we can obtain eleventh chords, thirteenth chords, and so on. Table 2 shows how different chords are named.
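The interval arithmetic behind Table 2 can be made concrete with a small sketch; the semitone offsets used are the standard ones, while the particular dictionary of chord qualities shown is an illustrative subset rather than the paper's full naming table.

```python
# Chord construction from stacked thirds: a triad is root + third + fifth;
# a seventh chord adds a seventh. Offsets are semitones above the root.

CHORD_INTERVALS = {
    "major triad":      (0, 4, 7),
    "minor triad":      (0, 3, 7),
    "diminished triad": (0, 3, 6),
    "augmented triad":  (0, 4, 8),
    "dominant seventh": (0, 4, 7, 10),
}
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def spell_chord(root, quality):
    """Note names of a chord given root pitch class (0-11) and chord quality."""
    return [NOTE_NAMES[(root + i) % 12] for i in CHORD_INTERVALS[quality]]

print(spell_chord(0, "major triad"))       # ['C', 'E', 'G']
print(spell_chord(7, "dominant seventh"))  # ['G', 'B', 'D', 'F']
```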
The middle ear communicates with the inner ear through the round window and the oval window, and connects with the outside world through the Eustachian tube, which balances the atmospheric pressure between the middle ear and the environment. When the sound intensity is within a certain range, the auditory ossicles transmit sound waves linearly; when the sound intensity is very high, however, the ossicles exhibit nonlinear propagation. This nonlinear propagation mode effectively protects the inner ear from mechanical damage. To sum up, the middle ear has two main functions: one is to amplify the sound pressure on the tympanic membrane, and the other is to transmit nonlinearly when the sound is very strong, thereby protecting the inner ear. The inner ear is located in the deepest part of the skull and is composed of the semicircular canals, the vestibule (oval window), and the cochlea. The receptors located in the semicircular canals sense the stimulation caused by changes in rotational speed; by contrast, the receptors located at the vestibular window sense changes in linear speed [22]. The cochlea is the most important part of the inner ear and plays the greatest role in auditory perception; it is the receiver of hearing. The basilar membrane is an important part of the cochlea: near the vestibular window it is stiff and narrow, while near the cochlear apex it is soft and wide. The organ of Corti is located on the basilar membrane and plays a sensing role. The potentials on both sides of the hair-cell membranes on the organ of Corti change with the fluid velocity in the cochlea; this change drives release and inhibition in the auditory nerve. It is this change that converts the sound wave into nerve impulses and completes the signal-release process. The masking effect is due to the frequency selectivity of the human ear: when a strong sound and a weak sound are present at the same time, the strong sound is easily detected by the human ear, while the weak sound is masked by the strong sound and is difficult to detect.
This phenomenon, in which the presence of a strong tone raises the hearing threshold of a weak tone, is called the masking effect. The former is called the masking sound and the latter the masked sound, as shown in Figure 3.
Whether a sound can be perceived by the human ear is determined by its frequency and intensity. The frequency range detectable by ordinary human ears is 20 Hz–20 kHz, and the sound intensity range is −5 dB to 130 dB; sound beyond this range cannot be detected. Within the normal hearing range, the frequency band to which the ear responds most sensitively is 2 kHz–4 kHz; outside this band, auditory sensitivity decreases. The hearing threshold is the sound pressure level of the weakest sound that the human ear can hear, and it is a function of sound frequency. The dotted line in Figure 3 represents the hearing threshold curve of the human ear in a quiet environment. The ear cannot hear a sound signal whose sound pressure is below the hearing threshold; for example, when the sound pressure of a pure tone is below the threshold, the tone is inaudible [23]. In fact, the minimum of the human hearing threshold lies in the range 3 kHz–5 kHz, that is, the ear is most sensitive to weak signals in this band. Outside this band the hearing threshold is much larger, i.e., the ear is far less sensitive to such signals. In the range 0.8 kHz–1.5 kHz, the threshold curve is flattest, and the hearing threshold changes little with frequency [24].
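The quiet-threshold curve discussed here is often approximated analytically; the sketch below uses Terhardt's well-known approximation of the absolute threshold of hearing. Note the assumption: the paper's Figure 3 curve is not given in closed form, so this formula stands in for it.

```python
# Terhardt's approximation of the absolute threshold of hearing (dB SPL),
# widely used in psychoacoustic models; f is in Hz.
import numpy as np

def threshold_quiet_db(f_hz: np.ndarray) -> np.ndarray:
    f = f_hz / 1000.0  # convert to kHz
    return (3.64 * f**-0.8
            - 6.5 * np.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f**4)

f = np.array([100.0, 1000.0, 3300.0, 10000.0])
print(threshold_quiet_db(f))  # minimum (about -5 dB) falls near 3-4 kHz
```

The minimum of about −5 dB near 3–4 kHz matches the −5 dB lower bound and the 3 kHz–5 kHz sensitivity band quoted in the text.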
If a strong sound signal is present, the hearing threshold curve changes within its frequency range; that is, the hearing threshold is raised. This raised value is called the masking threshold, as shown in Figure 3. In its neighborhood, sound below the masking threshold is masked, so the human ear cannot hear it. The masking effect can be subdivided into simultaneous masking (also known as frequency-domain masking) and non-simultaneous masking (also known as temporal masking); the difference between the two is whether the masking sound and the masked sound act at the same time. Non-simultaneous masking can be further divided into pre-masking and post-masking: the former occurs before the onset of the masking sound, and the latter after the masking sound ends. Figure 4 shows the three masking effects, with the horizontal axis representing time and the vertical axis the sound pressure level [25]. Simultaneous masking occurs during the 0–200 ms period of the masking sound, pre-masking occurs in the roughly 20 ms before the masker starts, and post-masking occurs within 200 ms after the masker disappears. As can be seen from Figure 4, the decay of non-simultaneous masking is time-dependent: pre-masking generally lasts about 5–20 ms, while post-masking can last 50–200 ms.
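A toy sketch of the timing relations just described (an assumption for illustration, not the paper's model): it classifies a test tone's position relative to a masker using the window lengths quoted above.

```python
# Classify a test-tone time against a masker active on [masker_on, masker_off],
# using ~20 ms pre-masking and ~200 ms post-masking windows (all in ms).
def masking_region(t_ms: float, masker_on: float, masker_off: float) -> str:
    if masker_on - 20.0 <= t_ms < masker_on:
        return "pre-masking"
    if masker_on <= t_ms <= masker_off:
        return "simultaneous masking"
    if masker_off < t_ms <= masker_off + 200.0:
        return "post-masking"
    return "unmasked"

print(masking_region(150.0, masker_on=0.0, masker_off=200.0))  # simultaneous
print(masking_region(350.0, masker_on=0.0, masker_off=200.0))  # post-masking
```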
The auditory image model (AIM) is a time-domain model that simulates the auditory pathway according to the responses of the human auditory system at different processing stages of the sound signal, and then processes the signal effectively. The term "auditory image" first appeared in an article published by Patterson in 1995; the model renders the sound signals heard by human ears as a neural representation of initial auditory awareness in the brain. AIM provides a basic model for researchers committed to audio research. The auditory image model consists of five basic functional modules: (1) the cochlear pre-processing (PCP) module, which transmits the sound signal to the oval window; (2) the basilar membrane motion (BMM) module, describing the cochlea's response to the signal; (3) the neural activity pattern (NAP) in the auditory nerve and cochlear nucleus; (4) the strobe temporal integration (STI) module for generating auditory images; and (5) the stable auditory image (SAI) that carries auditory awareness, as shown in Figure 5. AIM follows the physiological structure and function of the human ear and simulates human hearing through filter design. Each functional module corresponds to a structure of the human auditory system; the correspondence between ear structures, AIM function blocks, and implementation methods is shown in Table 3.
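As a schematic sketch of the five-stage pipeline just listed (an assumption, not AIM's reference implementation), the stages can be written as composable functions. Each body here is a placeholder; real implementations use gammatone-style filterbanks and half-wave rectification with compression.

```python
import numpy as np

def pcp(x, fs):   # outer/middle-ear pre-processing: band-limit to ~20 Hz-20 kHz
    return x      # placeholder: apply a band-pass filter here

def bmm(x, fs):   # basilar membrane motion: a bank of band-pass channels
    return np.stack([x, x])  # placeholder: one row per cochlear channel

def nap(channels):  # neural activity pattern: rectify and compress
    return np.maximum(channels, 0.0) ** 0.5

def sti(pattern):   # strobe temporal integration: frame into auditory images
    return pattern[:, None, :]  # placeholder framing

def sai(images):    # stabilized auditory image
    return images.mean(axis=1)

def aim(x, fs):
    """Run the five AIM stages in order: PCP -> BMM -> NAP -> STI -> SAI."""
    return sai(sti(nap(bmm(pcp(x, fs), fs))))
```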
The sound frequency range that human ears can perceive is 20 Hz–20 kHz. The PCP module uses the filtering function of a band-pass filter to simulate the response of the external and middle ear to the sound signal: signals beyond the hearing range of human ears are filtered out, and the effective signal is passed to the subsequent modules for analysis and processing. Figure 6 shows the original audio of a major-chord segment and the waveform after PCP processing; the upper plot is the original waveform and the lower plot the PCP-processed waveform, with time on the horizontal axis and normalized amplitude on the vertical axis.
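A plausible realization of this PCP stage (an assumption; the paper does not specify its filter design) is a Butterworth band-pass keeping roughly the audible 20 Hz–20 kHz band, implemented with SciPy second-order sections for numerical stability.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def pcp_bandpass(x: np.ndarray, fs: float,
                 lo: float = 20.0, hi: float = 20000.0) -> np.ndarray:
    """Band-limit a signal to the audible range, as the PCP module does."""
    hi = min(hi, 0.45 * fs)  # keep the upper edge safely below Nyquist
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, x)

fs = 44100.0
t = np.arange(int(fs)) / fs
x = np.sin(2 * np.pi * 440.0 * t)  # an A4 test tone, well inside the passband
y = pcp_bandpass(x, fs)            # passes essentially unchanged
```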
Results and Analysis
Music structured according to the principle of parallel accumulation (also known as the principle of parallel, or juxtaposed, combination) is the simplest structural method among musical forms. It is characterized by horizontal accumulation between musical sections with differing degrees of contrast and renewal; each part has the same scale and weight and can express clear musical content. Two types of musical form conform to this principle, as illustrated below. As for the basis of juxtaposition: starting from the simplest musical form, it is horizontal accumulation that gradually builds structures obeying the principle of juxtaposition and combination. According to the theoretical interpretation of Professor Yang Ruhuai in the article "On Marginal Musical Forms," such structures also conform to this principle [26]. Most of these works are based on folk tunes, characterized by repeated themes; the variation lies in the modification and enrichment of melodies by ornamental and altered notes. The structural thinking is relatively simple and concise. See Table 4 for examples.
"Lullaby," a single part structure, consists of a segment and aʹ segment. e length of the whole sentence is equal, and each paragraph is divided into (2 + 2). e a segment ends with a chord and belongs to an open structure segment. e aʹ segment is stably wrapped above the main chord, and D sentences are added on the basis of 4 sentence bodies to strengthen the sense of ending at the end of the music. e theme material is very simple. In the latter two sentences, the rhythm of attachment points is introduced to form a comparison with the previous materials. e ending uses the d-phrase, and the rear materials end in the way of voice part alternation. For Mongolian works, see Table 5 [27].
"Chulugen," a single musical form, is composed of two single segments a and aʹ, and segment a is composed of two phrases a and B. Among them, the phrase a can be divided into two small music sections with 3 + 2 structure, and the phrase B has 2 + 2 music section structure. e material of the whole song is concentrated and runs through with "B feather mode," which belongs to a single tonal music segment structure. See Table 6.
"Heyinghua" is a typical single phrase multi paragraph structure. Each phrase is 4 bars long with interlude in the middle. e introduction and excessive use of the same material. In the 'B feather mode, other phrases are shown in the F angle mode. e form of introduction is simple and clear, which is the norm of folk music. e theme phrase consists of 2 + 2 stanzas. e latter stanza is like the answer sentence of the former stanza. It ends with a Shang tone and stays on the main chord in "D major," giving people a sense of unfinished meaning and slight expectation. In the six theme presentations, the melody structure is relatively stable, and the main melody has been played by the first Matouqin. e changes of harmony and texture strengthen the audience's memory of the theme [28].
In the traditional music of China and Mongolia, the horizontal line plays an absolutely dominant role. As a nation given to singing and dancing, Mongolia possesses a large treasury of folk songs and folk instrumental music, which, through continuous excavation and preservation by predecessors, is now displayed in a variety of forms. In folk activities, multi-part music has long blossomed everywhere, but its form is generally simple, focusing on melodic imitation of a single part, and its creation follows a mode of linear thinking. Over the long history of national music, many singers and instrumentalists established simple vertical and horizontal sound-combination experience in various ways, that is, what we today call harmony. Some of these acoustic practices were established consciously, others unconsciously; they often linger between regularity and irregularity. In their creation, the composers of China and Mongolia rely on the national character of Mongolian music, mostly built around the pentatonic mode. Over the years, people gradually stopped judging the quality of music by simple hearing or experience alone and began to pay more attention to the details of the music itself. The chamber-music form of the Matouqin ensemble has appeared only in the past 30 years, and its creative groups are more complex, including composers, Matouqin players, conductors, and so on. With the frequent cultural exchanges between the two countries in recent years, the form of the music is also constantly changing: more and more Chinese composition students choose to pursue further study in Mongolia, and their style draws close to the creative characteristics of Mongolian music. In the study and application of harmonic technique, owing to special historical reasons, China interrupted its cultural exchanges with Europe, America, and other countries and implemented the policy of "leaning to one side" toward the Soviet Union. Therefore, a large number of excellent Soviet works and related books spread widely in China, and the Sposobin harmony system had a far-reaching impact on the development of Chinese music. For Mongolia, adjacent to Russia, the concept of harmony was likewise deeply influenced by it, while being bolder and more open in creation and attentive to the expression of diversified musical ideas. Modulation (tone transfer) is an indispensable and important technique in multi-part music creation.
Through changes of tonal color, the music's content can be expressed and its image shaped. The power of harmony can be enhanced through modulation, which helps to propel the development of the music and highlight the contrast and balance between its parts. Through modulation, with the help of tonal color change and functional relationships, the harmony in a work gains rich color contrast and dynamic impetus. Composers in both China and Mongolia favor superposition of thirds as the basic method of chord construction, and are also committed to coordinating and expanding this harmonic method with the style of the pentatonic mode. This has many similarities with China's pentatonic-mode harmony theory; it can also be studied using the national harmony theory proposed by Professor Fan Zuyin, such as the several fixed harmonic structures formed in the works, including harmonies built on thirds, on fourths and fifths, and on seconds. In chords that omit the third, the omission produces an empty, plain harmonic sound, which appears mainly as texture or as an outer-frame interval in the works. The chord structure with an added sixth has independent harmonic meaning; attaching a sixth interval to a triad is a common harmonic device in folk music. In Mongolian works, modulation is more frequent, but the overall tonal motion still moves toward the subdominant. Through the author's research, it is found that the technique of downward modulation has a basis in musical culture: over the long history of Mongolian music, the concept of tonality in folk songs has had a far-reaching impact on later composers, which is also why most Mongolian works prefer to modulate downward.
Conclusion
This paper has introduced the background, significance, and research status of music chord recognition. Chord recognition is an important research topic in music information retrieval; it involves music theory, signal processing, machine learning, and artificial intelligence, and its applications are extremely wide, including query-by-humming retrieval, audio detection and segmentation, music scoring systems, and so on. In recent years, auditory models have been widely developed and applied in music information retrieval with good results; an auditory model is therefore applied to chord recognition in this paper, and the experimental results show that the method has development prospects. First, the auditory image features of music chord segments are extracted: the one-dimensional music signal is converted into two-dimensional image features, and then the local features of the auditory images, namely SIFT feature vectors, are extracted. Then, using spatial pyramid matching (SPM), the local feature vectors are integrated into a single feature vector representing the complete auditory image, i.e., the chord feature of the music. Second, pattern recognition based on the sparse representation classifier (SRC) has achieved great success in image scene classification, target recognition, and face recognition; SRC has also been introduced into music genre classification, classical music classification, and music chord recognition with good results. Therefore, this paper uses the SRC method to recognize chords. Finally, the experiments were carried out under the optimal parameter settings, and the results show that the combination of auditory image features with SRC recognition performs best. The sounding principle of the Matouqin differs from that of other musical instruments. Most of the string instruments we see have strings each made of a single strand — for example, the violin, cello, and guitar, as well as the Chinese erhu and Tibetan stringed instruments. The Matouqin is different: although it has only two strings, each string is composed of hundreds of horsetail hairs. Because of the differing length and tension of the horsetail hairs, the timbre of the Matouqin will never be as "clean" as that of the violin or erhu, but it is precisely this "uncleanness" that gives the Matouqin its unique timbre and is the root of its unique charm. In addition, instruments such as the Sihu and Sanxian are also used by other ethnic groups and can be difficult to distinguish from similar instruments, whereas the Matouqin is unique, unlike any instrument of other ethnic groups, and has therefore become the most representative musical instrument of Mongolia.
Data Availability
No data were used to support this study.
Conflicts of Interest
The authors declare that there are no conflicts of interest with any financial organizations regarding the material reported in this article.
Pregnancy and perinatal outcomes in pregnancies resulting from time interval between a freeze-all cycle and a subsequent frozen-thawed single blastocyst transfer
Background Adverse obstetric outcomes are correlated with altered circulating hormone levels at the time of implantation by the trophectoderm. What's more, the embryo freezing process may also have adverse effects on perinatal outcomes. This study aims to evaluate whether increasing the time interval between a freeze-all cycle and a subsequent frozen-thawed single blastocyst transfer has any effect on pregnancy and perinatal outcomes. Methods This retrospective cohort study included the first single blastocyst transfer in artificial cycles of all patients who underwent a freeze-all cycle between January 1st, 2016 and September 30th, 2018. All patients were divided into two groups according to the time interval between oocyte retrieval and the day of the first frozen-thawed embryo transfer (FET): Group 1 (immediate FET cycles) and Group 2 (delayed FET cycles). Results No significant differences were found between the two groups in the rates of clinical pregnancy, live birth, biochemical pregnancy and pregnancy loss, even after adjusting for measured confounding. Regarding perinatal outcomes, gestational age, birth weight, delivery mode, fetal sex, preterm birth, gestational hypertension, GDM, placenta previa, fetal malformation and low birthweight also did not vary significantly between the two groups. Only macrosomia occurred more frequently in Group 2 than in Group 1 (AOR 3.886, 95% CI 1.153–13.103, P = 0.029) after adjustment with a multiple logistic regression model. Conclusions Delayed FET cycles for blastocyst transfer following freeze-all cycles may not improve pregnancy outcomes; on the contrary, postponement of FET may increase the risk of macrosomia. Therefore, FET cycles for blastocyst transfer should be performed immediately to avoid adverse effects of delay on perinatal outcomes.
Background
Since the first baby conceived from a frozen-thawed embryo transfer (FET) was born in 1983, embryo cryopreservation technology has progressively advanced [1]. As is well known, its widespread application stems from multi-follicular stimulation, which enables an excessive number of oocytes to be obtained and eventually increases the cumulative live birth rate [2,3]. With the increased safety and efficacy of embryo cryopreservation, the freeze-all strategy has evolved to reduce the risk of ovarian hyperstimulation syndrome (OHSS) and improve the endometrial environment [4–7]. During in-vitro fertilization (IVF), controlled ovarian hyperstimulation (COH) is a double-edged sword: it increases the number of oocytes retrieved and enhances the cumulative pregnancy rate, but at the same time leads to supraphysiological hormone levels. Embryo implantation and subsequent placental growth and maintenance are correlated with altered circulating hormone levels at the time of implantation by the trophectoderm [8]. These levels affect not only endometrial receptivity and early implantation but also placentation and subsequent fetal growth [9–12]. Some studies have clearly demonstrated an increased risk of disorders related to abnormal placentation in patients with elevated estradiol (E2) levels and suggested that ovarian hyperstimulation may alter endometrial angiogenesis [8,10,13,14]. Although the detrimental effect of supraphysiological hormone levels on pregnancy and perinatal outcomes seems clear, our concerns are how long it takes for the endometrium to return to prestimulation functionality after COH, and whether supraphysiological hormone levels have a negative effect on a subsequent treatment such that FET should be postponed. Meanwhile, the embryo freezing process itself may also adversely affect perinatal outcomes [15,16]. Therefore, we performed this study to evaluate whether increasing the time interval between a freeze-all cycle and a subsequent FET has any effect on pregnancy and perinatal outcomes.
Study population and design
We conducted a retrospective cohort study including all patients (≤40 years old) who underwent a freeze-all IVF or intracytoplasmic sperm injection (ICSI) cycle and a subsequent first single blastocyst transfer in artificial cycles at the Reproductive Medical Center of Tongji Hospital between January 1st, 2016 and September 30th, 2018. Only the outcomes of the first FET cycles performed after a freeze-all IVF or ICSI cycle with GnRH agonists or antagonists were assessed. The indications for a freeze-all cycle were as follows: high progesterone concentration (>1.5 ng/ml), prevention of OHSS (number of oocytes retrieved >20 or estradiol concentration >7000 pg/ml), hydrosalpinx (diameter >3 cm), or an inappropriate endometrial environment. Exclusion criteria were: 1) blastocyst biopsy for preimplantation genetic diagnosis (PGD) or preimplantation genetic screening (PGS); 2) multiple COH cycles before FET; 3) frozen or donated oocytes; 4) patients with hypertension, diabetes mellitus, abnormal glucose tolerance or insulin resistance; 5) use of a GnRH agonist during the frozen-thawed cycle; 6) uterine malformation.
Protocol of COH during freeze-all cycles
Conventional IVF or ICSI was conducted for all patients. The protocol of COH was determined individually, in combination with GnRH agonists or antagonists. Serial vaginal ultrasonography was used to monitor the ovarian response. When two leading follicles reached a mean diameter ≥18 mm, HCG (10,000 IU, EMD Serono) was administered to trigger ovulation. On the day of HCG injection, serum concentrations of E2 and progesterone were measured using an Immulite Automated Analyzer System (ECL2012, Siemens, Germany). Oocytes were retrieved transvaginally 34–36 h after HCG injection [17].
Embryo culture, vitrification and warming
In IVF cycles, every oocyte was inseminated with 10,000 motile spermatozoa 4 h after oocyte retrieval; patients with severe oligospermia or previous fertilization difficulty received ICSI. Fertilized oocytes were then cultured in G1 medium for 2 more days. All embryos from IVF or ICSI were checked on the morning of day 3 after oocyte retrieval (approximately 69 h after initial insemination) [18]. No embryo was transferred in a fresh cycle; all available embryos were cryopreserved by vitrification for subsequent frozen-thawed cycles [19]. Embryos were vitrified using the Cryotop device and commercially available vitrification solutions (Kitazato, Japan) as full-to-expanded blastocysts on day 5 or day 7 of embryo culture. The single best blastocyst was warmed on the day of embryo transfer. During the warming procedure, vitrified embryos were warmed to 37°C using a vitrification-warming kit, and the warmed blastocyst was then cultured for at least 2 h before further evaluation [18].
Endometrial preparation and embryos transfer
Endometrial preparation for FET cycles used an artificially supplemented cycle monitored by vaginal ultrasonography. Estradiol valerate (Progynova, Germany) was administered orally at a dose of 2 mg twice daily from day 2 to day 10 of the menstrual cycle until the endometrial thickness exceeded 7 mm; then 40 mg of progesterone intramuscularly and 20 mg of progesterone orally were given daily. Embryo transfer was performed as described above except for the day of transfer. Only one blastocyst was transferred.
Definition of time interval
The time interval between a freeze-all cycle and the first FET cycle was defined as the interval between oocyte retrieval and the day of the first frozen-thawed blastocyst transfer. As shown in Fig. 1, we divided all patients into two groups: Group 1, ≤40 days after oocyte retrieval; Group 2, more than 40 days after oocyte retrieval. This cutoff was devised by adding the length of a menstrual cycle (28–35 days) to an extra interval for embryo culture (5–6 days), and we finally chose 40 days as the cutoff value. Patients assigned to Group 1 had an immediate FET; otherwise, they had a delayed FET in Group 2.
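Expressed as a small sketch (the rule as defined above; the function name and dates are illustrative assumptions), the grouping is simply a 40-day threshold on the retrieval-to-transfer interval.

```python
from datetime import date

def fet_group(retrieval: date, transfer: date, cutoff_days: int = 40) -> int:
    """Group 1 = immediate FET (interval <= 40 days), Group 2 = delayed FET."""
    return 1 if (transfer - retrieval).days <= cutoff_days else 2

print(fet_group(date(2017, 3, 1), date(2017, 4, 5)))   # 35 days -> 1
print(fet_group(date(2017, 3, 1), date(2017, 5, 20)))  # 80 days -> 2
```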
Main outcome measure and statistical analysis
Basic demographic characteristics were compared between the two groups using the Mann-Whitney test (for continuous variables) and chi-square or Fisher exact tests (for categorical variables). In addition to baseline characteristics, the pregnancy outcomes included the rates of clinical pregnancy, biochemical pregnancy, live birth and pregnancy loss. A multiple logistic regression analysis was performed to compare the associations between the two groups.
For patients with a singleton live birth, the perinatal outcomes were the main outcomes of our study, including gestational age, birth weight, fetal sex, preterm birth (<37 weeks), gestational hypertension, gestational diabetes mellitus (GDM), placenta previa, fetal malformation, macrosomia (≥4000 g) and low birthweight (<2500 g). A multiple logistic regression analysis was also used to assess these associations.
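A hedged sketch of this adjusted comparison (not the authors' code; the file and column names are assumptions for illustration): a multiple logistic regression of a binary outcome such as macrosomia on group membership, adjusting for the covariates named in the paper, with exponentiated coefficients read as adjusted odds ratios (AORs).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fet_cohort.csv")  # hypothetical per-cycle dataset

model = smf.logit(
    "macrosomia ~ C(group) + age + bmi + endometrial_thickness + icsi"
    " + male_infertility + endometriosis + pcos + C(coh_protocol)"
    " + many_oocytes + high_progesterone",
    data=df,
).fit()

print(model.summary())
print(np.exp(model.params))  # adjusted odds ratios, e.g., AOR for group 2
```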
Results
A total of 1025 FET cycles performed following a freeze-all IVF or ICSI cycle were included in the analysis. All patients were divided into two groups: Group 1 with 207 FET cycles; Group 2 with 818 FET cycles. The majority of FET cycles were initiated after more than a menstrual cycle (79.8%).
Patients' general characteristics in freeze-all cycles
Patients' general characteristics in freeze-all cycles for the two groups are shown in Table 1. No significant differences were found between the groups with regard to BMI, baseline FSH, AFC, years of infertility, gonadotropin dose or number of oocytes retrieved. What's more, the rates of patients with >20 oocytes retrieved and progesterone >1.5 ng/ml were also comparable in the two groups. However, age, infertility diagnosis, method of fertilization, duration of stimulation, protocol for COH, E2 and progesterone levels, and endometrial thickness all differed significantly between the two groups. Regarding the indication for IVF/ICSI, there were no significant differences in the rates of pelvic and tubal factor, endometriosis or PCOS; the rate of male infertility factor was significantly higher in Group 2.
Relationship between time interval and FET pregnancy outcomes
Patients' baseline characteristics and pregnancy outcomes for the FET cycles are reported in Table 2. Among the baseline characteristics, only age and method of fertilization differed significantly between the two groups. For the pregnancy outcomes, no significant differences were found between the groups in the rates of clinical pregnancy, biochemical pregnancy, live birth or pregnancy loss. To eliminate the influence of baseline characteristics on the pregnancy outcomes, we fitted a separate multiple logistic regression model for each pregnancy outcome, adjusting for age, BMI, endometrial thickness, ICSI, male infertility, endometriosis, PCOS, protocol for COH, >20 oocytes retrieved and progesterone >1.5 ng/ml (Table 3). No association was found between the two groups in the rates of clinical pregnancy, biochemical pregnancy, live birth or pregnancy loss.
Relationship between time interval and FET perinatal outcomes
Of all 1025 cycles in our study, 507 FET cycles resulted in a live birth. As shown in Fig. 2, three patients with twin live births were excluded from Group 1, and two patients with twin live births and one patient without perinatal outcome data were excluded from Group 2. Finally, 501 FET cycles with a singleton live birth were included in the analysis. Further details of the patients with singleton live births are presented in Table 4. Except for the rate of male infertility factor, BMI, endometrial thickness and the other indications for IVF/ICSI did not differ significantly between the two groups; however, age and method of fertilization did. Regarding perinatal outcomes, no significant differences were found in gestational age, birth weight, delivery mode or fetal sex. The incidences of preterm birth (<37 weeks), gestational hypertension, GDM, placenta previa, fetal malformation and low birthweight (<2500 g) were also comparable between the two groups. Only the incidence of macrosomia (≥4000 g) was significantly higher in Group 2 than in Group 1 (3.0% versus 10.5%). To further clarify the association between time interval and perinatal outcomes, as shown in Table 5, the incidences of caesarean delivery, male fetus, preterm birth, gestational hypertension, GDM, placenta previa, fetal malformation and low birthweight did not vary significantly between the two groups, even in a multiple logistic regression model. Only macrosomia occurred more frequently in Group 2 than in Group 1 (AOR 3.886, 95% CI 1.153–13.103, P = 0.029).
Discussion
In this study, we analyzed 1025 FET cycles of single blastocyst transfer following freeze-all cycles and found that immediate FET cycles resulted in pregnancy outcomes comparable to those of delayed FET cycles, whereas postponement of FET may increase the risk of macrosomia. To our knowledge, a successful pregnancy depends on a complex process involving interactions between the endometrium and the embryo [20]. With the popularity of the freeze-all strategy, embryos are transferred into a more physiological intrauterine environment, which avoids the asynchrony between endometrial receptivity and embryo development caused by supraphysiological hormone levels during COH [7,21]. A supraphysiological endocrine uterine environment and suboptimal endometrial development may lead to abnormal obstetric outcomes [8,10,11]. Meanwhile, previous studies suggest that fresh cycles may be associated with adverse perinatal outcomes compared with FET cycles, such as perinatal mortality, low birthweight and preterm birth [22,23]. A meta-analysis of 13 cohort studies with 126,911 women also found that singleton pregnancies after FET may have better perinatal outcomes than those after fresh cycles [24]. On the other hand, this strategy can significantly decrease the risk of OHSS. Jarvela et al. [10] compared serum progesterone and E2 levels in three groups (spontaneous pregnancies, fresh embryo transfer and FET) and found that both were significantly higher in patients with fresh embryo transfer, correlating negatively with newborn birthweight. However, other studies have suggested that FET cycles can cause cryo-injury, which may affect the genetic potential of embryos and blastomeres; degenerated blastomeres may impair implantation [25]. Therefore, the effects of the duration of supraphysiological hormone exposure and of embryo freezing time on subsequent outcomes deserve attention, and clarity here can strengthen the confidence of physicians who hesitate over whether to start an FET cycle immediately. Postponement of FET cycles may not only increase stress in patients eager to conceive as soon as possible but also add treatment burden. Maas et al. [26] found that immediate FET cycles resulted in higher pregnancy rates than delayed FET cycles. However, Ernest et al. [27] found that high serum E2 concentrations in fresh IVF cycles may adversely affect implantation and pregnancy rates, but did not affect implantation and pregnancy rates in subsequent FET cycles. Santos et al. [1] conducted a retrospective cohort study of 1183 first FET cycles and found that FET cycles performed immediately after a failed fresh embryo transfer had a clinical pregnancy rate similar to those postponed to a later time, supporting the view that deferring FET may not improve pregnancy outcomes. They subsequently focused on patients with a freeze-all strategy and likewise found that immediate FET cycles after freeze-all appeared to yield clinical pregnancy rates comparable to deferred FET cycles [28]. With the popularity of single blastocyst transfer, the risk of multiple pregnancy can be significantly decreased. In our study, we focused on freeze-all cycles with a subsequent single blastocyst transfer in artificial cycles, which reduces the effects of multiple embryo transfer and embryo transfer type on the final outcomes.
We found that immediate FET may not increase the risk of abnormal pregnancy outcomes compared with delayed FET cycles, supporting Santos' viewpoint. However, their study did not address the potential carryover effect on perinatal outcomes, such as preterm birth and birth weight. We therefore performed this study to further explore the effect of the interval after freeze-all cycles on perinatal outcomes and to verify the safety of shortening FET interval times.
Both the unadjusted and adjusted analyses showed that delayed FET may increase the risk of macrosomia, without increasing any other perinatal risks, compared with immediate FET cycles. Some studies have suggested that abnormally high E2 levels can inhibit normal trophoblastic invasion of the decidual and myometrial spiral arteries, causing abnormal placentation and subsequently abnormal pregnancy outcomes [10,13,14,29,30]. What's more, supraphysiological hormone levels may directly affect the peri-implantation embryo and the implantation process by modulating the differentiation and invasive activity of trophoblast cells [8]. Our findings suggest that supraphysiological hormone levels during COH did not affect the outcomes of FET cycles performed after a menstrual cycle, but that extended freezing time may increase the risk of macrosomia. A cumulative meta-analysis suggested that frozen embryos are associated with an increased risk of large-for-gestational-age (LGA) infants and high birth weight [31], and other studies also reported that FET singletons are at increased risk of being born LGA and macrosomic [32,33]. As is well known, frozen-thawed procedures can cause cryo-injury; Capodanno et al. [25] held that cryo-injury could affect the genetic potential of embryos and blastomeres. We therefore speculate that delayed FET cycles may prolong embryo freezing and in-vitro time, eventually increasing the risk of macrosomia. However, the number of singletons with macrosomia was small, so further research is needed to elucidate this. Although our study drew on a large sample of freeze-all cycles and accounted for potentially confounding factors between the two groups, it still has certain limitations. First, it was a retrospective, single-center study, which increases the likelihood of bias; a prospective randomized controlled study would reduce selection bias. Second, the abnormal perinatal outcome rates were calculated only for patients without pre-ART complications and of young age (≤40 years old), which may underestimate the risk of abnormal perinatal outcomes. Third, we focused only on patients with freeze-all cycles, so the results cannot be generalized to all IVF patients. Finally, some potential confounders were missing, such as smoking, abnormal pregnancy history and gestational weight gain. Nevertheless, our study adopted a new approach of dividing patients into two groups according to the interval between oocyte retrieval and the day of the first frozen-thawed blastocyst transfer, which more truly reflects the time the embryo spends in vitro and offers more accurate evidence for infertile patients planning their next FET cycle. What's more, we included only data from patients with single blastocyst transfer, which reduces the impact of confounding factors and provides critical evidence on the impact of FET time intervals after freeze-all cycles on subsequent pregnancy and perinatal outcomes.
Conclusions
Our study showed that delaying FET cycles for blastocyst transfer following freeze-all cycles may not improve pregnancy outcomes; on the contrary, postponement of FET cycles may increase the risk of macrosomia. Therefore, we suggest that FET cycles for blastocyst transfer be performed immediately, to avoid adverse effects of delay on perinatal outcomes.
Funding
The National Natural Science Foundation of China supported the study in terms of data analysis. The funder had no role in the design of the study, data interpretation or the writing of the manuscript.
Availability of data and materials
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.
Ethics approval and consent to participate
The study has been approved by institutional review board of Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology. Informed written consent was obtained from all participants and inclusion criteria were described in detail.
Consent for publication
Not applicable.
High‐Performance Static Induction Transistors Based on Small‐Molecule Organic Semiconductors
The polymeric organic‐static‐induction transistor (OSIT), a solid‐state vacuum triode, has been extensively studied as a promising vertical organic thin‐film transistor. By utilizing polymers as organic semiconductors in OSITs, important performance figures have been achieved, for example, a maximum on‐current output of about 10 mA cm−2, on/off current ratio as high as 105, and a large current gain of 1000. However, even though polymers with higher mobility have been developed, the performance of OSITs has not been significantly improved yet. In this work, record‐high performance OSITs with small‐molecule materials as organic semiconductors are demonstrated. Pentacene as a hole‐transport material for p‐type OSITs can be easily deposited into pinholes of the gate electrode, hence creating effective conducting channels. Excellent characteristics, such as a high on‐current greater than 260 mA cm−2, on/off current ratio up to 3.3 × 105, and a large transmission factor of 99.98% as well as high current gain of 7965, are attained. These results make the small‐molecule organic semiconductor a candidate material for vertical OSITs as well as for organic electronics.
DOI: 10.1002/admt.202000361
Introduction
Since the first report on organic thin-film transistors (OTFTs) by Tsumura et al. in 1986, [1] OTFTs have attracted tremendous attention owing to their advantages for wearable and flexible electronics and for low-cost, large-area device applications. [2–5] Furthermore, thanks to encouraging improvements in device fabrication and interface engineering, [6–8] the charge carrier mobilities achieved in OTFTs can exceed 10 cm² V⁻¹ s⁻¹, [9] clearly surpassing the mobility of amorphous silicon (a-Si). Still, the current densities and operating frequencies of planar OTFTs lag behind those of inorganic transistors suitable for flexible substrates (e.g., amorphous gallium-indium-tin-oxide). [10] One attractive alternative for overcoming the limitations of planar OTFTs is the vertical organic transistor. These devices offer very short channels compared with their planar counterparts because the semiconductor layer thickness can range from a few nanometers to several hundred nanometers. [11–16] In the last few years, organic static-induction transistors (OSITs), widely known as one type of vertical organic transistor, have been developed and analyzed extensively owing to their great potential for high-frequency and high-power operation. [17–21] OSITs mimic the concept of inorganic SITs, which were initially introduced by Nishizawa et al. [22–24] In terms of operating mechanism, the inorganic SIT shows exponential characteristics and does not follow the space-charge-limited conduction law. [25] Organic SITs were first presented by Kudo et al. in 1998. [19] In contrast to inorganic SITs, most OSITs follow the space-charge-limited conduction law. [18,26] In the following years, a growing body of literature has emerged to improve OSITs, such as the device structure shown in Figure 1a. By using polymer semiconductors in OSITs, a maximum on-current output of about 10 mA cm⁻², a large on/off current ratio up to 10⁵, and a current gain of around 1000 have been reached. [20,21,27–32] In terms of device structure, both OSITs and inorganic SITs resemble a vacuum-tube triode in which a grid-like metal film is sandwiched between a hot cathode and a cold anode. [19,21,22] Vacuum triodes have no background carriers in the space from cathode to anode; all carriers are emitted from the hot cathode and modulated by the grid-like metal film. However, it is difficult to fill the nanometer-sized pinholes of an OSIT (Figure 1a) with a solvent and to produce a uniform layer, particularly when using solution-processed polymeric semiconductors. At present, only a few publications show that small-molecule semiconductors prepared by vacuum evaporation are also promising for use in such OSIT architectures. Pentacene, with its low dielectric constant, [33] thin depletion layer, [34] good hole mobility, [35] and simple processability, is a great alternative to polymer organic semiconductors. It was first used in such OSIT architectures by Fujimoto et al.; [36] however, only ordinary performance was observed, for example, a low on/off ratio of 2.62. Thus, to fully exploit static-induction vertical organic transistors, OSITs need to be investigated further, not only in terms of suitable device architectures and processes but also in terms of semiconductor materials.
In this work, a record performance of p-type OSITs using pentacene as the organic semiconductor is demonstrated. A high on-current greater than 260 mA cm⁻², an excellent on/off current ratio as high as 3.3 × 10⁵, a large transmission factor of up to 99.98%, and a high current gain of 7965 are achieved. Compared with solution-processed polymer films, vacuum-deposited films make it easier to control film thickness and stability. Hence, the built-in parallel conducting channels allow highly efficient charge carrier transfer from the source to the drain electrode, enhancing not only the transmission but also the current gain of the p-type OSIT device. Besides, the cost-effective process used in this work shows potential for printed organic electronics. It is therefore expected that the advances in performance and robustness reported here can enable OSITs to be used at a higher level of integration in the future.
Results and Discussion
To fabricate OSITs, colloidal lithography is used to create pinholes in the poly(methyl methacrylate) (PMMA) and aluminum (Al) films, as shown in Figure 1a. Detailed fabrication procedures are summarized in the Experimental Section and Figure S1, Supporting Information. Pinholes in the PMMA (dark brown) and Al film (grey) provide an ideal template for the formation of nanoscale devices, with a pinhole density of around 10⁹ cm⁻² for a polystyrene-sphere solution density of 1.5 wt%. Moreover, the PMMA also acts as an insulator isolating adjacent conducting channels in an OSIT (cf. Figure 1a). Pentacene (sky blue) is used as the organic semiconductor, offering good hole-conduction properties. [37] The channel length of the OSIT is determined by the pentacene film thickness. In the common-source configuration of Figure 1a, the vertical current flowing between the two external electrodes is modulated by changing the voltage of a mesh-like gate (G). [21] Hence, the transmission factor depends on the number of pinholes, which allow current from the source to flow through and reach the drain (D) electrode when a drain-source voltage (V_DS) is applied (Figure 1a). With a positive gate-source voltage applied (V_GS > 0), holes cannot pass the thin gate layer and consequently cannot reach the drain. When a negative gate-source voltage is applied, charge carriers accumulate at the gate electrode and the OSIT turns on. Once accumulated, charge carriers diffuse through the pinholes and are finally collected at the drain. Apart from a slight current that flows into the gate electrode (the gate leakage current I_G), the source current I_S is transmitted through the openings and injected into the drain electrode as the drain current I_D. Figure 1b shows a photograph of a glass substrate including four independent OSITs (active area 0.6 × 0.6 mm²), labeled by numbers and the corresponding red squares.
To make sure the device can be turned off, the openings in the gate of the OSIT should not be too big; however, if the openings are very small, the on-current may be limited. In this regard, the density of unintentional dopants in the semiconductor is of great importance, since it determines whether a pinhole can still be fully depleted by the gate-source voltage. Diameters of 100–200 nm are a good choice of channel size. [21] Hence, polystyrene spheres of 100 nm diameter are used as evaporation masks. On the other hand, a high-performance device requires uniform surface coverage of the polystyrene spheres, avoiding large-scale aggregation, which leads to undesired wide channels. Thus, negatively charged spheres are selected to exploit the electrostatic interaction between the substrate and the particle surface; as a result, the adsorption generally produces singly dispersed particles. To restrain polystyrene particle aggregation, the film is dipped into hot (90 °C) isopropanol for 15 s. This is an effective method to overcome the capillary forces developed by the menisci among particles during solvent evaporation and to keep the dispersed nanoparticles in place. In this step, the nanoparticles on the surface deform slightly, increasing the contact area between particles and substrate. [38] A film covered by well-distributed, high-density polystyrene spheres is thereby formed, as proven by scanning electron microscopy. After removing the polystyrene spheres, numerous nano-holes are left in the Al film without destroying it, as shown in Figure 2c. One square centimeter contains about 10⁹ well-distributed holes, which is essential for reliable device operation over a large area. According to the size distribution in the inset of Figure 2c, 87.8% of the nano-holes are between 90 and 100 nm, 9.5% between 100 and 200 nm, and 2.4% between 200 and 300 nm; only 0.3% of the nano-holes are larger than 300 nm, owing to aggregation of the polystyrene spheres. This size distribution provides good conditions for later forming pinholes in the PMMA and promises good gate control of the source-to-drain current. Afterward, the PMMA without Al coverage is etched away using a reactive-ion etching system at a power of 20 W with an O₂:Ar volume ratio of 2:1. Consequently, pinholes are left in the PMMA, as shown in Figure 2d. In the SEM image, the pinholes in the PMMA appear larger than 100 nm because charging occurs during SEM scanning, making them look larger than they are. Finally, a 350 nm-thick film of pentacene and a gold layer as the source electrode are deposited using a vacuum evaporation system.
The electrical properties of the OSIT are then characterized in the common-source configuration with the electric circuit given in Figure 1a. Figure 3a shows the output curves of the p-type OSITs: the drain current densities are plotted against the drain-source voltage at different applied gate-source voltages. Considering the leakage from drain to gate, positive drain currents are clipped since they do not represent real device operation. According to the output curves, an on/off ratio larger than 10⁵ is achieved at low working voltages (<1.5 V), and the gate-source voltage effectively modulates the drain current of the OSITs. However, saturation behavior is lost in the OSITs (Figure 3b) owing to short-channel effects. [39] A transfer curve of the p-type OSIT is shown in Figure 3c. The gate-source voltage sweep gives a current-voltage characteristic with excellent rectifying behavior as V_GS ranges from 0.5 to −1.5 V. The off-state drain current contains two contributions: the leakage of charges from source to drain, which is not controllable by the gate, and the reverse-biased gate-drain diode. The current-voltage curves of the gate-drain diode are given in the Supporting Information (Figure S2, Supporting Information). Comparing the off-current of the gate-drain diode with that of the OSITs, it is evident that the former governs the off-state of the OSIT (≈10⁻³–10⁻⁴ mA cm⁻²). In particular, only a few charge carriers leak from the source when the gate-source voltage is positive. Under forward bias, the drain current increases exponentially with gate voltage up to V_GS = −0.5 V; in this region, the drain current is limited by the pinhole potential set by the gate, producing an exponential curve. At higher gate potentials (−0.5 V < V_GS < −1.0 V), the drain current increases with a lower slope because the charge channel around the gate is forming. When the gate potential is more negative than −1.0 V, the externally applied gate potential has no further effect because of the shielding effect of the charge accumulation around the gate. The conductive channel is effectively depleted by the applied gate-source potential. The operation mode is similar to that of bipolar junction transistors, which incorporate two back-to-back connected p-n diodes, in contrast to the Schottky diodes employed here. Finally, a current density of up to 260 mA cm⁻² and a maximum on/off current ratio of about 3.3 × 10⁵ are achieved for the best device at V_GS = −1.5 V and V_DS = −3 V. This performance exceeds that of most previous studies of similarly structured OSITs, for example, the maximum current density of ≈10 mA cm⁻² observed for polymer SITs. [27,29,40,41] Because of the nanostructures on the surface, the surface energy cannot be considered constant but varies on the nanometer scale. This effect may cause local variations in the surface wetting of semiconductor inks; in particular, nanometer-sized pinholes might not be filled sufficiently with the semiconductor ink during solution-based processing, or the microstructure of the polymer within the pinholes may be unfavorable. For vacuum-deposited small-molecule pentacene, the film formation appears to be more uniform, which probably contributes to the effective filling of the conducting channels. [42–44] The current gain is plotted against current density in Figure 3d.
The calculation of the current gain is analogous to that of a bipolar junction transistor, i.e., it is given by I_D/I_G. [45] To evaluate the maximum amplification factor of OSITs when V_GS is smaller than V_DS, the differential current transmission factor (α) and current gain (β) are calculated from the following relations (using I_S = I_D + I_G):

α = ∂I_D/∂I_S, β = ∂I_D/∂I_G = α/(1 − α)  (1)

The highest differential gain (β_MAX) is 7965 at a current density of 40 mA cm⁻². Even in the maximum on-state (260 mA cm⁻²), the gain remains close to 340. Besides, the differential transmission is as high as 99.98%. The large differential gain results not only from the low gate leakage current but also from the high on-current density achieved with the device structure demonstrated here. Oxygen-ion etching during the fabrication process is expected to create a dense aluminum oxide film on the surface and side edges of the gate electrode, which effectively reduces the gate leakage.
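A small sketch (an assumption, not the authors' analysis code) of how α and β in Equation (1) can be computed from measured I_D and I_G sweeps:

```python
import numpy as np

def transmission_and_gain(i_d: np.ndarray, i_g: np.ndarray):
    """Differential transmission factor and current gain per Equation (1)."""
    i_s = i_d + i_g                               # source current
    alpha = np.gradient(i_d) / np.gradient(i_s)   # dI_D/dI_S
    beta = np.gradient(i_d) / np.gradient(i_g)    # dI_D/dI_G
    return alpha, beta

# Consistency check: alpha = 0.9998 implies beta = alpha/(1 - alpha) ~ 5000,
# the same order of magnitude as the reported peak gain of 7965.
```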
To verify the reliability of the colloidal lithography technique, we compare transistors prepared with three different particle solution densities: 0.5, 1.0, and 1.5 wt%. Figure 4a-c shows SEM images of the PMMA surface coated with polystyrene spheres, prepared by immersing the samples in solutions of the corresponding particle densities. According to the SEM images, the polystyrene spheres are uniformly distributed on the PMMA surface. The particle concentrations calculated with ImageJ are 1.32 × 10⁹, 9.15 × 10⁸, and 5.50 × 10⁸ cm⁻², respectively. The concentration ratio of the polystyrene spheres is very close to the solution-density ratio of 3:2:1, which also indicates a quite uniform distribution of the spheres. However, particle aggregation (more than three particles together) is very apparent in Figure 4a because of the higher particle solution density; by contrast, only a few aggregates are present in Figure 4c owing to the lower density. Typically, if many aggregations occur, the device cannot be turned off easily because of the resulting large pinholes, so the solution density of the polystyrene spheres should not be too high.
The electrical properties of OSITs prepared with the different solution densities are also investigated, as shown in Figure 4d,e. According to the gate voltage sweeps, the OSITs prepared with the highest solution density of 1.5 wt% and the lowest solution density of 0.5 wt% show the highest and lowest on-state current densities at a fixed gate-source voltage, respectively. The current in each conducting channel is then calculated by dividing the current densities by the corresponding particle densities, assuming that all polystyrene spheres are individually dispersed. In theory, the current in the conducting channels of the three devices should be the same at each drain-source and gate voltage. Figure 4e shows the resulting average current per pinhole. The average current-per-hole curves for the OSITs with the three different particle densities coincide surprisingly well. A small difference occurs because not all polystyrene spheres are individually dispersed in the experiment. Overall, the colloidal lithography used here is a reliable technique.
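The normalization is simple enough to sketch; the measured pinhole densities are taken from the text, while the on-state current densities below are placeholders for the values read from Figure 4d. Note that 260 mA cm⁻² divided by 1.32 × 10⁹ cm⁻² reproduces the ≈2 × 10⁻⁷ mA per-hole figure quoted in the next paragraph:

```python
import numpy as np

# Particle densities measured from the SEM images (cm^-2), as quoted above,
# for the 1.5, 1.0, and 0.5 wt% solutions.
n_holes = np.array([1.32e9, 9.15e8, 5.50e8])
print("density ratio:", np.round(n_holes / n_holes[-1], 2), "(solution ratio 3:2:1)")

# Hypothetical on-state current densities for the three devices (mA cm^-2);
# the real values are read from Figure 4d.
j_on = np.array([260.0, 180.0, 110.0])
i_per_hole = (j_on * 1e-3) / n_holes      # A per pinhole
print("current per pinhole (A):", i_per_hole)
```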
Moreover, taking the average current value of 2 × 10⁻⁷ mA (maximum current density divided by pinhole density) and a pinhole size of 100 nm from Figure 4e, the current density in the pinhole is 2.5 A cm⁻². In the following, we use the SCLC (Mott-Gurney) equation

J = (9/8) ε ε₀ μ V² / d³,

where d is the semiconductor film thickness, V the applied voltage, μ the charge mobility, and ε and ε₀ the relative and vacuum permittivity of the semiconductor, respectively. When V = V_DS = −3 V, and assuming μ = 0.1 cm² V⁻¹ s⁻¹, the current density in the pinhole is 7.5 A cm⁻², which is in the same range as the estimated value of 2.5 A cm⁻². Because of the Schottky contacts between the metal and semiconductor, not all of the applied voltage drives the SCLC; some voltage drops across the contacts and does not contribute to the SCLC. Accordingly, V − V_bi should be smaller than 3 V, and the experimental current density in the pinholes should be smaller than 7.5 A cm⁻². Therefore, the current in each channel of an OSIT is limited by the SCLC. [21,27]
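The Mott-Gurney estimate is easy to reproduce. The only quantity not stated in the text is the relative permittivity; a value of ≈3.6, typical for pentacene, is assumed here and recovers the quoted 7.5 A cm⁻²:

```python
# Minimal check of the Mott-Gurney (SCLC) estimate quoted above.
eps0 = 8.854e-14   # vacuum permittivity, F cm^-1
eps_r = 3.6        # assumed relative permittivity of pentacene (not in the text)
mu = 0.1           # cm^2 V^-1 s^-1, as assumed above
V = 3.0            # V, |V_DS|
d = 350e-7         # cm, the 350 nm film thickness

J = 9.0 / 8.0 * eps_r * eps0 * mu * V**2 / d**3
print(f"J_SCLC ≈ {J:.1f} A cm^-2")   # ≈ 7.5 A cm^-2 with these numbers
```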
Conclusions
In summary, vertical OSITs based on p-type small-molecule organic semiconductors with record-high performance have been demonstrated. In addition to a large on/off ratio (3.3 × 10⁵) and a large on-current density (260 mA cm⁻²), the OSITs exhibit an excellent transmission factor of 99.98%. Colloidal lithography is used to build the vertical conducting channels. Afterward, oxygen ion etching is used to remove the PMMA not covered by Al. During this process, a layer of aluminum oxide forms on the gate electrode, effectively reducing the gate leakage current. The organic semiconductor pentacene is deposited by vacuum evaporation, which allows the deposition rate, and hence the film thickness, to be tuned easily via the source temperature. Our results show that pentacene fills the pinholes and forms excellent conducting channels for OSITs. This work establishes the foundation for further research on high-performance OSITs with small-molecule semiconductors, and the improved fabrication process also enables the realization of OSITs for use in integrated circuits.
Experimental Section
The glass substrates were first cleaned with N-methyl pyrrolidone, ethanol, and deionized water, followed by an ultraviolet ozone cleaning system. In an ultra-high vacuum (<10⁻⁷ mbar) evaporation system, the bottom electrode, a 5 nm Cr / 50 nm Au layer stack, was realized by continuous deposition through stainless-steel shadow masks. Then, a layer of ≈200 nm-thick PMMA was prepared by spin-coating a solution of PMMA in anisole onto the Cr/Au-coated glass substrate and dried with increasing temperature from 30 to 200 °C, finally held at 200 °C for 3 min, so that uniform PMMA films were obtained. The 100 nm diameter negatively charged polystyrene spheres were adsorbed on the surface of the PMMA by immersion in an ethanol suspension of polystyrene spheres for 3 min. The densities of the ethanol suspensions of polystyrene spheres were 0.5, 1.0, and 1.5 wt%, respectively. After the spheres were adsorbed, a gate electrode (Al) was prepared by thermal evaporation on top of the particles at 1 Å s⁻¹. The particles were then peeled off with tape (3M Scotch). Next, the PMMA at the locations without aluminum coverage was removed by reactive ion etching at 20 W with a 2:1 O₂:Ar volume ratio, and consequently the pinholes were formed. Then, the organic semiconductor pentacene (purchased from Sensient Technologies) was deposited atop at a rate of 0.3 Å s⁻¹ to a thickness of 350 nm. Finally, the top electrode was deposited. The complete devices were subsequently encapsulated in a nitrogen glove box. The electrical transport performance was characterized using a Keithley 4200-SCS parameter analyzer. The scanning electron microscope images were captured using a Zeiss GeminiSEM 500 and an FEI Helios Nanolab 660. Atomic force microscope images were obtained with an AIST-NT CombiScope.
Supporting Information
Supporting Information is available from the Wiley Online Library or from the author.
Mutations underlying Episodic Ataxia type-1 antagonize Kv1.1 RNA editing
Adenosine-to-inosine RNA editing in transcripts encoding the voltage-gated potassium channel Kv1.1 converts an isoleucine to a valine codon for amino acid 400, speeding channel recovery from inactivation. Numerous Kv1.1 mutations have been associated with the human disorder Episodic Ataxia Type-1 (EA1), characterized by stress-induced ataxia, myokymia, and an increased prevalence of seizures. Three EA1 mutations, V404I, I407M, and V408A, are located within the RNA duplex structure required for RNA editing. Each mutation decreased RNA editing in vitro, and editing was likewise reduced in vivo in a mouse model bearing the V408A allele. Editing of transcripts encoding mutant channels affects numerous biophysical properties including channel opening, closing, and inactivation. Thus EA1 symptoms could be influenced not only by the direct effects of the mutations on channel properties, but also by their influence on RNA editing. These studies provide the first evidence that mutations associated with human genetic disorders can affect cis-regulatory elements to alter RNA editing.
EA1-associated mutations alter RNA editing in vitro.
Three known EA1-associated mutations, V404I, I407M, and V408A, lie within the predicted 114-bp RNA duplex, which represents the minimum sequence required for site-specific editing of Kv1.1 transcripts by ADAR2 (Fig. 1a) 15 . Using an RNA-folding algorithm (mfold) 30 , we examined whether any of these mutations were predicted to grossly alter the structure of the duplex region. Results from this analysis revealed that each individual mutation predicted a single-nucleotide mismatch within the duplex at each mutation site, with no further perturbations to the predicted RNA secondary structure and only minimal alterations in the free-energy (ΔG) calculations for each duplex (data not shown) 31 . To test whether these mutations affected the rate of editing for Kv1.1 RNAs, each of the EA1-associated mutations was incorporated separately into constructs encompassing a 463-bp region centered on the known editing site. RNA transcripts were transcribed in vitro using these minigenes as templates, and a range of concentrations for each RNA was subjected to an in vitro editing assay using ADAR2 protein derived from nuclear extracts isolated from HEK293 cells transiently expressing ADAR2 32 . The extent of editing was quantified by high-throughput sequence analysis as described by Hood et al. 33 and used to calculate the rate of editing (Fig. 1b). Results from these studies clearly demonstrated that introduction of any of these EA1-associated point mutations into the wild-type sequence was sufficient to decrease the editing rate for Kv1.1 transcripts in vitro. Furthermore, the magnitude of this rate decrease corresponded to the proximity of the mutation to the editing site (I400V), with the most severe deficit observed for the V404I mutation (81% rate reduction at 2 nM RNA), and a 58% and 17% reduction in editing rate for the I407M and V408A mutations, respectively.

Figure 1. (a) The predicted secondary structure for a portion of the wild-type (WT) Kv1.1 pre-mRNA is indicated, with the positions of the A-to-I editing site (I400V) and three non-synonymous mutations associated with EA1 shown in inverse lettering. (b) Wild-type and mutant Kv1.1 RNA minigenes, encompassing the duplex region required for editing, were transcribed in vitro and incubated with nuclear extracts prepared from HEK293 cells transiently expressing rat ADAR2. The extent of editing was quantified by high-throughput sequence analysis as described previously 33 and used to calculate the editing rate. Single exponential curves were fitted to the data for emphasis. Statistical differences were determined for replicates at the 2 nM RNA concentration (mean ± SEM, n = 4 replicate reactions, *p ≤ 0.05; ****p ≤ 0.0001). Small error bars were obscured by the data symbols in some cases.
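As a rough illustration of the rate calculation described above (and in the Materials and Methods), the sketch below uses made-up read counts; the 2 nM substrate concentration and 2-hour incubation are taken from the text, and the 50 μL reaction volume from the in vitro editing protocol:

```python
# Sketch of the editing-rate calculation, using made-up read counts.
# At an A-to-I site, edited transcripts are read as 'G' and unedited
# transcripts as 'A' after reverse transcription.
reads_A, reads_G = 6100, 1900            # hypothetical counts at the I400V site
substrate_fmol = 2e-9 * 50e-6 * 1e15     # 2 nM in a 50 uL reaction -> 100 fmol
reaction_h = 2.0                         # 2-hour incubation (from Methods)

edited_fraction = reads_G / (reads_A + reads_G)
rate = edited_fraction * substrate_fmol / reaction_h   # fmol edited per hour
print(f"edited fraction = {edited_fraction:.3f}, rate ≈ {rate:.2f} fmol/h")
```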
A mouse model of EA1 (V408A/+) alters RNA editing in vivo.
To date, only one mouse model of EA1 has been developed 34 . Mutant mice homozygous for the V408A allele die between embryonic day 3 (E3) and E9, whereas V408A/+ heterozygotes are characterized by stress-induced ataxia as well as attenuated cerebellar Purkinje signaling, which has been attributed to action potential broadening at basket cell boutons leading to increased GABA release 34,35 . To determine whether the presence of the V408A mutation inhibited Kv1.1 editing in vivo, we isolated RNA from multiple dissected brain regions and spinal cord of wild-type and V408A/+ mutant animals to determine RNA editing profiles by high-throughput sequence analysis of Kv1.1 transcripts. Since this deep-sequencing approach generates sequence reads covering both the V408A mutation and the editing site, it was possible to quantify allele-specific editing profiles in V408A/+ heterozygotes. Results from this analysis indicated that the extent of editing for the wild-type allele in V408A/+ mutant mice was similar to that observed in wild-type animals. Editing for the mutant V408A allele showed a 59% reduction in site-specific editing efficiency in all tissues examined when compared to either wild-type littermates or the V408A/+ wild-type allele (Fig. 2).

Figure 2. The extent of editing for the wild-type and mutant alleles in heterozygous V408A adult mice (V408A/+), compared to wild-type littermates, was determined for RNA isolated from dissected brain regions and spinal cord by high-throughput sequence analysis (mean ± SEM, n = 4, ***p ≤ 0.001, ****p ≤ 0.0001). Cbl, cerebellum; Hyp, hypothalamus; Hip, hippocampus; Ctx, cortex; Str, striatum; Olf, olfactory bulb; Sp C, spinal cord.
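A minimal sketch of the allele-specific tally described above. The read layout (position offsets, the T→C change marking the V408A allele, and the counts) is hypothetical; only the bookkeeping logic reflects the analysis:

```python
from collections import Counter

def allele_specific_editing(reads, mut_pos, edit_pos):
    """Tally editing separately for the wild-type and V408A alleles.

    Each read is a string covering both the mutation site and the editing
    site (positions here are hypothetical read offsets). The V408A allele
    carries a T->C change in the valine codon; edited sites read as 'G',
    unedited sites as 'A'.
    """
    counts = Counter()
    for r in reads:
        allele = "V408A" if r[mut_pos] == "C" else "WT"
        counts[(allele, r[edit_pos] == "G")] += 1
    out = {}
    for allele in ("WT", "V408A"):
        n_ed, n_un = counts[(allele, True)], counts[(allele, False)]
        out[allele] = n_ed / (n_ed + n_un) if (n_ed + n_un) else float("nan")
    return out

# Toy reads: position 0 = mutation site, position 1 = editing site.
reads = ["TG", "TA", "TG", "CA", "CA", "CG", "TG", "CA"]
print(allele_specific_editing(reads, mut_pos=0, edit_pos=1))
```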
Gating properties are altered between non-edited and edited Kv1.1 channels harboring EA1 mutations.
All three EA1 mutations can affect the rate of editing in vitro and the V408A allele can reduce the extent of editing in vivo (Figs 1b and 2). Although editing and EA1 mutations separately have been shown to alter the biophysical properties of Kv1.1 channels, it is unknown whether editing may cause unique effects when paired with these EA1-associated mutations. Similarly, it is unclear whether the phenotypic alterations observed in patients bearing the V404I, I407M, or V408A mutations result from changes in channel function mediated by these missense mutations alone or in concert with their effects upon editing. To address these questions, Xenopus oocytes were injected with in vitro transcribed RNAs encoding either the non-edited (N) or edited (E) isoforms of the wild-type, V404I, I407M, or V408A Kv1.1 subunits, expressed as homotetramers.
The voltage dependence of activation for each channel subtype was analyzed to determine editing-dependent changes, and representative traces for each channel are shown in Fig. 3a. The relationship between macroscopic conductance and voltage was quantified for each channel type. For most constructs, this was derived from normalized tail current measurements; however, V408A E closed too quickly for accurate measurement of tail currents, so conductance was measured using outward currents (see equation (1) in Materials & Methods). Conductance (G) versus voltage (V) curves were fit to a Boltzmann function, equation (2), to estimate the midpoint of channel activation (V 1/2 ) and the relative voltage sensitivity (k) (Table 1). Consistent with previous reports for I407M N channels 22 , we observed a 30 mV positive shift in the V 1/2 , but this change was not influenced by editing (Fig. 3b and Table 1). By contrast, V404I also caused a positive shift, but it was more pronounced for the non-edited channel (a shift of 22.1 mV for the non-edited channel, 14.2 mV for the edited channel; Fig. 3c and Table 1). Thus, editing partially ameliorated the alteration in channel function caused by the V404I mutation. V408A channels did not exhibit altered voltage-dependence for either the non-edited or edited isoforms (Supplementary Fig. S1 and Table 1). Editing had little effect on voltage sensitivity for any of the wild-type or mutant homotetrameric channels (Table 1).

To examine how editing affected channel opening kinetics, the time to reach half-maximal activation across a range of voltages was determined. The only editing-dependent change was observed for the I407M mutation. Both I407M channels opened more slowly than their wild-type counterparts, but the slowing was more severe for I407M E channels (Fig. 4a,b and Supplementary Fig. S2), leading to channels with an exacerbated slow-opening phenotype. In addition, I407M E and V408A E channels demonstrated non-linearity in the voltage dependence of their outward currents, particularly at very positive voltages, reaching peak current amplitudes at 50 and 40 mV, respectively, with further voltage steps resulting in decreasing current amplitudes (Supplementary Fig. S3).
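Fitting the G-V relation is a one-liner with standard tools. The sketch below assumes the conventional Boltzmann form for equation (2) and fits synthetic conductance data; the parameter values are placeholders, not the ones in Table 1:

```python
import numpy as np
from scipy.optimize import curve_fit

def boltzmann(v, v_half, k):
    """Standard Boltzmann activation curve, G/G_max = 1/(1 + exp((V1/2 - V)/k))."""
    return 1.0 / (1.0 + np.exp((v_half - v) / k))

# Hypothetical normalized conductance data from tail currents.
v = np.arange(-40, 61, 10).astype(float)    # mV
rng = np.random.default_rng(1)
g = boltzmann(v, -15.0, 9.0) + rng.normal(0, 0.02, v.size)

(v_half, k), _ = curve_fit(boltzmann, v, g, p0=(0.0, 10.0))
print(f"V1/2 = {v_half:.1f} mV, k = {k:.1f} mV")
```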
Closing (deactivation) kinetics were measured by fitting single exponential curves to tail current traces to obtain estimates of tau (τ), the reciprocal of the closing rate constant. Editing resulted in wild-type channels closing slightly faster (Supplementary Fig. S4). In addition, the I407M and V408A mutations greatly increased closing speeds on their own. Editing of I407M channels had only a small effect on deactivation kinetics, while the edited V408A channels closed so quickly that the closing rate could not be accurately measured (Supplementary Fig. S4). Unlike the other mutations, however, V404I led to slower closing speeds, and editing partially ameliorated this phenotype (Fig. 4c,d).

Figure 3 caption (fragment): (c) V404I (mean ± SEM, n = 4-8 oocytes). Normalized conductance was measured from tail current amplitude. Small error bars were obscured by the data symbols.

Table 1. Voltage-dependence of activation. Voltage-dependence of activation was determined by fitting data to a Boltzmann function, equation (2), to determine the midpoint of channel activation (V 1/2 ) and relative voltage sensitivity (k). All data are represented as mean ± SEM, n = 4-8 oocytes for each channel type. Edited (E) and non-edited (N) isoforms of the mutant channels were compared to WT E and WT N channels, respectively: ***p ≤ 0.001; ****p < 0.0001. All types of N channels were compared to their respective E channels.
Slow inactivation (C-type) was examined by analyzing channel function during long depolarizations. The I407M E and V408A E channels demonstrated editing-dependent dysfunction, with a prominent fast component of their inactivation appearing alongside the slow component. Thus, while a single exponential function was sufficient to describe the inactivation of the majority of the channels, I407M E and V408A E required a double exponential fit (Supplementary Fig. S5a). Both the fast and slow components of inactivation for the I407M E and V408A E channels were fast compared to their non-edited counterparts (Supplementary Fig. S5b). By contrast, the extent of inactivation was predominantly mutation-driven, except for the V408A mutation, for which editing decreased the extent of inactivation, bringing it closer to wild-type levels (Supplementary Fig. S5c).
Inactivation kinetics are altered between non-edited and edited EA1 mutant proteins.
Previous studies by Bhalla et al. 15 found that the most profound change in channel function between non-edited and edited isoforms of the wild-type Kv1.1 channel was a change in the rate of recovery from channel inactivation, presumably by altering interactions with an inactivating Kvβ subunit. To determine the effect of EA1 mutations on this biophysical property, non-edited and edited isoforms of the wild-type, V404I, I407M, and V408A channels were co-expressed with Kvβ1.1 to measure N-type, fast inactivation kinetics and recovery from inactivation.

Figure 4. Test potentials were elicited in 10 mV voltage steps from −10 to 80 mV, from a holding potential of −80 mV. (b) Activation kinetics were measured as the time to reach half-maximal current amplitude (mean ± SEM, n = 3-7 oocytes). I407M N and I407M E channels were significantly slowed in their time to half-activation compared to each other, in the voltage range −10 to 70 mV (0.05 > p ≥ 0.0008). I407M N was significantly slower than WT N at all voltages (p ≤ 0.0001) and I407M E was significantly slower than WT E at all voltages (0.01 > p ≥ 0.0001). (c) Representative tail current traces, depicting whole-cell K+ currents, were recorded from oocytes expressing either the V404I N or V404I E channel. Following a holding potential of −80 mV and a depolarizing pulse to 20 mV, test potentials were elicited in 10 mV voltage steps from −120 to −60 mV. (d) Closing kinetics were determined by fitting the tail currents with single exponential curves to determine the associated τ value (mean ± SEM, n = 3-6 oocytes). V404I N channels closed slower than V404I E from −120 to −100 mV (0.05 > p ≥ 0.0066). V404I N channels closed slower than WT N at all voltages (p < 0.0001) and V404I E channels closed significantly slower than WT E channels from −120 to −80 mV (0.01 ≥ p ≥ 0.0005). Small error bars were obscured by the data symbols in some cases.
Oocytes expressing each channel subtype, along with Kvβ 1.1, were subjected to short depolarizing pulses to different voltages and the resulting fast inactivation traces were fit to a single exponential. These studies identified a previously uncharacterized difference in wild-type channels where editing modestly slowed the rate of channel inactivation ( Supplementary Fig. S6). V404I N and E channels inactivated within the wild-type range, without exhibiting any editing-dependent changes ( Supplementary Fig. S6). Inactivation for non-edited isoforms of the I407M and V408A mutants resembled edited, wild-type channels, however, editing of I407M and V408A resulted in drastically slower rates of inactivation for both channels (Fig. 5). This effect was most extreme and apparent at all voltages for I407M E, whereas slowing was only observed for V408A E with shallow depolarizations. Interestingly, the V404I N channels also exhibited a low extent of inactivation where inactivation could not be measured in over half the oocytes tested (data not shown). This variability in the extent of inactivation for the mutant channels is consistent with previous studies demonstrating that the extent of inactivation could be manipulated by varying the aliphatic amino acid residues at the position of the editing site 17 .
Long depolarizing pulses were measured to determine the fast and slow components of Kv1.1 channel inactivation when co-expressed with Kvβ1.1 (Supplementary Fig. S7a). Double exponential curves were fit to the inactivating traces to determine the fast and slow τ values and the relative amplitude of the fast component of the inactivation (compared to the slow component). The fast and slow τ values largely corresponded to the results described for the β-inactivation of the short pulses and the slow inactivation of the long pulses without Kvβ1.1 (data not shown). In wild-type channels, editing led to an increase in the relative amplitude of the fast component of inactivation. An editing-dependent change also was observed for the V404I channels, where editing brought the relative amplitude of the fast component closer to that of the wild-type channel (Supplementary Fig. S7b).

Figure 5. (a,c) Representative β-inactivation traces, depicting whole-cell K+ currents, were recorded from oocytes co-expressing the Kvβ1.1 subunit and either the (a) I407M or (c) V408A channel, in the non-edited (N) or edited (E) isoform. Test potentials were elicited in 10 mV voltage steps from 10 to 80 mV, from a holding potential of −80 mV. (b,d) Inactivation kinetics were measured by fitting single exponential curves to the test pulse currents, to determine the associated τ value (mean ± SEM, n = 3-6 oocytes). (b) I407M E channels were significantly slower to inactivate than I407M N channels at every voltage (p ≤ 0.0001) and both I407M N and I407M E channels were slower than WT N and WT E channels, respectively, at every voltage (p ≤ 0.0001). (d) V408A E channels were significantly slower than V408A N channels from 10 to 50 mV (0.05 > p ≥ 0.0005). V408A E channels were slower than WT E channels from 10 to 60 mV (0.05 > p ≥ 0.0001). V408A N channels were significantly slower than WT N channels at all voltages (p ≤ 0.0001). Small error bars were obscured by the data symbols in some cases.
Finally, the rate of recovery from fast inactivation was measured using a two-pulse protocol, where the fractional recovery at specific time intervals was assessed after the onset of inactivation. A representative experiment for oocytes expressing either I407M N or E channels is presented in Fig. 6a. As previously reported in Bhalla et al. 15 , editing increased the rate of recovery when comparing non-edited and edited isoforms of the wild-type channel (Fig. 6b). All edited isoforms of the mutant channels exhibited a significantly faster rate of recovery than their respective non-edited counterparts ( Fig. 6b and Supplementary Table S1). Recovery from inactivation for the V404I E channel was significantly slower than that of the WT E channel, whereas for the I407M N, I407M E, and V408A N channels it was faster compared to its corresponding wild-type channel. Although the extent of the effect differed for each mutation, editing resulted in a unique and substantial contribution to the rate of recovery from fast-inactivation for each channel type.
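Recovery-from-inactivation fitting can be sketched the same way. The mono-exponential recovery form and the synthetic data points below are assumptions for illustration; the real τ values are those reported in Supplementary Table S1:

```python
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, tau):
    """Fractional recovery from fast inactivation after an interpulse of length t."""
    return 1.0 - np.exp(-t / tau)

# Hypothetical two-pulse data: interpulse duration (ms) versus fractional
# recovery (second-pulse amplitude / first-pulse amplitude).
t = np.array([5, 10, 20, 40, 80, 160, 320], dtype=float)
rng = np.random.default_rng(2)
frac = recovery(t, tau=45.0) + rng.normal(0, 0.02, t.size)

(tau,), _ = curve_fit(recovery, t, frac, p0=(50.0,))
print(f"recovery tau ≈ {tau:.1f} ms")
```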
Discussion
The conversion of A-to-I by RNA editing has been shown to represent an important post-transcriptional modification by which to modulate the function of numerous proteins critical for nervous system function 36 . Previous studies have shown that site-selective editing of transcripts encoding the Kv1.1 channel can affect the rate of recovery from channel inactivation, the binding of drugs and highly unsaturated fatty acids, the regulation of homotetrameric Kv1.1 channel trafficking, and seizure-susceptibility in chronic epileptic rats 15,17,[37][38][39] . While numerous EA1-associated mutations have been identified throughout the KCNA1 coding region, several of these mutations (V404I, I407M, and V408A) are within close proximity to the Kv1.1 editing site (I400 V) and also are predicted to disrupt the critical RNA duplex structure required for this post-transcriptional modification.
To our knowledge, the present studies represent the first demonstration that disease-associated mutations can disrupt critical cis-regulatory elements to change their gene's RNA editing profile, by altering the RNA structure required for site-selective A-to-I conversion. Results using both in vitro and in vivo model systems have shown significant reductions in the extent and rate of editing for Kv1.1 transcripts harboring specific EA1 mutations (Figs 1b and 2). Importantly, because the wild-type allele RNA was unchanged in the V408A/+ mouse model, it is likely that the observed changes in the editing of the V408A allele-derived RNA were solely due to the V408A mutation and not due to any developmental, compensatory changes. Our studies suggest that both synonymous and non-synonymous duplex-disrupting mutations and single nucleotide polymorphisms within Kv1.1 and other edited RNA targets may also affect the expression of their specific edited isoforms, thus altering the activity of the encoded protein products.
These studies also have revealed that the effects of EA1 mutations on Kv1.1 function are far more complex than originally anticipated, as each mutation produces channels with unique biophysical properties that depend on the I400V amino acid identity, mediated by RNA editing. The V404I mutation altered several electrophysiological parameters on its own, but the edited isoform demonstrated less drastic changes than the non-edited isoform, as observed for channel voltage sensitivity, closing kinetics, and the amplitude of β-inactivation (Figs 3c and 4c,d, Supplementary Fig. S7). Although it is tempting to speculate that editing could dampen the defects in channel function resulting from this point mutation, it also should be noted that this mutation largely prevents the RNA from being edited in the first place (Fig. 1b). Thus, it is anticipated that edited V404I isoforms contribute little to the electrophysiological properties of Kv1.1 channels in those tissues where they are expressed. Unlike the V404I channel, however, editing combined with the I407M or V408A mutations led to more severe channel dysfunction than in the non-edited isoforms. Edited isoforms of both I407M and V408A exhibited unusually slow β-dependent inactivation kinetics (Fig. 5) and severe defects in activation at higher voltages (Supplementary Fig. S3) that could possibly be caused by a significantly faster entry into, or slower recovery from, C-type inactivation (Supplementary Fig. S5) 40 . In addition, while the I407M mutation slowed the kinetics of channel opening, the effect was greater for the edited isoform (Fig. 4a,b). These studies also extended the characterization of the I407M mutation, as previous studies of the non-edited I407M channel reported only alterations in expression and voltage-sensitivity 41 , while the present study also shows changes in kinetics (Figs 4a,b and 5). Further characterizations of edited EA1 mutant channels could help us better understand their physiological defects in vivo. These include stimulating the channels with action potential-like commands (trains of depolarizing pulses) to assess cumulative inactivation, as well as probing the voltage-dependence of their inactivation. Since these mutations also led to decreases in the editing of Kv1.1 transcripts, additional experiments will be required to test the relative contribution of editing-dependent and independent effects, especially when the Kv1.1 proteins are co-assembled into heterotetramers with other Kv1.x family members 20,41,42 .

Figure 6. Editing alters the recovery from Kvβ1.1-induced inactivation in V404I, I407M, and V408A channels. Whole-cell K+ currents were recorded from oocytes co-expressing the Kvβ1.1 subunit and either a non-edited (N) or edited (E) isoform of the wild-type (WT) or mutant Kv1.1 channel. (a) Representative I407M N and I407M E recovery traces are overlaid to depict the increased rate of recovery from β-inactivation, typical of an E isoform. A two-pulse protocol was used, eliciting a depolarizing pulse to 80 mV followed by a variable interpulse duration at −80 mV before a final depolarizing pulse at 80 mV. Recovery from β-inactivation was plotted as the time for the second pulse to regain the current amplitude of the first pulse. (b) τ values were determined by fitting single exponential curves to the recovery plots (mean ± SEM, n = 3-7 oocytes, ***p ≤ 0.001, ****p ≤ 0.0001).
Although our studies suggest that edited isoforms of mutant channels represent a smaller portion of the total Kv1.1 population, they may still exert functional effects, particularly in tissues with higher editing levels (such as cerebellum and spinal cord) (Fig. 2). This is supported by previous studies, which have shown that incorporating even one edited subunit into a Kv1.x heterotetramer was sufficient to alter its sensitivity to open-channel blocking molecules 37 . Alternatively, despite the many functional differences observed between edited isoforms of the mutant channels, all recovered from fast inactivation significantly faster than their non-edited counterparts (Fig. 6). As these EA1 mutations reduced their own isoform editing, it is predicted that the overall recovery from fast inactivation in vivo will be comparatively slow, possibly resulting in unanticipated effects that could prevent normal neuronal signaling.
While no clear correlation has been established between the diverse clinical phenotypes of EA1 patients and specific mutations within Kv1.1 [18][19][20][21][22][23][24][25][26][27] , part of the observed variability in symptoms might be explained by differences in RNA editing. These phenotypic differences could arise from EA1 mutations that disrupt the editing duplex, or from overall changes in the regulation of Kv1.1 editing. Although the mechanisms regulating Kv1.1 RNA editing are largely unknown, recent studies have demonstrated that induction of chronic epilepsy in rats led to a 4-fold increase in Kv1.1 editing in the entorhinal cortex 38 . Interestingly, once Kv1.1 editing was increased, recordings in isolated rat brain slices demonstrated that these animals had a decreased sensitivity to 4-aminopyridine-induced seizure-like events, suggesting that increased editing might dampen seizure susceptibility. Similarly, analyses of patients undergoing surgery for mesial temporal lobe epilepsy revealed that increased levels of Kv1.1 RNA editing were negatively correlated with the number of years that the patients had experienced epileptic activity 43 , suggesting that decreased Kv1.1 editing may represent a risk factor for long-term seizures. Graves et al. 27 clinically surveyed two families carrying the same EA1 mutation (F414S) and found that one family exhibited seizures while the other did not, raising the possibility that additional factors, such as differences in editing regulation, could explain these phenotypic differences. As previous studies have shown that open-channel blocking drugs interact less with edited Kv1.1 homo- and heterotetramers 37 , a precise therapeutic strategy for the treatment of Kv1.1-dependent seizures may require not only knowledge of the specific mutation(s) involved, but also the editing profiles of Kv1.1 transcripts.
Materials and Methods
Kv1.1 and Kvβ1.1 constructs. A 463-bp region encompassing the duplex required for Kv1.1 editing was amplified by the polymerase chain reaction (PCR) from human genomic DNA using sense (5′-GCGAAGCTTCCTCTTCATCGGGGTCATCCT-3′) and antisense (5′-GCGGCGGCCGCAGTTTTGGTTAGCAGTGG-3′) oligonucleotide primers in exon 2. To aid in subcloning, the primers incorporated HindIII and NotI restriction sites on their 5′-ends for the sense and antisense primers, respectively. The PCR amplicon was subcloned into the mammalian expression vector pRc-CMV (Thermo Fisher) to generate a wild-type Kv1.1 minigene. To generate the V404I, I407M, and V408A minigenes, the wild-type Kv1.1 construct was mutagenized using the QuikChange II Site-Directed Mutagenesis kit (Agilent Technologies), with the PCR reactions supplemented with 5% DMSO. Full-length mouse Kv1.1 (Addgene) and mouse Kvβ1.1 (Thermo Scientific) cDNAs were subcloned into the Xenopus expression vector pGEM HE 44 . The following full-length constructs were created by PCR mutagenesis from the full-length mouse non-edited Kv1.1 cDNA and validated by sequence analysis: wild-type edited Kv1.1 and V404I, I407M, and V408A mutant Kv1.1 (non-edited and edited) cDNAs.
In vitro analysis of RNA editing. RNAs were transcribed in vitro from the wild-type Kv1.1 minigene, as well as corresponding minigenes harboring the V404I, I407M, and V408A mutations using the MAXIscript kit (Ambion) with T7 RNA polymerase according to manufacturer's instructions. Nuclear extracts were prepared from transiently transfected HEK293 cells expressing rat ADAR2, as described previously, and stored at − 80 °C until required 32,45 . Immediately prior to in vitro editing analysis, nuclear extracts were diluted 1:10 in dialysis buffer [20 mM HEPES, 1 mM EDTA, 1 mM EGTA, 10% glycerol, 300 mM NaCl, 1 mM PMSF, 1 mM DTT, 1X complete, EDTA-free protease inhibitor cocktail (Roche)], before a 2-hour incubation at 30 °C with RNase inhibitors and RNA substrates varying in concentration from 0.125 to 2 nM. Nuclear extracts represented one-third of the total 50 μ L reaction volume which was diluted with the RNA substrate and water to reduce the glycerol concentration into a range necessary for ADAR2 activity. The incubation time was determined empirically by time-course analyses to ensure that editing of the wild-type Kv1.1 minigene was within the linear range of the reaction (data not shown). Reactions were terminated by the addition of TRIzol (Ambion) and RNA was extracted according to the manufacturer's protocol. RNA was reverse-transcribed with random primers using the High Capacity cDNA Reverse Transcription kit (Applied Biosystems) and the extent of RNA editing was quantified by high-throughput multiplexed sequence analysis as described previously 33 . The editing rate was calculated as the fmol RNA converted to the edited isoform divided by the duration of the reaction.
In vivo analysis of RNA editing. All animal care and experimental procedures involving mice were approved by the Vanderbilt University Medical Center Institutional Animal Care and Use Committee and were performed in accordance with relevant guidelines and regulations. Mice harboring the heterozygous V408A mutation (V408A/+ ) were generously provided by Dr. James Maylie (Oregon Health & Science University) 34 . At approximately 6 weeks of age, male V408A/+ and wild-type littermates were euthanized by cervical dislocation under anesthesia followed by decapitation. Six brain regions (cerebellum, hippocampus, hypothalamus, cortex, striatum, olfactory bulb) and spinal cord were dissected from each mouse. Tissues were flash-frozen in liquid nitrogen and RNA was isolated by sonication in TRIzol (Ambion) according to the manufacturer's instructions. RNA was reverse-transcribed and Kv1.1 editing was quantified by high-throughput sequence analysis as described for in vitro RNA editing analyses.
Electrophysiological recording in Xenopus oocytes. All animal care and experimental procedures involving Xenopus laevis were approved by the University of Puerto Rico Institutional Animal Care and Use Committee and were performed in accordance with relevant guidelines and regulations. Kvβ1.1 and full-length, wild-type, V404I, I407M, and V408A Kv1.1 RNAs were transcribed in vitro, capped, and polyadenylated using the T7 mScript Standard mRNA Production System (CELLSCRIPT). Ovary sections containing several hundred oocytes were removed from adult specimens of Xenopus laevis obtained from Xenopus Express (Brooksville, FL). Oocytes were dispersed with type II collagenase and manually defolliculated. Stage V and VI oocytes were then selected by manual inspection for subsequent RNA injection. On day 1, oocytes were injected with 38.6 nL of one of the eight full-length Kv1.1 RNAs encoding edited and non-edited isoforms of wild-type, V404I, I407M, and V408A channels, with or without the Kvβ1.1 RNA. Injection concentrations were optimized individually for each construct, with greater concentrations required for the I407M and V408A RNAs due to protein expression differences previously described in the literature 21,22 . Each α-subunit was injected at a concentration from 2 ng/μL to 1 μg/μL and co-injected with Kvβ1.1 when applicable; concentrations for the Kvβ1.1 constructs were 10-fold higher than each α-subunit, up to a maximum injection concentration of 500 ng/μL. Electrophysiological analyses of oocytes were performed between days 3-5 post-injection using the cut-open oocyte voltage-clamp technique 46 . The external solution consisted of: 20 mM K-glutamate, 100 mM L-glutamate, 2.5 mM MgCl2, 2.5 mM CaCl2, 10 mM HEPES, pH 7.4. The internal solution consisted of: 120 mM K-glutamate, 2.5 mM EGTA, 10 mM HEPES, pH 7.4. The pH of the solutions was adjusted using N-methyl-D-glucamine, as an alternative to NaOH, to limit the introduction of sodium ions into the solutions. To gain electrical access to the oocyte interior, the internal solution was supplemented with 0.3% saponin and used for a brief permeabilization prior to recording. The oocyte membrane potential was controlled using a CA-1B High Performance Oocyte Clamp (Dagan Corporation). Analog current signals were digitized at 100 kHz using an SBC6711 A/D D/A board (Innovative Integration, Simi Valley, CA) and filtered at 5 kHz. To avoid errors introduced by series resistance, only traces exhibiting less than 10 μA were used for analysis. GPATCH M software, kindly provided by Dr. F. Bezanilla (University of Chicago), was used for data collection and clamp control. Leak currents were subtracted using a linear P/4 procedure. Data were analyzed using ANALYSIS software, also provided by Dr. F. Bezanilla, for fitting data with exponential functions and measuring current amplitudes. In addition, single exponential curves were fitted to the recovery from inactivation data using Graphpad Prism (Graphpad Software) to determine the rate constant, τ. As the channels encoded by edited V408A transcripts closed too rapidly for measurements of tail current amplitude, conductance (G) was calculated using Ohm's law, equation (1),

G = I / (V − V_r),    (1)

where I represents the maximal current at the test potential (V) and V_r signifies the reversal potential, determined empirically. For Figs 3-6 and Supplementary Figures S5 and S7, points arising from brief capacity transients were removed for clarity.
Statistical analysis. Statistical differences for the in vitro (Fig. 1b) and in vivo (Fig. 2) editing analyses were determined by 2-way ANOVA with Tukey's multiple comparisons test. Boltzmann functions were fitted using non-linear regression to model conductance-voltage curves (Fig. 3 and Supplementary Fig. S1) and to determine the V 1/2 and k values associated with each replicate (Table 1). Two-sample Student's t-tests were used to compare voltage-dependent parameters (Table 1), long pulse characterization with and without Kvβ1.1 (Supplementary Figs S5 and S7), and recovery from inactivation τ values (Fig. 6b and Supplementary Table S1). The above analyses were conducted using Graphpad Prism (Graphpad Software). To maintain the type I error rate for each experiment at 5%, a Bonferroni correction was applied to each test based on the number of comparisons within each experiment, and statistical significance for any pair of treatment comparisons was redefined according to these adjusted p-values. For Table 1 and Supplementary Table S1, 10 comparisons were made and the significance threshold was adjusted to p ≤ 0.005. For Supplementary Figure S5b, 42 comparisons were made and the threshold was adjusted to p ≤ 0.0012. For Supplementary Figures S5c and S7b, 30 comparisons were made and the threshold was adjusted to p ≤ 0.0017. Analyses of activation and deactivation (Fig. 4 and Supplementary Figs S2 and S4) and inactivation kinetics (Fig. 5 and Supplementary Fig. S6) were performed with linear mixed models using the natural log of the acquired data to better meet model assumptions. Individual group comparisons for p-values were based on Wald tests of model-based predicted (least square) means and appropriate standard errors. Because these data indicate that the measurements of activation, deactivation, and β-inactivation were dependent on voltage, comparisons were made only between values obtained at the same voltage. Data are presented on their original scale to allow for easier interpretation and comparison with the existing EA1 literature. All statistical tests were two-sided and statistical significance was defined as p ≤ 0.05, unless a specified Bonferroni correction was applied.
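The Bonferroni thresholds quoted above follow directly from dividing the 5% experiment-wise rate by the number of comparisons:

```python
# Minimal sketch of the Bonferroni correction applied above: the per-test
# significance threshold is the experiment-wise rate divided by the number
# of comparisons.
alpha = 0.05
for label, m in [("Table 1 / Table S1", 10), ("Fig. S5b", 42), ("Figs S5c/S7b", 30)]:
    print(f"{label}: {m} comparisons -> p <= {alpha / m:.4f}")
```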
Kinetics and Nucleation Dynamics in Ion-Seeded Atomic Clusters
The time-dependent kinetics of formation and evolution of nano-size atomic clusters is investigated and illustrated with the nucleation dynamics of ion-seeded Ar$_n$H$^+$ particles. The rates of growth and degradation of Ar-atomic shells around the seed ion are inferred from Molecular Dynamics (MD) simulations. Simulations of cluster formation have been performed with accurate quantum-mechanical binary interaction potentials. Both the nonequilibrium and equilibrium growth of Ar$_n$H$^+$ are investigated at different temperatures and densities of the atomic gas and seed ions. Formation of Ar$_{n\leq 40}$ shells is the main mechanism which regulates the kinetics of nano-cluster growth and the diffusive fluctuations of the cluster size distribution. The time-evolution of the cluster intrinsic energy and cluster size distributions are analyzed at the non-thermal, quasi-equilibrium, and thermal equilibrium stages of Ar$_n$H$^+$ formation. We have determined the self-consistent model parameters for the temporal fluctuations of the cluster size and found coefficients of the diffusive growth mechanism describing the equilibrium distribution of nano-clusters. Nucleation of haze and nano-dust particles in astrophysical and atmospheric ionized gases is discussed.
I. INTRODUCTION
The nature and characteristics of phase transitions in atomic and molecular systems strongly depend on the inter-particle interactions. These binary interactions determine how rapidly new phases form, and specifically regulate the nucleation of solid or liquid particles from gas and liquid phases. The Classical Nucleation Theory (CNT) [1][2][3][4] was developed and successfully applied to the analysis of nucleation processes in macroscopic and submicron systems under conditions of thermal equilibrium. CNT predictions of the nucleation kinetics are essentially based on the Gibbs statistical rule and on stochastic (diffusive) formation of critical nucleation sizes [1,2].
In contrast to CNT, theoretical modeling of nucleation and growth of nano-particles under local nonequilibrium and non-homogeneous conditions requires precise knowledge of inter-atomic forces and a detailed description of relaxation processes. We found that non-homogeneous nucleation of noble gas atoms, seeded by ions, is regulated via the build-up of ion-atom clusters from a sequence of stable atomic shells. On the other hand, we observe large size fluctuations of nano-clusters under thermal equilibrium, which resemble the diffusive growth of critical clusters in CNT models. Nucleation of nano-size particles in ionized gases is an important process for a wide range of areas in physics, chemistry, astrophysics, and atmospheric sciences. Analysis of the formation of nano-size haze and ice particles in the upper atmospheres of planets, satellites, and exoplanets, and within debris disks, is critically important for investigations of the spectral properties of these atmospheres [5][6][7][8][9][10] and circumstellar disks [11][12][13]. Ions and charged nano-dust particles are major agents stimulating haze and ice cloud formation in the Earth's upper atmosphere. MD simulations, performed with accurate potentials of inter-particle interaction, can successfully describe both nonequilibrium and equilibrium processes of nucleation of nano-size clusters. Previous investigations were focused on the nucleation [14,15], structural properties [16], and phase transitions of pure argon clusters [17,18]. Meanwhile, there is a significant amount of nano-cluster research using supersonic beam experiments analyzing the abundances of clusters with different numbers of atoms [19][20][21][22][23][24][25][26][27][28][29]. Theoretical analyses of cluster structure, for ArnH+ [19,[30][31][32] and small Lennard-Jones (LJ) clusters [33], have concentrated on the explanation of the different "magic numbers" obtained in different experiments. There is also significant work on small noble gas clusters focused on their fragmentation after ionization by electron or proton impact [34][35][36][37][38]. Although nonequilibrium energy relaxation of excited clusters and the subsequent fragmentation involve the same binary potentials as cluster nucleation, the dynamics of these processes differ significantly. In our research, the formation and growth of nano-size ArnH+ clusters is initiated by ionization of H atoms in an Ar and H gas mixture and is simulated with the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS) [39]. The nucleation of ArnH+ clusters has been investigated using the results of MD simulations performed with quantum-mechanically calculated potentials in the canonical (NVT) ensemble with a Nosé-Hoover thermostat [39]. The time-dependent kinetics of the nucleation and growth of critical ArnH+ clusters and their approach to thermal equilibrium have been inferred from the results of our MD simulations. The role of ion seed particles in the phase transition has been analyzed in detail, employing data from our simulations performed with different parameters of the Ar bath gas and densities of H+ ions.
II. SHELL STRUCTURE, ENERGY, AND STABILITY OF NANO-SIZE CLUSTERS
The accurate binary potentials of the Ar-Ar and Ar-H+ interactions have already been used in our investigations of nucleation and growth of ArnHm+ solid or liquid particles, where large clusters include many protons [40]. We briefly provide information on the major characteristics of these potentials.
A. Binary interaction potentials
The comprehensive description of the binary interaction potentials used in our simulations is given in [40]. To summarize, the Ar-Ar binary interaction is a LJ 6-12 short-range van der Waals potential with a well depth of 0.012 eV and an equilibrium atomic separation of 3.75 Å. The Ar-H+ interaction has a well depth of more than 4 eV and asymptotically approaches an r⁻⁴ polarization potential. This interaction is much stronger than the Ar-Ar interaction because the ion's Coulomb field polarizes the neutral atom. Thus, the Ar-H+ potential provides the bulk of the inter-particle attraction in our simulations, while the Ar-Ar interaction does not substantially contribute to the attractive forces in our simulations of ArnH+ cluster growth with n ≤ 40. On the other hand, the short-range Ar-Ar repulsion together with the Ar-H+ potential controls the parameters of the Ar shells. The ion-ion interaction is Debye shielded and is modeled as a Yukawa-type interaction in LAMMPS. A sketch of the potentials is given in Fig. 1. None of the potentials used in our modeling include explicit three-body or higher many-body contributions.

FIG. 1. The screened H+-H+ interaction is Debye shielded and modeled as a Yukawa-type potential, due to the electron and proton fields. The Ar-Ar interaction is modeled by a LJ potential and describes the short-ranged vdW interaction between Ar atoms. The Ar-H+ interaction asymptotically leads to an r⁻⁴ polarization potential and is the longest-range attraction in our simulation. The repulsion and attraction energies of the Ar-Ar and Ar-H+ interactions are approximately equalized at ∼2.5 Å.
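A minimal sketch of these pair interactions, using the quoted Ar-Ar LJ parameters; the Ar-H+ curve shown is only its r⁻⁴ polarization tail, computed with the standard Ar polarizability (≈1.64 Å³) as an assumption, since the full quantum-mechanical Ar-H+ potential used in the simulations is not reproduced here:

```python
# Pair-potential sketch: LJ 6-12 for Ar-Ar (quoted parameters) and the
# classical charge-induced-dipole tail -alpha*e^2/(2 r^4) for Ar-H+.
EPS = 0.012                      # eV, Ar-Ar well depth
SIGMA = 3.75 / 2 ** (1 / 6)      # Å, from the 3.75 Å equilibrium separation
C4 = 14.4 * 1.64 / 2.0           # eV Å^4, with alpha_Ar ≈ 1.64 Å^3 (assumed)

def v_ar_ar(r):
    """Lennard-Jones 6-12 Ar-Ar potential (eV)."""
    return 4 * EPS * ((SIGMA / r) ** 12 - (SIGMA / r) ** 6)

def v_ar_h_tail(r):
    """Asymptotic r^-4 polarization attraction of the Ar-H+ pair (eV)."""
    return -C4 / r ** 4

for r in (2.5, 3.0, 3.75, 5.0, 8.0):
    print(f"r = {r:4.2f} Å: V_ArAr = {v_ar_ar(r):+8.4f} eV, "
          f"V_ArH+ tail = {v_ar_h_tail(r):+8.4f} eV")
```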
B. Geometrical structures and cluster energies: emergence of primitive symmetry structures.
Nucleation of nano-size clusters has been studied in detail for neutral noble gases [41]. Neutral atoms at low temperatures form cluster shells minimizing the cluster energy. Larger cluster shells with increasing numbers of atoms can transform into regular crystal structures if the reservoir temperature is below the melting point. The stability of nano-cluster shells and their geometrical parameters are sensitive to the characteristics of the inter-atomic potentials [41]. Nucleation of atoms seeded by ions or charged nano-particles differs significantly from cluster formation in homogeneous neutral gases. The strong attractive force between a seed ion and Ar atoms dominates the dynamics during the initial stages of ArnH+ formation. This ionic force creates tighter confinement of the ArnH+ cluster, with larger binding energies for Ar atoms in close proximity to the H+. The distance between the Ar atoms and the ion, i.e. the shell radius r, is the same for all atoms belonging to the same shell, and the shell radius r practically does not depend on the number of atoms in the shell. The number of Ar atoms n_s in a specific shell is restricted by the conditions minimizing the cluster potential energy. The maximal value of n_s occurs for the closed shells with n_s = 4, 6, .... The specific geometrical configurations, tetrahedral and octahedral, are the primitive symmetry configurations around the central seed ion that minimize the potential energy U(n) of clusters with closed shells. The primitive symmetry of the first Ar atomic shell, tetrahedral, disagrees with the structure found in the literature [19,[30][31][32], which has the innermost layer of the cluster being a linear Ar-H+-Ar cluster. Nevertheless, we expect that simulations of the cluster growth and cluster size distributions are not affected strongly by the semi-classical approximation used in our MD model. This is because in both the semi-classical and quantum-mechanical models, the cluster growth Ar(n−1)H+ + Ar → ArnH+ occurs via capture of Ar atoms into highly excited configuration states with a subsequent relaxation of the cluster intrinsic energy to the ground state. The energies U(n), which correspond to the minimal total energies at zero temperature, are given in Table 1 together with an indication of the symmetry of the cluster configurations (shells), the Ar-H+ inter-atomic distances r, and the shortest Ar-Ar distances within each shell that provide these minima. Ar atoms from the same shell have identical parameters, such as binding energy or the averaged distance r from the cluster ion. The minimal energies u(n) of Ar atoms in a cluster shell, u(n) = U(n) − U(n−1) = dU/dn, were calculated for small Ar(n<25)H+ clusters and the results are shown in Fig. 2 as a function of n. The detachment energy required to remove an Ar atom from an ArnH+ cluster is |u(n)|. The values of u(n) take into account the mean potential field created by all cluster Ar atoms and the H+.
The single-particle energy u(n) plays the role of the chemical potential at T = 0. The value of u(n) at n ≤ 4 rises sharply with n and then plateaus at n ≥ 4. The four Ar atoms form the stable tetrahedral shell of atoms closest to the seed ion. The energy required to remove any of the Ar atoms from the Ar4H+ shell at T = 0 is |u(n = 4)| ≈ 0.64 eV, which corresponds to a temperature of ∼7300 K. This energy is significantly higher than the thermal energies considered here, and the Ar4H+ clusters are therefore stable in our simulations. Detachment of an Ar atom from the next tetrahedral shell (2T4 in Table 1) requires ≈0.1 eV ∼ 800 K. This shell should be stable at low temperatures (T ∼ 90 K) but can be depopulated under thermal equilibrium at T ∼ 200 K at sufficiently dilute Ar densities [42]. The ion field removes the diffusion barrier [2,3] of nucleation for small clusters and accelerates their nucleation. The charge-induced growth of small clusters occurs via the capture of free Ar atoms into cluster shells. The growth of ArnH+ particles is restricted by thermal detachment of Ar atoms from cluster shells or by the formation of new phases, for example, stable large ArnHm+ clusters or crystals at low temperatures [40]. In our current simulations, stable cluster shells with large numbers of Ar atoms were not observed at low densities or high temperatures. This is because under these conditions the rate of evaporation of cluster-bound Ar atoms increases significantly with respect to the shell growth rate, causing depopulation of the outer shells.

FIG. 2. The energies u(n) of an Ar particle in the cluster shells, u(n) = U(n) − U(n−1) = dU/dn, as a function of the number of Ar atoms n in the cluster. The minimal energy required at T = 0 to remove an Ar atom from the ArnH+ cluster is |u(n)|. The sharp increase of the u(n) function at n ≤ 4 corresponds to construction of the deepest 1T Ar shell. The plateau at n ≥ 5 shows that the shell energies u(n) of larger clusters are weakly sensitive to the cluster size n. The energy |u(n)| is the maximal energy which can be transferred to the thermostat in Ar(n−1)H+ + Ar → ArnH+ transitions. The irregular oscillations of u(n), shown in the inset, reflect the energy alternation of Ar atoms in the open shells. The amplitude of these oscillations is reduced at n ≫ 1.
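The shell-energy bookkeeping u(n) = U(n) − U(n−1) and its conversion to a temperature scale are simple to sketch. The U(n) values below are placeholders, except that the two quoted detachment energies (≈0.64 eV for the first shell and ≈0.1 eV for the second) are built in:

```python
import numpy as np

K_B = 8.617333e-5  # Boltzmann constant, eV/K

# Hypothetical minimal cluster energies U(n) in eV for n = 0..6; only the
# 0.64 eV and 0.1 eV detachment energies quoted above are taken from the
# text, the remaining values are placeholders.
U = np.array([0.0, -1.0, -2.2, -3.3, -3.94, -4.04, -4.14])

u = np.diff(U)              # u(n) = U(n) - U(n-1)
detach_eV = np.abs(u)       # detachment energy |u(n)|
detach_K = detach_eV / K_B  # equivalent temperature scale
for n, (e, t) in enumerate(zip(detach_eV, detach_K), start=1):
    print(f"n = {n}: |u(n)| = {e:5.2f} eV  (~{t:7.0f} K)")
```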
III. METHODS AND RESULTS OF MD SIMULATIONS OF ArnH+ CLUSTER FORMATION
Each MD simulation begins by randomly generating coordinates for the atoms and ions; the atoms and ions are separated by a minimum of 3 Å. The simulations are run at temperatures 90 K < T < 600 K, with a variable number of ions from 0 to 200, and contain a fixed number of Ar atoms (1000) in the simulation box. Ar atom velocities are initialized with the Maxwell-Boltzmann distribution appropriate to the selected temperature, using the LAMMPS built-in "create velocity" function, but the initial velocity of the H+ ions is set to 0. To achieve different densities of Ar atoms and H+ ions, the size of the simulation box is adjusted, with the majority of the simulations performed at an Ar atom density of 10²⁰ cm⁻³. This density is chosen to represent a dense gas and allows for faster convergence of the cluster growth. All simulations are run in the canonical ensemble (NVT) with the Nosé-Hoover thermostat function contained in LAMMPS. The time step for our simulations was 1 fs, and the thermostat temperature damping timescale was 100 fs, to avoid sharp changes in the kinetic energies of the atomic particles.
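The box geometry and velocity initialization follow directly from these parameters. The sketch below reproduces the ≈215 Å box edge implied by 1000 atoms at 10²⁰ cm⁻³ and draws Maxwell-Boltzmann velocities; the 3 Å minimum-separation check and the LAMMPS-specific setup are omitted:

```python
import numpy as np

N_AR, DENSITY = 1000, 1e20                      # atoms, cm^-3
K_B, M_AR = 1.380649e-23, 39.948 * 1.66054e-27  # J/K, kg

box_cm = (N_AR / DENSITY) ** (1 / 3)            # cubic box edge in cm
print(f"box edge ≈ {box_cm * 1e8:.0f} Å")       # ≈ 215 Å

T = 200.0                                       # K, one of the studied temperatures
sigma_v = np.sqrt(K_B * T / M_AR)               # 1D Maxwell-Boltzmann width, m/s
rng = np.random.default_rng(0)
positions = rng.uniform(0, box_cm * 1e8, size=(N_AR, 3))  # Å, uniform random
velocities = rng.normal(0.0, sigma_v, size=(N_AR, 3))     # m/s, per component
print(f"sigma_v(200 K) ≈ {sigma_v:.0f} m/s")
```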
A. Cluster Definition
The clusters are extracted from the simulation results in two stages: first, a geometrical selection of groups of close atoms (clusters) using the DBscan algorithm [43], via its implementation in the Julia programming language through the Clustering.jl package; and second, a computation of the cluster intrinsic energies to verify that all atoms in these clusters are bound.
Stage 1: The DBscan algorithm searches for neighbors around every atom in the simulation, and if another atom is within a defined cutoff distance d, it is selected to be part of a possible geometric ArnH+ cluster. In this first step we only search for clusters that include a single ion and use a cutoff distance of d = 4 Å. This cutoff was chosen from the energy minimization calculations to ensure that the outer Ar shells with n < 40 can be detected. The geometric cutoff selection can include "false" selections, e.g. a group of atoms and ions which are accidentally close to each other in one specific snapshot of the simulation, but not the next. Such groups of unbound particles dissolve in a short time. These candidate ArnH+ clusters are analyzed and rejected, if unbound, at the second stage of the cluster-verification process.
Stage 2: We add a further requirement to our definition: the cluster total energy in the Center of Mass (CM) frame must be negative, since this indicates a truly bound system. The Stage 2 verification also yields the atomic binding energies; this allows comparison with the minimal cluster energies obtained in the minimization calculations for each cluster size (Fig. 2). The irregular n-variations of u(n) reflect the reconstruction of cluster shells upon the addition of new atoms. The n-variations of the optimal configuration of cluster shells have been established in classical and quantum calculations of atomic binding energies [19,30,31]. These variations should vanish as n → ∞, when u(n) approaches the minimal potential energy of an atomic particle on the surface of a macroscopic Ar crystal at T = 0. The total potential energies of the clusters obtained in our simulations are slightly, typically a few percent, higher than the minimal potential energy. This excess of intrinsic energy arises from the vibration of Ar atoms in the cluster (the kinetic energy of thermal motion), since our minimal potential energy is calculated at T = 0.
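The two-stage definition can be condensed into a short routine. The sketch below substitutes scikit-learn's DBSCAN for the Julia Clustering.jl implementation used in the paper, takes min_samples = 2 as an assumption, and stands in a generic pair_energy(r) for the actual binary potentials; the single-ion requirement of Stage 1 is omitted for brevity:

```python
import numpy as np
from sklearn.cluster import DBSCAN

def bound_clusters(pos, vel, mass, pair_energy, eps=4.0):
    """Two-stage cluster definition sketched above.

    Stage 1 groups particles with DBSCAN (cutoff eps = 4 Å); Stage 2 keeps
    a group only if its total energy in the center-of-mass frame is
    negative, i.e. the group is truly bound.
    """
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(pos)
    kept = []
    for lab in set(labels) - {-1}:                  # -1 marks unclustered points
        idx = np.flatnonzero(labels == lab)
        m, p, v = mass[idx], pos[idx], vel[idx]
        v_cm = (m[:, None] * v).sum(0) / m.sum()    # center-of-mass velocity
        kinetic = 0.5 * (m * ((v - v_cm) ** 2).sum(1)).sum()
        potential = sum(
            pair_energy(np.linalg.norm(p[i] - p[j]))
            for i in range(len(idx)) for j in range(i + 1, len(idx))
        )
        if kinetic + potential < 0:                 # bound in the CM frame
            kept.append(idx)
    return kept
```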
B. Simulations of the Nucleation Kinetics
We have performed a set of MD simulations of Ar_nH+ cluster growth at different temperatures and concentrations of Ar atoms and H+ ions. Nascent H+ ions produced at t = 0 in the thermal Ar gas quickly become centers of cluster nucleation. The formation of the first atomic shells around the charge centers is accompanied by a local release of significant energy, so this initial stage of the nucleation process can be described as a non-thermal phase of the cluster formation. A large fraction of the released energy is transferred to the light particles, the protons, and is later distributed between atomic particles and absorbed by the LAMMPS thermostat. Growth of cluster shells leads to diminishing binding energies of Ar atoms in Ar_nH+ clusters. Thus, the energy release is reduced with cluster growth, and the entire system (Ar gas and Ar_nH+ clusters) approaches thermal equilibrium.
The MD simulations provide near-complete information about the spatial and velocity distributions of all atomic particles in the free Ar gas and in Ar_nH+ clusters. Data for the entire simulation are dumped at intervals of δt = 20 ps. This value of δt allows observation of the different phase-space configurations at the considered densities and temperatures of the atomic particles. Analysis and calculations of physical and geometrical parameters are performed using these 20 ps snapshots.
C. Cluster formation and velocity distributions of Ar and H+ particles

The initial stages of cluster growth occur in a nonequilibrium regime, lasting for the first ∼1-5 ns. The duration of this nonequilibrium stage depends on the initial conditions, the Ar gas parameters, and the density of H+ ions. This is apparent from the analysis of the Ar and H+ velocity distributions. After the first few steps (each step is 20 ps long), the Ar and H+ velocity distributions are non-Maxwellian. In each case the majority of the atoms/ions have small velocities, but with long high-velocity tails. The energy source for the hot particles in the high-velocity tails is the formation of small clusters: capture of free Ar atoms into open Ar_nH+ cluster shells with n ≤ 8 is an exothermic process with a large energy release, as shown in Fig.2. The energy release decreases with the growth of outer cluster shells and becomes comparable with typical thermal energies kT.
As the simulation progresses, the peak of the velocity distribution shifts until the distribution becomes nearly Maxwellian. Although an insignificant tail of high velocities persists in the v-distributions for the entire duration of the simulations, the bulk of the atoms/ions remain in a thermalized Maxwellian-type distribution. The time-evolution of the Ar and H+ velocity distribution functions is shown in Fig.3a and Fig.3b. The argon atoms appear cold at 200 ps due to large energy losses: Ar atoms transfer their kinetic energy to the initially "frozen" light particles (the ions) at the beginning of the simulation. This specific aspect of the velocity relaxation does not influence formation of the first Ar_nH+ shell, because the atomic binding energies in that shell are roughly two orders of magnitude larger than thermal energies.
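The thermalization diagnostic described here can be reproduced from velocity snapshots; a sketch comparing an empirical speed histogram against the Maxwell speed distribution (an excess at large v marks the non-thermal tail):

```python
import numpy as np

K_B = 1.380649e-23           # J/K
M_AR = 39.948 * 1.66054e-27  # kg

def maxwell_speed_pdf(v, T, m=M_AR):
    """Maxwell speed distribution f(v) at temperature T."""
    a = m / (2.0 * np.pi * K_B * T)
    return 4.0 * np.pi * v**2 * a**1.5 * np.exp(-m * v**2 / (2.0 * K_B * T))

def compare_to_maxwellian(velocities, T, bins=60):
    """Return (bin centers, empirical pdf, Maxwell pdf) for one snapshot.

    velocities : (N, 3) array of velocity components in m/s.
    """
    speeds = np.linalg.norm(velocities, axis=1)
    hist, edges = np.histogram(speeds, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return centers, hist, maxwell_speed_pdf(centers, T)
```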
Stages of ArnH + cluster formation
Analysis of the results of the MD simulations shows at least three different phases in the formation of nano-sized Ar_nH+ clusters. In our cluster growth simulations, all small clusters with n ≤ 40 have distinct shell structures minimising their potential energies.
The growth of the cluster size n corresponds to the consecutive filling of unoccupied Ar_nH+ shells described in Section 2B. In Figs.4a and 4b, we show an example of Ar_nH+ growth and degradation for times t ≤ 100 ns after 200 H atoms have been ionized and mixed with the Ar gas. The numbers of clusters N_c(n, t) with the specified number of Ar atoms n are shown as functions of the time t with the regular snapshot interval of 20 ps. The simulations have been carried out at an Ar atom density of 10^20 cm^-3 and a temperature T = 200 K. In Figs.4a and 4b we can identify three distinct phases of nano-particle nucleation and growth. The first phase of nucleation is: (a) Nonequilibrium nucleation (t ≲ 2 ns).
During this stage, Ar atoms are "captured" into the deepest shells. The protons create a strong potential field, and the process of capturing Ar atoms into the closest cluster shells releases energies comparable with u(n). Thermal energy fluctuations cannot detach Ar atoms from the first deeply bound shell (Fig.2). Stage (a) is an irreversible nonequilibrium process of formation of the inner atomic shells of Ar_nH+. The reduction of the small-cluster populations N_c(n, t) for n = 1-3 shown in Fig.4a is explained by the capture of free Ar atoms into the closed 1T_4 cluster shell. Outer cluster shells are formed at around t ∼ 0.1-1 ns. The cluster nucleation dynamics can be illustrated by the time-dependent abundances of Ar_nH+ (1 ≤ n ≤ 10) clusters shown in Figs.4a and 4b. At large t, the insignificant populations of larger clusters with n ≥ 5 reflect an efficient thermal evaporation of Ar atoms from the outer cluster shells at T = 200 K. Ar_5H+ clusters are more abundant than other states of the 2T_n shells due to the n-diffusive behaviour of the relatively stable Ar_4H+ clusters. Details of the diffusive regime are described in Section IV.
(b) Quasi-equilibrium growth (2 ns ≲ t ≲ 60 ns). The second stage, in the interval 2 ns < t < 60 ns, is characterized by distinct fluctuations of the cluster size shown in Fig.4a. During this quasi-equilibrium stage of the nucleation kinetics, the tetrahedral Ar_4H+ clusters become the most abundant particles. The fluctuations of the cluster abundances relate to captures and losses of Ar atoms by specific cluster shells. Averaging these fluctuations over statistically significant time-intervals, with values between 10 ns and 20 ns, yields smooth functions describing the average number of clusters N_c(n, t) with n Ar atoms. The numbers of clusters depicted in Fig.4a have been averaged over a statistically significant interval Δt = 10 ns, and the resulting N_c(n, t) are shown in the inset axes of Fig.4a. The steady growth of N_c(n = 4, t) of tetrahedral clusters during stage (b) can be considered as quasi-equilibrium nucleation and growth of a new phase.
(c) Thermal equilibrium growth and size evolution (t ≳ 60 ns).
The growth of N_c(n = 4, t) stops in stage (c) at t ≥ 60 ns, when the system {free Ar atoms + Ar_nH+ clusters} has reached the steady-state regime of cluster nucleation and evaporation, i.e. the system has relaxed to a state of full thermal equilibrium between the different phases (Fig.4a). The averaged numbers of Ar_nH+ remain constant for the entire duration of stage (c), 60 ns < t < 100 ns, and the typical fluctuations of the number n of Ar atoms bound by H+ ions are described by thermal fluctuations. The time boundaries for these stages in Figs.4a and 4b are approximate, since the actual boundaries between stages depend on the Ar and H+ densities and temperatures used in each specific simulation.
Dynamics of the independent growth of ArnH + clusters
Under conditions of the independent growth of Ar_nH+ clusters, the averaged time-dependent characteristics of a single cluster describe the parameters of an ensemble of independent protons. This ergodic statement is not valid for interacting clusters, when consolidation of Ar_nH+ clusters leads to the formation of a new phase such as large Ar_nH+_m nano-crystals or liquid droplets [40]. The nucleation and growth of independent clusters have been simulated for a single H+ ion at different temperatures and at a constant number of Ar atoms N = 10^3 in the simulation box. Nucleation of the smallest Ar_nH+ clusters and the consequent growth of Ar shells have been studied up to t = 100 ns. In Fig.5, the actual time-evolution of the cluster size n(t) is shown for a single Ar_n(t)H+ cluster at T = 90 K. The dashed red curve shows the theoretical cluster growth based on a simplified two-state approximation [44]. In this model, the shell growth and cluster size oscillations are described by transitions between two states: the state of free Ar atoms in the simulation box of volume V, and the bound state of Ar atoms in the outer shell of the cluster of size n(t). We only consider the binding energies ε(n) of Ar atoms in the outer cluster shells and substitute for n(t) its value averaged over fluctuations: n(t) → n̄(t). The ε(n) energy in a specific shell depends on the mean field created by H+ and all Ar atoms attached to the ion. The self-consistent energy u(n) is an essential part of the binding energy ε(n), though ε(n) may include contributions of different shell configurations mixed by thermal fluctuations. We assume, for simplicity, that cluster growth occurs by subsequent captures/detachments of a single Ar atom, n(t) → n(t) ± 1, and that the number of Ar atoms in the gas is large, N ≫ n(t). In the two-state model, each Ar atom can either occupy the outer shell of the Ar_n(t)H+ cluster or stay in the free gas state. The partition function Z of the ensemble of Ar atoms can then be written as

Z = z_Ar^N / N!,   z_Ar = V/λ_T^3 + g(n(t)) exp[−ε(n(t))/kT],   (1)

where z_Ar is the partition function of a single Ar atom under Maxwell-Boltzmann statistics, and λ_T = [2πℏ^2/(mkT)]^(1/2) is the thermal de Broglie wavelength of the Ar atoms. The first term in the expression for z_Ar corresponds to the number of occupied states in the classical gas at fixed temperature and particle density (N ≫ n(t)), and the second corresponds to the contribution of the outer cluster shell with energy ε(n(t)) and statistical weight g(n(t)). The values of g(n(t)) are approximately proportional to the product of the geometrical cluster volume V_c(n(t)) and the thermal momentum-space volume: g(n(t)) ∼ V_c(n(t))/λ_T^3. The presence of N_H+ independent H+ ions in the simulation box would increase the cluster statistical weight by a factor of N_H+.
The two-state approximation describes the populations of these states via an effective chemical potential µ_eff, defined for Ar atoms in the outer cluster shell. The µ_eff value takes into account the different statistical weights of the gas and cluster states [44]. Our simulations include a strongly nonequilibrium initial condition: all deep cluster shells are empty at t = 0. Under this condition, the value of µ_eff regulating the population of the cluster shells should depend on time, µ_eff(t) = µ_eff(N, T, t). This effective chemical potential asymptotically approaches the equilibrium value µ(N, T) when the relaxation processes are accomplished. The population of the cluster shells n̄(t) can be expressed as

n̄(t) = n_c / (1 + exp[−∆(t, T)]),   ∆(t, T) = (t − t_d)/t_T,   (2)

where n_c = n̄(t → ∞) is the average number of Ar atoms in the cluster under the condition of thermal equilibrium. The parameter t_T represents the scaling time of cluster growth during the non-Maxwellian and quasi-equilibrium stages of nucleation. The scaling time-shift parameter t_d takes into account the time-delay in the formation of the first tetrahedral shells and formally describes the motion of µ_eff(t) towards the upper cluster shells with atomic binding energy ε(n). A full derivation of the time-delay in the formation of ArH+, Ar_2H+, and Ar_3H+ molecules would need to include an accurate analysis of few-body collisions, which is outside the scope of the simple two-state model. Thus, the simplified time-dependence of n̄(t) from Eq. (2) cannot provide accurate initial conditions at t = 0, but it describes well the evolution of n̄(t) for the entire time of the simulations.
The reduction of the absolute value |µ_eff(t)| with time leads to the population of cluster shells with smaller binding energies |ε(n)| and thus stimulates an increase of the average cluster size n̄(t). At t → ∞, the asymptotic value of µ_eff(t) matches the equilibrium chemical potential µ(N, T): µ_eff(t) → µ(N, T), and n̄(t → ∞) → n_c. In Fig.5, the value of µ_eff(t) has shifted up to the energies ε(n) of Ar_nH+ clusters with n ∼ 10-14 within a time interval of a few units of the scaling time t_d, i.e. approximately 3t_d ∼ 10 ns. This reflects the time-dependent population of the three deepest cluster shells of Ar atoms. From Eq. (2) we can conclude that at t = t_d the average size n̄(t) has to be about 50% of its thermal equilibrium value n_c = 21.6 at T = 90 K. The values of the kinetic parameters, t_d ≈ 3.2 ns and t_T = 1.4 ns, are inferred from the data depicted in Fig.5. To illustrate the efficiency of the two-state model, we show in Figs.6a and 6b the initial stages of cluster growth for the two time intervals 0 < t ≲ 2 ns and 0 < t ≲ 10 ns.
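A minimal sketch of the growth curve, assuming the logistic form reconstructed in Eq. (2) and the kinetic parameters quoted above; the functional form should be checked against the original figure:

```python
import numpy as np

def mean_cluster_size(t_ns, n_c=21.6, t_d=3.2, t_T=1.4):
    """Two-state estimate of the mean cluster size n(t), Eq. (2).

    t_ns : time(s) in ns. At t = t_d the size is 50% of n_c, and
    n(t) -> n_c as the effective chemical potential relaxes.
    """
    delta = (np.asarray(t_ns) - t_d) / t_T
    return n_c / (1.0 + np.exp(-delta))

t = np.linspace(0.0, 20.0, 201)
n_bar = mean_cluster_size(t)   # compare against the MD trajectory in Fig. 5
```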
The nonequilibrium stage (0 < t ≲ 2 ns) in Fig.6a does not show any detachments of Ar atoms from the first, deepest cluster shell. This nonequilibrium stage of growth cannot yield notable fluctuations of the cluster size n(t), because the binding energies of Ar atoms in small clusters (n ≤ 8) are significantly larger than the thermal energy at T = 90 K, and thermal fluctuations cannot remove Ar atoms from these shells. The rare changes of n(t) in Fig.6a can be considered as attempts to increase the cluster size. The time of formation of the first two tetrahedral shells is estimated as 1-1.5 ns. The scaling value of the thermalization time t_T = 1.4 ns is in good agreement with our evaluation of the velocity relaxation time of Ar atoms. The t_T value can be identified in Figs.4a, 4b, 5, 6a, and 6b as the onset of fast and strong up- and down-fluctuations of the cluster size n(t) with a simultaneous steady increase of the mean cluster size n̄(t). The quasi-equilibrium stage of cluster growth begins after 1.5-2 ns. The mean cluster size n̄(t) increases slowly until it saturates at 10-15 ns after the beginning of the nucleation process, as depicted in Figs.5 and 6b.
IV. EQUILIBRIUM SIZE DISTRIBUTION OF INDEPENDENT ArnH + CLUSTERS AND CLUSTER FLUCTUATIONS
The size distribution of solid or liquid nano-particles is a fundamental characteristic required for analysis and modeling of many astrophysical and atmospheric phenomena. The size-distribution of small clusters depends on inter-atomic interactions and the nucleation kinetics. The attachment and detachment of atoms or molecules create time-dependent fluctuations of the cluster size even under thermal equilibrium conditions. Detailed analysis of these fluctuations can provide information on the cluster size distribution in an ensemble of independent particles.
A. Fluctuations of cluster size under the thermal equilibrium condition
The "flat" behavior of n(t) at large time t and intensive fluctuation regime are indicators of the thermal equilibrium between "cluster-bound" and free Ar atoms. The number of Ar atoms n(t) in a Ar n H + cluster are shown in Fig.7a for two clusters formed by two independent protons under the thermal equilibrium conditions. Two "trajectories" of MD simulations n 1 (t) and n 2 (t) for this independent clusters are shown as functions of time inside the time interval between 40 ns and 44 ns. Cluster size fluctuations are characterized by different time scales and amplitudes. Averaging of n(t)-function over different time-intervals shows separate typical frequency and inside the time interval 0 < t < 10 ns. The gas temperature and density are T= 90K and ng= 10 20 cm −3 respectively. Theoretical curve is computed using thermodynamic formula for the two state approximation given by Eq.2 with the kinetic parameters t d 3.2 ns, tT = 1.4 ns, and nc=21.6. amplitudes of cluster size-fluctuations. For example, the results of averaging (filtering) of the cluster size fluctuations are shown in Fig.7b for different Gaussian filtering intervals 2σ = 40 ps, 0.2 ns, and 1 ns. Fast fluctuations correspond to the thermal attachment/detachment of Ar atoms from cluster shells. Slow fluctuations can be attributed to a long-term relaxation of thermo-dynamical parameters of the gas environment around clusters. The long-term fluctuations exist for all Ar n H + clusters and they have similar scale of typical fluctuations, but happen at different times and cluster locations.
The fluctuation pattern of the cluster size-distribution N_c(n, t) has also been found in simulations of cluster growth for an ensemble of H+ ions embedded into Ar gas, as shown for the different stages of the nucleation process in Figs.4a and 4b.
B. Equilibrium size-distribution of ArnH + clusters
Parameters of the time evolution of the average cluster size n̄(t), and of the time-dependent fluctuations n(t) − n̄(t) around the average value, yield unique information on the cluster size distribution during the quasi-equilibrium and equilibrium stages of the cluster growth. Fluctuations can be considered as the basis of a diffusion process in the parametric {n}-space of the Ar_nH+ clusters [1-4]. Exchange of Ar atoms between cluster shells and the free Ar gas during the quasi-equilibrium or equilibrium stages of nucleation is an example of a random walk in the space of cluster size n.
The diffusion characteristics of the Ar_nH+ growth and the stationary cluster size distributions P(n, T) have been inferred from the results of MD simulations at different temperatures and ion concentrations. The empirical probability to detect an Ar_nH+ cluster with n Ar atoms during a specified time interval τ is given by the normalized Probability Distribution Function (PDF) P(n, t, τ, T). The empirical probability P(n, t, τ, T) = N(n, t, τ, T)/N(t, τ, T) is defined as the ratio of the number N(n, t, τ, T) of realizations of Ar_nH+ clusters with n atoms during the time interval τ to the total number of realizations N(t, τ, T) = Σ_n N(n, t, τ, T). The time-interval τ has been selected to be long enough for an accurate statistical evaluation of the n-distributions. Under thermal equilibrium, this PDF does not depend on time and describes the stationary thermal probabilities P(n, T).
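A sketch of the empirical PDF estimate: count the realizations of each cluster size over the snapshots of a window τ and normalize:

```python
import numpy as np

def empirical_pdf(sizes_in_window):
    """Empirical P(n) over one time window.

    sizes_in_window : iterable of cluster sizes n observed in the snapshots
    of the window (one entry per cluster per snapshot).
    Returns (n values, probabilities) with sum(P) = 1.
    """
    counts = np.bincount(np.asarray(sizes_in_window, dtype=int))
    total = counts.sum()
    n_vals = np.nonzero(counts)[0]
    return n_vals, counts[n_vals] / total
```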
We have computed P(n, t, τ, T) at different times t to investigate the kinetics of the independent cluster growth under quasi-equilibrium and equilibrium conditions. Results are shown by curves A, B, and C in Fig.8. The error bars in Fig.8 indicate the standard deviation of the MD simulation results. Cluster size distributions at different times t reflect different phases of the cluster shell formation. A long tail of small clusters with n ≤ 16, as shown in Fig.8 (curve A), is formed during the earlier stages of cluster growth under nonequilibrium conditions, and this is clearly reflected in P(n, t, τ = 20 ns, T) for t ≤ 20 ns. More detailed information about the cluster n-distributions at short times t ≤ 20 ns would require significantly shorter intervals τ, due to the fast non-thermal growth of clusters (Fig.5), and hence an increased number of independent simulated trajectories {n(t)}.
The quasi-equilibrium and equilibrium stages of the cluster formation arise after t ≥ 20 ns (B and C curves in Fig.8). They show broad distributions of the cluster sizes n between 15 and 30 Ar atoms.
FIG. 8. The empirical probability P(n, t, τ, T) to find Ar_nH+ clusters with n Ar atoms in the cluster shells. The total simulation time is t = 100 ns for a single H+ ion at T = 90 K. The curves A, B, and C are shown for different times: curve A, the earlier stage of cluster formation, 0 ≤ t ≤ 20 ns; curve B, just after thermalization, 20 ≤ t ≤ 60 ns; and curve C, 60 ≤ t ≤ 100 ns, at the end of the MD simulation. The averaging interval for the later distributions is τ = 40 ns, and for the early stage A it is 20 ns. The 0-20 ns time interval includes the essentially nonequilibrium stage of the cluster nucleation, and this causes the visible fraction of small clusters in the empirical probability given by curve A.
These probabilities are controlled by the distribution of the cluster Ar-detachment energies u(n), which have deeper local minima near n ∼ 19-23, as shown in Fig.2. The low temperature T = 90 K allows Ar atoms to occupy outer cluster shells. These shells intensively exchange Ar atoms with the thermal Ar gas, creating large n-fluctuations (Fig.5) with a standard deviation σ(n, T) ≈ 4 argon atoms. The value of σ(n, t), inferred from the analysis of the time-dependent fluctuations at the thermal equilibrium stage, allows computation of the theoretical full width at half maximum (FWHM) of the thermal distribution: FWHM ≈ 2.355 σ(n, t) ≈ 9.4. This agrees well with the simple estimate of the FWHM_C value of the thermal distribution P(n, T) (curve C in Fig.8), FWHM_C ≈ 10, if the n-distribution of the C-curve is approximated by a Gaussian. The peak of P(n, t, τ, T) = P(n, T) around the minimal values of u(n) in the region n ∼ 19-23 and the strong n-fluctuations are mostly formed at t > 20 ns.
V. KINETICS OF THE CLUSTER FORMATION FOR ENSEMBLE OF H + IONS
The kinetics of cluster nucleation and the clusters' thermal equilibrium parameters can be modified by the mutual influence of Ar_nH+ clusters during the formation process. At high concentrations of H+ ions, the capture of free Ar atoms by different clusters becomes a competitive process. Additional complexity of nucleation arises at low gas temperatures, when strong correlations between Ar_nH+ clusters lead to their aggregation into a new phase, a large-scale Ar_nH+_m droplet or nano-crystal [40]. This transition may occur via several paths, such as consolidation of strongly-bound Ar_4H+ clusters or a coalescence process in which small and medium clusters are absorbed by larger nano-particles [1]. We have simulated the cluster nucleation using different initial ensembles of H+ ions, with N_H+ from 1 to 200 ions, at the temperatures T = 90 K and 200 K.

FIG. 9. The probability distribution function P(n, N_H+) to find Ar_nH+ clusters with n Ar atoms in the cluster shells, as a function of the number of argon atoms n. The Ar_nH+ clusters are formed by the initial ensemble of N_H+ ions. The total simulation time is t = 100 ns. The gas temperature is T = 90 K, with either N_H+ = 1 (circles) or N_H+ = 20 (triangles) H+ ions, all at an Ar density of 10^20 cm^-3. Data are averaged over the time interval 60-100 ns; error bars for the 20-ion curve are on the order of the marker size. The introduction of more ions leads to a reduction of the mean cluster size compared with the independent proton growth.
To clarify the influence of an increasing H+ density on the nucleation process, we have performed simulations of Ar_nH+ nucleation with N_H+ = 1 and 20 H+ ions at the temperature T = 90 K and a constant number of Ar atoms N_Ar = 10^3 in the simulation box. Results are shown in Fig.9. The significant increase of the number of H+ ions (triangles in Fig.9, for N_H+ = 20) leads to a reduction of the mean cluster size compared with the independent proton growth (circles). At T = 90 K, the 20 H+ ions have captured about 36% of the free argon atoms. The density of the free Ar gas became around 64% of the initial gas density, which shifts the negative chemical potential of free Ar atoms down towards the Ar cluster binding energies by the value Δµ_gas(T) = kT ln[0.64] ≈ −0.45 kT. Argon atoms are easily evaporated from the largest clusters as a result of such shifts. In this simplified estimate we have neglected changes of the mean-field potential.
The cluster size distribution for the ensemble of 200 H+ ions has been inferred from our MD simulations at the temperature T = 200 K, and the results are shown in Fig.10 by the triangles, obtained with time and ensemble averaging. The squares with corresponding error bars show the same data for T = 200 K averaged only over the ensemble of 200 H+ ions at the single snapshot t = 80 ns. This narrow size-distribution is peaked around Ar_4H+ clusters. Clusters with large atomic shells cannot be observed at high temperatures, because thermal fluctuations destroy these shells. For example, at the temperature T = 200 K only small clusters with typical sizes between 2 and 7 Ar atoms in the shells can be stable. For comparison between multi-proton and single-proton nucleation, the equilibrium distribution function of the cluster size P(n, T = 200 K) has been inferred from the single H+ ion simulations of the independent cluster growth at T = 200 K, and the results are shown in Fig.10 with red down-triangles. The narrow P(n, T = 200 K) distribution is sharply peaked at n = 4 (the tetrahedral clusters), in contrast with the low-temperature P(n, T = 90 K), which favors a broad range of larger cluster sizes with n ∼ 15-30. For comparison, the low-temperature n-distribution of independent clusters at T = 90 K is also shown in Fig.10 by circles.
The similarity of the high-temperature distributions for the single H+ ion and the 200 H+ ions at T = 200 K shows that, although the cluster chemical potentials are sensitive to both the density of the H+ ions and that of the Ar atoms, the large Ar binding energy of the Ar_4H+ cluster causes the cluster size distribution to be weakly sensitive to the proton density in the specific temperature interval around 200 K considered in this article. Therefore, cluster growth at the chosen parameters can be seen mostly as nucleation kinetics of independent small clusters. This statement is strongly supported by the results of the time-dependent kinetics of cluster nucleation in the ensemble of 200 seed H+ ions shown previously in Figs.4a and 4b. The abundances (n-distribution) of all small clusters with sizes n ≤ 10 have reached their equilibrium values around 2 ns. However, the quasi-equilibrium stage of nucleation (the orange circles and gold upside-down triangles in Fig.4a, for the Ar_4H+ and Ar_5H+ clusters, respectively) shows the steady growth of stable tetrahedral clusters. The development of thermal fluctuations is clearly seen in Fig.4a for the Ar_4H+ and Ar_5H+ clusters. The anti-phase of the Ar_4H+ and Ar_5H+ fluctuations results in an effective single-Ar-atom exchange between the tetrahedral clusters and the free Ar gas. Any capture of a free Ar atom by Ar_4H+ leads to the formation of a new Ar_5H+ cluster and to a reduction of Ar_4H+. The thermal equilibrium detachment of a single Ar atom from Ar_5H+ produces a new stable tetrahedral cluster. Thermal energy fluctuations cannot destroy the deepest shell even at T = 200 K. The Ar_4H+ clusters are the most stable and abundant clusters for both the single- and multi-proton nucleation processes, as shown in Fig.10.
VI. CONCLUSIONS
Our analysis of MD simulations provides a consistent scenario for a new nucleation phase initiated by ions in neutral gases. The growth of ion clusters occurs via the formation of atomic shells around the ion seed particle. MD simulations show evidence for three distinct stages of cluster nucleation: nonequilibrium, quasi-equilibrium, and equilibrium. In the first, nonequilibrium stage, the strong ion field removes the barriers to nucleation; thus, the probability to capture a gas atom into the deeply cluster-bound states (1T and 2T) is significantly higher than the probability of all detachment processes. This leads to the build-up of inner shells during this stage. When a gas atom is captured into the inner shells, a local release of high kinetic energy occurs. This causes the energy distributions of the atomic particles to become non-Maxwellian. Once these energy distributions relax to a Maxwellian-like distribution, the system moves into the second, quasi-equilibrium stage. During this stage the equilibrium between the new phase (the clusters) and the free gas has yet to be established. Thus, we see a steady growth of the cluster size, and of the number of clusters in multi-ion systems, up to the final thermal equilibrium stage. The notable feature of the quasi-equilibrium stage is the onset and evolution of fast and strong fluctuations of the atom number. The size of the cluster increases with a simultaneous decrease of the binding energies in the outer cluster shells. This stimulates detachment processes and the exchange of atoms between the clusters and the free gas. The system then moves into the final, thermal equilibrium stage, where the cluster size distribution reaches a steady-state value. The density of gas and ions, the temperature, and the parameters of the cluster shell structures regulate the cluster size-distribution. The time-dependent fluctuations of the cluster size can be used to predict parameters of the steady-state distribution, specifically the cluster size dispersion in an ensemble of growing clusters.
Nucleation of nano-particles in astrophysical and planetary environments involves more complicated physics of cluster formation in molecular gases, such as N2, CO2, H2O, and CH4, owing to the internal molecular structure and the more complicated intermolecular and intramolecular interactions. We intend to investigate the kinetics of nano-cluster formation in important astrophysical gases and to apply the new theoretical parameters of nucleation to the modeling of observational data. Our results, reported in this article, will be useful in the analysis of cluster growth rates and size distributions of charge-seeded clusters as functions of the gas parameters and molecular structures.
Analysis of Ionic Domains on a Proton Exchange Membrane Using a Numerical Approximation Model Based on Electrostatic Force Microscopy
Understanding the ionic channel network of proton exchange membranes that dictate fuel cell performance is crucial when developing proton exchange membrane fuel cells. However, it is difficult to characterize this network because of the complicated nanostructure and structure changes that depend on water uptake. Electrostatic force microscopy (EFM) can map surface charge distribution with nano-spatial resolution by measuring the electrostatic force between a vibrating conductive tip and a charged surface under an applied voltage. Herein, the ionic channel network of a proton exchange membrane is analyzed using EFM. A mathematical approximation model of the ionic channel network is derived from the principle of EFM. This model focusses on free charge movement on the membrane based on the force gradient variation between the tip and the membrane surface. To verify the numerical approximation model, the phase lag of dry and wet Nafion is measured with stepwise changes to the bias voltage. Based on the model, the variations in the ionic channel network of Nafion with different amounts of water uptake are analyzed numerically. The mean surface charge density of both membranes, which is related to the ionic channel network, is calculated using the model. The difference between the mean surface charge of the dry and wet membranes is consistent with the variation in their proton conductivity.
Introduction
Proton exchange membrane fuel cells are a core technology of green energy devices for several reasons. They do not emit carbon dioxide; they can operate continuously under different environmental conditions without change in performance, and they have a relatively high energy conversion efficiency. However, many limitations must be overcome before they can be adopted, such as high cost, low reliability, and a lack of hydrogen gas infrastructure. Solving the low reliability issue is imperative; however, this is difficult because a proton exchange membrane's reliability is related to its morphological structure.
Proton exchange membranes typically act as proton conductors because of their heterogeneous structure, which combines a hydrophobic backbone with hydrophilic sulfonic acid groups. Sulfonic acid groups create ionic clusters that have an inverted micellar structure and can form a network under hydration. Typically, protons move through the ionic network via vehicle-type and Grotthuss-type mechanisms. In the vehicle-type mechanism, the protons move through the medium together with a solvent, so proton conductivity is related to the solvent diffusion rate. In the Grotthuss-type mechanism, the protons move through the medium by creating and breaking hydrogen bonds without any solvent. In general, these mechanisms are not independent. In the proton exchange membrane, the vehicle-type mechanism is predominant, and the Grotthuss-type mechanism is observed because of the water absorbed into the membrane by hydration [1]. Thus, the structure of the ionic channel network, which exhibits morphological change, is directly related to proton conductivity.
Understanding the morphological structure of Nafion is as important as developing novel membranes, because the ability of proton movement to mirror morphological structures such as the ionic channel network is the essential function of proton exchange membranes. Since the 1980s, many research groups have attempted to understand the morphological structure of Nafion [2-4]. Gierke et al. introduced a cluster-network model of Nafion based on small-angle X-ray scattering and wide-angle X-ray scattering measurements [5]. According to this model, the ionic channel network is formed by the hydration of ionic clusters, which under dry conditions consist of sulfonic acid groups in a semicrystalline matrix. These ionic clusters are 4 nm diameter spheres in an inverted micellar structure, with a narrow 1 nm channel connecting each cluster. The ionic channel network becomes more widely interconnected as water uptake in the Nafion increases, and the structure becomes more complex as protons move through the network. The most recent of these is Schmidt-Rohr and Chen's cylindrical water channel model [6], based on simulation studies conducted using existing scattering data. According to Schmidt-Rohr and Chen, cylindrical crystallites of 2-5 nm and cylindrical water channels with a radius of 2-3 nm are formed in the polymer matrix. Each cylindrical water channel increases in size as the volume of water in Nafion increases, and the existence of cylindrical crystallites contributes to the mechanical strength of Nafion. Despite numerous studies on the morphology of Nafion, the structure of the ionic channel network and the proton transport mechanism are still unclear. The morphology of Nafion changes depending on the synthesis process [7]; it varies under hydration and dehydration; and in a Nafion-based composite membrane it varies with wt% and with different types of pillar materials [8].
Atomic force microscopy (AFM) can map a specimen's surface with nanoscale resolution without damage, by using a vibrating tip technique. In addition, it can measure various physical properties such as mechanical, thermal, and electrical properties by using the extended mode [9][10][11]. AFM has been widely used for understanding the morphological characteristics of the proton exchange membrane. Typically, this membrane has a charged/uncharged domain, and its phase separation characteristic is crucial for understanding its characteristics. Thus, conductive AFM techniques, such as electrostatic force microscopy (EFM), current sensing atomic force microscopy, or Kelvin probe microscopy, have attracted attention as efficient means of studying proton exchange membranes. Numerical approaches have been proposed for understanding the morphology of proton exchange membranes; in these, the local charge density and dielectric constant are based on AFM measurements. Thus far, several current-sensing AFM and EFM studies have been conducted [12][13][14][15].
The technique of EFM has great potential for understanding surface electrical characteristics. It is widely used in studies of the surface charge distribution and dielectric constant of locally charged materials [16-19]. In EFM, the local charge distribution appears as a distribution of phase lag values. Extracting detailed information from the measurements requires decoding the recorded phase lag values. However, this is difficult because the phase lag arises from the net electrostatic force, which is the summation of all Coulombic forces between the tip and the sample surface. Thus, an analytical model is required for EFM measurements, and many models have been suggested. Mélin et al. [16] developed an analytical model for estimating the amount of charge stored on a surface using EFM. They assumed that the tip and sample surface created a parallel-plate capacitor; further, they determined the force gradients of the stored charge and of the dipole-dipole interaction due to the electric field between the tip and the sample surface. By calculating the ratio, the amount of stored charge was derived. Further, they extended this model to consider the tip and sample surface as capacitors of other shapes. Han et al. [17] studied the movement and diffusion of natural and injected charges using EFM to understand the interface of a nano-dielectric. They analyzed EFM images using a widely accepted methodical model [18] to explain local charge movement at the SiO2/LDPE boundary. In this model, the phase value reflects the net force between the tip and the sample surface and the net force caused by local charge. They used low-density polyethylene (LDPE) as an insulating matrix material to minimize electrical interaction between the tip and the sample surface. Thus, the phase value refers to the amount of local charge, and its movement can be clearly seen. Shen et al. [19] studied the degree of reduction of a monolayer graphene oxide (GO) sheet by utilizing electrostatic force spectroscopy; they considered the difference in the dielectric constants of graphene and mica. They assumed that a tip and sample surface can create a parallel capacitor with a dielectric material and derived a capacitive force that includes the dielectric constant between the tip and sample surface.
Previous studies that attempted to understand the local charge distribution and dielectric constant by analyzing EFM signals have obtained remarkable results. EFM signal interpretation is based on characterizing the capacitive force between a tip and sample surface. This capacitive force is due to the electrical interaction between the conductive tip and surface charge of the sample. For calculating the net force, individual electrical interactions that contribute to the net force have to be specified, and this requires a deep understanding of the system. Each suggested analysis based on the capacitive force agreed well with specific systems.
In several studies, EFM signals are used to provide additional morphology information. Thus, the phase value distribution on the surface is used for observing conducting/nonconducting areas of composite membranes [20,21]. A few groups have studied the ionic structure of proton exchange membranes using EFM [22,23]. One such remarkable study is that of Barnes and Buratto [22]. They measured several individual ionic channels of Nafion by using EFM under different bias voltages and analyzed the obtained results using a well-known simple parallel capacitor model. They found particular channel shapes such as connected cylindrical channels, dead-end cylinder channels, and bottleneck channels by characterizing the differences in the EFM signal.
In this study, we derived a numerical approximation model (NAM) for interpreting EFM signals from proton exchange membrane measurements. The subject of our study is similar to the work of Shen et al., whose method focused on understanding locally charged areas encompassed by non-conducting areas; their approach to analyzing EFM signals was systematic and logical. However, the proton exchange membrane structure is more complicated. The sulfonic acid groups in the membrane, which create ionic clusters, are scattered over the entire surface. Owing to hydration, the ionic clusters connect with each other and create ionic channels. In the ionic channels, free charges, either from ionized sulfonic acid groups or externally supplied, exist and move. The external electric field created by applying a bias voltage between the tip and the surface of the proton exchange membrane causes polarization charges and free charges to coexist near the ionic channels. Thus, the capacitive force between the tip and the proton exchange membrane simultaneously includes both electrical interactions. To analyze the EFM signal from a proton exchange membrane, the NAM was derived under two assumptions. First, the conductive tip and the proton exchange membrane surface create a nanoscopic capacitor, and the geometry of this capacitor can be simplified as a parallel plate. Second, the polarized surface and the free charge interact independently with the conductive tip, and the electrical interaction of the free charge is also considered. The NAM thus considers the sum of two independent electrical interactions: the electrostatic force between the conductive tip and the polarized surface, and that between the tip and the free charges. By considering these two terms, the ionic domain structure can be analyzed. Using this numerical model, we characterize the ionic channel network of proton exchange membranes with different amounts of water uptake. We also extract quantitative information relating the ionic channel network to the proton exchange membrane. Furthermore, we attempt to provide a general model for interpreting changes in the morphology of a proton exchange membrane.
Experimental Setup and Model Development
Nafion 212 membranes were studied under two different conditions in our experiments. The first membrane, called the dry membrane, was baked in an oven at 80 • C overnight. The second membrane, called the wet membrane, was soaked in water overnight. Before measurement, the dry membrane was exposed to ambient conditions for 2 h, while the wet membrane was soaked in water.
Both membranes were mapped and analyzed systematically in several steps. First, each membrane was scanned at a frequency of 1 Hz as the sample bias voltage was changed from −3 V to 3 V in 1 V intervals by using Park systems XE-150 AFM (Park Systems, Suwon, Korea). Phase images and topography were simultaneously mapped in this step. The mean phase lag value of each image was subsequently calculated and plotted. Finally, these mean phase values were analyzed using an approximation model based on Shen et al.'s study [19].
An electrical interaction occurs when a bias voltage is applied between the tip and the sample surface, as the dielectric sample becomes polarized. The capacitive force induced between the tip and the sample surface can be expressed as [24]

F = (1/2)(∂C/∂z)V^2,   (1)

where F is the capacitive force, C is the capacitance of the space between the tip and the sample, V is the applied voltage, and z is the distance between the tip and the sample surface. The capacitance of the tip (C_tip), which is modeled as a plate, is [19]

C_tip ≅ ε_0 π R_tip^2 / z,   (2)

where R_tip is the radius of the tip and ε_0 is the permittivity of free space. From the capacitance equation, the charge accumulated in the tip is [19]

Q_tip = C_tip V = ε_0 π R_tip^2 V / z.   (3)

The tip and sample create a nanosized parallel-plate capacitor filled with air and Nafion, as shown in Figure 1.
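For a sense of scale, the following sketch evaluates Eqs. (2) and (3) numerically; the tip radius, gap, and bias used here are assumed illustrative values, not parameters taken from the measurements:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def tip_capacitance(r_tip, z):
    """Plate-model tip capacitance, Eq. (2)."""
    return EPS0 * np.pi * r_tip**2 / z

def tip_charge(r_tip, z, v_bias):
    """Charge accumulated on the tip, Eq. (3): Q_tip = C_tip * V."""
    return tip_capacitance(r_tip, z) * v_bias

# Assumed geometry: R_tip = 25 nm, z = 20 nm, V = 3 V.
q = tip_charge(25e-9, 20e-9, 3.0)   # ~2.6e-18 C, roughly 16 elementary charges
```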
An electrical interaction occurs when a bias voltage is applied between the tip and the sample surface, as the dielectric sample becomes polarized. The capacitive force that is induced between the tip and the sample surface can be expressed as [24] where F is the capacitive force, C is the capacitance of the space between the tip and the sample, V is the applied voltage, and z is the distance between the tip and the sample surface. The capacitance of the tip (Ctip), which is modeled as a plate, is [19] ≅ where Rtip is the radius of the tip and ε0 is the permittivity of free space. From the capacitance equation, the charge accumulated in the tip is [19] = The tip and sample create a nanosized parallel-plate capacitor filled with air and Nafion, as shown in Figure 1. The capacitance of this parallel-plate capacitor is calculated as [24,25] Polymers 2021, 13, 1258
of 13
A parallel-plate capacitor filled with air and Nafion can be assumed as a dielectric filled capacitor and it can be expressed as [25] where S is the area under the tip, t is the thickness of the membrane, and ε r is its relative permittivity. Then, where A is the area of the tip, ∂C ∂z and the capacitance force is Local free charges exist in Nafion due to the ionic domain. Thus, an electrostatic force is also induced between the tip and free charge and is expressed as Hence, the net force between the tip and the sample surface is the sum of the capacitance force of polarized surface and free charges, given as and the force gradient is ∂F ∂z The frequency shift of Nafion is derived by substitution of the net force gradient to the frequency shift [19] while the phase shift is given as The variables k and f 0 represent the spring constants of a tip and resonance frequency, respectively.
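As a sanity check of the reconstructed two-term model in Eq. (13), the sketch below evaluates both contributions over the experimental bias range. All parameter values (tip radius, gap, membrane thickness, permittivity, spring constant, quality factor, free charge) are assumed placeholders; with a membrane tens of micrometers thick the capacitive term is strongly suppressed under these assumptions, so the free-charge term dominates and gives a nearly linear phase-voltage trend, mirroring the behavior reported below:

```python
import numpy as np

EPS0 = 8.8541878128e-12  # F/m

def phase_shift(v, q_free, r_tip=25e-9, z=20e-9, t=50e-6,
                eps_r=20.0, k=2.8, q_factor=300.0):
    """Two-term EFM phase model, reconstructed Eq. (13). All values assumed.

    First term: capacitive (polarization) response, proportional to V^2.
    Second term: tip / free-charge Coulomb response, proportional to V.
    Returns the phase shift in degrees.
    """
    area = np.pi * r_tip**2
    z_eff = z + t / eps_r
    q_tip = EPS0 * area * v / z                       # Eq. (3)
    dfdz = (EPS0 * area * v**2 / z_eff**3
            - q_tip * q_free / (2.0 * np.pi * EPS0 * z**3))
    return -np.degrees(q_factor / k * dfdz)

v = np.linspace(-3.0, 3.0, 13)
phi_neutral = phase_shift(v, q_free=0.0)       # parabolic, always attractive
phi_charged = phase_shift(v, q_free=1.6e-19)   # linear trend from free charge
```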
If a sample is uniform and does not contain local surface charges, the polarity of the surface charge of the dielectric sample is opposite to that of the tip charge. Hence, if a positive bias voltage is applied, the tip charge is negative, and the sample surface is positively charged, and vice versa. Thus, the force between the tip and the sample surface is always attractive, as shown in Figure 2, even if the polarity of the bias voltage is changed. In this case, the first term in (13) is dominant. Thus, there is a parabolic relationship between the phase shift and the bias voltage, as shown in Figure 2, which also depends on ε r . Under identical experimental conditions, if the sample is homogeneous, the phase shift is similar in all scanned areas because the relative permittivity is the same. However, the phase shift changes in a heterogeneous material because of local differences in the relative permittivity. When experimental conditions such as temperature and humidity change, different phase shifts are measured because of these local changes in the relative permittivity.
Polymers 2021, 13, x 6 of 14 positive bias voltage is applied, the tip charge is negative, and the sample surface is positively charged, and vice versa. Thus, the force between the tip and the sample surface is always attractive, as shown in Figure 2, even if the polarity of the bias voltage is changed. In this case, the first term in (13) is dominant. Thus, there is a parabolic relationship between the phase shift and the bias voltage, as shown in Figure 2, which also depends on εr. Under identical experimental conditions, if the sample is homogeneous, the phase shift is similar in all scanned areas because the relative permittivity is the same. However, the phase shift changes in a heterogeneous material because of local differences in the relative permittivity. When experimental conditions such as temperature and humidity change, different phase shifts are measured because of these local changes in the relative permittivity. The behavior of ion exchange membranes can be explained by the combination of the polytetrafluoroethylene (PTFE) backbone and the ionic channel network created by the interconnection of ionic clusters, which consist of sulfonic acid groups. When water binds with the negatively charged sulfonic acid groups, protons are solvated, and free charges exist in the membrane. Since locally charged regions exist in ion exchange membranes, the phase shift is affected by both the first and second terms of (13). Here, Qfree is the local charge related to the ionic cluster; in this case, it is the proton movement into the ionic channel network. The distribution of ionic channel networks on a surface is random, and it changes with surface hydration. Thus, the characterization of ionic clusters in an ion exchange membrane is complicated, and it is even more difficult in composite membranes. However, measuring the force gradient, which is related to free charge, provides a simple quantitative method for characterizing the ionic channel network. Quantitative information on the homogeneity and distribution of the ionic domains on a membrane can be provided by estimating local variations in free charge and relative permittivity. Figure 3 shows the topography and line profile of dry and wet membranes with the bias voltage ranging from -3 V to 3 V in 1 V steps. The brightness in the images indicates height, with brighter (whiter) regions representing higher position. Neither of the membranes show any remarkable structure variation under hydration. Both membranes show smoothly varying surfaces, except the left side of the image. In the wet membrane, the entire surface is smoothly grooved compared with the dry membrane. This is also observed in the line profile; in the wet membrane, the line is gradually curved. It indicates that the morphology of the wet membrane is rougher than the dry membrane. This can also be proved by root mean square (rms) roughness, which can be calculated by the standard deviation of the height variation. The rms roughness of the dry and wet membranes is 12.5 nm and 24.8 nm, respectively. This indicates that the surface becomes rougher after swelling. On top of both images, two blurry lines are observed, which are The behavior of ion exchange membranes can be explained by the combination of the polytetrafluoroethylene (PTFE) backbone and the ionic channel network created by the interconnection of ionic clusters, which consist of sulfonic acid groups. 
When water binds with the negatively charged sulfonic acid groups, protons are solvated, and free charges exist in the membrane. Since locally charged regions exist in ion exchange membranes, the phase shift is affected by both the first and second terms of (13). Here, Q free is the local charge related to the ionic cluster; in this case, it is the proton movement into the ionic channel network. The distribution of ionic channel networks on a surface is random, and it changes with surface hydration. Thus, the characterization of ionic clusters in an ion exchange membrane is complicated, and it is even more difficult in composite membranes. However, measuring the force gradient, which is related to free charge, provides a simple quantitative method for characterizing the ionic channel network. Quantitative information on the homogeneity and distribution of the ionic domains on a membrane can be provided by estimating local variations in free charge and relative permittivity. Figure 3 shows the topography and line profile of dry and wet membranes with the bias voltage ranging from −3 V to 3 V in 1 V steps. The brightness in the images indicates height, with brighter (whiter) regions representing higher position. Neither of the membranes show any remarkable structure variation under hydration. Both membranes show smoothly varying surfaces, except the left side of the image. In the wet membrane, the entire surface is smoothly grooved compared with the dry membrane. This is also observed in the line profile; in the wet membrane, the line is gradually curved. It indicates that the morphology of the wet membrane is rougher than the dry membrane. This can also be proved by root mean square (rms) roughness, which can be calculated by the standard deviation of the height variation. The rms roughness of the dry and wet membranes is 12.5 nm and 24.8 nm, respectively. This indicates that the surface becomes rougher after swelling. On top of both images, two blurry lines are observed, which are indicated by red arrows. They indicate the boundary of bias voltage change and are found in EFM images at the same position. Outside the boundary, the morphology does not show any difference. This result implies that applying a bias voltage has an insignificant effect on the topography. Figure 4 depicts the EFM phase images of dry and wet Nafion with the bias voltage ranging from -3 V to 3 V in 1 V steps. The colors in the image indicate the phase lag value, which represents the force gradient. From the image, the color is darker with bias voltage. When the same bias voltage is maintained, the color is uniform except on the left side of the image. This indicates that the areas with homogeneous morphological characteristics have similar phase lag values. The color is brighter from bottom to top of both images. It indicates that the phase lag value is systematically changing. However, the phase lag in each colored region does not follow the parabolic shape that is typical of changes to the force gradient due to induced charge, as shown in Figure 2. Figure 4 depicts the EFM phase images of dry and wet Nafion with the bias voltage ranging from −3 V to 3 V in 1 V steps. The colors in the image indicate the phase lag value, which represents the force gradient. From the image, the color is darker with bias voltage. When the same bias voltage is maintained, the color is uniform except on the left side of the image. This indicates that the areas with homogeneous morphological characteristics have similar phase lag values. 
The color is brighter from bottom to top of both images. It indicates that the phase lag value is systematically changing. However, the phase lag in each colored region does not follow the parabolic shape that is typical of changes to the force gradient due to induced charge, as shown in Figure 2. Figure 5 depicts the line profiles of the dry and wet proton exchange membranes, providing numerical information on the phase shift at each bias voltage. Both images show small changes for a phase shift of~0.2 • when the same bias voltage is maintained, and a relatively large phase shift of 1 • is observed when the bias voltage changes. Both membranes have positive phase shift values between −3 V and 0 V, indicating that the net electrostatic force between the tip and the sample surface is repulsive. In the negative bias voltage configuration, the tip is positively charged, and typically, the force between the tip and the sample surface is attractive, owing to the negatively polarized membrane surface. The result depicts the opposite phenomenon, implying that the sample surface is positively charged. For negative bias voltages, phase lag values are slightly higher for dry membranes than those for wet membranes. The phase shift is negative between 2 V and 3 V, indicating that the force is in the attractive regime. With these bias voltages, both membranes have similar phase lag values. Figure 5 depicts the line profiles of the dry and wet proton exchange membranes, providing numerical information on the phase shift at each bias voltage. Both images show small changes for a phase shift of ~0.2° when the same bias voltage is maintained, and a relatively large phase shift of 1° is observed when the bias voltage changes. Both membranes have positive phase shift values between -3 V and 0 V, indicating that the net electrostatic force between the tip and the sample surface is repulsive. In the negative bias voltage configuration, the tip is positively charged, and typically, the force between the tip and the sample surface is attractive, owing to the negatively polarized membrane surface. The result depicts the opposite phenomenon, implying that the sample surface is positively charged. For negative bias voltages, phase lag values are slightly higher for dry membranes than those for wet membranes. The phase shift is negative between 2 V and 3 V, indicating that the force is in the attractive regime. With these bias voltages, both membranes have similar phase lag values. x(nm) For more detailed analysis, the mean phase value at each bias voltage was plotted for both the dry and wet membranes. From the analysis, it can be observed that the phase lag Figure 5 depicts the line profiles of the dry and wet proton exchange membranes, providing numerical information on the phase shift at each bias voltage. Both images show small changes for a phase shift of ~0.2° when the same bias voltage is maintained, and a relatively large phase shift of 1° is observed when the bias voltage changes. Both membranes have positive phase shift values between -3 V and 0 V, indicating that the net electrostatic force between the tip and the sample surface is repulsive. In the negative bias voltage configuration, the tip is positively charged, and typically, the force between the tip and the sample surface is attractive, owing to the negatively polarized membrane surface. The result depicts the opposite phenomenon, implying that the sample surface is positively charged. 
For more detailed analysis, the mean phase value at each bias voltage was plotted for both the dry and wet membranes. From this analysis, it can be observed that the phase lag value varies linearly with bias voltage in both membranes, as shown in Figure 6. There are locally charged regions on the membrane, the behavior of which is characterized by the second term in (13). As the phase lag is the sum of both terms in (13), a positive phase lag value indicates that the second term, related to the local surface charge, is dominant. When the magnitude of the bias voltage is reduced, the phase lag decreases. For the wet membrane, phase lag values of 3.4°, 2.5°, and 1.8° were noted at bias voltages of −3 V, −2 V, and −1 V, indicating that both terms in the equation decreased as the bias voltage was reduced. For the dry membrane, the phase lag values were 4.0°, 3.4°, and 2.5° at −3 V, −2 V, and −1 V, respectively. Wet membranes typically have higher proton conductivities than dry membranes, and a higher ionic channel network density, because hydration creates new ionic channel networks. The difference between the phase lag values of the wet and dry membranes is thus related to the second term in (13). At 1 V, this value is close to zero. In contrast, at 2 V and 3 V, both membranes have similar negative phase values; specifically, the dry and wet membranes have the same lag value at 3 V. This result implies that the electrical interaction is only between the charged tip and the polarized surface charge, so the second term in (13) has no effect on the phase lag in this case.
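The linearity seen in Figure 6 can be checked with a quick least-squares fit to the phase lag values quoted above; this is an illustrative cross-check, not the authors' own processing:

```python
import numpy as np

bias = np.array([-3.0, -2.0, -1.0])      # sample bias (V)
phase = {
    "dry": np.array([4.0, 3.4, 2.5]),    # mean phase lag (deg), from the text
    "wet": np.array([3.4, 2.5, 1.8]),
}

for label, y in phase.items():
    slope, intercept = np.polyfit(bias, y, 1)          # linear fit
    residual = np.max(np.abs(y - (slope * bias + intercept)))
    print(f"{label}: slope = {slope:+.2f} deg/V, "
          f"intercept = {intercept:.2f} deg, max residual = {residual:.2f} deg")
```

The small residuals confirm that a straight line describes both data sets well over this bias range.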
Analysis
Local charge density, which reflects the ionic channel network, can be approximated based on the first and second terms of (13). For this, the phase lag value at each bias voltage must be related to a microscopic electrostatic phenomenon. To understand the generation of a positive phase lag at a negative sample bias voltage, the behavior of the tip when a bias voltage is applied during scanning must be analyzed. There is typically a water layer between the tip and the sample surface. When a bias voltage is applied, electrolysis of this water layer occurs at the Pt-coated tip, producing hydrogen and liberating protons. Figure 7 depicts the local variation in the current flowing through the Pt tip and the half membrane electrode assembly as the bias voltage is swept. Current flows when the magnitude of the bias voltage is larger than 1.5 V, indicating that protons are created when a sufficient voltage is applied to the Pt tip.
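The onset behavior in Figure 7 (current flowing only once |V| exceeds about 1.5 V) can be extracted from a measured I-V sweep by finding where the current first rises above the noise floor; the sketch below uses a synthetic sweep and a hypothetical noise threshold:

```python
import numpy as np

def onset_voltage(v: np.ndarray, i: np.ndarray, i_noise: float) -> float:
    """Smallest |V| at which |I| exceeds the noise floor."""
    above = np.abs(i) > i_noise
    return float(np.min(np.abs(v[above]))) if above.any() else float("nan")

# Synthetic sweep: no current until |V| > 1.5 V, as in Figure 7.
v = np.linspace(-3.0, 3.0, 601)
i = np.where(np.abs(v) > 1.5, np.sign(v) * (np.abs(v) - 1.5) * 1e-9, 0.0)
print(f"onset at ~{onset_voltage(v, i, i_noise=1e-12):.2f} V")
```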
The phase lag generated at negative bias voltages includes a contribution from the interaction between the released protons and the ionic domains on the membrane surface. As the membrane is negatively charged, owing to polarization, it attracts protons that cover its surface. Thus, positive phase lag values are measured, because a repulsive force is induced between the positively charged tip and the proton-covered surface. The magnitude of the repulsive force is related to the density of the activated ionic channel network. When water uptake in the membrane increases, an ionic channel network develops, as the number of interconnections between the ionic channels grows. Protons are accelerated into the ionic channels by the external electric field, as shown in Figure 8. As the number of ionic domains increases, the number of protons remaining on the membrane surface decreases. Thus, the repulsive force between the tip and the membrane and the area of the ionic domains have a reciprocal relationship.

Table 1 lists the mean phase lag values for dry and wet membranes, and the values when there are no protons on the membrane surface. The latter values were calculated using only negative bias voltages.
With both dry and wet membranes, the phase lag increased with the magnitude of the bias voltage, which can be explained by the increase in proton generation due to electrolysis. At all negative bias voltages, dry membranes have a larger phase lag value than wet membranes, which is consistent with our assumptions. Hence, the area of the ionic domain on the membrane can be approximated using the phase lag difference. The net electrical charge of the protons at each bias voltage and membrane condition was estimated using (13). This approximation is conducted in several steps. First, because the phase lag value obtained for each membrane includes a contribution from the polarization-induced charge, the phase lag when there are no protons on the membrane surface is subtracted from this value. Then, the tip radius is calculated for each membrane using the blind tip reconstruction method [26]. Finally, the net charge of the protons is calculated using the second term of (13).
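The three steps above can be chained in a few lines. Since (13) itself is not reproduced in this excerpt, the sketch below substitutes a generic point-charge EFM model for the free-charge term (phase shift proportional to the force gradient via the cantilever quality factor and spring constant); every numerical parameter is a hypothetical placeholder rather than a measured value:

```python
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def net_free_charge(phi_deg, phi_baseline_deg, bias_v, tip_radius, height,
                    k_spring=2.8, q_factor=250.0):
    """Estimate the net free charge Q_free from an EFM phase lag.

    Illustrative model (not the paper's exact Eq. (13)):
      phase shift  dphi ~ (Q/k) * dF/dz
      tip charge   q_tip = 4*pi*eps0*R*V                 (spherical tip)
      force        F = q_tip*Q_free / (4*pi*eps0*z**2)   (point charges)
    k_spring (N/m) and q_factor are typical cantilever values, not measured.
    """
    # Step 1: subtract the polarization-induced (proton-free) phase lag.
    dphi = np.radians(phi_deg - phi_baseline_deg)
    # Step 2: tip charge from the tip radius (here an assumed constant; the
    # paper obtains the radius per membrane via blind tip reconstruction [26]).
    q_tip = 4.0 * np.pi * EPS0 * tip_radius * abs(bias_v)
    # Step 3: invert dphi = (Q/k) * q_tip*Q_free / (2*pi*eps0*z**3) for Q_free.
    return dphi * k_spring * 2.0 * np.pi * EPS0 * height**3 / (q_factor * q_tip)

# Hypothetical inputs: 4.0 deg measured, 0.5 deg proton-free baseline,
# -3 V bias, 30 nm tip radius, 100 nm tip-sample distance.
print(net_free_charge(4.0, 0.5, -3.0, 30e-9, 100e-9))  # ~4e-18 C
```

With these placeholder values the estimate lands on the same 10⁻¹⁸ C scale as Table 2, but the sketch only illustrates how the steps fit together; it does not reproduce the paper's calculation.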
The calculation results for the net charge are summarized in Table 2. In the dry membrane, the net charge is 8.71 × 10⁻¹⁸ C, 6.07 × 10⁻¹⁸ C, and 3.99 × 10⁻¹⁸ C at −3 V, −2 V, and −1 V, respectively. Hence, the net charge increases as the magnitude of the bias voltage is increased. The value at −1 V is much smaller than the net charge at the other voltages, owing to the relatively small amount of proton generation at −1 V. This is consistent with the variation in local current as the bias voltage is swept, although that result does not provide absolute numerical information about the ionic domain. In the wet membrane, the net charge is 1.87 × 10⁻¹⁸ C, 1.28 × 10⁻¹⁸ C, and 8.06 × 10⁻¹⁹ C at −3 V, −2 V, and −1 V, respectively. This trend is similar to that observed for the dry membrane, but the amount of electrical charge is much smaller, possibly because of the partial movement of protons into the ionic channels. This result indicates that wet membranes have a larger ionic domain than dry membranes; here, the repulsive force is only due to the protons that do not move into the ionic channel network. The relative difference between the net charge of the dry and wet membranes is similar at each bias voltage, approximately 79-80%. This result implies that about 80% of the liberated protons move into the wet membrane; only 20% of the protons interact with the tip, and this ratio is independent of the bias voltage. From these results, it can be surmised that the area of the ionic channels on the surface of a wet membrane increases by approximately 80% compared with that on a dry membrane. Previous experimental results have shown that there is an approximately 80% difference between proton conductivity under ambient conditions and under fully humid conditions [27,28]. Hence, our calculations are consistent with the literature.
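The 79-80% figure follows directly from the Table 2 values; a quick arithmetic check (reading the wet-membrane value at −1 V as 8.06 × 10⁻¹⁹ C, consistent with its decreasing trend):

```python
q_dry = {-3: 8.71e-18, -2: 6.07e-18, -1: 3.99e-18}   # net charge (C), dry
q_wet = {-3: 1.87e-18, -2: 1.28e-18, -1: 8.06e-19}   # net charge (C), wet

for v in (-3, -2, -1):
    moved = 1.0 - q_wet[v] / q_dry[v]   # fraction of protons entering the membrane
    print(f"{v} V: {100 * moved:.1f}% of liberated protons move into the wet membrane")
```

Each bias voltage gives roughly 78-80%, matching the ratio quoted above.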
In this study, we derived an NAM for analyzing proton exchange membranes. Based on Shen's study [19], we used an interpretation method for EFM signals. We assumed that the capacitive force is a summation of two dominant electrostatic interactions: the force between the induced charge and the charged tip, and the force between the free charge and the charged tip. We derived the force gradient, which is recorded as the phase lag value in the EFM image, based on these two interactions. Thus, the NAM considers two terms: a polarization-dominant term and a free-charge-dominant term. The backbone is governed by the polarization-dominant term, while the free-charge-dominant term is related to the ionic domain structure of proton exchange membranes. The structural change of the ionic domain can therefore be characterized by applying the NAM to the phase lag values measured by EFM. To examine the NAM, we determined the local charge density of a proton exchange membrane, which is directly related to the ionic domain, using an approximation model. Wet and dry Nafion was scanned while the applied bias voltage was increased in intervals, with protons supplied by electrolysis. The proton charge density on the surface, as characterized by the NAM, shows a clear difference between the dry and wet membranes, and the results are in good agreement with those of previous studies [21]. Thus, we conclude that the NAM can be applied to the study of proton exchange membranes.
Conclusions
In this study, we proposed a NAM that focuses on free charge movement into the ionic channel network of a proton exchange membrane, based on the capacitive force between a conductive AFM tip and the proton exchange membrane surface. The model is expressed as a summation of the induced charge distribution, which is connected with the backbone of the proton exchange membrane, and the free charge distribution, which is related to the ionic channel network. This model can be used to study ionic channel network variation under various conditions, such as hydration, as well as in composites with filler materials, by calculating the change in the induced and free charge distributions. The NAM was verified by analysis of the experimental results, namely the phase lags measured under different bias voltages for dry and wet proton exchange membranes.
The enhancement of proton conductivity is the prime purpose of developing proton exchange membranes. Proton conductivity is governed by the morphological structure of ionic channel networks. Thus, the characterization of ionic channel networks is essential for developing novel proton exchange membranes. The NAM is shown to be an important tool for characterizing novel proton exchange membranes.
Research on the Problems and Countermeasures of Teachers' Continuing Education from the Perspective of Educational Informatization
Teachers' continuing education is an important link in the development and training of teachers' professional talent in China. Against the background of the COVID-19 epidemic, the wave of educational informatization has provided more opportunities and challenges for teachers' continuing education. This study examines the current situation of teachers' continuing education in China and proposes targeted countermeasures for problems in three areas: training design, training content and form, and training management and evaluation.
Introduction
In the new era, education informatization has opened a broader development path for teachers' continuing education and training. With the continuous development of emerging technologies such as artificial intelligence, blockchain and "Internet+", more and more information technology is being used in teaching, which provides more opportunities and challenges for teachers' continuing education and training. In the era of education informatization 2.0 [1], teachers' continuing education and training have become more focused on the "innovative integration" of information technology and education, and on the ways and means of effectively improving teaching and learning with the support of information technology.
During the COVID-19 period, the country adopted the long-term, large-scale online teaching approach of "stopping classes without stopping teaching", which reflected the important value of education informatization but also confronted teachers' continuing education with new problems. The Education Informatization 2.0 Action Plan emphasizes vigorously improving teachers' information literacy and promoting teachers' self-adaptation to technological change [2]. Therefore, how to further improve the level of teachers' continuing education against the background of educational informatization has become a hot issue that needs to be studied.
Current situation of teachers' continuing education development under the background of educational informatization
Training is an important way for teachers to improve their professional level and enhance their teaching skills. In this study, the concept of teachers' continuing education refers specifically to in-service school teacher training, that is, training for post-service continuing teacher education [3]. Teachers' continuing education is an educational investment activity that improves teachers' teaching and research, professional satisfaction and academic career development through the systematic design, organization and implementation of training programs. Continuing teacher training is an important way to promote teachers' professional development and literacy, and it embodies lifelong learning for teachers. Obtaining higher professional skills, professional identity and professional development through post-service training is an important purpose of continuing teacher training [4].
At present, China has formed a five-tier training system of "national training, provincial training, municipal training, county training and school-based training", which basically covers all forms of training for primary and secondary school teachers [5]. In 2010, China began to implement the National Training Program for Primary and Secondary School Teachers (the "National Training Program"), the most typical national initiative in China's teacher professional development practice and a large-scale in-service training program for teachers in the central and western regions. The National Training Program has contributed to the balanced development of basic education and the fight against poverty in China, has won high recognition from the majority of educators, and has had a wide social impact.
On April 13, 2018, China's Ministry of Education released the Education Informatization 2.0 Action Plan, marking the official arrival of the education informatization 2.0 era in China, represented by emerging information technologies such as big data, artificial intelligence and cloud computing. In this policy context, diverse informatized teaching media, electronic teaching aids, digital training management programs and networked teacher training platforms can provide a new perspective for the development of teachers' continuing education. Currently, China is aiming to build a new training system that deeply integrates information technology with teacher training: building and promoting shared online course resources, focusing on hybrid training that combines online and offline formats, creating online teacher training communities, implementing the requirement of building a high-quality, professional and innovative teaching force in the new era, raising the level of teacher training, and enhancing its quality [6].
Practical problems of teachers' Continuing Education under the background of educational informatization
In the context of the COVID-19 epidemic, and in line with the requirements of the era of education informatization, teachers' continuing education is mostly conducted online. Online training is a systematic and complex process that includes preparation before training, training implementation, post-training follow-up and other stages; each stage involves many participants and a large amount of training work. Scientific system design, course development, sound evaluation processes and the efforts of the relevant personnel are important factors in ensuring the effectiveness of online training [7]. Given China's national conditions, current teachers' continuing education mainly faces difficulties in training design, training content and form, and training evaluation.
First of all, in terms of training design, some studies have pointed out that the top-level design of teachers' continuing education training projects needs to be optimized [8]. In the five-level process of national, provincial, municipal, county and school-based training stipulated by the state, the curriculum design at each level is independent, and key contents are often repeated; how the levels should connect and build on one another needs to be explored in depth. Owing to the extensive exchange of information, continuous education reform has become the norm both at home and abroad, which confronts teachers with the need to update their teaching concepts and behaviors. Major public emergencies, such as the large-scale online learning caused by the COVID-19 pandemic, have forced teachers to learn and adapt to online teaching in a short time. However, continuing education courses often attend to reforming the educational content while neglecting the corresponding adjustment of the training form. Taking school-based and county-based training as an example, a large number of rural teachers were in fact unprepared when forced into contact with information-based teaching; they were unfamiliar with the whole training and found it difficult to integrate quickly. The actual situation of rural teachers was not taken into account in the training design, and the lack of training in information-based education technology meant that rural teachers spent a great deal of time on equipment debugging, which greatly reduced the effect of continuing education.
Secondly, the content and form of training still need to be improved. On the one hand, in terms of training content, the "National Training Program" suffers from problems such as non-specialized project content design and a weak theoretical foundation [9]. On the other hand, the change of training mode forces a change of training form. At present, online training mainly takes the form of "point-to-point" classroom teaching, in which trainees often passively receive knowledge and lack initiative, while trainers, constrained by the online teaching format, focus on theoretical learning [10]; hands-on, practice-oriented courses are rare. In fact, education informatization 2.0 has changed the traditional physical learning environment, and the material boundaries between school, classroom, family and society have been broken. Teachers are no longer satisfied with learning knowledge in the classroom; they are eager to construct knowledge through digital learning spaces. However, a full database for teachers' continuing education training has not yet been formed, so the needs of teachers in the process of continuing education exceed the support it can provide. Moreover, in the output of the final training results, the outputs are often undervalued and rarely developed or utilized; they frequently become a formal proof of results that cannot really be applied in teaching, shared, or developed further.
In addition, there are many problems in training evaluation. As an important means of measuring training effectiveness, evaluation has diagnostic, motivational, regulatory and feedback functions [11]. In order to minimize problems such as inconsistent standards and value conflicts in the evaluation process, the corresponding guiding standards and value guidelines should be determined before engaging in teacher education evaluation activities. At present, however, owing to the lack of an effective institutional guarantee mechanism, the training evaluation process suffers from outdated evaluation content, single evaluation methods, improper application of evaluation modes and a lack of feedback. These problems directly affect the professional development of teachers.
The Countermeasures of teachers' Continuing Education under the background of educational informatization
In response to the various problems facing teacher training in the current process of education informatization in China, this study proposes countermeasures as follows.
First of all, a more targeted training design should be developed based on the needs of the training targets, with the government taking the lead in exploring a collaborative linkage training model with integrated design for counties, institutions and teachers. Before setting the overall training plan, in-depth research should be conducted to clarify the goals, content, form, time and institutions of in-service teacher training and to develop assessment criteria for its effectiveness. In the formulation process, special attention should be paid to teachers' educational needs. For some teachers, the revision of the curriculum standards for the compulsory education stage, the promulgation of the core competencies for Chinese students' development, and the reform of education evaluation have a profound impact on the education industry, and these hot topics urgently need to be understood. For other teachers, the primary dilemma is how to quickly connect to resources, understand what information support is available, and know what learning resources can be obtained. In this regard, pre-training questionnaires and targeted interviews can make it easier for education departments and schools below the provincial level to grasp the key points of teacher training at each stage, make training more targeted and effective, and truly improve its efficiency.
Second, the content of education should be enriched and its forms expanded. The background of education informatization provides a broader platform for training. Training should take the teachers receiving continuing education as the main body, integrate closely with teachers' actual teaching, and encourage teachers to conduct results-oriented curriculum exploration with independent synthesis. Education informatization not only changes the media of education and training but should also turn teachers' continuing education into an ecosystem with the potential for self-renewal and self-development [12]. In this process, continuing education should use new forms of learning, such as micro-lectures, MOOCs, flipped classrooms and blended learning, to collect and organize learning resources and to record data on the learning process. While teachers participate in training, guidance should be provided for their learning activities so as to improve the quality of continuing education training. Thanks to the high temporal and spatial freedom of information-based training and the massive resources available, teachers can, on the one hand, choose more flexible learning times and places; on the other hand, they can be freed from fixed learning content and flexibly arrange what they learn according to their actual teaching needs, giving them sufficient space for learning and exploration.
Third, the concept of developmental evaluation should be established. Results are important, but the process evaluation of training is equally important. For the evaluation of Chinese teachers' continuing education, evidence-based quality evaluation of training needs to be optimized, and a quality evaluation system for continuing education training needs to be established and improved. In China, it is necessary to establish an evidence-based evaluation standard system for teacher education quality and to improve its standardization and scientific rigor. The implementation of training credit management and the establishment of a credit bank are good ways to conduct process evaluation. Under a credit system, teachers break down the training tasks and complete the goals in a targeted manner according to the evaluation requirements, which can help them better advance their professional development. In training, teachers' subjective evaluation should be emphasized to give them a sense of gain; training evaluation procedures should be improved, evaluation responsibility mechanisms implemented, and evaluation information management strengthened. Evaluation should serve as the basic data support for the next round of training so as to promote the continuous improvement of teachers' continuing education and training.
Conclusion
For the field of education, building a teaching force of comprehensive quality under current policy support has obviously become an urgent task. This study concludes that the main problems of teachers' continuing education from the perspective of education informatization include: independent training design at each level with poor articulation between levels; a weak theoretical foundation for training content and a single training form; and an unsound education evaluation system. In response to these problems, this paper proposes three practical countermeasures: exploring, under government leadership and according to the needs of the training targets, a collaborative linkage training model with integrated design for counties, institutions and teachers; enriching educational content and expanding educational forms; and improving the evaluation system with a focus on developmental evaluation.
Solutions to health care waste: life-cycle thinking and "green" purchasing.
Health care waste treatment is linked to bioaccumulative toxic substances, such as mercury and dioxins, which suggests the need for a new approach to product selection. To address environmental issues proactively, all stages of the product life cycle should be considered during material selection. The purchasing mechanism is a promising channel for action that can be used to promote the use of environmentally preferable products in the health care industry; health care facilities can improve environmental performance and still decrease costs. Tools that focus on environmentally preferable purchasing are now emerging for the health care industry. These tools can help hospitals select products that create the least amount of environmental pollution. Environmental performance should be incorporated into the evolving definition of quality for health care.
It is difficult to categorize health care professionals who work with environmental issues. Some concentrate all of their time on environmental health issues, and some juggle other responsibilities such as housekeeping and safety; some facilities have recycling programs, and some do not. The amount of waste generated and where the waste is being treated are known for some health care facilities but not for others. The American Hospital Association and U.S. Environmental Protection Agency have set goals to reduce both the volume and toxicity of wastes by 2010. Are these goals sufficient to deal with the environmental problems associated with our modern health care industry?
Currently, medical waste incinerators are ranked among the top four sources for dioxin and anthropogenic mercury emissions in the United States (1,2). These contaminants are capable of traveling long distances and can be easily transferred between air, land, and water (3).
According to estimates from the early 1990s, medical waste is generated at a rate of 3.5 million tons per year (4). This statistic is amplified by the increasing prevalence of home health care, which currently generates waste at about 50,000 tons per year (5). With proper waste segregation practices, roughly 15% (by weight) of hospital waste can be classified as infectious, requiring treatment before disposal (6). To reduce its infectious potential, hospitals in some regions treat much of this waste. Although many treatment options exist, over the years hospitals have chosen medical waste incinerators to treat wastes. This infectious segment of the health care waste stream is called by many different names; however, for this discourse it will be referred to as "medical waste." In this paper, the term "health care waste" refers to all of the waste that is produced through health care activities.
The link between health care waste and pollution is not readily apparent. The issue is highly complex and sometimes controversial. It includes a web of relationships and decisions encompassing product suppliers, health care workers, and hospital waste treatment choices. Pollutants with the potential to have harmful effects on human health have been identified with health care waste. Two of these substances, mercury and dioxin, have been detected in significant amounts in air and ash emissions from medical waste incinerators (7). Some health care facilities, recognizing the links between human health and the environment, are implementing precautionary plans of action to improve their environmental performance. In essence, the precautionary principle states, "better safe than sorry." Or, in terms more appropriate for health care facilities, "an ounce of prevention is worth a pound of cure." According to this approach, some risks should be avoided, especially where the level of scientific uncertainty is high and knowledge in the area of concern is limited (8).
To acknowledge the problem and publicly address the solution, in June 1998 the American Hospital Association agreed to work with the U.S. Environmental Protection Agency, using a memorandum of understanding to set goals for waste volume and toxicity reduction. Two key points of this memorandum of understanding are a 50% reduction in volume of all wastes by 2010 and the virtual elimination of mercury from health care facilities by 2005 (9). This agreement addressed not only volume reduction of health care wastes but also toxicity reduction. Toxicity reduction is the more important of the two because adverse impacts on human health have been demonstrated for several pollutants associated with health care wastes. Volume reduction can lower disposal costs and result in smaller amounts of waste that require special treatment such as incineration or autoclaving, which contribute to various forms of pollution.
Life cycle considerations. The life cycle concept is useful when assessing the environmental impacts of medical products and services. Life cycle assessments of products and services provide a description of the environmental effects of the product or service and its materials during manufacture, distribution, use, and end-of-life or disposal.
Many environmental issues currently associated with health care are directly related to waste generation patterns and disposal methods. Most health care administrators now address only the costs directly related to waste disposal. These costs are associated with collection, transport, treatment, and disposal of waste. Many health care administrators have realized that the waste generated in their facilities can have indirect impacts on human health and the environment after disposal. The immediate hazards associated with disposal of medical products are obvious because the waste presents a practical problem. However, end-of-life is only one of several stages in the life cycle of a product where costs are incurred; indirect costs can also be incurred during the manufacture and use of a product.
The key tasks for health care professionals who wish to improve their facilities' environmental profiles include reviewing by-products of waste disposal methods and developing criteria for environmental screening of products. In the United States, the current purchasing effort lacks environmental criteria in the decision-making process. The prime factors traditionally considered in purchasing decisions include cost, quality, efficacy, and availability.
Commentary
Personnel responsible for procuring health care products and services (materials managers or purchasing agents) come from varying backgrounds. Many have worked in auxiliary fields within health care such as nursing or another technical skill area, or they may have business or legal backgrounds to effectively handle finance and contracts. Environmental background or training is not a prerequisite for the individuals responsible for securing health care products and services.
The overall health care supply chain management process should be revised to incorporate other criteria that directly link product selection, product use, product disposal, and environmental and community health impacts. Further, product acquisition should also include the evaluation of upstream life cycle steps in terms of resource use, energy demands, and global impacts. Without this holistic perspective, the industry charged with promoting health and healing contributes to environmental problems, which in turn adversely impact human health.
Environmental education in health professions. The gap in the knowledge of the environmental impacts of health care products and services underscores the need for increased understanding among health professionals of the integral links between human health and environmental health. The average physician receives little if any occupational health training in medical school (10). A 1994 survey of medical school deans indicated a "minimal" emphasis on environmental education (11). Nurses are in a similar situation, with curricula in nursing programs that normally do not include environmental education. This educational gap is particularly problematic because it concerns not only the potential impacts of health care product choices but also the understanding of contributing factors to disease processes. Some researchers claim that 40% of deaths worldwide "can be attributed to various environmental factors, especially organic and chemical pollutants" (12). Environmental information should be integrated into the education of health care professionals to match the changing trends in disease and illness and to increase their consciousness of appropriate use and disposal of resources.
Perspectives on risk. Few hospitals in the United States have made the commitment to employ full-time environmental managers or waste managers, despite the fact that health care has evolved into one of the most intricate organizations and has an extremely complex waste stream. Solid waste, medical waste, hazardous waste, radioactive waste, recyclable waste, compostable waste, controlled-substance waste, confidential paper waste, and construction and demolition waste are all created at health care facilities in the process of supporting patient care services.
Looking into the future, the evolution of the complexity of health care waste streams will proceed at an even more rapid pace. New materials, new technologies, and new power sources will emerge. The disposal options for these new products and technologies will barely keep pace with the latest innovations in health care. The regulatory milieu that has evolved in health care settings is staggering, with more than a dozen regulatory agencies imposing requirements on even the smallest facilities. Life cycle thinking, from a design and purchasing standpoint, holds the promise of decreasing environmental risks and costs.
Upstream Tactics: Environmental Purchasing for Health Care
By focusing its activities upstream, a health care facility can reduce the environmental impacts of the products and services it uses before regulatory problems arise or waste disposal costs increase. Upstream activities usually focus on reducing environmental impacts of products and services and where they come from instead of managing these impacts after they have occurred. For example, reducing mercury emissions by purchasing mercury-free products is an upstream tactic. Solving environmental problems will require a broader view, one that requires professionals from different areas of health care to work together to meet the challenge. Effective action to eliminate persistent bioaccumulative contaminants will require proactive activities such as engaging product manufacturers and waste treatment processors. Purchasing approaches bridge gaps by providing a dialogue within the supply chain on environmental attributes.
One promising channel for action is through purchasing. This approach, which has been used to shift U.S. government agencies toward using environmentally preferable products (13), has yet to permeate the health care industry. Health care facilities can use "green" purchasing initiatives to secure environmentally preferable products.
One important caveat of the purchasing approach is that alternative products must clearly be shown to have superior environmental performance. For example, a polyolefin intravenous (IV) bag does not contain chlorine, so it has less potential to produce dioxins through incineration than an IV bag containing polyvinyl chloride (PVC). It is also imperative that the alternative product has equal or superior clinical performance. For instance, a recent comparison of polyolefin and PVC platelet storage containers showed "no consistent differences" in the parameters observed (14).
Negotiating with product suppliers.
Many health care facilities work with at least one group purchasing organization (GPO) to secure lower prices by buying products along with a group of hospitals. By clearly communicating to GPOs and other product vendors the desire for environmentally preferable products, facilities can alter the composition of the products they buy. For example, if a facility chooses to invoke the precautionary principle by minimizing the use of PVC IV bags, it can seek contracts with suppliers who make non-PVC IV bags.
A health care facility can negotiate with GPOs and suppliers to identify products the facility deems problematic and to find alternative products. Catholic Healthcare West (Phoenix, AZ), for example, incorporated the following points, and others, into its newly created partnership with Premier (San Diego, CA), a large GPO: a) Premier will assist Catholic Healthcare West in identifying products that contain mercury and PVC; and b) Premier will communicate to manufacturers the desire for environmentally favorable products (15).
Changing purchasing policy. Facilities can also stimulate the purchase of environmentally preferable products by mandating certain practices in their purchasing policy. Butterworth Hospital (now Spectrum Health) in Grand Rapids, Michigan, adopted a purchasing policy that required the purchase of mercury-free products whenever possible. The hospital switched to mercury-free blood pressure gauges and stopped sending mercury thermometers home with new mothers (16). This formal commitment to environmentally preferable products is a powerful example of "green" purchasing practices.
Evaluating medical products. Changes in purchasing policy are easy to make if the benefits are clear and the costs are minimal (e.g., replacing mercury thermometers with mercury-free thermometers). If a health care facility desires to move toward integrating environmental criteria into purchasing decisions, it may benefit from the use of a decision support tool, such as the assessment of the environmental impact of a medical product through all of its life cycle stages-manufacturing, packaging, distribution, use, and end-of-life.
In the United States, decision support tools such as quantitative supplier assessments are not widely available to health care facilities that wish to evaluate the environmental profiles of the products they purchase. As part of a research team at the University of Wisconsin-Madison, we recently completed testing of a newly developed "Health Care Environmental Purchasing Tool" at nine health care facilities. The results of this study have not yet been released, but they indicate that the capacity to incorporate environmental changes needs to be expanded. This expansion can happen through increased environmental awareness and toxic-specific education.
Downstream Tactics: Waste Management
There are many other opportunities for the health care industry to assess and improve its environmental performance while reducing costs. These opportunities downstream of the health care facility involve waste treatment. Some facilities have implemented recycling programs, segregating their waste streams for optimal end use such as recycling and materials recovery (17). In addition, other facilities have instituted upstream programs to prevent pollution, such as focusing on reducing mercury use. Reducing mercury emissions by installing pollution control equipment such as mercury traps in drains can be considered a downstream tactic.
Beth Israel Medical Center (New York, NY) has a program to rigorously reduce the amount of solid waste going into the designated "red bags" for biohazardous waste. This effort saves the hospital $900,000/year in disposal costs by reducing the amount of waste that must be treated (16). Albany Medical Center (Albany, NY) distills waste chemicals for reuse, saving $250,000/year in chemical disposal and purchasing costs (16). Naples Community Hospital (Naples, FL) switched from incineration to autoclaving of medical waste, thus reducing disposal operating costs by more than 80% and improving its relationship with the community (16).
The Medical Center Hospital campus of Fletcher Allen Health Care in Burlington, Vermont, implemented a recycling collection and education program to recover over 20 materials, from glass to stretch wrap and kitchen grease. Food waste from the hospital cafeteria is composted at a nearby farm; the end product is used to enrich the soil of organic vegetable gardens belonging to a nonprofit foundation. Blue wraps were donated for reuse in veterinarian clinics and collected for recycling. The hospital saved between $18,000 and $20,000 annually for the first years on avoided landfill fees (18).
Conclusions
The health care industry is in a state of rapid change, with a multitude of internal and external factors driving the changes. As new priorities and technologies are created, new guidelines for environmental performance and efficiency must be introduced. Health care is responsible for the generation of two particularly harmful pollutants that adversely affect human and environmental health. These pollutants, mercury and dioxin, largely result from product and waste disposal. The irony in this situation is that the majority of health care providers and professionals are unaware that this problem exists; thus they focus mainly on recycling programs and compliance with waste regulations.
The necessary management transitions will not be easy, but other industries, such as the electronics and chemical sectors, can be used as models for health care. Like these other industries, health care can deal with environmental issues in a systematic way. How much waste is generated? How much water does the facility use, and what is the quality of the wastewater effluent coming out? How much energy does the facility consume, and do opportunities exist to eliminate unnecessary uses of energy? What types of pollutants are a result of care delivery and operations? All of these concerns are really part of a total quality effort in which health care organizations comprehend their role in the community, including the benefits they have to offer and the liabilities they may be creating. Some of the tools available to health care include environmental purchasing tools, environmental management systems, and waste management programs. Hospital administrators should look to good management techniques, with firmly set goals and effective metrics, to monitor progress and ensure success.
Current social and technical forces will continue to offer administrative challenges to delivering quality care. Health care is a unique sector with many commitments, including support of community health. Most communities cherish access to quality health care and list it among the most valuable attributes of their community. Some boards of health care organizations are increasingly being held accountable for the health of the community. Part of that accountability includes the environmental performance of the organization.
Optimizing solutions to environmental issues in the health care industry requires holistic approaches that incorporate not only health care facilities but also the supply chain and end-of-life disposal strategies. This means understanding environmental outputs and inputs and identifying opportunities to provide better service and quality care in a cleaner, greener way. In the creative reconstruction that seems to typify current health care, it is necessary to shift the focus of environmental issues away from disposal costs alone to a focus on broader systems. We do not suggest that the quality of health care should be sacrificed for the environment. Incorporating environmental performance is part of the natural evolution of quality in health care.
Power System Node Loadability Evaluation Using Flexibility Analysis Method
System transfer capability evaluation is an important research topic in power system analysis. However, given the uncertainty of the load increasing direction, research on the upper bound of system transfer capability appears too optimistic, whereas research on the lower bound appears too pessimistic. In fact, since the load increasing direction cannot be forecasted accurately in practical power systems, evaluating system transfer capability using deterministic analysis methods has obvious flaws. In this paper, a method is proposed to give an approximate evaluation of power system node loadability using a flexibility analysis method. Conventional rigid node voltage constraints are expressed in flexible form, and a set of critical node voltage values is obtained by solving a standard nonlinear optimization model using an interior point algorithm. With the adjustment strategies given, these critical node voltage values show a strong positive correlation with the loadability of the corresponding nodes. Moreover, compared with conventional methods, the proposed method significantly improves the calculation efficiency for the same research purpose. A case study on the IEEE 30-bus test system validates the effectiveness of the proposed method. DOI: http://dx.doi.org/10.5755/j01.eee.20.6.7267
I. INTRODUCTION
The notion of transfer capability is widely used in power system online dispatching as well as in network planning. The computation results give grid managers clear evidence of how much load margin the system retains. Existing research mainly focuses on system transfer capability within voltage stability constraints, and the research hotspots are the upper bound of system transfer capability (the maximum transfer capability, PLmax) and the lower bound (the min-max transfer capability, PLmin-max [1]).
A. Maximum Transfer Capability
Power system maximum transfer capability is investigated in two different ways.
One is to assume that the load increasing direction is determined, namely that the load on each node increases by the same ratio at one time. In such cases, the solution for the maximum transfer capability is the same as the solution for the maximum load growth factor. In [2], an interior point algorithm was used to determine the maximum transfer capability of a power system. In [3], an ordinal optimization method was used to determine the best location of FACTS (flexible AC transmission system) devices to enhance transmission system transfer capability.
The other is to determine the optimal load increasing direction using optimization methods.On such an optimal load increasing direction, system transfer capability can be maximized within stability and safety constraints, so that the existing system can be fully used.In [4], the problem of total transfer capability evaluation was investigated using a probabilistic approach, and a nonlinear programming model for calculating transfer capability was presented.
In brief, in maximum transfer capability research the load increasing directions are either fixed in advance or optimized. The results obtained represent ideal situations that can hardly be fully realized in practical power system development. Thus, such fixed or optimized load increasing directions have only theoretical significance and appear too optimistic for practical operation. For these reasons, a more conservative concept in power system transfer capability evaluation was proposed, namely the min-max transfer capability.
B. Min-Max Transfer Capability
Compared with maximum transfer capability analysis, power system min-max transfer capability evaluation tries to find the lower bound of system transfer capability. In other words, min-max transfer capability research focuses on the worst load increasing direction, that is, the direction along which the maximum load increment is minimal. The main research difficulty is that such a min-max optimization problem is hard to solve using conventional optimization methods.
In [5], a method for calculating the distance from a point on the loadability surface to the closest point of nonsmoothness of the loadability surface was proposed. In [6], using two consecutive scalar local measurements, a simple local index for online estimation of closeness to the loadability limit was introduced. In [7], using a particle swarm optimization algorithm, the shortest distance from the operating point to the boundary of voltage stability was determined. In [8], a new nodal loading model called the "hyper-cone" model was proposed, and the worst cases were defined and solved.
Compared with the maximum load increment, the min-max load increment is much more conservative, as it can be realized along any load increasing direction without causing system instability or safety constraint violations. However, such an evaluation of system transfer capability is regarded as too pessimistic, and planning along such a load increasing direction is not conducive to making full use of power system resources.
Node loadability evaluation has a strong relationship with system transfer capability evaluation, as the system load increment is a linear combination of single node load increments. But the calculation time will be huge if node loadability is evaluated node by node. A convenient way to obtain the node loadability distribution is therefore highly desirable.
On the other hand, almost all existing research focuses on the maximum or min-max power system transfer capability within system voltage stability constraints. In practice, however, most electrical equipment in power systems cannot work properly at a voltage below the lower voltage limit, which is usually higher than the voltage at the collapse point. This problem was also recognized in voltage stability index research [9], where voltage limits, especially the lower limits, were considered. Besides, other system operation constraints, such as generator power output constraints and transmission line capacity constraints, should also be taken into consideration in system transfer capability analysis.
Aiming at these disadvantages of conventional system transfer capability analysis, for both PLmax and PLmin-max, the motivation of this paper is first introduced. Then, the basic principle, namely the relationship between node voltage and node loadability, is elaborated. After that, conventional rigid node voltage constraints are expressed in flexible form, a system transfer capability estimation method using flexibility analysis is proposed, and the corresponding adjustment strategies are given. Finally, a case study on the IEEE 30-bus test system is carried out, and conclusions are drawn.
II. MOTIVATION
From the introduction above, it can be found that both the upper and the lower bounds of system transfer capability (PLmax and PLmin-max) are extreme cases of load increasing. Their numerical relationship is shown in Fig. 1. In Fig. 1, point $M_1(P_{L1}, Q_{L1})$ refers to the current system operating point, where $P_{L1}$ and $Q_{L1}$ are the sums of system active and reactive power loads respectively.
Assume that the load on each node keeps a constant power factor as load increases; then the system load increasing direction is constrained by the maximum and minimum load power factor angles among the nodes. Namely, in Fig. 1, the angle $\varphi_{L\min}$ of line L1 and the angle $\varphi_{L\max}$ of line L2 can be obtained from (1) as follows:

$$\varphi_{L\min} = \min_i \varphi_{Li}, \qquad \varphi_{L\max} = \max_i \varphi_{Li}, \tag{1}$$

where $\varphi_{Li}$ is the load power factor angle of node $i$.
Thus, the possible system load increasing directions are between line L1 and line L2.
Point $M_2(P_{L2}, Q_{L2})$ refers to the min-max system transfer capability point, where the direction $\overrightarrow{M_1 M_2}$ is the worst system load increasing direction.
From the analysis above, it can be seen that PLmin-max refers to a load increment that can be realized along an arbitrary load increasing direction without causing system instability or safety constraint violations, so the grey area in Fig. 1 bounded by lines L1 and L2 and the min-max boundary through $M_2$ is the absolutely feasible system load increasing area. In other words, there is a huge difference in system transfer capability evaluation between the most ideal and the worst load increasing directions, and power system asset utilization can correspondingly differ between load increasing modes. The larger area bounded by lines L1 and L2 and the maximum transfer capability boundary is the possibly feasible system load increasing area, where feasibility depends on the load increasing direction (load increasing mode).
In practical power systems, power loads will increase along neither the worst direction $\overrightarrow{M_1 M_2}$ nor the optimal direction, as there are huge amounts of uncertainty in power load growth. In Fig. 1, the absolutely feasible system load increasing area is certain, but the system is much more likely to operate in the possibly feasible load increasing area when power load increases, because this area can accommodate system load increments without extra investment in power equipment, so that system equipment utilization can also be improved. As this area has a clear extension but an unclear intension, the system transfer capability problem is actually a "grey" problem [11]. For the reasons above, existing power system transfer capability studies are theoretical ones focusing on the two extreme cases, while the "grey" area between the two extremes can hardly be described clearly.
III. NODE LOADABILITY EVALUATION
Node loadability analysis has significant reference value for system transfer capability analysis. Thus, a simple 2-bus system, shown in Fig. 2, is analysed first. In Fig. 2, $\dot U_1$ and $\dot U_2$ are the node voltages, $R$ and $X$ are the resistance and reactance of the transmission line, and $P + jQ$ is the load power.
From KCL (Kirchhoff's current law) and KVL (Kirchhoff's voltage law), the relationship between $\dot U_1$ and $\dot U_2$ can be formulated as

$$\dot U_1 = \dot U_2 + \frac{P - jQ}{\dot U_2^{*}}\,(R + jX). \tag{2}$$

As the node voltage amplitude mainly depends on the real part of the voltage, an approximate equation can be obtained as

$$U_1 - U_2 \approx \frac{PR + QX}{U_2}, \tag{3}$$

from which it can be seen that the voltage drop is approximately proportional to $PR + QX$. On the other hand, if $U_1$ is fixed, the node loadability of node 2 considering the lower limit $U_2^{\min}$ of $U_2$ can be approximately expressed as

$$PR + QX \le \left(U_1 - U_2^{\min}\right) U_2^{\min}. \tag{4}$$

Then, with a constant load power factor ($Q = P\tan\varphi$), the limit of node loadability can be obtained as

$$P_{\max} \approx \frac{\left(U_1 - U_2^{\min}\right) U_2^{\min}}{R + X\tan\varphi}. \tag{5}$$

From (5), it can be seen that the maximum node loadability is determined by the voltage drop margin available between $U_1$ and $U_2^{\min}$. From (3) and (5), it can be concluded that, when the node voltage limit is regarded as the main factor affecting node loadability, the node voltage drop at the current node load power reflects the potential node loadability to some extent. However, practical power systems are not as simple as the system above. There may be more than one transmission path for a load, and the load on one node may receive different amounts of power from different power resources according to the grid dispatching mode. In such situations, it is doubtful whether the node voltage amplitude can still reflect the node loadability.
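To make the approximations concrete, the short sketch below evaluates (3) and (5) for a hypothetical per-unit line; all numbers are illustrative assumptions, not data from the paper.

```python
# Illustrative evaluation of approximations (3) and (5); all per-unit
# values are hypothetical and chosen for demonstration only.
import math

R, X = 0.02, 0.08        # line resistance and reactance (p.u.)
U1, U2_min = 1.05, 0.95  # sending-end voltage and lower voltage limit (p.u.)
pf = 0.9                 # assumed constant load power factor
tan_phi = math.tan(math.acos(pf))

def voltage_drop(P, U2):
    """Approximate drop from (3): (P*R + Q*X)/U2 with Q = P*tan(phi)."""
    return (P * R + P * tan_phi * X) / U2

# Node loadability limit from (5): largest P keeping U2 above U2_min.
P_max = (U1 - U2_min) * U2_min / (R + X * tan_phi)
print(f"drop at P = 1.0 p.u.: {voltage_drop(1.0, U2_min):.4f} p.u.")
print(f"approximate node loadability P_max: {P_max:.3f} p.u.")
```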
For a given power system with determined power loads and operation constraints, the maximum transfer capability of a node is also determined, provided that generator power output adjustment is considered. But the node voltage amplitude at the current system load distribution can differ under different grid dispatching modes. Some measures must therefore be taken to guarantee the uniqueness of the node voltage amplitude if it is to be used as evidence for node loadability estimation.
In complex power systems, power is transferred from power resources to loads through the transmission network. If the load increases on only a single node and its lower voltage limit is regarded as the main factor that hinders the load increase, the load may grow until the node voltage reaches its lower limit. The situation is similar when single node loadability is analysed on other nodes.
Since single node loadability is closely related to the voltage drop incurred when power is transferred from the resources to the load through the transmission network, what happens if the upper voltage limits of the whole system are artificially reduced to a critical level? The voltage level of the whole system becomes lower as the upper limits are reduced, until the voltage on one or several load nodes reaches its lower limit.
There are reasons to believe that, compared with load nodes with relatively higher voltages, load nodes with relatively lower voltages in such a situation retain less node loadability: due to the grid structure and load distribution, more voltage drop has already been consumed on these nodes than on others. So the critical node voltage amplitude distribution of the load nodes at this point can serve as evidence for the approximate evaluation of the transfer capability of each node.
IV. METHODS AND PROBLEM FORMULATION
In this section, conventional rigid node voltage constraints are first expressed in flexible form and a system voltage flexibility index is defined. Then, a standard nonlinear optimization model is established to solve for the system operation state with the lowest voltage level. Finally, some adjustment measures are taken to make the estimation evidence more accurate.
A. Flexible Expression of Node Voltage Constraints
Node voltage constraints are among the most important and most widely considered safety constraints in power system analysis and optimization. They are usually expressed in rigid form as

$$U_i^{\min} \le U_i \le U_i^{\max}, \tag{6}$$

where $U_i^{\min}$ and $U_i^{\max}$ are the lower and upper voltage limits of node $i$ respectively. From (6), it can be seen that both the upper and lower limits of the node voltage constraints are artificially set and their values are fixed. In other words, they are rigid constraints without any flexibility.
As introduced above, the upper limits of the node voltage constraints are artificially reduced in order to find a critical system operation state with the lowest voltage level.
Node voltage constraints are then expressed in flexible form as

$$U_i^{\min} \le U_i \le U_i^{\max} - \eta_i\,\Delta U_i, \tag{7}$$

where $\Delta U_i$ is the variation of the upper voltage limit of node $i$, and $\eta_i$ is its variation factor.
If $\Delta U_i$ in each flexible node voltage constraint is valued as in (8),

$$\Delta U_i = U_i^{\max} - U_i^{\min}, \tag{8}$$

then the value range of $\eta_i$ is $[0, 1]$. From (7) and (8), it can be seen that $\eta_i = 0$ corresponds to the largest voltage constraint domain, whereas $\eta_i = 1$ corresponds to the smallest domain. So $\eta_i$ can be regarded as the node voltage flexibility index of node $i$. By this method, conventional rigid node voltage constraints are transformed into flexible form, and the desired critical system operation state can be obtained from a standard nonlinear optimization model as follows.
B. Solution Model of Critical System Voltage Level
Assume that $\eta_i$ is equal for each node, namely $\eta_i = \eta$; then a system operation state with a critical node voltage level can be obtained by solving the optimization model

$$\begin{aligned}
\max\ & \eta \\
\text{s.t.}\ & g_P(u, x) = 0, \quad g_Q(u, x) = 0, \\
& P_{Gi}^{\min} \le P_{Gi} \le P_{Gi}^{\max}, \quad Q_{Gi}^{\min} \le Q_{Gi} \le Q_{Gi}^{\max}, \\
& U_i^{\min} \le U_i \le U_i^{\max} - \eta\,\Delta U_i, \quad i = 1, 2, \dots, n, \\
& |S_l| \le S_l^{\max}, \quad l = 1, 2, \dots, L,
\end{aligned} \tag{9}$$

where $u$ and $x$ refer to the vectors of control and state variables respectively; the equality constraints $g_P$ and $g_Q$ are the node active and reactive power balance equations; the inequality constraints are the generator active and reactive power output constraints, the flexible node voltage constraints, and the transmission line capacity constraints; and $n$ and $L$ are the total numbers of nodes and transmission lines in the system respectively.
In model (9), the upper voltage limit of each node is reduced in the same ratio. With such a reduction, a unique critical node voltage amplitude distribution is obtained when $\eta$ is maximized within the constraints.
In the power flow results obtained, the system power loads are transferred in a way that minimizes the overall system voltage loss, so the critical node voltages obtained roughly reflect the comprehensive voltage loss level of each node's power receiving channels.
Optimization model (9) is a standard multi-dimensional nonlinear programming problem. An interior point algorithm [12] is used to solve it in this paper.
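As a rough illustration of how maximizing $\eta$ in model (9) drives the system to a critical voltage level, the sketch below applies the idea to a hypothetical radial network in which each load node hangs off the slack bus and voltages follow the 2-bus approximation (3). A simple bisection stands in for the paper's interior point solver, and all line data and loads are invented for demonstration.

```python
# Toy illustration of the flexible constraints (7)-(9): a hypothetical
# star network whose load nodes are fed radially from the slack bus.
# Bisection on eta replaces the paper's interior point solver.
import math

Umin, Umax = 0.95, 1.05
dU = Umax - Umin
# per node: (R, X, P, Q) in p.u., illustrative only
nodes = {2: (0.02, 0.08, 0.8, 0.3),
         3: (0.04, 0.12, 0.5, 0.2),
         4: (0.01, 0.05, 0.9, 0.4)}

def node_voltage(U1, R, X, P, Q):
    # High-voltage root of U*(U1 - U) = P*R + Q*X, from approximation (3);
    # an unsolvable node is treated as infeasible (-inf).
    disc = U1 ** 2 - 4 * (P * R + Q * X)
    return (U1 + math.sqrt(disc)) / 2 if disc >= 0 else float("-inf")

def min_load_voltage(eta):
    U1 = Umax - eta * dU          # slack bus held at its reduced upper limit
    return min(node_voltage(U1, *d) for d in nodes.values())

lo, hi = 0.0, 1.0                 # largest eta keeping all voltages >= Umin
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if min_load_voltage(mid) >= Umin else (lo, mid)

eta = lo
U1 = Umax - eta * dU
print(f"critical flexibility index eta* = {eta:.3f}")
for i, d in nodes.items():
    print(f"node {i}: critical voltage = {node_voltage(U1, *d):.4f} p.u.")
```

In this toy run, the node whose critical voltage lands on the lower limit is the one with the least remaining loadability, which is exactly the estimation evidence the method extracts.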
C. Adjustment Strategies
In the section above, the node voltage constraint is regarded as the most important factor affecting node loadability. But as model (9) shows, there are other constraints in practical power system analysis, such as generator power output constraints and transmission line capacity constraints, which may also affect the results of node loadability analysis.
Thus, adjustments to the critical node voltage values obtained from model (9), according to the generator power output distribution and the transmission line power flow distribution, are necessary to make the results more suitable for node loadability distribution estimation.
The adjustment objects are load nodes directly connected to generators, or load nodes connected to generators through a node without loads; in addition, the power flow direction must be from the generators to the nodes. Such load nodes will be referred to as "related load nodes" in the description of the adjustment strategies below.
The proposed adjustment strategies follow the three rules below (a code sketch follows the list).
1. Rule 1: if both the generator and the transmission path retain margins, replace the node voltage of the related load node with the generator node voltage;
2. Rule 2: if either the generator or the transmission line retains little or no margin, make no adjustment to the related load node;
3. Rule 3: if the related load node acts as both a load node and an intermediate node (or generator node), adjust flexibly according to the power flow results.
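A minimal sketch of Rules 1 and 2 in code form is given below; the data structures (margin dictionaries and the mapping from related load nodes to their feeding generator and line) are hypothetical assumptions, and Rule 3 is left to case-by-case judgment, as in the paper.

```python
# Hedged sketch of adjustment Rules 1-2; all inputs are hypothetical.
def adjust_critical_voltages(critical_U, related, gen_margin, line_margin,
                             tol=1e-3):
    """related: {load_node: (gen_node, line_id)} for related load nodes."""
    adjusted = dict(critical_U)
    for load, (gen, line) in related.items():
        # Rule 1: both the generator and the path retain margin.
        if gen_margin[gen] > tol and line_margin[line] > tol:
            adjusted[load] = max(adjusted[load], critical_U[gen])
        # Rule 2: otherwise the related load node is left unchanged.
    return adjusted

# Toy usage with invented numbers:
U = {1: 1.02, 2: 0.97, 5: 0.96}
print(adjust_critical_voltages(U, {2: (1, "1-2")},
                               gen_margin={1: 0.4},
                               line_margin={"1-2": 0.2}))
```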
V. CASE STUDY

The IEEE 30-bus test system [13] is taken as the case study in this paper. It contains 24 PQ nodes, 5 PV nodes, and 1 Vθ node. The base values of the per-unit system are SB = 100 MVA and UB = 135 kV. The voltage limits are assumed as follows: $U_i^{\min} = 0.95$ p.u. for all nodes; $U_i^{\max} = 1.05$ p.u. for PQ nodes; $U_i^{\max} = 1.10$ p.u. for PV and Vθ nodes.
The system connection diagram is shown in Fig. 3. The system operation state with the critical node voltage distribution is obtained by solving model (9). The critical situation appears when $\eta = 0.509$, and the node voltages obtained are listed in per-unit values in Table I. The generator power outputs and their limits are listed in per-unit values in Table II. On the other hand, single node loadability is evaluated node by node; the optimization results for each node are listed in per-unit values in Table III, where node loadability is not considered for nodes without loads in the initial state. From Table III, it can be seen that there are 20 nodes with loads in the initial state, namely nodes 2, 3, 4, 7, 8, 10, 12, 14 to 21, 22, 23, 25, 28, and 29. The power factors of the increased loads are the same as those of the initial loads. The maximum transfer capability (active power) of each node is compared with the corresponding node voltages listed in Table I; the schematic comparison is shown in Fig. 4, where the black line refers to the node voltage distribution (left vertical axis) and the grey line refers to the single node loadability distribution (right vertical axis).
In Fig. 4, an intuitive correlation appears between the two series. The regression analysis tools in MS Excel 2003 are used for the mathematical analysis.
The regression analysis results show that the correlation coefficient is 0.6515, which corresponds to a moderately strong positive correlation [14], and the probability of a type 1 error (Significance F) is 0.00186, which means the confidence of the model is greater than 99.8 %.
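For readers reproducing this step outside Excel, the snippet below shows how such a correlation coefficient and significance level can be computed; the two arrays are hypothetical placeholders for the Table I voltages and the Table III loadabilities (the actual values are in the paper's tables).

```python
# Regression statistics analogous to the Excel analysis described above;
# the arrays are invented stand-ins for the tabulated data.
from scipy import stats

critical_U  = [1.010, 1.006, 1.022, 0.995, 0.987, 1.001]   # p.u., invented
loadability = [1.80, 1.50, 2.40, 1.10, 0.90, 1.35]          # p.u., invented
res = stats.linregress(critical_U, loadability)
print(f"correlation coefficient r = {res.rvalue:.4f}")
print(f"significance (p-value)    = {res.pvalue:.4g}")
```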
The system maximum transfer capability PLmax is then calculated. From the optimal result, it can be seen that single node loadability analysis is a significant basis for system transfer capability analysis, as nodes 2 and 23 undertake more than 80 % of the active power load increment when the system maximum transfer capability is reached.
Then adjustment measures are taken according to the proposed strategies; the measures are listed in Table IV. The schematic comparison between the adjusted node voltages and the node loadability is shown in Fig. 5, where a much stronger positive correlation appears between the two series. Regression analysis shows that the correlation coefficient is now 0.8493, and the probability of a type 1 error is 2.174 × 10⁻⁶. In other words, there is a quite strong positive correlation between the two series, and the confidence of the model is now greater than 99.999 %.
A more detailed regression analysis of the estimation accuracy is compared in Table V, from which it can be seen that a stronger positive correlation appears after the adjustment measures, showing that the adjustment measures are effective.
VI. CONCLUSIONS
System transfer capability analysis is an attractive research topic in power system evaluation. Existing methods are deterministic methods, which try to find the optimal or the worst load increasing directions, and system voltage collapse is usually used as the limit criterion. As analysed in this paper, there are two main flaws in such analysis methods. Besides, other system safety constraints such as generator power output limits and transmission line capacity limits can also significantly affect system transfer capability, so they should be taken into consideration in system transfer capability evaluation.
Considering that node voltage loss is the main factor that hinders node loadability, node voltage constraints are first transformed from the conventional rigid form into a flexible form, and the upper limits of node voltages are continually reduced until a critical system voltage level appears. The resulting critical node voltage distribution shows a moderately strong positive correlation with single node loadability. Then, adjustment measures are applied to the critical node voltage values according to the other system operation constraints, after which the correlation becomes much stronger.
Compared with node-by-node loadability calculation in a power system with nl load nodes, the proposed method can save about (nl − 1)/nl of the calculation effort in evaluating the node loadability distribution. Further research may focus on how to further improve the estimation accuracy.
Fig. 5. Schematic comparison between adjusted critical node voltage distribution and node loadability distribution.
TABLE I. CRITICAL NODE VOLTAGES.
TABLE II. GENERATOR POWER OUTPUTS AND THEIR LIMITS.
TABLE III. NODE LOADABILITY FOR EACH NODE.
TABLE IV. ADJUSTMENT STRATEGIES.
Comments on the adjustment measures:
1. Node 2 is a load node as well as a generator node, and it is linked with generator node 1, so $U_2$ is replaced with the greater of $U_1$ and $U_2$ (Rule 1);
3. Node 12 is adjusted although generator node 13 retains no active power margin, as most of the active power generated by generator 13 in this situation is transferred to other loads through node 12: when the load increases on node 12, generator 13 can undertake the increment and the resulting power vacancy can be filled by other generators (Rule 3);
4. Node 21 is not adjusted, as line 22-21 is undertaking a heavy power flow and retains little margin (Rule 2);
6. Nodes 6, 8, 24, 26, 29, and 30 are not adjusted, as generator node 27 is the only power resource in that area and it retains no active power margin (Rule 2);
7. Other node voltages are adjusted according to Rule 1.
TABLE V. COMPARISON OF ESTIMATION ACCURACY.
|
2018-12-07T18:01:37.675Z
|
2014-09-06T00:00:00.000
|
{
"year": 2014,
"sha1": "a93b1a6d41524819793aa48efae572456ddb7d4b",
"oa_license": "CCBY",
"oa_url": "https://eejournal.ktu.lt/index.php/elt/article/download/7267/3685",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a93b1a6d41524819793aa48efae572456ddb7d4b",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
}
|
225382105
|
pes2o/s2orc
|
v3-fos-license
|
ATVNP: ANTHROPOGENIC TEMPORAL VARIATION OF NO2OVER PAKISTAN
Life on Earth exists because of the atmosphere that surrounds it. As time passes, population increases, and with it anthropogenic activities increase, adversely affecting our atmosphere; this is why the temperature of cities is soaring. Our atmosphere is occupied by different gases, whose increase or decrease can substantially affect our environment. The major air pollutants due to human activities are carbon monoxide (CO), carbon dioxide (CO2), nitrogen dioxide (NO2), ozone (O3), sulfur dioxide (SO2) and particulate matter (PM). Among these pollutants, NO2 plays a big role, as it is produced by road traffic and the combustion of fossil fuels. In this paper, we investigated NO2 in the Pakistan troposphere through the Sentinel-5 Precursor (S5-P) satellite. Data from S5-P, with the TROPOspheric Monitoring Instrument (TROPOMI) as payload, became available in July 2018, with a spatial resolution nine times higher than that of OMI. S5-P, launched by the European Space Agency (ESA) with a one-day revisit cycle, has the capability to sense all atmospheric gases. Our area of study is Pakistan. We processed S5-P datasets in Google Earth Engine (GEE) and produced results for the four seasons of NO2 during 2018-2019. Different regions of Pakistan that have excess NO2 in the troposphere are also shown. This increase is supported by the fact that, with time, the increase in urban population causes dramatic negative effects on the atmosphere. Compared to traditional methods, this study will substantially increase the capability of the government and policy makers to take timely action on anthropogenic activities in the mentioned cities, in order to mitigate NO2 emissions. Our findings illustrate the decrease of NO2 in summer and its surge in autumn. In autumn, Karachi, Sheikhupura, Raiwind, Lahore, Jamber, Faisalabad and Rawalpindi have the highest concentrations of NO2. In winter, excess NO2 spots over Karachi, Sheikhupura, Lahore, Raiwind, Jamber and Rawalpindi are detected. After winter, the spring season shows a further decrease in NO2 concentration, with Karachi, Dera Ghazi Khan, Sheikhupura, Rawalpindi and Lahore having the highest concentrations, and in summer the excess NO2 in the Pakistan troposphere is further reduced, to the cities of Sheikhupura, Raiwind and Jamber.
I. Introduction
In science, the word "environment" refers to the composition of the Earth's atmosphere. The Earth's atmosphere is a combination of five sub-layers: the troposphere, stratosphere, mesosphere, thermosphere and exosphere. There are no physical boundaries between these layers, only imaginary lines at different heights, as shown in Fig. 1, where the next layer starts [XX].
An imaginary line known as the Karman line, at a height of 100 km from the Earth's surface, separates space from the atmosphere [XXIV]. The part of the atmosphere that is the subject of our study is the troposphere, the first layer above the Earth's surface, extending up to approximately 20 km. "Tropos" means "change" [XX]; the name reflects the constantly changing weather, and most atmospheric gases mix in this portion [XX].
Our environment plays a key role in the health of the inhabitants of Earth, so it is essential to keep our eyes on the environment and monitor it for anomalies. The major air pollutants are carbon dioxide (CO2), carbon monoxide (CO), nitrogen dioxide (NO2), ozone (O3), sulfur dioxide (SO2) and particulate matter (PM). A pollutant that can severely affect the health of living things is nitrogen dioxide (NO2) [XIII]. NO2 is a red-brown acidic gas [I].
It belongs to the highly reactive group of gases known as "oxides of nitrogen" or "nitrogen oxides (NOx)", together with nitrous acid and nitric acid [VII]. Its presence in the troposphere adversely impacts human health and visibility. It also contributes to the formation of tropospheric ozone (O3), fine particle pollution, summer smog and acid rain [XXVIII]. Previous studies have claimed that exposure of crops to NO2 can alter their growth rate and may increase the growth rate of fungal pathogens and herbivorous insects [XXVI]. Among the many sources of NO2, the main ones are natural lightning, soil emissions, biomass fuel burning, industrial burning processes, and crop residue burning [XXII]. According to environmental specialists, climate variability is strongly linked with NO2 [XXXI].
As the atmosphere contains 78% nitrogen gas (N2), its oxidation in air gives nitrogen oxides, but some are produced when organic nitrogen fuels are burnt, which is an anthropogenic process [VIII]. NO2 in the troposphere has increased exponentially since the middle of the 20th century [XXXI]. Not only is NO2 damaging the ecosystem, it also causes significant health issues, contributing to a range of respiratory problems.
Fig. 1: The Earth's Atmosphere
Pakistan is the 2nd most polluted and 5th most populated country in the world [V]. Rapid growth in industry and population, deforestation, and energy crises are leading to a massive increase of NO2 in the troposphere of Pakistan, as shown in Fig. 2. Therefore, in order to develop strategies for the reduction of NO2 in the troposphere of Pakistan, a temporal analysis of NO2 is much needed. Traditional methods of NO2 measurement (ground-based and airborne) are temporally and spatially limited, but satellites can measure NO2 temporally and spatially with global coverage [VI]. Compared to traditional methods, remote sensing has substantially increased the capability of decision makers to take appropriate measures [II].
Fig. 2: Sources of NO2 in the Troposphere
Recently, satellite remote sensing of tropospheric NO2 has been effectively used to study the spatial patterns of NO2 at local, regional, and global scales [XVIII], [XIX], [XXX]. One of the emerging applications of remote sensing is pollution monitoring through S5-P. The Copernicus S5-P satellite, with TROPOMI as payload, was successfully launched in October 2017 by the European Space Agency (ESA) [XXIX]. TROPOMI is a spectrometer measuring in the UV, visible, near-infrared and short-wave infrared, which allows the retrieval of trace gas species like O3, NO2, HCHO, SO2, CO and CH4, and aerosol aspects like the aerosol index [XIV]. TROPOMI has full global coverage each day, with a much improved resolution (3.5 × 7 km²) compared to the instrument that has provided measurements since 2004 [XXV]. Compared to OMI and GOME, the S5-P observations are expected to be of significant importance for estimating pollutant concentrations and emissions at the scale of smaller towns, individual power plants, wildfires and major infrastructures [XI]. The rest of the paper is organized as follows. Section II reviews the recent literature on satellite-based detection of NO2. Section III describes the experimental setup and the tools used in this work. Section IV presents our study area. Section V reports the observations and discussion, showing the hotspots of NO2 over Pakistan. The final section concludes the paper and summarizes our work.
II. Related Work
As our area of study is Pakistan, there is not enough research on satellite-based environmental monitoring of Pakistan. In this section, we review some up-to-date work on the remote detection of NO2 in Pakistan and some other countries.
In [XXVIII], the authors used OMI (Ozone Monitoring Instrument) data from December 2004 to November 2008 to detect NO2 over Pakistan. The results showed that Islamabad, Rawalpindi, Lahore, Dera Ghazi Khan and Karachi have the highest concentrations of NO2 in the Pakistan troposphere. The authors also explored the main causes of NO2 in these cities, which were soil emissions, fossil fuel burning, industrial burning and motor vehicles.
In [XXI], the authors measured NO2, SO2, and CO concentrations in Dalian, China and Faisalabad, Pakistan from January to December 2013. The measured values were cross-checked against ambient air quality standards such as the National Environmental Quality Standards (NEQS) of Pakistan, NAAQS-USEPA, CNAAQS-China, and the global WHO standard. The comparison showed that the annual average NO2 concentration in Faisalabad, Pakistan was higher than the NEQS, USEPA, CNAAQS, and WHO limits, while there was a slight decrease for Dalian. Comparing the NO2 concentrations in the two cities, the authors stated that, to mitigate negative health impacts, air quality in Faisalabad, Pakistan should be controlled.
In [III], the tropospheric air quality of Lahore, Pakistan was studied over the period from June to August. The highest concentration of NO2 in the troposphere of Lahore was recorded for the month of June. Industrial areas were also observed to be more polluted than residential and commercial areas, as fuel combustion is greater in industrial areas.
To investigate the spatiotemporal variability of NO2 in the troposphere of South Asia, OMI (Ozone Monitoring Instrument) data from October 2004 to January 2015 were used in [XXVII]. NO2 hotspots over some of the most populated cities and industrial areas were shown, revealing an average increase of 14% in NO2 concentration over the study region. The highest increase was found for Dhaka (Bangladesh) and the lowest for Karachi (Pakistan). A strong seasonality of NO2 concentration was observed, with the highest value in March and the lowest in August.
In [XXXI], NO2 concentrations in the troposphere of China during 2018 were investigated using the S5-P satellite. A coherence analysis with the NO2 surface monitoring concentrations released by the China Urban Air Quality Monthly Report reflects the high correlation between the NO2 column concentration retrieved by TROPOMI and the measured surface concentration, which reveals the great potential of the TROPOMI NO2 column concentration for indicating urban surface air pollution conditions. The results showed a seasonal NO2 variation in which the highest concentrations were recorded in winter and the lowest in summer; in addition, the NO2 spatial distribution was high in the east and low in the west.
In [X], the authors investigated the NO2 concentration in the troposphere of India using the Global Ozone Monitoring Experiment (GOME) and the SCanning Imaging Absorption spectroMeter for Atmospheric CHartographY (SCIAMACHY) over the period 1996 to 2006. Different regions with maximum seasonal concentrations of NO2 were identified. The NO2 concentration was maximal during the winter and summer seasons and lowest during the monsoon season. The results were cross-checked against surface-level measurements of NO2. The excess NO2 in summer and winter was due to enhanced biomass activity, thermal power plants, densely populated regions, and large urban and industrial regions in India. Excess NO2 was found over the Mumbai-Gujarat industrial corridor, Delhi, and the coal mine regions of east and northeastern India. The increasing rate was attributed to industry and exponential population growth.
In [IV], the authors used the Global Ozone Monitoring Experiment (GOME) to measure weekly cycles of tropospheric NO2 concentration globally. Minimum NO2 on Sunday was recorded for industrial regions and cities in the US, Europe and Japan; compared to working days, Sunday NO2 is about 20-50% lower. Weekly NO2 patterns were correlated with religious and cultural background. NO2 concentrations in China were shown to be independent of any weekly cycle. Shifted NO2 cycles were also observed, depending on religion and culture: Israel showed minimum NO2 on Saturday, and some Islamic countries showed the weekly minimum on Friday, due to minimal anthropogenic activities on that day.
The study in [IX] identified NO2 hotspots, using GOME and SCIAMACHY, over different regions of the world during 1996 to 2006. The study suggested that tropospheric NO2 column amounts are higher in some industrializing developing regions. Furthermore, NO2 hotspots were shown for the troposphere over China, South Asia, the Middle East, the eastern US, Europe and South Africa; among them, the highest concentration was recorded for South Asia and the lowest for Europe.
In [XVII], the authors investigated the troposphere of Turkey for NO2 vertical column densities (VCDs) using Sentinel-5P, recently launched by ESA. They collected S5-P data from July 2018 to January 2019 and calculated the mean NO2 values over different regions during this period. These NO2 VCDs over different regions were compared with the respective populations recorded in 2017, and a high correlation of about 0.72 was found between the NO2 values and population.
The study in [XVI] compares column NO2 measurements from PSI (ground-based spectrometers) with those from OMI and TROPOMI during the 2018 OWLETS-2 campaign. Comparisons were performed at two sites: NASA Goddard Space Flight Center (GSFC) and the University of Maryland, Baltimore County (UMBC). TROPOMI's higher resolution allowed the mean satellite-PSI agreement to fall within 10% at the less polluted GSFC site and within 20% at the more polluted UMBC site. In addition, statistically significant correlations between satellite and ground-based NO2 measurements were found at both sites.
III. Experimental Setup
We processed S5-P level 3 datasets in Google Earth Engine (GEE). GEE is a freely accessible, cloud-based platform for planetary-scale geospatial analysis that brings Google's massive computational capabilities to bear on a variety of high-impact societal issues, including deforestation, drought, disaster, disease, food security, water management, climate monitoring and environmental protection.
It is unique in the field as an integrated platform designed to empower not only traditional remote sensing scientists, but also a much wider audience that lacks the technical capacity needed to utilize traditional supercomputers or large-scale commodity cloud computing resources [XII]. We processed S5-P datasets for the four seasons of 2018 to 2019, as shown in Table 2. For each season, the temporal mean of its data is calculated. The NO2 concentration over individual cities of Pakistan is shown with different colors: "blue", "purple", "cyan", "green", "yellow" and "red"; the more reddish the color, the higher the amount of NO2 in that area.
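The seasonal compositing workflow just described can be sketched with the Earth Engine Python API as below. The collection ID and band name follow the public S5-P OFFL NO2 catalog entry; the date range, the Pakistan bounding box and the visualization stretch are illustrative assumptions rather than the exact values of Table 2.

```python
# Sketch of one seasonal NO2 composite over Pakistan in GEE (Python API).
# Dates, bounding box and stretch values are illustrative assumptions.
import ee

ee.Initialize()

pakistan = ee.Geometry.Rectangle([60.9, 23.6, 77.8, 37.1])  # approx. extent
autumn_no2 = (ee.ImageCollection("COPERNICUS/S5P/OFFL/L3_NO2")
              .select("tropospheric_NO2_column_number_density")
              .filterDate("2018-09-23", "2018-12-21")   # hypothetical season
              .filterBounds(pakistan)
              .mean()                                   # temporal mean
              .clip(pakistan))

vis = {"min": 0.0, "max": 2e-4,                         # mol/m^2, assumed
       "palette": ["blue", "purple", "cyan", "green", "yellow", "red"]}
url = autumn_no2.getMapId(vis)["tile_fetcher"].url_format
print("tile URL for map display:", url)
```

The palette mirrors the color scale named above, so the most reddish pixels correspond to the NO2 hotspots discussed in the results.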
IV. Study Area
Our area of study is Pakistan, as it is the 2nd most polluted country in the world. Pakistan's latitude is 30.3753° N, which places it in the northern hemisphere, and its longitude is 69.3451° E, which places it in the eastern hemisphere. According to recent research [V], Pakistan is the 5th most populated country, with a total area of 796,095 km² (770,875 km² of land and 25,220 km² of water).
V. Observation and Discussions
We retrieved season-wise S5-P NO2 data for Pakistan using the time periods for the seasons given in Table 2, and the seasonal behavior of NO2 in the Pakistan troposphere is discussed. The troposphere over approximately 23 major cities of Pakistan, listed in Table 3, is investigated.
VI. Conclusions
In this paper, we investigated the seasonal behavior of the NO2 concentration in the troposphere of Pakistan through the Sentinel-5P satellite. We processed the temporal datasets of each season, from 2018-09-23 to 2019-09-23, in Google Earth Engine; the details of the time periods are shown in Table 2. Our results show that the overall NO2 concentration in the Pakistan troposphere increases from the summer to the autumn season. The seasonal maximum and minimum of NO2 occur in autumn and summer respectively, and the seasonal ordering, from highest to lowest, is found to be 1) autumn, 2) winter, 3) spring and 4) summer, as shown in Fig. 7.
|
2020-09-10T10:07:28.428Z
|
2020-08-18T00:00:00.000
|
{
"year": 2020,
"sha1": "c361e598cfb072b918946392861051fa3a953b04",
"oa_license": null,
"oa_url": "https://www.journalimcms.org/wp-content/uploads/50-jmcms-2008111-ATVNP1-TO-JMCM-Sheeraz-1-09-7-2020.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "aa6fff6ebe954ad6e84d961487dfcba4513a9b3f",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Geography"
]
}
|
31570258
|
pes2o/s2orc
|
v3-fos-license
|
Biosynthesis of Oligomeric Anthocyanins from Grape Skin Extracts
We synthesized oligomeric anthocyanins from grape skin-derived monomeric anthocyanins, such as anthocyanidin and proanthocyanidin, by a fermentation technique using Aspergillus niger, crude enzymes and glucosidase. The biosyntheses of the oligomeric anthocyanins carried out by the conventional method using Aspergillus niger and crude enzymes were confirmed by ESI-MS. The molecular weight of the synthesized anthocyanin oligomers was determined using MALDI-MS. The yield of anthocyanin oligomers using crude enzymes was higher than that of the synthesis using Aspergillus fermentation. Several studies have demonstrated that oligomeric anthocyanins have higher antioxidant activity than monomeric anthocyanins. Fermentation-based synthesis of oligomeric anthocyanins is an alternative way of producing useful anthocyanins that could support the food industry.
Introduction
Anthocyanins are naturally occurring, water-soluble plant pigments belonging to the group of phytochemicals known as flavonoids [1]. Anthocyanins are present in many plants that display colorful flowers, and in different kinds of fruits and vegetables [2][3][4]. The quality and nutritional value of fruits and their products is commonly associated with the color derived from anthocyanins [5,6]. Anthocyanins are very useful for the food industry due to their good water solubility and safety, and they have been recognized internationally for their applications, including the replacement of synthetic colorants [7,8]. Anthocyanins have antioxidant activity, which contributes to many biological activities such as anticancer activity, cardiovascular protection, ocular protection and protection against some other chronic diseases [9][10][11][12]. Several studies have demonstrated that the oligomeric derivatives of anthocyanins have higher activity than the monomeric versions. For example, the anthocyanin oligomers derived from bilberry fruit, such as small anthocyanidin glycoside polymers, particularly in the form of dimers, trimers, tetramers and pentamers, have higher antioxidant activity than the monomers. These compounds are highly hydro- and liposoluble in nature and are not known to accumulate in the human body [13].
The biosynthesis of oligomeric anthocyanins is the best alternative to overcome the problem of their natural scarcity. At present, studies on the synthesis of anthocyanin oligomers are scarce, and only one related paper is available [13]. Aspergillus species such as Aspergillus niger, A. sojae and A. oryzae have long been used for the production of traditional fermented foods such as doenjang, cheonggukjang, soy sauce and sake in Asian countries [14,15]. Fungi are rich sources of citric acid [16], C8 volatiles [14] and many enzymes such as xylanase, cellulase [17], amyloglucosidase and exopolygalacturonase [18]. However, overcoming contamination is a big challenge in the industrial application of these fungi. The present study focuses on the synthesis of oligomeric anthocyanins by fermentation of monomeric anthocyanins with Aspergillus niger, as well as with crude enzymes derived from the fungus.
Results and Discussion
The oligomeric anthocyanins were successfully synthesized by fermentation using Aspergillus niger (Figure 1) as well as with crude enzyme (Figure 2), as confirmed by ESI-MS (Figures 3 and 4). The oligomeric anthocyanins showed higher peak values and higher molecular weights than the monomeric anthocyanins. The higher peak values might be attributed to the presence of a higher amount of oligomeric anthocyanins under similar experimental conditions [19]. It was confirmed that the yield of oligomeric anthocyanins derived from the fermentation with crude enzyme was better than that derived from the fermentation with Aspergillus niger (Table 1). We have previously reported the synthesis and characterization of anthocyanin oligomers produced by A. niger fermentation using anthocyanin monomers as substrate [13]. The molecular weight of the anthocyanin oligomers was determined using Matrix-Assisted Laser Desorption/Ionization Mass Spectrometry (MALDI-MS) [20]. In this study, the biosynthesis of oligomeric anthocyanins was detected using the relative abundance values of compounds estimated by ESI-MS. ESI-MS is an important technique for detecting femtomole quantities of sample, including non-volatile and thermally labile biomolecules that are difficult to analyze by other conventional techniques [21]. Liu et al. [22] detected the monomers, dimers, tetramers and hexamers of purified oligomeric proanthocyanidins using ESI-MS. Therefore, the present study also used ESI-MS to analyze the various structures of the oligomeric proanthocyanidins.
The monomeric anthocyanins, anthocyanidin and proanthocyanidin, give peaks at m/z 288 (Figure 3A) and m/z 381 (Figure 4A), respectively. The oligomeric anthocyanins synthesized from anthocyanidin monomers showed peak values of m/z 905 and 1193 (Figure 3B). Similarly, the oligomeric anthocyanins synthesized from the other monomer (proanthocyanidin) showed the following highest peak values: m/z 903, 1191 and 1479 (Figure 4B). Therefore, the difference in peak values before and after fermentation confirmed the synthesis of oligomeric anthocyanins using crude enzyme as well as by fermentation with Aspergillus niger. The amount of oligomeric anthocyanin synthesized with the fermented crude enzyme was higher than that synthesized by fermentation with Aspergillus niger. The crude enzyme was analyzed by LC-MS/MS, and the carbohydrate hydrolases identified are listed in Table 2. According to our literature search, some of the carbohydrate hydrolases mentioned in Table 2 were found to catalyze condensation reactions [23]. After the synthesis (Figure 8), the content of anthocyanin was determined based on the presence of glucosidase in the product (Figure 9). The results demonstrate the similarity of pattern between Figures 3 and 4. Figure 9 indicates a difference in molecular weight of m/z 288 between successive oligomeric anthocyanin peaks, which corresponds to the molecular weight of cyanidin. Consequently, the structure of the anthocyanins was presumed to be as shown in Figure 10.
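The repeating-unit reading of the spectra can be checked with a few lines of code; the peak lists below are the m/z values quoted above, and the m/z 288 spacing is the cyanidin unit.

```python
# Successive oligomer peaks should differ by the cyanidin unit (m/z ~ 288).
peaks_from_anthocyanidin    = [905, 1193]         # Figure 3B
peaks_from_proanthocyanidin = [903, 1191, 1479]   # Figure 4B
for peaks in (peaks_from_anthocyanidin, peaks_from_proanthocyanidin):
    diffs = [b - a for a, b in zip(peaks, peaks[1:])]
    print(peaks, "-> successive differences:", diffs)
```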
Comparing the yields of the oligomeric anthocyanins, the highest yield was obtained using crude enzyme (Table 1), but crude enzymes are difficult to obtain. Therefore, it might be more economical to use a commercially available glucosidase. In addition, as seen in Figures 4B and 9B, the oligomeric anthocyanins synthesized with glucosidase showed a higher oligomer content than those synthesized with crude enzyme.

The comparison of monomeric and oligomeric anthocyanins using HPLC with UV detection confirmed that the fractions have different patterns (Figure 11). The different patterns were fractionated and analyzed by LC/MS. In the second fraction, a single substance showing molecular weights of m/z 429 and 871 was identified at 7 min (Figure 12). It was assumed that the compound with molecular weight m/z 871 was the dimer and that with m/z 429 was the monomer. This assumption was corroborated by the results obtained by ESI-MS for the m/z 871 peak (Figure 13A). Based on the results of Figure 13, the m/z 429 compound consisted of anthocyanidin (m/z 310) and glucose, and the m/z 871 peak is the dimeric form of the m/z 429 species. An NMR study is needed for a better understanding of these molecular structures.

In summary, the biosynthesis of oligomeric anthocyanins using fermentation is an alternative approach to overcome the problem of their natural scarcity and to avoid the overexploitation of natural resources.
Materials
5-Dimethyl-1-pyrroline-N-oxide (DMPO), FeSO4, and H2O2 were purchased from Sigma Chemical Co. (St. Louis, MO, USA). KH2PO4, KCl and NaCl were purchased from Junsei (Tokyo, Japan). Saccharose, dextrose, urea, MgSO4, MnSO4, and ZnSO4 were purchased from Daejung (Siheung, Korea). Peptone G was purchased from Acumedia (Lansing, MI, USA). The grape skin-derived anthocyanins were purchased from Kitolife Co. Ltd. (Pyeongtaek, Korea).
Synthesis of Oligomeric Anthocyanin by Fermentation Using Aspergillus niger
The synthesis of oligomeric anthocyanins using Aspergillus niger was described by Lee et al. [13]. In this study, monomeric anthocyanins such as anthocyanidins and proanthocyanidins were used to synthesize oligomeric anthocyanins. The monomeric anthocyanin powders were fermented with Aspergillus niger at 25 °C in a shaking incubator for 5 days. The fermented cultures were centrifuged at 3000 rpm and 4 °C for 20 min. The supernatants were filtered with Whatman No. 41 filter paper, and the filtrate was freeze-dried in a freeze drier system (SFDSM06, Samwon, Busan, Korea) in order to obtain the synthesized oligomeric anthocyanins. The oligomeric anthocyanins produced by fermentation were characterized using Electrospray Ionization Mass Spectrometry (ESI-MS) at the Korea Basic Science Institute (KBSI, Ochang, Korea). The molecular mass values of the compounds were analyzed on a Synapt G2 HDMS quadrupole time-of-flight (TOF) mass spectrometer equipped with an electrospray ion source (Waters, Milford, MA, USA) in positive ion mode at a spray voltage of 2.5 kV. MS spectra were obtained with the capillary heated to 150 °C. The instrument was calibrated using NaF solution. The sample was dissolved in 100% MeOH and introduced by direct infusion at a flow rate of 20 µL/min into the ion source operating in positive mode. All spectra were acquired over the range 50 to 2500 m/z. Leucine enkephalin was used as the lock mass for exact mass measurement correction.
Separation of Fermented Crude Enzyme from the Culture of Aspergillus niger
The fungal strain Aspergillus niger was cultured in 100 mL of saccharose medium or potato dextrose agar (PDA) medium and incubated at 25 °C for 7 days in a shaking incubator. The Aspergillus niger culture medium was centrifuged at 3000 rpm and 4 °C for 20 min. The supernatant was precipitated with an equal volume of acetone at 4 °C overnight (10-12 h), and this mixture was then centrifuged at 3000 rpm and 4 °C for 20 min. After removal of the supernatant, the pellet was dissolved in 5 mL of distilled water and further centrifuged at 13,000 rpm and 4 °C for 5 min. The supernatant (crude enzyme) was freeze-dried in order to be used for the synthesis of the oligomeric anthocyanins.
Synthesis of Oligomeric Anthocyanin Using Crude Enzyme
The anthocyanin powder was fermented with crude enzyme at 25 °C in a shaking incubator for 7 days. The fermentation product was centrifuged at 4 °C and 3000 rpm for 20 min. The supernatant was filtered with Whatman No. 41 filter paper, and the filtrate was freeze-dried in the SFDSM06 freeze drier system in order to obtain the synthesized oligomeric anthocyanins. The concentration of oligomeric anthocyanin was examined by ESI-MS at KBSI.
3.6. Analysis of Crude Enzyme from the Culture of Aspergillus niger

SDS-PAGE gel slicing was used for LC-MS/MS analysis of the secretory proteins from Aspergillus niger. The proteins solubilized in urea lysis buffer containing 8 M urea and 4% CHAPS after acetone precipitation of the secretory fraction from Aspergillus niger were subjected to 6-15% SDS-PAGE and stained with colloidal Coomassie solution. For the mass spectrometry experiments, three of the protein lanes were excised from the gel and each cut into nine slices, followed by destaining, in-gel digestion and peptide extraction. The tryptic peptides obtained from each gel slice were analyzed by LC-MS/MS on a Q-STAR Pulsar ESI-hybrid Q-TOF instrument.
Synthesis of Oligomeric Anthocyanin Using Various Enzymes
The anthocyanin powder was fermented with various enzymes at 25 °C in a shaking incubator for 7 days. The fermentation product was centrifuged at 4 °C and 3000 rpm for 20 min. The supernatant was filtered with Whatman No. 41 filter paper, and the filtrate was freeze-dried in a freeze drier system (SFDSM06) in order to obtain the synthesized oligomeric anthocyanins. The concentration of oligomeric anthocyanin was examined by ESI-MS at KBSI.
Isolation of Oligomeric Anthocyanin and Analysis
The monomeric and oligomeric anthocyanins were further isolated using reversed-phase HPLC (RP-HPLC) on a C 18 column (4.0 × 250 mm) with a linear gradient of MeOH (0-60%) at a flow rate of 1.0 mL/min. The eluted peaks were detected at 272 nm. The collected samples were pooled and concentrated using a rotary evaporator, then lyophilized for 3 days. The lyophilized sample was further analyzed by LC/MS followed by ESI-MS at KBSI.
Statistical Analysis
The statistical analysis was carried out using the paired t-test (p < 0.05), with comparisons made between monomeric and oligomeric anthocyanins. The data are presented as mean ± SD. All analyses were performed using the SPSS software (SPSS Institute, Chicago, IL, USA).
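A minimal SciPy equivalent of this analysis is sketched below; the paired intensity values are hypothetical stand-ins for the monomer/oligomer measurements, since the actual data are only reported in the figures.

```python
# Paired t-test analogous to the SPSS analysis described above;
# the arrays are hypothetical paired measurements.
import numpy as np
from scipy import stats

monomer  = np.array([0.82, 0.77, 0.90, 0.71, 0.85])
oligomer = np.array([1.10, 1.02, 1.21, 0.95, 1.13])
t_stat, p_val = stats.ttest_rel(monomer, oligomer)
print(f"t = {t_stat:.3f}, p = {p_val:.4f}; significant: {p_val < 0.05}")
print(f"oligomer mean ± SD: {oligomer.mean():.3f} ± {oligomer.std(ddof=1):.3f}")
```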
Conclusions
Our results indicate that the synthesis of oligomeric anthocyanins using glucosidase from A. niger is better than that achievable by fermentation with A. niger. The synthesis of oligomeric anthocyanins was confirmed by ESI-MS and HPLC analysis. The present study successfully overcomes the problem of fungal contamination during the synthesis of oligomeric anthocyanins. Further studies are, however, required to assess the biological activities of the produced oligomeric anthocyanins.
|
2017-05-09T13:26:07.969Z
|
2017-03-01T00:00:00.000
|
{
"year": 2017,
"sha1": "ab032df3344abd371aeb65a53ff7a16a418e00f0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1420-3049/22/3/497/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ab032df3344abd371aeb65a53ff7a16a418e00f0",
"s2fieldsofstudy": [
"Biology",
"Chemistry",
"Environmental Science",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
}
|
119200536
|
pes2o/s2orc
|
v3-fos-license
|
More randomness from a prepare-and-measure scenario with independent devices
How to generate genuine quantum randomness from untrusted devices is an important problem in quantum information processing. Inspired by previous work on a self-testing quantum random number generator [T. Lunghi et al., Phys. Rev. Lett. 114, 150501 (2015)], we present a method to generate quantum randomness from a prepare-and-measure scenario with independent devices. In existing protocols, the quantum randomness depends only on a witness value (e.g., the Clauser-Horne-Shimony-Holt value), which is calculated from the observed probabilities. In contrast, in our method all the observed probabilities are used directly to calculate the min-entropy. Through numerical simulation, we find that the min-entropy of our proposed scheme is higher than that of the previous work when a typical untrusted Bennett-Brassard 1984 (BB84) setup is used. Consequently, thanks to the proposed method, more genuine quantum random numbers may be obtained than before.
I. INTRODUCTION
True randomness is an essential resource in quantum information processing and has multiple applications in numerical simulation, statistics, lottery games and cryptography. Since it is impossible to generate true random numbers by computer algorithms, most true random number generators are based on unpredictable physical processes. Recently, a variety of quantum random number generation (QRNG) schemes based on the intrinsic randomness of quantum theory have been proposed [1][2][3][4][5][6][7][8][9][10][11][12][13][14]. All of these schemes work essentially according to the same principle, exploiting the randomness of quantum measurements. However, the random numbers generated by these protocols rely on assumptions about the specific internal functioning of the devices. The output data can only be tested by statistical methods, such as the statistical test suite from NIST [15], and statistical methods cannot guarantee the true randomness of the output data. Furthermore, if the devices are spoiled or controlled by an adversary, the output data may be just pseudo-random numbers. To solve this problem, "Device-Independent" (DI) QRNG was proposed [16], which needs no knowledge of the internal functioning of the devices: the private randomness in DI protocols is certified by Bell inequality violation rather than by the details of the quantum devices. Unfortunately, such protocols are quite impractical under current technology, since they demand that the total efficiency be very high to avoid detection loophole attacks. Inspired by the DI approach to true randomness, Li et al. proposed the semi-device-independent (SDI) random number generation protocol [17]. The semi-device-independent approach works in a prepare-and-measure scenario in which no assumption is made on the internal functioning of the preparation and measurement devices, except that the dimension of the quantum system accessed by the measurement device is bounded [18]. However, this protocol still suffers from detection loophole attacks [19].
Last year, Bowles et al. proposed a new scheme based on a prepare-and-measure setup [21] and experimentally realized it [22]. This protocol (BQB14 for short) resembles an SDI protocol, but requires the assumptions that the preparation and measurement devices are independent and that the quantum system has bounded dimension. It uses a dimension witness value to characterize the quantum randomness of the system; since the witness value is given by an equality, the protocol can be used to generate randomness even with high channel loss. Here we present a novel QRNG protocol, also in a prepare-and-measure scenario with independent devices. The assumptions of our protocol are exactly the same as those of BQB14: we make no assumption on the functioning of the devices, except that the dimension of the preparation device is set to 2 and its hidden variables are independent of any other device. The key difference between our protocol and BQB14 is that we use all the observed probabilities, instead of a single witness value, as the indicator of the potential quantum randomness. In BQB14, and indeed in all SDI and DI protocols, one must use the observed probabilities to calculate a witness value and then use this witness value to calculate the quantum randomness of the output data. Unlike the existing protocols, we search over all possible quantum preparation and measurement processes consistent with all the observed probabilities to find the minimum real randomness of the output data. The merit of our method is that all the observed probabilities are used directly to calculate the randomness; thus, our method may extract more randomness than the existing protocols. Simulation results show that our protocol works with very low detection efficiency. With a typical prepare-and-measure setup (an untrusted BB84 setup [20]), we find that the entropy of the proposed protocol is higher than that of the BQB14 protocol.
2. For Bob there are two measurements $y = 0, 1$ with two outputs $b \in \{0, 1\}$. In general, the measurement $M_y = \{M_y^0, M_y^1\}$ should be a POVM. However, we first assume that $M_y$ is a projective measurement for simplicity; the POVM case will be analyzed later in this paper.
3. Alice and Bob observe the conditional probabilities $q(b|x, y)$. The task is to extract the genuine quantum randomness generated by the underlying quantum process from all the observed probabilities $q(b|x, y)$.
Before proceeding, we must model our device with hidden variables. To model the characteristics of the preparation device, we represent its internal state by a random variable $\lambda$. In each run of the experiment, the preparation device emits a qubit state $\rho_x^\lambda$ which depends on the setting $x$ and the internal hidden variable $\lambda$. Hence, when Alice inputs $x$, the device prepares $\sum_\lambda q_x^\lambda \rho_x^\lambda$. We assume that $\lambda$ is unknown to the legitimate users and also to any adversary. This is the key difference between our model and DI protocols: in DI protocols, the hidden variable $\lambda$ is planted by an adversary and is thus known to the adversary, whereas in our model the adversary only knows the distribution of $\lambda$ but not its exact value in each run. Our assumption on the preparation device is quite similar to that of BQB14. For the measurement device, since we have assumed that the adversary (including the measurement device) has no knowledge of the exact value of $\lambda$ in each run, the measurement device performs an unknown measurement $M_y$ that is independent of $\lambda$. As the observer has no access to the variable $\lambda$, he will only observe the distribution
$$q(b|x,y) = \sum_\lambda q_x^\lambda \,\mathrm{Tr}\!\left(\rho_x^\lambda M_y^b\right).$$
Without loss of generality, we can rewrite $\rho_x = \sum_\lambda q_x^\lambda |\Psi_x^\lambda\rangle\langle\Psi_x^\lambda|$. The task of the legitimate user is to estimate the amount of genuine quantum randomness generated in the setup based only on the observed distribution $q(b|x, y)$. Since we have assumed that $M_y = \{M_y^0, M_y^1\}$ is a projective measurement, the genuine quantum randomness of the output data under measurement $M_y$ is quantified by the maximum value of the guessing probability,
$$p_g = \sum_\lambda q_x^\lambda \max_b \mathrm{Tr}\!\left(|\Psi_x^\lambda\rangle\langle\Psi_x^\lambda| M_y^b\right).$$
The maximum value of the guessing probability $p_g$ reflects the genuine quantum randomness. Since the hidden variable $\lambda$ is unknown to the user, one should calculate $\max_\lambda p_g$ by searching over all possible distributions of $\lambda$ and decompositions of $\rho_x$. A general consideration of how to calculate this value is given in the next section.
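To make this hidden-variable model concrete, the following Python sketch (our own illustrative construction: the distributions $q_x^\lambda$, the Bloch vectors and the measurement directions are assumed numbers, not values prescribed by the protocol) evaluates the observed statistics $q(b|x,y)$ for a two-valued hidden variable:

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(s):
    """Qubit density matrix (I + s . sigma)/2 for Bloch vector s."""
    return 0.5 * (np.eye(2) + s[0] * SX + s[1] * SY + s[2] * SZ)

def projective(t):
    """Projective measurement {M^0, M^1} along Bloch direction t."""
    m0 = rho(t)                    # projector onto the +t eigenstate
    return m0, np.eye(2) - m0

# Hidden-variable model: on input x the device emits rho_x^lambda with
# probability q_x^lambda; the observer only ever sees the mixture.
q_lam = {0: [0.7, 0.3], 1: [0.5, 0.5]}            # q_x^lambda (assumed)
s_lam = {0: [(0, 0, 1), (0, 0, -1)],              # Bloch vectors per lambda
         1: [(1, 0, 0), (-1, 0, 0)]}
meas = [projective((0, 0, 1)), projective((1, 0, 0))]   # y = 0, 1

def q_obs(b, x, y):
    """q(b|x,y) = sum_lambda q_x^lambda Tr(rho_x^lambda M_y^b)."""
    return sum(q * np.real(np.trace(rho(s) @ meas[y][b]))
               for q, s in zip(q_lam[x], s_lam[x]))

for x in (0, 1):
    for y in (0, 1):
        print(f"x={x} y={y}:", [round(q_obs(b, x, y), 3) for b in (0, 1)])
```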
III. ANALYSIS
We first continue to assume that all measurements are projective. For the output data with $\rho_x$ under measurement $M_y$, we define the maximal guessing probability as
$$\max p_g(x,y) = \max \sum_\lambda q_x^\lambda \max_b \mathrm{Tr}\!\left(|\Psi_x^\lambda\rangle\langle\Psi_x^\lambda| M_y^b\right),$$
which reflects the quantum randomness. Although the hidden variable $\lambda$ may take infinitely many values, $\lambda$ can be divided into two classes according to whether $\mathrm{Tr}(|\Psi_x^\lambda\rangle\langle\Psi_x^\lambda| M_y^0)$ is higher than $\tfrac{1}{2}$ or not. It is therefore not restrictive, for the calculation of $\max p_g(x,y)$, to assume that $\lambda$ takes only two values, $\lambda_1$ and $\lambda_2$. The maximal guessing probability is then the solution of an optimization problem: maximize $p_g(x,y)$ subject to the constraint that the decomposition reproduces the observed probabilities. The formulae above are based on the assumption that $M_y$ is a projective measurement. However, $M_y$ may be a general POVM rather than a projective measurement. Fortunately, as proved in [23], a general two-outcome qubit POVM can be decomposed into three different operations, one of which is performed at random: a projective measurement that decides the output, or simply outputting 0 or 1 without any measurement. We use this measurement model to estimate the quantum randomness, with probabilities $\{m_y, u_y^0, u_y^1\}$ (where $m_y + u_y^0 + u_y^1 = 1$) of choosing the three operations, for $y = 0$ and $1$ separately. As a result, the min-entropy of the output of measurement $M_y$ is given by $H_\infty = -\log_2 \max p_g(x,y)$. Since we are interested in a two-dimensional system, it is convenient to rewrite our formulae with Bloch vectors. The observed probabilities become
$$q(0|x,y) = m_y\left(\tfrac{1}{2} + \tfrac{\vec S_x \cdot \vec T_y}{2}\right) + u_y^0,$$
where $\vec S_x$ is the Bloch vector of the input state $\rho_x$ and $\vec T_y$ is the Bloch vector of the projective measurement $M_y$. The problem of finding the genuine quantum randomness becomes the calculation of the maximal guessing probability over all decompositions, where $q_x^{\lambda_1} + q_x^{\lambda_2} = 1$ and $\vec S_x^{\lambda_1}$, $\vec S_x^{\lambda_2}$ are Bloch vectors of qubit states. This is an optimization problem in the variables $m_y$, $u_y^b$, $q_x^{\lambda_1}$, $q_x^{\lambda_2}$, $\vec S_x$, $\vec S_x^{\lambda_1}$, $\vec S_x^{\lambda_2}$, $\vec T_y$ subject to the above constraints. In an experiment, Alice and Bob observe the probabilities $q(b|x, y)$, and we can then use numerical methods to compute $\max p_g(x,y)$. In practice, we are particularly interested in extracting randomness from an untrusted BB84 setup. In the next section, we specialize the general result to experimental results based on an untrusted BB84 setup.
IV. UNTRUSTED BB84 SETUP

In the untrusted BB84 setup, the four input states are prepared imperfectly, with deviations parameterized by $e_0$, $e_1$, $e_2$ and $e_3$, the quantum bit error rates (QBERs); the measurement results can be written in our measurement framework accordingly. The QBERs $e_0$, $e_1$, $e_2$ and $e_3$ can be measured in the experiment and always lie between 0 and 0.5. We can then find the maximal value of the mean guessing probability, i.e., the average guessing probability of the outcome for input states $x = 0, 1, 2, 3$ under the measurement $y = 0$.
As proved in the last section, to calculate $p_g(2, 0) + p_g(3, 0)$ we should decompose the input states $\rho_2$ and $\rho_3$ into two parts and search over all qubit strategies to obtain the maximal guessing probability. The constraints can be simplified by some mathematical techniques: considering the worst case, equations (7) and (8) reduce to constraints involving only the observed probabilities, and combining them with (11)-(12), together with $0 < m_1 \le 1$ and $0 \le |\vec T_1| \le 1$, yields bounds on the remaining variables. The maximal guessing probability is then the solution of the resulting optimization problem. Thus, for observed QBERs, $p(0|2, 0)$ and $p(0|3, 0)$, we can calculate the maximal guessing probability numerically.
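As a minimal illustration of this numerical step, the sketch below (not the authors' program: it fixes the measurement along $z$, considers a single input, and uses a placeholder observed probability) maximizes the guessing probability under the stated constraints with SLSQP; the non-smooth max in the objective is acceptable for a sketch but a smooth reformulation would be preferable in production:

```python
import numpy as np
from scipy.optimize import minimize

# Variables: q1 (weight of lambda_1), z1, z2 (Bloch z-components of the two
# hidden preparations), and m, u0 from the POVM decomposition {m, u0, u1}.
p_obs = 0.5                      # observed q(0|x,0); replace with real data

def response(z, m, u0):
    # q(0) for a pure state with Bloch z-component z: m(1/2 + z/2) + u0
    return m * 0.5 * (1.0 + z) + u0

def neg_guess(v):
    q1, z1, z2, m, u0 = v
    p1, p2 = response(z1, m, u0), response(z2, m, u0)
    # the adversary guesses the likelier outcome for each value of lambda
    return -(q1 * max(p1, 1 - p1) + (1 - q1) * max(p2, 1 - p2))

constraints = [
    # the hidden-variable mixture must reproduce the observed statistics
    {'type': 'eq',
     'fun': lambda v: v[0] * response(v[1], v[3], v[4])
                      + (1 - v[0]) * response(v[2], v[3], v[4]) - p_obs},
    # m + u0 + u1 = 1 with u1 >= 0  =>  m + u0 <= 1
    {'type': 'ineq', 'fun': lambda v: 1.0 - v[3] - v[4]},
]
bounds = [(0, 1), (-1, 1), (-1, 1), (0, 1), (0, 1)]

rng = np.random.default_rng(0)
best = 0.0
for _ in range(50):              # random restarts: the problem is non-convex
    x0 = np.concatenate([rng.uniform(0, 1, 1), rng.uniform(-1, 1, 2),
                         rng.uniform(0, 1, 2)])
    res = minimize(neg_guess, x0, bounds=bounds,
                   constraints=constraints, method='SLSQP')
    if res.success:
        best = max(best, -res.fun)

print(f"max p_g = {best:.3f},  H_min = {-np.log2(best):.3f} bits")
```

Note that with a single observed probability the bound is trivial ($\max p_g = 1$, i.e., no certified randomness); nontrivial bounds arise only when the constraints from all inputs and both measurements are imposed simultaneously, as in the BB84 analysis above.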
V. SIMULATION
In Fig. 2, we plot the maximal guessing probability as a function of the QBERs, compared with the BQB14 protocol. In the simulation, we assume the four QBERs satisfy $e_0 = e_1 = e_2 = e_3$ and $p(0|2, 0) = p(0|3, 0) = 1/2$. The simulation results show that both the BQB14 protocol and our protocol can work in highly noisy environments, even when the QBERs are close to 0.5, and that the maximal guessing probability in our protocol is lower than in the BQB14 protocol. In the ideal situation, the maximal guessing probability of our protocol is approximately 0.75, while for the BQB14 protocol it is 0.854.
Then we use off-the-shelf experimental parameters to show the performance of the protocol in the presence of loss and noise, e.g., the loss is d dB, detection efficiency is η d = 10% and its dark count rate is p d = 10
VI. DISCUSSION AND CONCLUSION
Inspired by the pioneering work on quantum randomness generation [21,24,25], we have proposed an alternative method with a higher randomness generation rate under the same conditions. As in [21], our method works in a prepare-and-measure scenario with independent devices. In our method, all observed probabilities are used directly to bound the min-entropy of the output data, while other protocols use a specific witness value. Hence, our method gives a tighter bound on the min-entropy and, consequently, a higher randomness generation rate. Moreover, our protocol retains the advantage of the BQB14 protocol of working in highly lossy environments.
We use a phase-randomized weak coherent source in the experiment; however, our theory is formulated for a single-photon source. We provide two ways to overcome this problem. The first is to use a photon-number-resolving detector [26,27], which allows single-photon events to be clearly distinguished from multi-photon events; we can then discard all multi-photon events and use only the trials corresponding to single-photon events to generate randomness. The second is to use the decoy-state method when a photon-number-resolving detector is not available. As in decoy-state quantum key distribution [28][29][30], we assume that Alice's source is a phase-randomized weak coherent source. Alice can then prepare additional decoy states besides the signal state by modulating the mean photon number of the laser pulses. In the experiment, we directly observe $q_\mu(b|x, y)$, where $\mu$ is the mean photon number of the source. Note that $q_\mu(b|x,y) = \sum_{n=0}^{\infty} p_n(\mu)\, q_n(b|x,y)$, where $p_n(\mu)$ is the probability of an $n$-photon event for a phase-randomized weak coherent source, and $q_n(b|x,y)$ is the probability of outputting $b$ conditioned on the source emitting an $n$-photon pulse, Alice inputting $x$ and Bob inputting $y$. If we know $q_1(b|x,y)$, we can calculate the min-entropy for single-photon events with our theory. The min-entropy for all events is then obtained by multiplying by $p_1(\mu)$, since we may conservatively assume that the min-entropy of multi-photon events is 0. Fortunately, with the idea of decoy states we can establish a set of linear equations $q_\mu(b|x,y) = \sum_{n=0}^{\infty} p_n(\mu)\, q_n(b|x,y)$ by modulating different values of $\mu$; bounds on $q_1(b|x,y)$ are then obtained by solving these linear equations. Furthermore, when the number of decoy states is infinite (modulating infinitely many different $\mu$), we can in principle obtain the precise value of $q_1(b|x,y)$, after which the calculation of the min-entropy is straightforward. In conclusion, either of these two approaches can be used to exclude the effect of multi-photon events and generate true quantum randomness with our protocol.
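A minimal sketch of this decoy-state estimation is given below (the intensities and observed rates are made-up placeholders; the Poisson expansion is truncated at n_max and the tail is treated as worst-case slack, which is where the linear program rather than exact linear solving comes in):

```python
import numpy as np
from math import exp, factorial
from scipy.optimize import linprog

mus = np.array([0.1, 0.2, 0.5])              # modulated intensities (assumed)
q_obs = np.array([0.520, 0.539, 0.563])      # observed q_mu(b|x,y) (made up)
n_max = 6

# Poisson photon-number weights p_n(mu) up to the truncation order
P = np.array([[exp(-mu) * mu**n / factorial(n) for n in range(n_max + 1)]
              for mu in mus])
tail = 1.0 - P.sum(axis=1)                   # probability of n > n_max

def bound_q1(maximize):
    c = np.zeros(n_max + 1)
    c[1] = -1.0 if maximize else 1.0         # optimize the n = 1 coefficient
    # sum_{n<=n_max} p_n(mu) q_n must lie in [q_obs - tail, q_obs], since the
    # unknown tail contribution is between 0 and tail (each q_n <= 1)
    res = linprog(c, A_ub=np.vstack([P, -P]),
                  b_ub=np.concatenate([q_obs, tail - q_obs]),
                  bounds=[(0, 1)] * (n_max + 1), method='highs')
    return res.x[1]

print(f"q_1 in [{bound_q1(False):.4f}, {bound_q1(True):.4f}]")
```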
Mach Number Dependence of Turbulent Magnetic Field Amplification: Solenoidal versus Compressive Flows
We study the growth rate and saturation level of the turbulent dynamo in magnetohydrodynamical simulations of turbulence, driven with solenoidal (divergence-free) or compressive (curl-free) forcing. For models with Mach numbers ranging from 0.02 to 20, we find significantly different magnetic field geometries, amplification rates, and saturation levels, decreasing strongly at the transition from subsonic to supersonic flows, due to the development of shocks. Both extreme types of turbulent forcing drive the dynamo, but solenoidal forcing is more efficient, because it produces more vorticity.
The main objective of this Letter is to investigate fundamental properties of turbulent dynamo amplification of magnetic fields by making systematic numerical experiments, in which we can control the compressibility of the plasma by varying the Mach number and the energy injection mechanism (forcing) of the turbulence. We consider flows with Mach numbers ranging from M = 0.02 to 20, covering a much larger range than in any previous study. Haugen et al. [4] provided critical Reynolds numbers for dynamo action, but did not investigate growth rates or saturation levels, and studied only 0.1 ≤ M ≤ 2.6. The energy released by, e.g., supernova explosions, however, drives interstellar and galactic turbulence with Mach numbers up to 100 [5]. Thus, much higher Mach numbers have to be investigated. It is furthermore tempting to associate such supernova blast waves with compressive forcing of turbulence [6][7][8]. Mee & Brandenburg [6] concluded that it is very hard to excite the turbulent dynamo with such curl-free forcing, because vorticity is not directly injected. In this Letter, we show that the turbulent dynamo is driven by curl-free injection mechanisms, and quantify the amplification as a function of compressibility of the plasma. This is the first study-to the best of our knowledge-addressing the Mach number and forcing dependence of the turbulent dynamo in detail. The main questions addressed are: How does the turbulent dynamo depend on the Mach number of the flow? What are the growth rates and saturation levels in the supersonic and subsonic regimes of turbulence? What is the field geometry and amplification mechanism?
To address these questions, we compute numerical solutions of the compressible, nonideal, three-dimensional, magnetohydrodynamical (MHD) equations with the grid code FLASH [9], in which $\rho$, $\mathbf{u}$, $p = p_{\mathrm{th}} + \tfrac{1}{2}|\mathbf{B}|^2$, $\mathbf{B}$, and $E = \rho\epsilon + \tfrac{1}{2}\rho|\mathbf{u}|^2 + \tfrac{1}{2}|\mathbf{B}|^2$ denote density, velocity, pressure (thermal and magnetic), magnetic field, and total energy density (internal, kinetic, and magnetic), respectively. Viscous interactions are included via the traceless rate-of-strain tensor, $S_{ij} = \tfrac{1}{2}(\partial_i u_j + \partial_j u_i) - \tfrac{1}{3}\delta_{ij}\nabla\cdot\mathbf{u}$, and controlled by the kinematic viscosity, $\nu$. We also include physical diffusion of $\mathbf{B}$, controlled by the magnetic diffusivity, $\eta$. The MHD equations are closed with a polytropic equation of state, $p = c_s^2\rho$, such that the gas remains isothermal with constant sound speed $c_s$. To drive turbulence with a given Mach number, we apply the forcing term $\mathbf{F}$ as a source term in the momentum equation. The forcing is modeled with a stochastic Ornstein-Uhlenbeck process [8,10], such that $\mathbf{F}$ varies smoothly in space and time with an autocorrelation equal to the eddy-turnover time, $t_{\mathrm{ed}} = L/(2\mathcal{M}c_s)$, at the largest scales, $L/2$, in the periodic simulation domain of size $L$. $\mathcal{M} = u_{\mathrm{rms}}/c_s$ denotes the root-mean-squared (rms) Mach number, the ratio of rms velocity and sound speed. The forcing is constructed in Fourier space such that kinetic energy is injected at the smallest wave numbers, $1 < |k|L/2\pi < 3$. We decompose the force field into its solenoidal and compressive parts by applying a projection in Fourier space. In index notation, the projection operator reads $P_{ij}^{\zeta}(k) = \zeta\, P_{ij}^{\perp} + (1-\zeta)\, P_{ij}^{\parallel} = \zeta\,\delta_{ij} + (1-2\zeta)\, k_i k_j/|k|^2$, where $P_{ij}^{\perp}$ and $P_{ij}^{\parallel}$ are the solenoidal and compressive projection operators. This projection allows us to construct a solenoidal (divergence-free) or compressive (curl-free) force field by setting $\zeta = 1$ (sol) or $\zeta = 0$ (comp).
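As a concrete illustration of this decomposition, the following numpy sketch (a minimal stand-alone version, not the FLASH forcing module) applies the projection operator $P^\zeta_{ij}$ to a periodic three-dimensional vector field; $\zeta = 1$ keeps only the solenoidal part and $\zeta = 0$ only the compressive part:

```python
import numpy as np

def project_force(F, zeta):
    """Apply P^zeta_ij = zeta*delta_ij + (1 - 2*zeta) k_i k_j / |k|^2 in
    Fourier space to a periodic 3-D vector field F of shape (3, N, N, N)."""
    N = F.shape[1]
    k = np.fft.fftfreq(N) * N                     # integer wavenumbers
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    kk = np.stack([kx, ky, kz])
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                             # avoid 0/0 at k = 0
    Fk = np.fft.fftn(F, axes=(1, 2, 3))
    kdotF = np.einsum('i...,i...->...', kk, Fk)   # k_j F_j(k)
    Pk = zeta * Fk + (1 - 2 * zeta) * kk * (kdotF / k2)
    Pk[:, 0, 0, 0] = 0.0                          # remove any mean force
    return np.real(np.fft.ifftn(Pk, axes=(1, 2, 3)))

# sanity check: the zeta = 1 projection of a random field is divergence-free
rng = np.random.default_rng(1)
F_sol = project_force(rng.standard_normal((3, 32, 32, 32)), zeta=1.0)
```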
For most of the simulations, we set the kinematic viscosity ν and the magnetic diffusivity η to zero, and thus solve the ideal MHD equations. In this case, the dissipation of kinetic and magnetic energy is due to the discretization of the fluid equations. However, we did not add any artificial viscosity; here, we use Riemann solvers, which capture shocks even in the absence of artificial viscosity. In addition to the ideal MHD simulations, we also solved the full, nonideal MHD system, Eq. 1, for four representative models to show that our results are physical and robust against changes in the numerical scheme. For the ideal MHD simulations, we use the positive-definite, split Riemann scheme HLL3R [11] in FLASH v2.5, while our nonideal MHD simulations were performed with the unsplit staggered mesh scheme in FLASH v4 [12], using a third-order reconstruction, constrained transport to maintain ∇ · B = 0 to machine precision, and the HLLD Riemann solver [13]. We ran simulations with 128³, 256³, and 512³ grid cells, showing convergence of our results below.
After an initial transient phase lasting 2 t_ed, turbulence becomes fully developed and the Mach number reaches its preset value, fluctuating at about the 10% level. In the subsonic, solenoidally driven runs, the magnetic energy grows to approach equipartition with the kinetic energy (E_m/E_k of order unity) as soon as they reach saturation. For these runs, the magnetic field has increased to a dynamically significant level, causing M to drop at late times due to the back-reaction of B on the flow. In contrast, in all supersonic runs and in all runs with compressive forcing, the magnetic field has little dynamical impact on the turbulent flow. Although the Mach numbers are not strongly affected in those cases, the fragmentation behavior of the gas might still change [17], emphasizing the importance of magnetic fields. Figure 1 (bottom) shows that the magnetic energy grows exponentially over at least 10 orders of magnitude in each model and reaches saturation at different levels (discussed in detail below). Note that the nonideal MHD models at different resolution are almost indistinguishable from the ideal MHD models. Figure 2 shows that the high Mach number runs are dominated by shocks. Compressive forcing yields stronger density enhancements at similar Mach numbers [18]. The magnetic field occupies large volume fractions with rather unfolded, straight field lines in the compressively driven cases, while solenoidal forcing produces more space-filling, tangled field configurations, suggesting that the dynamo is more efficiently excited with solenoidal forcing. [Figure 2 caption: The stretch-twist-fold mechanism of the dynamo [1] is evident in all models, but operates with different efficiency due to the varying compressibility, flow structure, and formation of shocks in the supersonic plasmas.] This is shown quantitatively in Fig. 3 (top and middle panels), where we plot the growth rates, Γ, defined by the relation $E_m = E_{m0}\exp(\Gamma t)$, and the saturation level, $(E_m/E_k)_{\mathrm{sat}}$, with the magnetic and kinetic energies $E_m$ and $E_k$, as a function of Mach number for all models. Both Γ and $(E_m/E_k)_{\mathrm{sat}}$ depend strongly on M and on the turbulent forcing. Solenoidal forcing gives growth rates and saturation levels that are always higher than with compressive forcing, as indicated by the different field geometries shown in Fig. 2. Both Γ and $(E_m/E_k)_{\mathrm{sat}}$ change significantly at the transition from subsonic to supersonic turbulence. We conclude that the formation of shocks at M ≈ 1 is responsible for destroying some of the coherent vortical motions necessary to drive the dynamo [4]. However, as M is increased further, vorticity generation in oblique, colliding shocks [19,20] starts to dominate over the destruction. The very small growth rates of the subsonic, compressively driven models reflect the fact that hardly any vorticity is excited. To quantify this, we plot the solenoidal ratio, i.e., the specific kinetic energy in solenoidal modes of the turbulent velocity field divided by the total specific kinetic energy, $\chi = E_{\mathrm{sol}}/E_{\mathrm{tot}}$, in Fig. 3 (bottom), which shows a strong drop of solenoidal energy for low-Mach, compressively driven turbulence.
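The solenoidal ratio itself follows from a Helmholtz decomposition of the velocity field in Fourier space; the sketch below (a minimal version assuming a periodic domain and, for simplicity, unit density, so that specific kinetic energies reduce to velocity power) illustrates the computation:

```python
import numpy as np

def solenoidal_ratio(u):
    """chi = E_sol / E_tot for a periodic velocity field u of shape
    (3, N, N, N), using Parseval to evaluate the energies spectrally."""
    N = u.shape[1]
    k = np.fft.fftfreq(N) * N
    kx, ky, kz = np.meshgrid(k, k, k, indexing='ij')
    kk = np.stack([kx, ky, kz])
    k2 = np.maximum(kx**2 + ky**2 + kz**2, 1e-30)
    uk = np.fft.fftn(u, axes=(1, 2, 3))
    # longitudinal (compressive) component: k (k . u) / |k|^2
    uk_comp = kk * (np.einsum('i...,i...->...', kk, uk) / k2)
    return 1.0 - np.sum(np.abs(uk_comp)**2) / np.sum(np.abs(uk)**2)

# a random white-noise field gives chi ~ 2/3 (two of three modes transverse)
rng = np.random.default_rng(0)
print(solenoidal_ratio(rng.standard_normal((3, 32, 32, 32))))
```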
In the absence of the baroclinic term, $(1/\rho^2)\nabla\rho\times\nabla p$, the only way to generate vorticity, $\omega = \nabla\times\mathbf{u}$, with compressive (curl-free) forcing is via viscous interactions in the vorticity equation [6],
$$\partial_t\,\omega = \nabla\times(\mathbf{u}\times\omega) + \nu\nabla^2\omega + 2\nu\,\nabla\times\left(S\,\nabla\ln\rho\right).$$
The second term on the right-hand side is diffusive. However, even with zero initial vorticity, the last term generates vorticity via viscous interactions in the presence of logarithmic density gradients. The small seeds of vorticity generated in this way are exponentially amplified by the nonlinear term, $\nabla\times(\mathbf{u}\times\omega)$, in analogy to the induction equation for the magnetic field, provided the Reynolds numbers are high enough [21]. For very low Mach numbers, however, density gradients start to vanish, explaining the steep drop of dynamo growth in compressively driven turbulence at low Mach number. Analytic estimates [22] suggest that $\Gamma \propto \mathcal{M}^3$ in compressively driven, acoustic turbulence [23], indicated as the dotted line in Fig. 3. The solid lines are fits with an empirical model function; the fit parameters are given in Table I. We emphasize that the fits do not necessarily reflect the true asymptotic behavior of Γ and $(E_m/E_k)_{\mathrm{sat}}$. The subsonic, solenoidally driven models show very high saturation levels, $(E_m/E_k)_{\mathrm{sat}} \approx 40$-$60\%$, explaining the strong back-reaction of the field that causes M to drop in the saturation regime (see Fig. 1, [24]). For the growth rate, we fixed $p_6$ such that $\Gamma \propto \mathcal{M}^{1/3}$ for $\mathcal{M} \gg 1$, in good agreement with our models up to M ≈ 20; however, even higher M must be investigated to determine whether $\Gamma \propto \mathcal{M}^{1/3}$ holds in this limit. We find that Γ depends much less on M for solenoidal forcing than for compressive forcing. Nevertheless, a drop of the growth rate at M ≈ 1 is noticeable in both cases. Theories based on Kolmogorov's [25] original phenomenology of incompressible, purely solenoidal turbulence predict no dependence of Γ on M. For instance, Subramanian [26] derived $\Gamma = (15/24)\,\mathrm{Re}^{1/2}\, t_{\mathrm{ed}}^{-1}$ from Kolmogorov-Fokker-Planck equations in the limit of large magnetic Prandtl number, $\mathrm{Pm} = \nu/\eta = \mathrm{Rm}/\mathrm{Re} \gg 1$, with the kinetic and magnetic Reynolds numbers $\mathrm{Re} = L u_{\mathrm{rms}}/(2\nu)$ and $\mathrm{Rm} = L u_{\mathrm{rms}}/(2\eta)$. For Pm ≈ 2 [applicable to ideal MHD, see 27] and Re ≈ 1500, corresponding to our simulations, we find slightly smaller growth rates, in agreement with analytic considerations [28] and with numerical simulations of incompressible turbulence for Pm ≈ 1 [29,30]. Thus, an extension of dynamo theory to small Pm is needed. Moreover, extending the theory from Kolmogorov to Burgers-type, shock-dominated turbulence would be an important step towards a more generalized theory of turbulent dynamos, potentially with predictive power for the supersonic regime and for compressive turbulent energy injection.
In summary, we conclude that the growth rate and saturation level of the dynamo depend sensitively on the Mach number and the energy injection mechanism of magnetized turbulence, exhibiting a characteristic drop of the growth rate at the transition from subsonic to supersonic turbulent flow. [Figure 3 caption, partial: Fit parameters are given in Table I. The arrows indicate four models (M ≈ 0.4, 2.5 for sol. and comp. forcing), using ideal MHD on 128³ grid cells (a), nonideal MHD on 256³ (b), and 512³ grid cells (c), demonstrating convergence for the given magnetic Prandtl number, Pm ≈ 2, and kinematic Reynolds number, Re ≈ 1500.] Geophysical and astrophysical dynamos operate in both subsonic and supersonic plasmas, driven by vastly different injection mechanisms. Here we showed that strong magnetic fields are generated even in purely compressively (curl-free) driven turbulence (applicable, e.g., to galactic clouds), but solenoidal (divergence-free) turbulence drives more efficient dynamos, due to the higher level of vorticity generation and the stronger tangling of the magnetic field. We acknowledge funding from the Baden-Württemberg-Stiftung (grant P-LS-SPII/18) and from the German BMBF (grant 05A09VHA). The simulations were run at the LRZ (grant pr32lo) and the JSC (grants hhd14, hhd17, hhd20). The FLASH code was in part developed by the DOE NNSA-ASC OASCR Flash Center at the University of Chicago.
Wind Exposure Regulates Water Oxygenation in Densely Vegetated Shallow Lakes
The presence of dense macrophyte canopies in shallow lakes locally generates thermal stratification and the buildup of labile organic matter, which in turn stimulate the biological oxygen demand. The occurrence of hypoxic conditions may, however, be buffered by strong wind episodes, which favor water mixing and reoxygenation. The present study aims at explicitly linking wind action and water oxygenation within dense hydrophyte stands in shallow lakes. For this purpose, seasonal 24 h-cycle campaigns were carried out to measure dissolved gases and inorganic compounds in vegetated stands of an oligo-mesotrophic shallow lake. Further, seasonal campaigns were carried out in a eutrophic shallow lake, at wind-sheltered and -exposed sites. Overall, the results showed that the daily and seasonal patterns of dissolved oxygen (DO) were greatly affected by the degree of wind exposure. The occurrence of frequent wind episodes favored near-bottom water mixing and likely facilitated mechanical oxygen supply from the atmosphere or from the pelagic zone, even during the maximum standing crop of plants (i.e., summer and autumn). A simple model linking wind exposure (Keddy Index) and water oxygenation allowed us to produce an output management map, which geographically identifies wind-sheltered sites as those most subject to critical periods of hypoxia.
Introduction
In lentic shallow water bodies, the diel and seasonal oxygen balance is set by the interplay between the photosynthetic activity of primary producers (net production of dissolved oxygen, DO), their respiration (net DO consumption) and the heterotrophic respiration of bacteria and animals (net DO consumption). When present, submerged aquatic vegetation (SAV) induces significant diel fluctuations in oxygen levels [1]. During the day, a supersaturation of oxygen (>100%) is observed due to photosynthesis, while at night oxygen is no longer produced and consumption processes predominate due to respiration. This type of nycthemeral variation is particularly pronounced in summer, when plant photosynthetic rates and heterotrophic respiration are at their maximum; their net effect largely exceeds the contribution of temperature-dependent oxygen solubility [2,3]. Primary production releases high amounts of oxygen into the water column, allowing for the oxidation of methane (CH4) by methanotrophic epiphytic bacteria [4,5]. In addition, radial oxygen loss in the rhizosphere [6][7][8] contributes to reducing the benthic CH4 flux through benthic methanotrophy or oxidation of nitrate [9]. The synthesis of large quantities of biomass occurs through the assimilation of nutrients (including N-compounds, phosphate and carbon dioxide, CO2); SAV is thus able to take up gases and nutrients coming from the sediment and the atmosphere and incorporate them into biomass [10]. However, oxygen dynamics can be altered in densely vegetated stands, such as those dominated by invasive macrophytes.
Diel Variations in Vegetated Stands
Sampling sites in Lacanau Lake (hereafter, LAC Lake) were homogeneously distributed within the largest invasive macrophyte stands, which developed in the most wind-sheltered zones of the lake (Figure 1).
Results from seasonal 24 h-cycle campaigns showed that most of the sites were hypoxic (DO saturation <100%), and that DO depletion also occurred during daylight (Figure 2). Concomitantly, CO2 was mostly supersaturated and the pH acid (pH < 7), with some exceptions during daylight in summer and autumn (Figures S1 and S2). CH4, NH4+ and NO3− built up in the water column both during the night and the day (Figures S3-S5). Water temperature ranged from 11.1 ± 0.2 to 26.7 ± 0.3 °C (in spring and summer, respectively; Figure S6) and DOC averaged 13.1 ± 0.2 mg L−1 on an annual basis. [Figure 1 caption: Keddy Index calculated on an annual basis for Lacanau Lake (left) and Parentis-Biscarrosse Lake (right). The windrose is calculated from hourly wind speed and direction data on an annual basis. Lake bathymetry and sampling sites for seasonal 24 h-cycle campaigns (LAC Lake), as well as sampling sites for seasonal campaigns at wind-sheltered and -exposed sites (PAR Lake), are reported.] The ANOVA revealed that, with temperature and ammonium as the sole exceptions (day > night), dissolved gases and inorganic compounds measured within plant stands did not differ between day and night; all parameters varied seasonally (Table 1).
Wind-Sheltered vs. Wind-Exposed Sites
The choice of sampling sites in Parentis-Biscarrosse Lake (hereafter, PAR Lake) was based on two co-occurring conditions: the presence of densely vegetated areas and the difference in wind exposure (Figure 1). Results from seasonal campaigns in PAR Lake showed that dissolved gases and inorganic compounds changed significantly as a function of wind exposure (ANOVA, Table 2). Differences between sheltered and exposed sites were significant for every physicochemical parameter, yet only at vegetated sites and depending on the season (in summer and in autumn). Significant differences between vegetated and plant-free sites occurred only at sheltered sites. Tukey's HSD test indicated that water temperature was lower at sheltered sites than at exposed ones (Figure S7). pH values were lower at vegetated and sheltered sites than at exposed ones, but only during summer (Figure S8). DO was lower at vegetated and sheltered sites than at exposed ones (Figure 3); CO2 and CH4 were higher at vegetated and sheltered sites than at exposed ones (Figures S9 and S10); NH4+ and NO3− values differed seasonally between sheltered and exposed sites, with no differences between vegetated and plant-free areas (Figures S11 and S12). DOC averaged 6.2 ± 0.2 mg L−1 on an annual basis.
Figure 3. DO results from seasonal campaigns in PAR Lake. Measurements were carried out at wind-sheltered and -exposed sites, in vegetated and plant-free areas. For better readability, Tukey's HSD results are not reported for the seasonality factor. *** indicates p-value < 0.001.
Dependence of DO Saturation on Plant Biomass and Sedimentary OM
Total biomass varied seasonally at both lakes, with values between 319 ± 245 and 668 ± 414 g DW m−2 at LAC Lake (in spring and summer, respectively), and between 1626 ± 132 and 4528 ± 2413 g DW m−2 at PAR Lake (in spring at exposed sites and in autumn at sheltered sites, respectively). OM content in vegetated sediments ranged from 0.7 ± 0.2 to 71 ± 3% and from 0.7 ± 0.1 to 1.2 ± 0.1% as LOI, for LAC Lake and PAR Lake, respectively. A linear mixed-effects model, calculated on the combined two-lake dataset, showed that DO saturation depended neither on sedimentary OM content nor on total plant biomass; only the DO values measured in LAC Lake during summer were negatively correlated with biomass (p-value < 0.01).
Dependence of DO on Wind Exposure and Hypoxia Risk Map Production
The regression of DO saturation against wind exposure, identified with the segmented function in R, showed a structural breakpoint at Keddy Index = 2.9 ( Figure 4). We considered this breakpoint as a threshold of hypoxia risk, i.e., low risk above this value and high risk below. This threshold is assumed to be the minimum wind exposure which would be able to decrease the risk of hypoxia in dense submerged plant stands.
Further, in order to produce a hypoxia risk map, the Keddy Index was calculated for each 4 h-long period (n = 2190) on each pixel cell (n = 4031 for LAC Lake and n = 14,438 for PAR Lake) matching densely vegetated areas with biomass >50 g DW m−2, mapped at the lake scale (1.19 km² and 4.17 km² in LAC and PAR Lakes, respectively, from [31]) (Figure 5). Hypoxia risk was above 50% in 70 ha of plant stands (corresponding to 60% of the total vegetated surface) in LAC Lake and in 50 ha in PAR Lake (12% of the total vegetated surface). This risk was above 75% in 11 ha of plant stands (9% of the total vegetated surface) in LAC Lake and in 11 ha in PAR Lake (3% of the total vegetated surface).
Discussion
In vegetated stands, diel variations of inorganic compounds typically reflect the plants' photosynthetic activity, with the lowest dissolved carbon and nitrogen concentrations measured in the water in late afternoon, corresponding to nutrient depletion by plant uptake, followed by an accumulation during the night, with a peak just before dawn. At the same time, DO and pH follow an exactly inverse pattern. In our study, this nycthemeral shape was detectable only at some sites and mostly during summer. At other sites, heterotrophic activity, stimulated by the temperature increase during summer and autumn, exceeded net oxygen release during the day, resulting in hypoxia/anoxia events and a buildup of CO2, CH4 and NH4+ in the water column. This observation is recurrent in dense stands formed by invasive macrophytes, where the sedimentation of organic matter generates an elevated benthic BOD during the period of plant senescence; this implies a permanent DO deficit [15,31,32]. In dense hydrophyte stands, DO input from the atmosphere can be limited to the surficial layer of the water column, as long stems constitute a physical barrier, much as floating-leaved macrophytes do [13]. In our case, the occurrence of a vertical "plant wall" at the external boundaries of vegetated stands may also suppress the horizontal flow of nutrients and DO from the pelagic to the littoral zones [22].
Hypoxic events and the buildup of inorganic compounds can, however, be counteracted by wind action, which may induce local turbulent mixing and reaeration even within dense submerged canopies [14,18]. Consistently, some of the diel variations measured in our study showed a flattened shape, with constant values over the 24 h cycle. On the one hand, elevated DO values during the night could be attributable to convective mixing driven by the nightly decrease in air temperature [17,33]. On the other hand, the maintenance of constant DO values over a diel cycle may indicate stationary wind conditions and turbulent mixing; this supposition is supported by the second part of our study. Seasonal campaigns at wind-sheltered and -exposed sites showed that ecosystem functioning was not ascribable solely to plant presence/absence or to the seasonal variation in biomass. Indeed, DO and CO2 saturation at wind-exposed sites hovered around 100% all year round, indicating that wind-driven diffusion continuously outpaced net production and consumption within the water column, even in invaded areas of the lake. Overall, the results thus show that the presence of invasive hydrophytes does not systematically promote water hypoxia, provided that local wind conditions allow efficient mixing of the water column.
When considering the whole dataset, only the DO values measured in LAC Lake during summer proved to be dependent on plant density; moreover, vegetated stands in this lake developed mainly at sheltered sites [34]. Prevailing winds from the northwest created low-hydrodynamic conditions because of the natural barrage formed by sand dunes [35]. Elevated plant biomass coinciding with shallow depths in wind-sheltered areas seemed to generate favorable conditions for water hypoxia, a phenomenon exacerbated by the elevated turnover of biomass during summer. In contrast, the extremely high biomass measured in PAR Lake, largely exceeding values reported so far for sites invaded by Egeria spp. [3,31], did not generate an extreme DO deficit even at wind-sheltered sites. The difference between the two lakes is also evident from a thermal point of view: at LAC Lake, a previous study showed that water temperature measured in vegetated stands was significantly lower than in plant-free areas, irrespective of the season [19]. The present study on PAR Lake shows instead that no significant difference exists between vegetated and plant-free areas, irrespective of the season (Figure S7). As for DO and CO2, the divergence in temperature results between the two lakes could be due to their different sizes, the second lake being larger, which permits a longer fetch and thus more effective water mixing.
The hypoxia risk map shows that an elevated hypoxia probability is associated with the wind-sheltered areas of the lakes, and that the oxygenation shortage can affect a large total surface of several tens of hectares. Hypoxia risk is at its maximum in both lakes in enclosed and wind-sheltered areas, such as small marinas and public boat launches, which are known to be important drivers of aquatic plant spread [36,37]. On the other hand, large surfaces of the lakes invaded by elevated plant densities would not be affected by hypoxia and would thus not necessitate intervention. The hypoxia risk map we produced represents a preliminary yet concrete tool, coupling field measurements and modelling, which can reduce plant management costs, as it indicates precisely where invasive plants constitute a problem for ecosystem functioning. A similar approach providing reproducible management tools, coupling lake depth or bathymetry to anoxia probability in the hypolimnion of deep lakes, has been published recently [38]. Our model should, however, be calibrated site-specifically, because the intrinsic sedimentary features and the trophic status of a lake can affect the magnitude of hypoxia and nutrient fluxes. The two lakes we studied presented different DOC values and sedimentary OM contents, and yielded very different concentrations of CO2 and CH4. Also, owing to different fetch lengths, reaeration varied strongly even at comparable wind velocities. A possible improvement of our method would be to introduce the local bathymetry into the model. Indeed, waves induce vertical upward forces acting on water column movements and sediment resuspension [34]; furthermore, wind-induced circulation in nearshore zones appears to be crucial in littoral plant-free areas [24]. We can expect an increased effect of wind on water mixing in shallow zones due to orbital movements translated to the lake bottom. Nevertheless, SAV also reduces wave action and current velocities within beds [39,40]. Future modelling work should thus focus on integrating vegetation in the photic region to better define how cross-shore water circulation works. Another possible future improvement would be the use of automatic oxygen probes, in order to obtain a finer resolution of diel and seasonal variations and to refine the calculation of hypoxia risk probability over a long temporal scale.
Our results highlight the need to consider local hydrodynamics in lake management decisions. Wind exposure should be used for spatially organizing management plans and prioritizing zones where invasive biomass control actions are needed. Mapping hypoxia risk in densely vegetated stands is a promising tool for the management of invasive hydrophytes in shallow lakes.
Study Area
Lacanau Lake and Parentis-Biscarrosse Lake are shallow lakes located on the southern Atlantic coast of France. These lakes are characterized by a sandy acidic substrate and are classified as oligo-mesotrophic (Lacanau, 16 km²) and eutrophic (Parentis-Biscarrosse, 32 km²). Within the two lakes, large submerged stands of Egeria densa Planch. and Lagarosiphon major (Ridl.) Moss develop between 1 and 7 m depth, with dense stands preferentially located at shallow, wind-sheltered sites or at deep, wind-exposed sites [34].
Field Campaigns
Between June 2013 and November 2015, seasonal 24 h-cycle campaigns were carried out at 15 sites in Lacanau Lake. Sampling sites were homogeneously distributed within the largest invasive macrophyte stands of the lake [34]. Water was collected within the plant canopy at depths ranging from 100 to 330 cm, four times a day (two samplings during the day, between 11 a.m. and 3 p.m.; two samplings during the night, between 9 p.m. and 6 a.m.). Water temperature (T, °C), pH, dissolved oxygen (DO, expressed as saturation %), dissolved carbon dioxide (CO2, %), dissolved methane (CH4, µM), nitrate (NO3−, µM), ammonium (NH4+, µM) and dissolved organic carbon (DOC, mg L−1) were measured according to the methods reported in [19]. Finally, we tested the influence of sampling time on the biogeochemistry of the water column by a two-way ANOVA with interactions among factors. The diel variation (two levels: day vs. night) and the season (three levels: spring vs. summer vs. autumn) were considered as fixed factors, while the sampling site (fifteen levels) was considered as a random factor.
Between January 2016 and January 2017, seasonal sampling campaigns were carried out at Parentis-Biscarrosse Lake, during the day only, at vegetated sites (3 wind-sheltered and 3 wind-exposed) and at plant-free sites (3 wind-sheltered and 3 wind-exposed). The degree of wind exposure was estimated by prior modeling of the Keddy wind exposure index [41]. Water was collected within the plant canopy at depths ranging from 150 to 300 cm, between 11 a.m. and 3 p.m. Water sample collection, treatment and analyses were the same as those adopted in Lacanau Lake and reported in [19]. Finally, we tested the influence of spatial exposure to wind on the biogeochemistry of the water column by a three-way ANOVA with interactions among factors. The degree of wind exposure (two levels: exposed vs. sheltered), plant presence (two levels: vegetated vs. plant-free) and the season (four levels: winter vs. spring vs. summer vs. autumn) were considered as fixed factors, while the sampling site (twelve levels) was considered as a random factor.
Normal distribution (Shapiro-Wilk Test) and homoscedasticity (Levene's Test) were tested before running ANOVAs. Post hoc analyses were performed by Tukey's Honestly Significant Difference (HSD) test. Statistical analyses were performed with R Program [42]. Mean values are reported with their standard deviation.
Macrophyte sampling was carried out by rake for total biomass (g DW m−2) measurements, immediately after water sampling, as reported in [19]. Concomitantly, within the plant stands, sediment samples were collected by grab sampler, as described in [34], for measurements of sedimentary organic matter (OM, as loss on ignition, % LOI). In order to test the dependence of DO saturation on plant biomass and OM content, a linear mixed-effects model fitted by maximum likelihood was applied to the whole dataset (DO measurements from Lacanau and Parentis-Biscarrosse Lakes), with the sampling site as a random factor.
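For readers wishing to reproduce this kind of analysis, a minimal sketch of such a model using statsmodels is given below; the column names and the synthetic data are hypothetical placeholders, not the study's actual dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: DO saturation vs. biomass and sedimentary OM,
# with a site-level random intercept baked in for illustration.
rng = np.random.default_rng(0)
n, sites = 120, 15
df = pd.DataFrame({
    "site": rng.integers(0, sites, n),
    "biomass": rng.gamma(2.0, 300.0, n),          # g DW m^-2
    "om": rng.uniform(0.5, 5.0, n),               # % LOI
})
site_effect = rng.normal(0, 10, sites)[df["site"]]
df["do_sat"] = 80 - 0.01 * df["biomass"] + site_effect + rng.normal(0, 8, n)

# Linear mixed-effects model with sampling site as a random factor,
# fitted by maximum likelihood (reml=False), as described in the text.
model = smf.mixedlm("do_sat ~ biomass + om", data=df, groups=df["site"])
fit = model.fit(reml=False)
print(fit.summary())
```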
Wind Exposure Calculations
Wind exposure was calculated according to [41] for both lakes by using a fetch matrix (i.e., the distance over which waves can build up) obtained from the lake open-water raster grid cells (resolution of 17 m) for each wind compass direction (10-360°, in 10° increments). Wind data (hourly and daily mean speed and direction) were provided by Météo France from the Cap-Ferret (44°37′54″ N, 1°14′53″ W) and Biscarrosse (44°25′54″ N, 1°14′51″ W) weather stations for Lacanau and Parentis-Biscarrosse Lakes, respectively. Values related to the effect of wind at a given point (here, a grid cell) can be generated by using fetch and wind velocity: for a given compass direction, one measure of exposure is the product of the mean speed of winds from that direction and the percent frequency of the wind blowing from that direction.
In order to position wind-sheltered and -exposed sampling sites in Parentis-Biscarrosse Lake, daily mean wind speed and direction were used to build a wind exposure map over a 1-year period (2014). One measure of exposure was calculated for each grid cell over 36 compass wind directions according to the fetch matrix; a cell's total exposure is given by the sum of the values calculated for all compass directions during the 1-year period. Sampling sites were chosen within lake areas identified as weakly or highly exposed to wind action.
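A minimal sketch of this exposure calculation for a single grid cell is given below (written in Python rather than the GIS workflow actually used; the 10° sector width follows the text, while the units and toy inputs are our own assumptions):

```python
import numpy as np

def keddy_index(fetch, wind_speed, wind_dir, n_dirs=36):
    """Wind exposure of one grid cell, following the recipe in the text.

    fetch      : length-n_dirs array, fetch (m) per 10-degree sector
    wind_speed : wind speeds (m/s), one record per time step
    wind_dir   : wind directions (degrees), same length as wind_speed
    """
    wind_speed = np.asarray(wind_speed, dtype=float)
    sectors = (np.asarray(wind_dir) % 360 // (360 / n_dirs)).astype(int)
    index = 0.0
    for s in range(n_dirs):
        mask = sectors == s
        if mask.any():
            # (mean speed from sector) x (frequency of sector) x fetch
            index += wind_speed[mask].mean() * mask.mean() * fetch[s]
    return index

# toy usage: random wind records for a year and a uniform 1 km fetch
rng = np.random.default_rng(0)
print(keddy_index(np.full(36, 1000.0),
                  rng.gamma(2.0, 2.5, 8760),      # hourly speeds
                  rng.uniform(0, 360, 8760)))     # hourly directions
```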
Coupling DO and Wind Exposure
The Keddy Index was calculated for the 4 h preceding the exact time of water sampling, and each DO value was then coupled to the sum of the Keddy Index values over this period; this duration is estimated to be necessary for water mixing at shallow depth [17,43]. In order to test the dependence of DO saturation on wind exposure, a Chow test was performed to determine the presence of a structural break at some point in the data series [44]. We used the sctest function from the strucchange package in R to perform the Chow test, which yielded F = 10.7, p-value = 2.7 × 10−5. The significance of the test indicates that a structural breakpoint is present in the regression, i.e., that two regression lines fit the pattern in the data better than a single regression line. Finally, we applied the segmented function in R to analyze segmented relationships in the regression and obtain a breakpoint value.
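A rough Python analogue of this R workflow is sketched below (a grid search for a single breakpoint plus a Chow-type F test; the data in the usage example are simulated, and the original analysis relied on the strucchange and segmented packages rather than this code):

```python
import numpy as np
from scipy import stats

def chow_like_breakpoint(x, y):
    """Find the break position minimizing two-segment RSS and compare the
    split fit with a single regression line via an F statistic."""
    order = np.argsort(x)
    x, y = np.asarray(x, float)[order], np.asarray(y, float)[order]
    n = len(x)
    pooled = np.poly1d(np.polyfit(x, y, 1))
    rss1 = np.sum((y - pooled(x)) ** 2)
    best_rss, best_brk = np.inf, None
    for i in range(5, n - 5):                     # candidate break positions
        left = np.poly1d(np.polyfit(x[:i], y[:i], 1))
        right = np.poly1d(np.polyfit(x[i:], y[i:], 1))
        rss = (np.sum((y[:i] - left(x[:i])) ** 2)
               + np.sum((y[i:] - right(x[i:])) ** 2))
        if rss < best_rss:
            best_rss, best_brk = rss, x[i]
    # Chow-type F statistic: 2 extra parameters for the second segment
    F = ((rss1 - best_rss) / 2) / (best_rss / (n - 4))
    return best_brk, F, stats.f.sf(F, 2, n - 4)

# toy usage: piecewise data with a break placed at x = 2.9
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 120)
y = np.where(x < 2.9, 40 + 5 * x, 60) + rng.normal(0, 3, 120)
print(chow_like_breakpoint(x, y))
```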
Hypoxia Risk Map Production
We calculated 4 h-long Keddy Index values for each day during one year (2014 and 2016 for Lacanau and Parentis-Biscarrosse Lakes, respectively) for each raster cell corresponding to densely vegetated areas of the lake with biomass >50 g DW m−2 [34]. Each 4 h-long period and grid cell in which wind exposure was below the breakpoint value, indicating a high risk of hypoxia, was classified as "1", whereas 4 h-long periods with a low risk of hypoxia (Keddy Index > 2.9) were classified as "0". The probability of hypoxia was expressed as the percentage (0-100%) of 4 h-long periods during one year in which wind exposure was below the hypoxia threshold. Finally, this probability was reported on the raster grid cells to map the hypoxia risk at the scale of the densely vegetated areas.
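The conversion from 4 h Keddy sums to a per-cell hypoxia probability can be sketched as follows (a minimal vectorized version; the 2.9 threshold comes from the breakpoint above, while the toy inputs are placeholders, not the study's rasters):

```python
import numpy as np

THRESHOLD = 2.9   # breakpoint from the segmented regression

def hypoxia_risk(keddy_4h):
    """keddy_4h: array of shape (n_periods, n_cells) with 4-h Keddy sums.
    Returns, per cell, the percentage of periods below the threshold."""
    risky = keddy_4h < THRESHOLD          # True -> coded "1" in the text
    return 100.0 * risky.mean(axis=0)

# e.g. 2190 four-hour periods in a year (365 x 6) over a handful of cells
rng = np.random.default_rng(0)
print(hypoxia_risk(rng.gamma(2.0, 2.0, size=(2190, 5))))
```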
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement:
The data presented in this study are available in the article or supplementary material.
Neutron tomography, fluorescence and transmitted light microscopy reveal new insect damage, fungi and plant organ associations in the Late Cretaceous floras of Sweden
ABSTRACT Neutron tomographic reconstructions, macrophotography, transmitted light microscopy and fluorescence microscopy are employed to assess the quality of organic preservation, determine organ associations, identify insect damage, and document fungal interactions with selected Santonian–lower Campanian plant fossils from the northern Kristianstad Basin, southern Sweden. Fricia nathorstii (Conwentz) comb. nov., is proposed for a composite fossil comprising an anatomically preserved (permineralized) cupressacean conifer cone and its subtending, concealed, leafy axis (preserved as a mould) in the Ryedal Sandstone. Several other impressions of conifer and angiosperm leaf-bearing axes and isolated leaves are described under open nomenclature. Three cuticle types are described from the non-marine plant-bearing beds in the basal part of the succession exposed at Åsen, but these are only assigned to informal morphotypes pending a comprehensive review of the extensive fossil cuticle flora. Two species of ascomycote epiphyllous fungi from Åsen are established: Stomiopeltites ivoeensis sp. nov. (Micropeltidales) and Meliolinites scanicus sp. nov. (Meliolales). The latter provides an important calibration point for dating the divergence of Meliolales, being the first pre-Cenozoic representative of the order. Various additional fungal remains, including thyriothecia, scolecospores, chlamydospores, putative germlings, and hyphae, are described from the cuticular surfaces of conifer and angiosperm leaves from Åsen. Insect herbivory is expressed in the form of both margin-feeding and piercing-and-sucking damage on angiosperm leaves. The Santonian–early Campanian vegetation is inferred to have grown in strongly humid, mid-latitude, coastal plain settings based on the depositional context of the assemblages, leaf morphology, and the pervasive distribution of epiphyllous fungi.
Introduction
The southern Swedish province of Skåne (Scania) and portions of adjacent Blekinge are the only regions of the Fennoscandian Shield hosting exposed Upper Cretaceous strata containing plant macrofossils. Fennoscandia was largely an emergent landmass through most of the Mesozoic and Cenozoic, but significant coastal and marine deposits accumulated on the southern margin of this region in the Late Cretaceous (Fig. 1A). The plant assemblages preserved in the Swedish deposits represent the northernmost Late Cretaceous floras in Europe (Vakhrameev 1991) and are crucial for reconstructing the palaeovegetation of this region. Cretaceous rocks of southernmost Sweden are preserved in three major geological depressions, the Vomb Trough in the south, the Båstad Basin in the northwest, and the Kristianstad Basin in the northeast (Fig. 1B). The basal claystone-, siltstone-, and sandstone-dominated portion of the Cretaceous succession in the Kristianstad Basin has traditionally been assigned to the Ryedal Sandstone or Holma Sandstone (Holst 1888; De Geer 1889; Magnusson 1958) and is locally rich in plant remains. Relatively few fossil plant taxa have been thoroughly documented from these Cretaceous strata (Table S1). Nevertheless, the Late Cretaceous represents a time of dramatic radiation in angiosperm diversity globally (Lidgard & Crane 1990; McLoughlin et al. 2010; Friis et al. 2011; Halamski et al. 2020), and the rocks of southern Sweden potentially host important fossil evidence of the diversification of this group.
Upper Cretaceous plant fossils (then assumed to be Cenozoic in age) were first reported from Köpinge in the Vomb Trough by Nilsson (1824, 1832). Nathorst (1876) initially reported fossil plants from Upper Cretaceous strata in the northern Kristianstad Basin, and Conwentz (1892) subsequently documented a range of woods, cones and fungi preserved as adpressions and siliceous permineralizations from the Ryedal Sandstone and its stratigraphic equivalents. Felix (1894) later described the saprotrophic fungi found in association with the woods reported by Conwentz (1892). Sporadic reports of fossil plants, especially lignitized and permineralized wood and palynomorphs, continued through the early and middle twentieth century (e.g., Linnell 1937; Regnéll 1940; Ross 1949; Nykvist 1957), mostly associated with geological mapping and kaolin quarrying in the region (Table S1).
In the last three decades, work on the Åsen fossil flora has declined as investigations of early angiosperm flowers have largely shifted to older deposits elsewhere in the world (e.g., Eklund et al. 1997; Friis et al. 2011). However, as part of a campaign to document Late Cretaceous adpression floras through central and northern Europe, the Campanian mega- and palynoflora of the Vomb Trough has recently been described. Fossil plant axes were also documented by McLoughlin et al. (2018) from associated marine deposits, where they acted as substrates for Late Cretaceous sessile marine invertebrates. Relatively few other records of Cretaceous terrestrial fossils from the Kristianstad Basin have been published over the past 200 years (Table S1), and these are not investigated further in this study.
The present study has four main objectives: 1, to describe the Santonian-lower Campanian impression floras of the Ryedal Sandstone; 2, to re-evaluate the structure and systematics of the permineralized cone and associated axis impression initially assigned to Pinus nathorstii by Conwentz (1892) from this unit; 3, to test the quality of cuticular preservation of angiosperm and conifer leaves from the Åsen deposit for future systematic and palaeoecological studies; and 4, to provide an initial survey of fossil fungi and insect interactions with plant remains from the Kristianstad Basin. In addition to traditional macrophotography and transmitted light microscopy, we employ techniques that have not been applied previously to the Late Cretaceous plants of Sweden, e.g., neutron tomography and incident-light fluorescence microscopy, in order to gain additional anatomical and palaeoecological information from the fossil flora. These aspects, together with data compiled from previous studies, are linked to provide some initial insights into the palaeoecology of the Late Cretaceous vegetation of the Kristianstad Basin. A systematic re-evaluation of the Upper Cretaceous silicified woods and their associated fungi and invertebrate borings from southern Sweden is intended for a later study. [Figure 1 caption, partial: A. ... modified after data from Chatziemmanouil (1982), Surlyk in Voigt & Wagreich (2008) and Halamski (2013). B. Bedrock geological map of Skåne (Scania) and adjacent regions indicating fossil localities and basins hosting Upper Cretaceous strata (after Christensen 1986; Koistinen et al. 2001; Vajda & Gravesen 2008).]
Geological setting
The Kristianstad Basin represents the onshore extension of the Hanö Bay Basin (Kumpas 1980) and is situated in northeastern Skåne and western Blekinge provinces, around the city of Kristianstad, in the southernmost part of the Fennoscandian Shield (Fig. 1B). The basin hosts Barremian to Maastrichtian sedimentary rocks overlying a Precambrian granitic-gneissic basement (Norling & Skoglund 1977; Bergström & Sundquist 1978; Kumpas 1980; Norling 1981; Lindström et al. 1991), but most of the exposed strata are dated to the Santonian-Campanian (Christensen 1975). The Cretaceous sedimentary succession reaches 250 m thick in the central Kristianstad Basin (Erlström & Gabrielson 1992) but increases to about 700 m thick in the offshore Hanö Bay Basin (Sopher et al. 2016). The Kristianstad Basin's northern boundary is erosional; numerous outliers of Cretaceous strata occur on adjacent basement rocks. Surficial Quaternary glaciofluvial deposits up to 30 m thick blanket much of the basin. A few natural exposures of Cretaceous strata occur along the shores of Lake Ivö (Lundegren 1934), and artificial exposures up to 20 m thick have been generated by kaolin quarrying, e.g., at Åsen. The oldest Cretaceous plant fossils are palynological assemblages from subsurface Albian strata recovered from a borehole near Österslöv, northern Skåne (Guy-Ohlson 1984; Table S1). A formal lithostratigraphic scheme for the basin has never been established and, traditionally, local geological correlations have employed biostratigraphic subdivisions of the strata (summarized by Einarsson 2018). Accessible strata range from lower or lower middle Santonian (Gonioteuthis westfalica westfalica Zone, sensu Christensen 1997) to uppermost Campanian (Belemnella lanceolata Zone; Thibault et al. 2012; Voigt et al. 2012) and were deposited at palaeolatitudes of 47-49°N (Kent & Irving 2010; Van Hinsbergen et al. 2015).
In the Late Cretaceous, Skåne was located in the border zone between the Fennosarmatian landmass (which, in this region, is essentially equivalent to the exposed portion of the Fennoscandian Shield) and the epeiric Chalk Sea (Fig. 1A). At that time, the margin of the Kristianstad Basin consisted of an archipelago of granitic islands and elongate peninsulas (Surlyk in Voigt & Wagreich 2008). Palaeoshorelines of the northeastern coast of the basin are recorded from the famous locality of Ivö Klack, where basement rocks are encrusted by Cretaceous (Campanian) epizoan brachiopods, molluscs, corals and other invertebrates (c. 200 shell-bearing invertebrate species in total; Surlyk & Christensen 1974; Surlyk & Sørensen 2010; Sørensen et al. 2012). Shoreline and shallow embayment deposits also occur at Åsen, where Campanian oysters are interpreted to have attached to transported woody debris from terrestrial arborescent plants (McLoughlin et al. 2018). The deposits at Ryedal and Åsen yielding plant fossils for this study (Fig. 1B) were laid down in coastal plain settings situated at the Santonian-lower Campanian shoreline or in the immediate hinterland (Surlyk & Sørensen 2010, figs 3, 4). Some permineralized axes from Ryedal contain marine molluscan borings (Conwentz 1892) indicative of extended immersion in normal marine waters. Their preservation in uniform-grained quartz sandstones suggests accumulation in strandline deposits after moderate transport via fluvial systems to the coast (Rees 1999). The plant remains at Åsen are well preserved in matted accumulations and probably experienced negligible transport in coastal swamps.
Ryedal
Medium- to coarse-grained quartzose sandstones exposed at Ryedal (56°08ʹN 14°38ʹE) in Blekinge are assigned to the Ryedal Sandstone (Holst 1888), although this unit is probably a lateral equivalent of the lithologically similar Holma Sandstone exposed sporadically a short distance to the west in Skåne (De Geer 1889; Magnusson 1958). These deposits have yielded a few plant macrofossils, including permineralized woods and a cone that were documented by Conwentz (1892). In addition to associated fungal fossils (Felix 1894), some of these woods contain traces of invertebrate borers suggesting a foreshore depositional environment. Two conifer twigs and a single angiosperm leaf from this unit are described below. Conwentz (1892) also reported various other indeterminate stems and roots from the deposit at Ryedal, which he considered to be part of the Holma Sandstone. Inspection of several outcrops of the Holma Sandstone around the shores of Lake Ivö by the lead author in 2018 failed to yield any additional identifiable plant remains. The ages of the Ryedal and Holma sandstones are not well resolved but are inferred to be Santonian to lower Campanian based on their stratigraphic position and close association with basal marine deposits in the basin.
Åsen
The kaolin quarry at Åsen (56°09ʹN, 14°30ʹE), formerly belonging to Höganäs AB, is especially famous for mesofossils of charcoalified angiosperm flowers (Friis & Skarby 1981). About 20 m of unconsolidated sands and clays were previously exposed in a quarry that is now largely infilled (Sørensen et al. 2013). The basal plant-bearing non-marine succession is divided into two units by a distinctive weathered horizon. These strata were deposited in a NNE-trending palaeovalley, opening to the south and flanked by Proterozoic igneous rocks (Lundegren 1934; Siversson et al. 2016; see fig. 3B of Surlyk & Sørensen 2010). The lower unit is dominated by finely laminated lacustrine clays, silts and sands. The upper unit consists of cross-bedded or laminated sands, silts, and clays deposited in fluvial settings (Koppelhus & Batten 1989). The material studied herein derives from the lower part of the non-marine succession at Åsen.
The non-marine succession as a whole is assigned to the upper Santonian-lower Campanian on palaeontological and palaeomagnetic data (Mörner 1983;Friis et al. 2011 and references therein), and there are clear floristic differences between the upper and lower parts of the section, but as yet, no finer age controls are available for the two units. Besides plant microspores, pollen and angiosperm mesofossils, quarries at Åsen and nearby Axeltorp yielded conifer (Nykvist 1957) and angiosperm woods (Herendeen 1991), conifer leaves and cones (Srinivasan & Friis 1989), seeds (Kunzmann & Friis 1999), and lycopsid megaspores (Koppelhus & Batten 1989). Overlying the continental deposits at Åsen are marine marls and carbonates of latest early to earliest late Campanian age (Iqbal 2013;Einarsson et al. 2016;Siversson et al. 2016) that have yielded a rich invertebrate and vertebrate fauna (Sørensen et al. 2013;Einarsson 2018;McLoughlin et al. 2018, and references therein).
Material and methods
All specimens examined and illustrated in this study are held in the collections of the Swedish Museum of Natural History, Stockholm (registration numbers prefixed S for plants and NRM Mo for plant impressions on encrusted oyster shells). Some of the impression and permineralized fossils from Ryedal studied by Conwentz (1892) were originally illustrated using lithograph drawings. However, several of these specimens could not be located in the museum collections. The new combination and selected lectotype are registered with unique PFN numbers in the Plant Fossil Names Registry (https:// www.plantfossilnames.org/), hosted by the National Museum, Prague, for the International Organisation of Palaeobotany (IOP). New fungal taxa are registered in the Mycobank database (https://www.mycobank.org/) hosted by the International Mycological Association.
Leaf impressions and compressions
The angiosperm leaf and conifer leafy twigs from Ryedal are preserved as impressions in medium-grained sandstone, so they are described following the procedure advocated for poorly preserved leaf remains (Halamski & Kvaček 2015, pp. 102-103; Halamski et al. 2018, pp. 128-129). In contrast, fossil plants from Åsen are preserved as lignitized remains retaining cuticles, and as charcoalified material. Accordingly, several bulk samples (c. 250 g) from the lower clay bed at Åsen that are rich in cupressacean (=taxodiacean) conifers (Srinivasan & Friis 1989) were tested for leaf and twig extraction via bulk HF maceration; the recovered cuticles were cleared in Schultze reagent and then immersed in glycerine. Selected cleared fossil leaves were manually split along the margins with a fine needle to separate the abaxial and adaxial cuticles, then mounted and sealed on glass slides in glycerine jelly. Individual leaves from the same sample are differentiated by capital letter suffixes affixed to the sample number (e.g., S084386A, S084386B, etc.). Photomicrographs of plant cuticles and fungi in transmitted white light (brightfield) and incident-ultraviolet light (fluorescence: blue light excitation at c. 460-490 nm) were taken using an Olympus BX51 microscope with an Olympus DP71 digital camera. Whole leaves, cones and axes were photographed with either an Olympus SZX10 stereomicroscope equipped with a Sony Exmoor E3CMOS digital camera or a Canon EOS 40D digital camera. Final images were obtained by merging up to thirty photographs taken at different focal depths using Adobe Photoshop CC and Helicon Focus software with the "auto-align" and "auto-stack" functions.
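The multi-focus merging step is conceptually simple and, for readers without access to proprietary software, can be approximated as follows. This is a minimal sketch of the generic algorithm (keep, per pixel, the frame with the strongest local Laplacian response), not the actual Helicon Focus implementation; the input folder and function name are hypothetical, and production tools additionally align frames and blend seams.

```python
# Minimal focus-stacking sketch; assumes pre-aligned, same-sized colour images.
import glob

import cv2
import numpy as np


def focus_stack(paths):
    """Merge photographs taken at different focal depths into one composite."""
    stack = [cv2.imread(p) for p in sorted(paths)]
    sharpness = []
    for img in stack:
        grey = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        grey = cv2.GaussianBlur(grey, (5, 5), 0)
        # Absolute Laplacian response approximates local in-focus detail.
        sharpness.append(np.abs(cv2.Laplacian(grey, cv2.CV_64F)))
    # Index of the sharpest frame at every pixel position.
    best = np.argmax(np.stack(sharpness), axis=0)
    composite = np.zeros_like(stack[0])
    for i, img in enumerate(stack):
        composite[best == i] = img[best == i]
    return composite


if __name__ == "__main__":
    frames = glob.glob("leaf_slices/*.tif")  # hypothetical input folder
    cv2.imwrite("stacked.tif", focus_stack(frames))
```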
Permineralized remains
A siliceous permineralized cone (S085156) illustrated by Conwentz (1892) is embedded in a large block of quartzose sandstone. Rather than use destructive thin-sectioning for its anatomical analysis, we trialled non-destructive tomographic reconstruction of this fossil. A fission reactor neutron source was chosen to provide the desired combination of: 1, high penetration of the large (20 × 20 × 10 cm) block; and 2, differentiation of organic (hydrogen-rich) and inorganic (hydrogen-poor) fossil components (Sutton 2008). This study utilized the Open-Pool Australian Lightwater reactor at the Australian Nuclear Science and Technology Organisation (ANSTO), Lucas Heights, New South Wales, Australia. Data collection and reconstruction were conducted using the DINGO imaging and neutron tomography facility at ANSTO in January 2017. Neutron tomographs were reconstructed from a compilation of 1600 projections across a total rotation of 360°, each with an exposure length of 23 seconds. Since DINGO employs a quasi-parallel collimated beam, the spatial resolution is determined by the following factors: 1, the collimation ratio (L/D, where L is the neutron aperture-to-sample length and D is the neutron aperture diameter); 2, the thickness of the scintillation screen; and 3, the pixel size of the detector (Garbe et al. 2011). Images were collected on DINGO's high-resolution collimator setting, which has an L/D value of 1000. A 100-μm-thick ZnS/6LiF scintillator screen was employed, with a 200 × 200 mm field-of-view. The detector system was a liquid-cooled, 16-bit Andor IKON-L CCD camera fitted with a 50 mm lens. The resultant pixel width for these projections was approximately 95.5 μm. All additional NT experimental setup details not outlined herein follow Mays et al. (2018). Tomographic reconstruction was performed by CM, J.J. Bevitt, M.-A. Harvey and A. Langendam using filtered backprojection with Octopus Reconstruction v.8.8 (Inside Matters NV). Volume rendering and segmentation were performed by CM using Avizo v.9.5.0 (FEI Company). The specific visualization techniques applied (volume or surface rendering) varied based on the different preservation styles within the same specimen. Specifically, a surface rendering was conducted for the cavity-forming mouldic preservation of the foliage-bearing axis, whereas a volume rendering was produced for the silicified seed cone portion of the fossil (and surrounding matrix). The reasons for the employed visualization techniques are discussed further in the systematic palaeontology section ("Neutron tomographic reconstruction of the cone and axis").
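The three resolution-limiting factors listed above can be combined in a back-of-the-envelope estimate, sketched below. The sample-to-detector standoff is a hypothetical value chosen for illustration, and adding the blur sources in quadrature is a common simplification rather than the calibrated resolution model of the DINGO instrument.

```python
# Rough effective-resolution estimate for a quasi-parallel neutron beam.
import math

L_over_D = 1000.0        # collimation ratio (high-resolution setting)
standoff_mm = 20.0       # hypothetical sample-to-detector distance
scintillator_um = 100.0  # ZnS/6LiF screen thickness
pixel_um = 95.5          # reconstructed pixel width

# Geometric unsharpness grows linearly with standoff: d = l / (L/D).
geometric_um = standoff_mm * 1000.0 / L_over_D  # 20 um for these inputs

# Combine the three blur contributions in quadrature.
effective_um = math.sqrt(geometric_um**2 + scintillator_um**2 + pixel_um**2)
print(f"geometric blur ~{geometric_um:.0f} um; "
      f"effective resolution ~{effective_um:.0f} um")
```

With these inputs, the detector pixel size and scintillator thickness dominate the blur budget, consistent with the relatively coarse (c. 96 μm) projections noted above.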
Systematic palaeontology
Both Nathorst (1891) and Schuster (1930) reported examples of the fern Weichselia in isolated sandstone boulders within glacial till located in Germany that were putatively derived from the Ryedal or Holma sandstones of Sweden (see Edwards 1933; Alvin 1971). No equivalent material is available in the collections of the Swedish Museum of Natural History (Stockholm) for evaluation, and these specimens are not discussed further. The following descriptions are restricted to material available in the Stockholm collections.

Geinitzia sp.
Description
Both specimens are preserved as impressions. S166194 is a leafy twig fragment c. 2.5 cm long. The axis is longitudinally striate, c. 1 mm thick. Leaves are helically arranged, awl-shaped, erect, weakly incurved adaxially, quadrangular in cross-section, 3.5-4.5 mm long, with slightly expanded bases (1 mm thick). S083900 is a leafy twig 18 mm long and 4 mm wide that lacks details of the axis but consists of spirally arranged, awl-shaped, leaf imprints, each c. 1 mm wide and 2.5 mm long.
Remarks
These specimens are attributed to the fossil-genus Geinitzia based on their quadrangular, slightly decurrent leaves. They are broadly similar in size and leaf shape to Geinitzia reichenbachii (Geinitz 1842) Hollick & Jeffrey 1909, a widely reported (and possibly heterogeneous) taxon through the Upper Cretaceous of central and western Europe (Bosma et al. 2009; Halamski 2013; Halamski et al. 2018, 2020; Płachno et al. 2018), but the lack of cuticular details or attached cones inhibits further comparisons.
Brachyphyllum sp.

Description
Axis fragments 10 mm wide and up to 90 mm long, bearing rhombic, appressed leaves in tight helices. Leaves are typically 5-7 mm wide (forming a broad rhombic basal cushion) and 4-7 mm long, although bases are commonly concealed by overlapping leaves of the preceding spiral. Leaf apices are obtusely pointed and slightly reflexed.
Remarks
In the absence of cuticular details, these specimens are attributed to Brachyphyllum based on their consistently short, bluntly tapered leaves with lengths being roughly equivalent to their basal widths (Harris 1979). The leaves are less narrowly attenuated than those of Geinitzia sp. described above. A few short-leafed examples of Pagiophyllum sp. from Campanian strata of Köpinge (Halamski et al. 2020, fig. 3D) approach the shape and size of these Santonian specimens. Although the precise source of the illustrated specimens can not be determined, they likely derive from the basal plant-bearing units of the northern Kristianstad Basin (e.g., Ryedal or Holma sandstones or laterally equivalent units).
Indeterminate axes

Remarks
Various leafless axis impressions and compressions have been reported from the Ryedal Sandstone (Conwentz 1892), the basal carbonaceous succession at Åsen, lignites at nearby Axeltorp (Nykvist 1957), and as impressions on encrusting oysters from the overlying carbonate succession at Åsen (McLoughlin et al. 2018). Some examples bearing dense, helically arranged, leaf scars ( Fig. 3C, E, F) probably represent distal branches of conifers, but they lack the morphological details necessary for identification to any particular family. A few axes have longitudinally ribbed textures and sporadic branch scars ( Fig. 3D) that are not diagnostic for any specific woody plant group. Although the frequency of these woody remains (some with invertebrate borings) is important for ascribing coastal to nearshore depositional environments to the host strata (Nykvist 1957;McLoughlin et al. 2018), they can not be identified with precision and are not described further herein. Several permineralized axes documented by Conwentz (1892) will be redescribed in a separate study.
Quasisequoia florinii Srinivasan & Friis 1989

Remarks
This species was fully described by Srinivasan and Friis (1989), together with several other cupressacean scale-leafed twigs and cones (Table S1). We illustrate three examples of Quasisequoia florinii leaves here simply to highlight the potential use of fluorescent light microscopy of coniferous macrofossils from Åsen to obtain epidermal details.
Fricia nathorstii (Conwentz) comb. nov.

Plant Fossil Names Registry Number for lectotype designation
PFN002284.
Emended diagnosis
Ovuliferous cone ovoid, borne terminally on leafy twig. Cone-scale complexes cylindrical to peltate or obovate, arranged helically, relatively massive, with rounded apices. Vascular bundle of cone-scale complexes positioned centrally at base and dividing distally into several diffuse veins. Cone scales flattened rhombic in transverse section basally. Distal portions of cone scales bearing a shallow central adaxial depression, rhombic in cross-section with rounded apex. Seeds large, curved, of Seletya type, borne on adaxial side of cone scale. Subtending twig bearing helically arranged imbricate leaves of Pagiophyllum- or Geinitzia-type.
Description of surficial features
The lectotype (S085156) was selected by us from an assortment of other fossils (twigs, wood, leaves) that were considered by Conwentz (1892) to belong to the same taxon. The ovuliferous cone represents a unique permineralized specimen that Conwentz (1892) considered to be attached to the imprint of a thick branch lying adjacent on the rock surface and bearing numerous weakly defined, helically arranged, rhombic leaf scars (Fig. 3C). However, our investigation negated this assumption. Instead, neutron tomographic analysis revealed that the cone is attached to another, completely concealed, twig bearing Pagiophyllum/Geinitzia-like leaves (Fig. 4A-C). The ovuliferous cone is split longitudinally, and is 36 mm long and 25 mm wide (Fig. 3A). The other half of the fossil cone was apparently never collected. The longitudinal section of the ovuliferous cone exposed on the rock surface reveals a cone axis c. 4 mm in diameter with a distinct dark inner pith and whitish outer woody zone (Fig. 3A). Cone scales emerge at c. 80° to the axis basally, reducing to lesser angles apically. Ovuliferous cone scales are roughly obovate in plan view, reaching 12 mm long and 7 mm wide. Cone scales have a narrow basal attachment (rhombic in cross-section) to the cone axis. They are roughly cylindrical near the cone apex but are clavate to peltate elsewhere, enlarging into a 3.2-mm-thick head distally. The terminus is rounded and rhombic in cross-section (Fig. 4D). There is no obvious differentiation of the cone scale into separate bract and ovuliferous scales, but the vascular trace of some examples appears to split into a lower and upper strand that probably fed the fused bract and ovuliferous scale, respectively (Fig. 3B: arrowed). At least one slightly curved but otherwise spindle-shaped seed is borne on the adaxial surface of each cone scale and is 1 mm thick and 3.2 mm long (Fig. 3A, B).
Neutron tomographic reconstruction of the cone and axis
The fossil specimen was preserved in a medium-grained quartz arenite, and the high silicon and oxygen content of the host rock permitted neutrons to penetrate the large (20 × 20 × 10 cm) block with only minor impedance. However, the size of this block required a large field of view during data collection, thus limiting the spatial resolution for the tomographic reconstruction and providing a relatively coarse pixel size for each projection (c. 96 μm). The neutron tomographic reconstruction revealed substantial variability in preservation both within the specimen and within the matrix. Organic remains were evident as regions of high relative neutron attenuation (RNA), whereas the sedimentary silicate grains, cement and permineralized regions of the fossil all had low attenuation, and pore spaces had negligible attenuation (Fig. 4A). The relatively high RNA in organic remains results from the higher hydrogen density (Sutton 2008), and this can be employed for distinguishing, and "virtually extracting", organically preserved plant remains from a hydrogen-poor silicate matrix (Mays et al. 2017a). Hydrogenous (presumably organic-rich) regions were identified both within the seed cone and dispersed within the matrix, the latter likely indicating clastic plant debris in the sediment. In the present study, however, some parts of the preserved cone (e.g., at least one of the subsurface cone-scale complexes) lacked enough contrast to be distinguished from the matrix, indicating a near-complete replacement of the organic material with silicate minerals. Furthermore, the subtending axis and attached foliage were nearly entirely weathered away, leaving only a cavity in the sedimentary matrix. This variability in preservation precluded a consistent visualization technique for all components. Firstly, because the mould of the leaves and axis was entirely encased within the sediment, the interface between the matrix and the mould facilitated a strong neutron attenuation contrast, and a high-fidelity surface rendering could be produced (Fig. 4B; Supplementary Video 1). Secondly, the partially preserved organic matter in the seed cone generally provided a good contrast with the siliciclastic matrix, except where it was replaced by silicate minerals. Thus, a volume rendering was produced, illustrating regions of differing neutron attenuation within the seed cone and matrix (Fig. 4C, D; Supplementary Video 2).
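The mapping from attenuation classes to rendering strategy can be illustrated with a simple thresholding sketch. The volume below is synthetic and the cut-off values are assumptions for illustration; real tomograms require thresholds chosen from their own attenuation histograms, and interactive packages such as Avizo use far more sophisticated segmentation.

```python
# Toy three-way segmentation of a reconstructed tomogram by neutron attenuation.
import numpy as np
from skimage import measure

rng = np.random.default_rng(0)
volume = rng.random((64, 64, 64))   # synthetic stand-in for a reconstruction

pore_thresh, organic_thresh = 0.05, 0.60  # assumed normalized RNA cut-offs
pores = volume < pore_thresh              # negligible attenuation: cavities
organic = volume > organic_thresh         # high RNA: hydrogen-rich remains
matrix = ~pores & ~organic                # silicate grains, cement, silica

# Surface rendering of a mould: extract an isosurface at the matrix/void
# interface, where the attenuation contrast is strongest.
verts, faces, *_ = measure.marching_cubes(volume, level=pore_thresh)
print(f"organic voxels: {organic.sum()}; matrix voxels: {matrix.sum()}; "
      f"mould mesh: {len(faces)} triangles")
```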
The neutron tomographic approach enabled nondestructive study of the unique specimen. This technique revealed anatomical details that either corroborated our findings from surficial features or could not be observed by other means. Up to 41 individual leaves were identified on the axis bearing the ovuliferous cone (Fig. 4B). These leaves are partly appressed to the axis, and have helical phyllotaxis. The leaves are short (3-4 mm long), 1-2 mm wide and thick, elliptical to rhombic in cross-section, weakly keeled, have entire margins and acute and straight apices. Neutron tomography revealed that the ovuliferous cone preserves remnants of about 20 cone-scale complexes (Fig. 4C-G). It also confirms that the cone-scale complexes are arranged helically, as clearly demarcated by their well-defined insertion areas in the volume rendering (Fig. 4C, E-G; Supplementary Videos 2 and 3). The volume rendering indicates that the cone-scale complexes are obovate in outline, with pedicellate bases, and peltate heads that are rhombic in distal view (see Fig. 4D, E, H). The cone scales appear to contain several parallel veins or resin canals and one centrally placed vascular bundle (see upper left scale of Fig. 4D).
Remarks
The original material of Pinus nathorstii consists of an ovuliferous cone, various axis impressions and leaves; it is uncertain whether they belong to the same species. The cone (S085156) is designated herein as the lectotype, whereas the leaves and leafless axes (several of which could not be located in the collections) are removed from the species.
The cone is assigned to Fricia, a genus described from the Cenomanian of the Bohemian Cretaceous Basin, Czech Republic (Velenovsky 1885; Kvaček 2013). The Ryedal cone is particularly similar to Fricia in the shape of its cone-scale complexes and curved seeds of Seletya type (Dorofeev 1979; Kunzmann & Friis 1999). It differs from Fricia nobilis in having seeds on the adaxial side of cone-scale complexes and having a rhombic rather than polygonal escutcheon with a shallow depression. Fricia nathorstii has some similarities in general form to the ovuliferous cones described by Kunzmann (1999) as Geinitzia formosa Heer 1871. Fricia nathorstii differs from G. formosa in having cones that are ovoid and less elongate. Cone-scale complexes of F. nathorstii have rhombic termini in contrast to the hexagonal escutcheons typical of G. formosa. Similar ovuliferous cones attributed to G. schlotheimii by Kunzmann et al. (2003) from the Santonian of Aachen (Germany) differ from F. nathorstii in having elongate ovuliferous cones of much smaller size. Geinitzia schlotheimii also differs from F. nathorstii in having much longer leaves on the subtending twigs. Lignitized/charcoalified ovuliferous cones from the Santonian/Campanian of Åsen associated with Quasisequoia florinii Srinivasan and Friis (1989) are significantly different from the Ryedal F. nathorstii specimen in being much smaller and having ovuliferous cone-scales that are markedly peltate with a slightly convex or slightly depressed escutcheon, and in bearing winged seeds.

Platanoid leaf

Material
S166195 (collected by Ture Hemming in 1905); S166196 is the counterpart but shows almost no venation details; Ryedal Sandstone, Ryedal, Blekinge, Sweden; ?upper Santonian-lower Campanian.
Description
The only available specimen is a relatively large (preserved length 95 mm, preserved width 70 mm; estimated length c. 120 mm, estimated width c. 100 mm) leaf fragment lacking base, apex, or margin. Moderate venation details are retained on the impression in medium-grained sandstone. The venation pattern may have been palmately pinnate or pinnate. The midvein follows a gently zigzagged course with a change of direction at each departure of a secondary vein, the changes becoming more pronounced distally. The midvein bifurcates near the eroded apical margin. Secondaries (three preserved with departure and two without) emerge at an acute angle (30-50°) from the midvein. Tertiary veins are percurrent and V-shaped between secondary veins.
Remarks
This venation pattern is characteristic of the platanoid form group. The absence of the base, apex, and margin precludes any more resolved identification, which should be based, in particular, on the basal vs suprabasal departure of the first pair of secondaries. However, the zigzagged midvein is a notable feature distinguishing the Ryedal platanoid leaf from other Late Cretaceous members of the form group, such as Ettingshausenia sp. from the Campanian of southern Scania (Halamski et al. 2020), E. lublinensis from the Campanian of southern Poland (Halamski 2013), or E. onomasta from the Coniacian of the Sudetes, Czech Republic.
LEAVES WITH CUTICLE FORM GROUP UNRESOLVED
Here we illustrate the cuticles of three selected leaf forms preserved as lignitized compressions from the well-known Åsen (Santonian-lower Campanian) plant fossil assemblage. Previous studies have documented a range of angiosperm reproductive structures, palynomorphs, and lignitized wood from this deposit (Table S1). Our intention in the descriptions below is not to provide a systematic account of the extensive Åsen angiosperm leaf flora, but to illustrate the preservational quality of the fossil foliage and to highlight the potential of this assemblage for investigations of Santonian-Campanian angiosperm diversity, plant-insect interactions, and palaeoenvironmental analysis.
Angiosperm leaf morphotype 1

Description
No complete leaf available. Venation with a single straight midvein and two thinner lateral veins (Fig. 5B). Distal parts (Fig. 5A) linear, 4-9 times longer than wide; apex probably emarginate. Margin with irregularly spaced, blunt glandular cusps.
Remarks
Anomocytic stomata are the most common type of angiosperm stomata; they are present in basal angiosperms (Nymphaeaceae) and basal eudicots (Ranunculaceae, Lardizabalaceae), but also in more derived groups, such as Aceraceae, Rosaceae and even Campanulaceae (143 families in total; Metcalfe & Chalk 1950, pp. 1331-1332). However, combined with the presence of glandular cusps, a chloranthaceous affinity is suggested for this leaf. Glandular teeth are known from various extant representatives of Chloranthaceae (Todzia & Keating 1991), although in most cases they are more densely spaced than in the Åsen fossil. The presence of chloranthaceous leaves in the Åsen deposits comports with the mesofossil assemblages, which have yielded floral structures of this group (Crane et al. 1989; Eklund et al. 1997). The quality of the cuticle morphology is outstanding. If calibrated with its nearest living relative, the stomatal indices of this chloranthaceous leaf could prove useful in future palaeoclimatic analyses as a proxy for pCO2 in the late Mesozoic (compare with, e.g., Steinthorsdottir & Vajda 2015; Steinthorsdottir et al. 2016). Moreover, cuticle of this type is host to a range of epiphyllous fungi (Fig. 5G; and descriptions below) that attest to complex plant-fungal interactions in the Late Cretaceous and growth in a humid climate.
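For reference, the stomatal index (SI) invoked above is conventionally computed from counts of stomata (S) and ordinary epidermal cells (E) per unit area of cuticle; this is the standard formulation used in stomatal-proxy work generally, not a value derived from the Åsen material:

```latex
\mathrm{SI} = \frac{S}{S + E} \times 100
```

For example, a cuticle field containing 12 stomata and 188 epidermal cells yields SI = 12/200 × 100 = 6. Because SI, unlike raw stomatal density, is largely insensitive to differences in epidermal cell expansion, it is the preferred variable for pCO2 calibration against nearest living relatives.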
Remarks
We tentatively suggest a platanaceous affinity for this cuticle type. The occurrence of this group is supported by the presence of fossil reproductive structures of platanaceous affinity from the same deposits (Friis et al. 1988). Similar material was described by Golovneva (2011) as Ettingshausenia cuneifolia from the Cenomanian of Siberia. The cuticle type is also extensively overgrown by epiphyllous fungal hyphae, suggesting growth in a moist habitat.
Description
The available material consists of two cuticle fragments (the larger is 13 mm long and 4 mm wide) with subparallel sides (Fig. 6D); hence, it is inferred that leaves were originally linear. Both available cuticle fragments bear stomata; they may represent either abaxial cuticles or the leaves may have been amphistomatic. Cells subrectangular, arranged in rows parallel to the leaf margins, 50-100 µm long, c. 50 µm wide in the median region, 20-25 µm wide near the margins; pairs of longitudinally arranged cells represent sister cells having originated via bipartition of a mother cell. Trichomes and secretory structures absent. Stomata rare, irregularly disposed, cyclocytic (Fig. 6E, F), with 10-12 surrounding cells, 40-75 µm long and 30-50 µm wide, much to slightly longer than wide, rounded at poles, aperture c. 20 µm long and 15-20 µm wide. Weak striae are locally preserved on cells (Fig. 6F).
Remarks
Monocots have a fossil record extending back to the Early Cretaceous (Friis et al. 2004, 2011; Doyle et al. 2008; Coiffard et al. 2013). Although their fossil record is patchy through the Cretaceous, this group clearly diversified and increased in abundance through the latter part of the period (e.g., Kvaček & Herman 2004), such that monocots became locally dominant components of some vegetation types by the Maastrichtian (Upchurch 1995; Herman & Kvaček 2010). Pole (2007) outlined some of the problems associated with confidently identifying isolated fossil monocot cuticle fragments and assigning them to constituent families. Generally, monocot cuticle preserves longitudinally oriented files of rectangular epidermal cells and stomatal complexes, typically demarcated into long costal and intercostal bands. Stomatal complexes vary considerably within the group, especially within Orchidaceae (Stern 2014).
Etymology
After Lake Ivö (Ivösjön), located adjacent to the fossil locality.
Description
Epiphyllous fungus consisting of circular discoid to low domal plectenchymatous thyriothecium, 80-200 μm in diameter, with a 5-20 μm central ostiole and weakly thickened collar (Fig. 7A-F). Margin of thyriothecium relatively smooth and entire or with sporadic radial clefts (Fig. 7A, C). Plectenchymatous wall consisting of densely and irregularly interwoven hyphae c. 3-4 μm wide, becoming concentrically arranged at the thyriothecium margin (Fig. 7B, F). In only a few cases, tightly contorted septate hyphae, 3-11 μm wide, emerge from the margin of the thyriothecium and extend irregularly across the host cuticle surface (Fig. 7D), giving off sporadic simple short hyphopodia. The thyriothecia occur on various angiosperm cuticle types either in isolation or in relatively dense clusters (Fig. 7A). They are especially common on the stomatiferous (presumably abaxial) surfaces of chloranthaceous leaves (Fig. 7E) and are positioned over stomata (Fig. 7A-C). They rarely occur over major veins.

Remarks

Samarakoon et al. (2019) noted that, although the fossil record of shield-like epiphyllous fungi is quite extensive, many are described from incomplete specimens that lack clear characters, resulting in morphological confusion and uncertain systematic placements. The small circular ostiole, tangled hyphae constituting the thyriothecium, and the relatively sharp boundary of the latter are features consistent with Micropeltidaceae (Zeng et al. 2019). The history of higher classification of Micropeltidaceae and segregation of its constituent genera is complex, as outlined by Phipps and Rember (2004). Classification below family level typically requires details of the ascospores, which are not available for the specimens described here. Nevertheless, the small ostiole, complex intertwined hyphae, and sharply defined margins with radial clefts are consistent with the features of Stomiopeltites (Alvin & Muir 1970). Although Alvin and Muir (1970) claimed that this taxon lacked hyphopodia, their illustrations do not appear to capture the full morphology of their studied fossils, and Phipps and Rember (2004, fig. 9) illustrated similar forms with adventitious hyphae bearing sporadic lateral extensions that probably represent hyphopodia. The type species, Stomiopeltites cretacea Alvin & Muir 1970, from the Wealden of the Isle of Wight, differs from the new species by its narrower hyphae (1.7-3 μm wide) and slightly larger thyriothecia (up to 250 μm wide). Stomiopeltites amorphos Phipps & Rember 2004, from the Miocene of Idaho, differs in its more prominently thickened collar around the ostiole and generally less tightly interwoven hyphae. The shield-like Stomiopeltites fossils and other forms described below can be separated from most commonly reported examples of fossil epiphyllous fungi affiliated with Microthyriaceae by the latter having much more regimented and regularly septate rays of hyphae constituting the thyriothecium (Dilcher 1965; Wu et al. 2011; Du et al. 2012; Worobiec & Worobiec 2013).
Description
Epiphyllous fungi consisting of prominently domed (Fig. 7J) plectenchymatous thyriothecia, 160-450 μm in diameter (Fig. 7G-M), each with an apparent central cap or operculum (Fig. 7K) c. one-third of the thyriothecium diameter (in most cases detached: Fig. 7L), and with an 8-μm-wide central ostiole. Margin of thyriothecium ragged with irregular short (up to 30 μm) hyphal extensions and sporadic radial clefts (Fig. 7L, M). Plectenchymatous wall consisting of dense, contorted and partially interwoven, c. 3-4 μm wide, hyphae that generally become radially arrayed and laterally fused towards the margin forming a flat stellate skirt (Fig. 7I, K-M). Plectenchyma strongly thickened around outer margin of operculum and inner part of the outer thyriothecium. Localized thinning of the plectenchyma apparently provides a weakness in the thyriothecium wall for the detachment of the operculum (Fig. 7K), the apical part of the thyriothecium also opening by star-shaped splits after operculum detachment (Fig. 7J). The thyriothecia occur on various angiosperm and conifer cuticle types, typically in isolation or weakly aggregated (Fig. 7G, H). They occur on both stomatiferous and nonstomatiferous leaf surfaces but appear to be significantly more common on the latter.
Remarks
We did not detect any adventitious hyphae emerging from the thyriothecia but further searches of the rich Åsen fossil assemblage are likely to yield additional details of the morphology of this fungus. In its generally ragged fringe, radial arrangement of slender hyphae around the margin of the thyriothecium, and stellate apical opening, this fossil form is similar to various modern Asterinaceae taxa, including Halbania cyathearum, and to Petrakina mirabilis, an ambiguously placed dothideomycete (see Hongsanan et al. 2014, figs 15d, 45f), but the extant examples lack a line of thinning to produce a broad, operculum-like, apex. Similar ring-like thinnings or ruptures are evident around the apices of some microthyrialean thyriothecia, such as Dictyopeltis applanata (see Gallo et al. 2018, fig. 2E), but such examples tend to lack the more markedly radial arrangement of hyphae. The Åsen form is also similar to some Cenozoic fossils, e.g., Asterilla kosciuskensis Selkirk 1975, in its fine radial hyphae, but the presence of an apical, ring-like, thinning or stellate clefts is distinctive to the Swedish taxon. Based on the dominant characters, we tentatively assign these remains to Asterinales; confirmation and further taxonomic resolution will require additional details of the hyphae, ascocarps and ascospores.
Meliolinites scanicus sp. nov.

Etymology
After the province of Skåne (Scania), southern Sweden.
Remarks
The arcuate contortions evident in the hyphae near some septa are superficially reminiscent of clamp connections among Basidiomycota but instead appear to be short, lobate appressoria (hyphopodia) of an ascomycote fungus. Similar short curved cells were illustrated on hyphae marginal to ascomycete thyriothecia by Phipps and Rember (2004, fig. 19). This taxon is similar, in the general form of its perithecia, its Y-branched and cross-connected adventitious hyphae, and its apparently short hyphopodia, to a range of species attributed to Meliolinites (Meliolales). Meliolales has a sparse fossil record with occurrences scattered through the Cenozoic (Köck 1939; Dilcher 1965; Selkirk 1975; Mandal et al. 2011; Taylor et al. 2015; Wang et al. 2017). The present record from Santonian-Campanian strata at Åsen supplies a new calibration point for dating the evolutionary diversification of this order that is consistent with the estimated divergence of crown group Meliolales at 177-93 Ma (late Early Jurassic to mid-Cretaceous) based on molecular data (Hongsanan et al. 2016).
At least ten species of this fossil genus have been established but, in many cases, the morphological features distinguishing these taxa are very subtle. In most cases, it is the characters of the spores, appressoria, and other hyphal extensions (e.g., mycelial setae) that are used to distinguish the fossil species, but these are either absent from the Åsen specimens or (with respect to appressoria) simple and ill-preserved. Based on the relatively simple appressoria and poorly ordered perithecia, Meliolinites scanicus appears to be a more archaic form within the genus. Other species of the genus, e.g., Meliolinites spinksii (Dilcher) Selkirk 1975, M. nivalis Selkirk 1975 and M. dilcheri Daghlian 1978, are differentiated principally on such spore, appressorium and setal characters.
Description
Thickened, translucent, stellate epicuticular structures occurring on both stomatiferous and non-stomatiferous surfaces of angiosperm leaves and centred on stomata or epidermal cell junctions (Fig. 8G-I). Larger examples are roughly circular (up to 200 μm in diameter: Fig. 8J), but smaller specimens form irregularly stellate thickenings (c. 20-40 μm in diameter: Fig. 8H) at epidermal cell junctions. These features lack clearly defined internal structures or adventitious hyphae but the thickenings show radiating extensions along epidermal cell boundaries (Fig. 8H, I).
Remarks
Various dothideomycete fungal germlings (e.g., Bianchinotti et al. 2020, pl. 1, figs 1-3) can have a stellate appearance superficially similar to these fossils. Such germlings may become established at weaknesses in the leaf cuticle substrate - either at epidermal cell junctions, loci of physical damage, or around stomata. In some respects, the Åsen examples are similar to diminutive examples of insect mucivory (piercing-and-sucking) damage (see examples described below) but they tend to occur away from veins, are strongly variable in size, and not all form around ruptures in the cuticle. The identity of these features on the Åsen angiosperm leaves is far from clear but their abundance, variable size and stellate form leads us to provisionally consider them to be fungal germlings.

Sinuous hyphae (Figs. 8L-O, 9A, B)

Material
S084294-04, S084294-05, S084298-06, S084299-01, S084386-03, S084413-01, S084413-02, S084413-05, S084468-12

Description
Weakly (Fig. 8N) to strongly (Fig. 8M) sinuous hyphae with few or rare septa, variable branching patterns, and simple rounded termini. Hyphae are smooth, typically 5-8 μm wide, hundreds of micrometres long, with walls 1-2 μm thick, and lacking obvious hyphopodia, reproductive structures or other adornments. These hyphae typically skirt stomata (Figs. 8N, O, 9B) and are generally arranged irregularly across the leaf surface. In a few cases, they appear to penetrate stomata (Fig. 9A). Where hyphae are very abundant, they commonly aggregate in bands and follow epidermal cell walls, especially along veins (Fig. 8L).
Remarks
A very large number of relatively featureless fungal hyphae unassociated with reproductive structures occur on both angiosperm and conifer leaves in the collection. Owing to the dearth of morphological characters and lack of attached reproductive bodies, the affinities of these hyphae are unclear. Some may be affiliated with the various epiphyllous fungi described above, since similar crenulate hyphae are known to be associated with a wide range of micropeltidacean thyriothecia (Maslova et al. 2020). Others may represent saprotrophic fungi that grew over the dead leaf surface before burial. We illustrate a range of examples (Figs. 8L-O, 9A-B) to highlight the potential for additional discoveries of fossil (Santonian-Campanian) mycobiota in the Kristianstad Basin.
Description
Featureless, gently curved, aseptate hyphae with simple blunt termini in tangled masses on degraded cuticle.
Remarks
One example of this hyphal type is available. It lacks the marked crenulations or sinuosity of the isolated hyphae described above. Septa are not obvious but overlapping of the hyphae and the degraded nature of the attached angiosperm cuticle make these difficult to detect. In the absence of attached reproductive features we can not assign these remains to any systematic or palaeoecological grouping of fungi.
Remarks
Although broadly similar simple, laevigate, spherical-globose reproductive structures are produced by various fungi, including Chytridiomycota (Krings et al. 2009), zygomycetes (Krings & Taylor 2012) and Glomeromycota (Walker 1983), we regard these remains as probable ascomycote chlamydospores. These reproductive structures are borne on hyphae that resemble some examples of those described above as "sinuous hyphae", although they are generally less contorted. The putative chlamydospores are relatively featureless and lack preserved contents. We draw attention to the similarity of these structures to thick-walled chlamydospores borne on short lateral hyphal branches of modern ascomycotes, such as Fusarium (Pérez-Vicente et al. 2014) and Verticillium (Grum-Grzhimaylo et al. 2016). The lack of any additional distinctive morphological characters detracts from assignment to any particular group of Ascomycota. Chlamydospores are asexual reproductive structures that tend to be produced during unfavourable environmental conditions (e.g., excessive heat or drought) as a resting stage in the fungal life cycle (Lin & Heitman 2005).
Description
Scolecospores 60-163 μm long, 8-18 μm in maximum width, spindle-shaped to linear with tapered base. The spores are typically attached to an indistinct hypha c. 4 μm in diameter on the cuticle surface by a short, tapered, strongly translucent or darkened pedicel (Fig. 9I, J). Scolecospores divided by transverse septa into 8-36 cells (Fig. 9G, H). The apex is generally tapered but, in rare cases, may be capped by an enlarged rounded cell (Fig. 9L). Where attached to cuticle, the scolecospores are typically positioned over or adjacent to stomatal apertures. They occur scattered over the stomatiferous surfaces of angiosperm leaves and, in a few cases, occur in dense aggregations (Fig. 9K).
Remarks
Scolecospores are common among representatives of extant Phyllachoraceae (Ascomycota, Sordariomycetes), which are obligate parasites on living plant hosts. Isolated scolecospores may vary considerably within an individual population in size, gross shape, terminus shape and number of septa ( Fig. 9G-L).
Owing to the dearth of other morphological characters, they are difficult to assign to any species with consistency. The Åsen examples are especially similar to septate ascospores of Scolecopeltidium hormosporum Stevens & Manter 1925 (Wu & Hyde 2013). Similar scolecospores (filamentous phragmospores) illustrated as "Scolecospore Fungal multicellate spore" have also been documented from the Bathonian-Tithonian of Libya (Thusu & Vigran 1985), and a range of other comparable spindle-shaped septate spores have been recorded from Upper Cretaceous and Paleogene strata globally (Kalgutkar & Braman 2008; Saxena & Tripathi 2011). These remains are also superficially similar to some examples of isolated elongate dothideomycete conidia (Hyde et al. 2017, figs 12j-m, 33i-q; Crous et al. 2007, fig. 6B-H), but the Åsen examples are never aggregated into sporodochia or disarticulating chains. Kalgutkar and Jansonius (2000) summarized the few species formally established within Scolecosporites. Of these, the Åsen specimens appear to be most similar to Scolecosporites scalaris (Kalgutkar) Kalgutkar & Jansonius 2000 and S. modicus Kalgutkar & Jansonius 2000 in their gross shape, size and degree of septation. However, the Swedish population seems to encompass the full range of characters expressed by all four fossil species recognized by Kalgutkar and Jansonius (2000).
Description
One chloranthaceous leaf from Åsen (leaf morphotype 1) bears at least two examples of margin feeding. The feeding traces (Fig. 5B) extend c. 4 mm along the leaf margin and penetrate 2 mm into the lamina (i.e., reaching but not transecting the midvein). The traces are represented by roughly semicircular scallops, although the lower example in Fig. 5B appears to show a secondary scallop that extends from the initial damage area. The damaged areas are flanked by a reaction rim consisting of thickened tissue, 100-200 µm wide, with an increased density of fungal hyphae (Fig. 9M).
Remarks
This style of leaf-margin feeding is broadly similar to the damage category DT14 illustrated by Labandeira et al. (2007). Because of extensive convergence in mouthpart architecture and foraging behaviour in leaf-margin-feeding insects, only rare cases of this damage style can be attributed to specific animals. Diverse larval and adult insects, especially among Coleoptera (beetles), Orthoptera (grasshoppers and their relatives), Lepidoptera (moths and butterflies), Phasmatodea (phasmids), and Hymenoptera (ants, bees and their relatives), are known to produce simple semicircular scallops on leaf margins (e.g., Carvalho et al. 2014;Sohn et al. 2017) similar to the examples illustrated here. It is notable that the scalloped damage occurs along parts of the lamina margin between the blunt glandular cusps that are characteristic of this leaf form. Trichomes of various morphologies, including glandular hairs and secretory cells are common in a broad range of flowering plants, including some of the earliest diverging angiosperm clades, and typically represent structures producing chemical defenses against herbivory (Fahn 1979;Agrawal & Fishbein 2006;Chin et al. 2013). However, some glandular cusps (in the form of extrafloral nectaries) produce insect attractants (Elias 1983) and others aid the reduction of water loss via cuticular transpiration (Gonzalez & Tarragó 2009). In extant Chloranthaceae, glandular cusps, in the form of hydathodal glands, are known to aid water regulation via guttation (Todzia & Keating 1991;Feild et al. 2005;Feild & Arens 2007). Marginal cusps also provide physical obstacles that are disruptive to regular margin feeding by insects (Brown et al. 1991;McLoughlin et al. 2015) and the apically orientated teeth of angiosperm leaf type 1 may have directed herbivore traffic distally, and ultimately off the end of the leaf (Vermeij 2015). It is likely that the Åsen chloranthaceous leaf was employing glandular cusps as both physical anti-herbivory and water-regulatory devices, and that the insect was actively avoiding marginal teeth in an ongoing "arms-race" of herbivory versus plant defence at a time of rapid diversification of both angiosperms (Magallón et al. 2019) and at least some groups of insects (Condamine et al. 2016). The record of chemical defences in fossil plants is scant but does extend back to the late Palaeozoic (Krings et al. 2002). The preservation of glandular structures on Cretaceous leaves at Åsen offers one line of investigation for tracking the development of induced (chemical) defence mechanisms against herbivores in early angiosperms.
Description
Prominent circular to elliptical openings in cuticle, typically c. 40-80 μm wide and 70-120 μm long, surrounded by a zone (c. 30 μm wide) of thickened (darkened) cuticle or necrotic tissue (Fig. 9N). Damage features are typically positioned on veins and, in some cases, several damage scars are arrayed in a row at least 300 μm apart (Fig. 9O).
Remarks
Most previous records of piercing-and-sucking damage attributable to a specific group relate to shield scars left by scale insects (Labandeira et al. 2007; Wappler & Ben-Dov 2008; Wilf et al. 2017). Only a small number of puncture damage features on leaves have been illustrated at high resolution from fossil cuticles or mummified leaves, and some of these occur aligned in rows above thickened veins or midribs (Tosolini & Pole 2010; Labandeira et al. 2014). The relative scarcity of clearly defined fossil records attributable to this category of herbivory probably relates to the diminutive size of the damage marks and the difficulty of differentiating them from other forms of fungal, bacterial or physical damage on carbonized leaf compressions or impressions. Nevertheless, mucivory has a patchy fossil record extending back to at least the Early Devonian and is among the oldest styles of herbivory documented in the fossil record (Labandeira 2013).
Discussion and conclusions
The Santonian-early Campanian floras of the northern Kristianstad Basin are apparently the northernmost plant fossil assemblages of this age from Europe. Future studies of the Swedish assemblages could potentially provide insights into a range of palaeobotanical and palaeoecological questions. These include: 1, What was the full floristic diversity and composition of the Åsen flora?; 2, What were its palaeophytogeographic relationships with Central European (warm-temperate) and Siberian (cool-climate) floras?; 3, What anatomical information can be gained from these plants using advanced cuticular analysis, palynology, thin-sectioning, and tomographic approaches to aid whole-plant reconstructions?; 4, Can palynostratigraphy of the host strata provide better age resolution of these units?; 5, Can phylogenetic analysis of the wealth of plant fossils from the Åsen deposit provide a better understanding of the changes in diversity and abundance of major conifer and angiosperm clades at a time when flowering plants were undergoing an explosive radiation, globally?; and 6, Can the Kristianstad Basin Cretaceous floras provide new calibration points for clade divergence in the context of biome re-structuring after the rise of angiosperms to dominance (e.g., Schneider et al. 2004; Le Renard et al. 2020)?
Palaeofloral diversity
Our reconnaissance sampling of the Åsen fossil leaf assemblage (Figs. 5, 6), and initial scanning of rock sample surfaces using fluorescence microscopy (Fig. 2I-N), suggest that the palaeoflora is extremely well preserved and quite diverse. The extensive bulk samples already registered in the museum collections host leaf cuticles amenable to study by transmitted light and fluorescence microscopy, and preserve a diverse array of epidermal and cuticular ornamentation (Figs. 2L-N, 5B-G, 6A-F). To date, described angiosperm remains from these beds are limited to reproductive structures of chloranthaceous, platanaceous, saxifragalean, ericalean or ebenalean, hamamelidacean and fagalean (Normapolles complex) plants, together with mention of undescribed material of thealean affinity (Eklund et al. 1997; Table S1). The few palynological studies of this deposit have documented a considerably greater diversity of some groups (e.g., fern spores and angiosperm pollen: Table S1) than represented, thus far, by mesofossils. We suggest that a dedicated survey of fossil angiosperm leaf cuticles from this deposit would be highly productive and should greatly advance broader phytogeographic reconstructions of the European Late Cretaceous. Moreover, recovery of data on the types of seeds and other disseminules in these deposits will have implications for understanding the roles of plant dispersal mechanisms in the Cretaceous vegetation (McLoughlin & Pott 2019), and potentially provide insights into herbivory that may supplement the meagre terrestrial vertebrate fossil record from the Kristianstad Basin (Table S1). The Ryedal and Holma sandstones host a low-diversity flora of conifer and angiosperm remains. The few fossils available from these units suggest that the flora is, nevertheless, similar in composition to other Late Cretaceous assemblages from central Europe. In general terms, the palaeocommunities best represented in the megafossil record of Central Europe are riparian forests with platanoids and floodbasin conifer forests (e.g., Kvaček et al. 2015; Halamski et al. 2020; Heřmanová et al. 2020). Presumably, the single platanoid leaf and the two Geinitzia twigs identified in this study derive from similar riparian and floodbasin forests of coastal plains flanking the Kristianstad Basin. In that respect, the megafossil assemblages from northern Skåne described herein differ from the Campanian assemblage of Köpinge, southern Skåne, which is dominated by Dewalquea haldemiana, a species with coriaceous leaves tentatively interpreted as a dune-dweller (Halamski et al. 2020).
The Ryedal and Holma sandstones, and the laterally equivalent non-marine organic-rich deposits at Åsen and Axeltorp, are poorly exposed and have discontinuous distributions along the northern margin of the Kristianstad Basin. Future significant fossil discoveries from these units are likely to become available only through quarrying for clay, sand or sandstone resources. Nevertheless, rare permineralized remains of plants recovered from these units offer significant potential for insights into Late Cretaceous plant anatomy and interactions with fungi.
Palaeobotanical applications of neutron tomography
We have shown that neutron tomography (NT) has great potential for recovering anatomical data for the reconstruction of permineralized cones preserved in coarse-grained siliceous facies, such as the Ryedal Sandstone. Neutrons provide a stronger attenuation contrast between organic and inorganic components than X-ray techniques (Dawson et al. 2014). The fission neutron source utilized herein has a high neutron flux (Garbe et al. 2011), thus providing excellent penetration through voluminous siliceous sedimentary rock. Hence, the high-flux neutron tomography technique is particularly promising for the analysis of large permineralized plant remains in general, and especially permineralized peats (Slater et al. 2015) or root mantles (McLoughlin & Bomfleur 2016) that encompass a diverse array of organic remains and plant-animal-fungal interactions.
Neutron tomography enabled linkages between the cone and the attached leafy axis mould entombed within the sedimentary rock matrix. Although neutron reconstruction of mouldic plant fossils has been conducted by Dawson et al. (2014), here we demonstrate the utility of NT for the virtual extraction of plant organs of different preservation within the same specimen: the mouldic leafy axis and the permineralized ovulate cone. It is common to find plant fossils with differential preservation within a single specimen, which likely reflects underlying anatomy, e.g., fleshy vs woody organs (this study), lignitized wood vs resin (Mays et al. 2017b, 2018), vascular vs ground tissue (Herrera et al. 2020). Despite the relatively coarse spatial resolution, the ability of NT to differentiate a wide range of preservation styles may be a critical consideration for future fossil visualization studies. Conwentz (1892) illustrated, using lithographic sketches, various, apparently saprotrophic, fungi associated with woods from the Holma Sandstone. Soon after, Felix (1894) provided a short description (without illustration) of one taxon of probable saprotrophic ascomycete from this collection. These woods and contained fungi will be re-analysed in a forthcoming study.
Fossil fungi
Numerous studies have documented fossil epiphyllous fungi from various deposits around the world. However, there have been few studies devoted to a thorough systematic evaluation of the palaeomycoflora from any one succession, or an evaluation of the stratigraphic or geographic distributions of fungal taxa. Our reconnaissance survey of angiosperm and conifer cuticles from the Åsen deposit indicates that fungal remains are ubiquitous on these plant remains. This suggests that epiphyllous fungi are likely to be commonplace in wetland deposits, especially of late Mesozoic and Cenozoic age. Our survey identified the first occurrence of putative Meliolales in the Cretaceous. Thorough surveys of the fossil mycofloras from such deposits offer the potential for acquiring important temporal calibration points for fungal phylogenies, for documenting the development of plant-fungal interactions through deep time, and for understanding the evolution of Earth's mycofloral diversity and turnover in general. For example, Trichopeltinites, considered to have gone extinct in North America at the K-Pg boundary, has been shown to have survived in the Southern Hemisphere based on the study of a cuticle assemblage (Upchurch & Askin 1989).
Palaeoecology and palaeoenvironments
Epiphyllous fungi may be saprotrophic, obtaining nutrients on the leaf surface from the decay of material in the surrounding forest or from leaf exudates (Cooke & Rayner 1984). Others are biotrophic (parasitic on living hosts), some obligate to specific plant taxa, obtaining nutrients via haustoria that penetrate the cells of the leaf substrate (Bannister et al. 2016; Suzuki & Sasaki 2019). High diversities and abundances of epiphyllous fungi are generally signals of warm, ever-wet climates (Wang 1991; Bannister et al. 2016) but higher latitude settings may also host similar fungal biotas under consistently humid conditions (e.g., McLoughlin et al. 2002; Lücking et al. 2009).
The presence of over a dozen taxa of fern, lycopsid and bryophyte spores in the Campanian palynofloras of the Kristianstad Basin (Table S1) and 18 taxa of these groups from the Vomb Trough (Halamski et al. 2020), together with abundant and diverse Tetraporina (Lindgren 1980), a zygnematacean or sphaeroplealean freshwater alga (Mays et al. 2021), also suggests that a humid climate prevailed across southern Sweden in the Late Cretaceous. Growth features of fossil woods from the Kristianstad Basin have not yet been analysed for palaeoclimatic signals apart from the illustration by Conwentz (1892, pl. 8, fig. 2) of indistinct growth rings in Sequoites holstii wood from the Holma Sandstone that suggest relatively subdued seasonality.
Another signal of humid conditions is the glandular cusps of the chloranthaceous leaves. If the glandular teeth of the chloranthaceous angiosperm leaf type 1 (Figs. 2M, 5C, F) are primarily hydathodal, then this may have been an adaptation for removal of excess water from the leaf in a consistently moist mid-storey or understorey environment, where internal flooding of mesophyll intercellular spaces may otherwise have reduced CO2 diffusion and inhibited photosynthesis (Feild et al. 2005, 2009). Our study shows that a moderate diversity and great abundance of epiphyllous parasitic Ascomycota, and potentially some generalist saprotrophs, were present at Åsen in this humid-climate setting. Chloranthaceae are also known to produce chemicals inhibitive of germination and mycelial growth of fungal (e.g., Botrytis) pathogens (Jacometti et al. 2010). With further study, the interactions between chloranthaceous leaves and fungi may provide insights into parasite and host-defence mechanisms during the early stages of angiosperm diversification. The generally fine preservation of fungal remains and their great abundance in the Åsen deposit also offers opportunities for expanding the fossil record of other fungal and fungi-like groups, such as Basidiomycota, Chytridiomycota, Glomeromycota, and Peronosporomycetes.
Late Cretaceous plant assemblages belong to the modern, angiosperm-dominated flora (Cenophytic Evolutionary Flora sensu Cleal & Cascales-Miñana 2014). Similarly, the Late Cretaceous entomofauna is also considered "modern" (Grimaldi & Engel 2005). In other words, the broad-scale taxonomic composition of the insect fauna is similar to that of the extant fauna (Szwedo & Nel 2015). Moreover, the ecological relationships between plants and insects were of an equivalent nature and complexity as in the modern biosphere (Labandeira 2006). This broad similarity is related to both insects and plants having suffered from the K-Pg event at lower taxonomic levels, but their diversity being significantly less affected at higher taxonomic levels (Labandeira et al. 2002; Nichols & Johnson 2008), a response markedly different from that of vertebrates (e.g., Kielan-Jaworowska et al. 2004). The insect fossil record is generally sparse and unequal in space and time. Compared to better-studied Early Cretaceous-Cenomanian insects, Turonian-Maastrichtian entomofaunas are less well known owing to the scarcity of exposed strata bearing rich fossil insect assemblages (Szwedo & Nel 2015). In the absence of insect macrofossils, feeding damage on foliage can provide some insights into the insect herbivores and herbivory strategies of the Late Cretaceous.
External foliage feeding and piercing-and-sucking represent the oldest kinds of insect-plant interactions, dating back to at least the Devonian (Labandeira 2006 and references therein). In the modern fauna, piercing-and-sucking is especially characteristic of true bugs (Hemiptera: Yoshizawa & Lienhard 2016), although similar feeding habits also occur in thrips (Thysanoptera) and spider mites (Tetranychidae, Acari). The modern fauna probably contains several tens of thousands of plant-feeding hemipteran species, and the Cretaceous was a time of strong taxonomic turnover and extensive radiations within this group (Szwedo & Nel 2015 and references therein). Although we cannot confirm that the mucivory damage was caused by hemipterans, they are strong candidates for this form of herbivory in the Åsen flora. Similarly, the margin-feeding damage on chloranthaceous leaves (Figs. 5B, 9M) cannot be attributed definitively to any one group, but coleopterans, orthopterans, lepidopterans, Phasmatodea, and possibly even Hymenoptera are all potential candidates for this type of herbivory.
A recent study that proposed a detailed vegetation reconstruction of a Central European Late Cretaceous flora was achieved only through the assessment of mega-, meso-, and microfossil records (Halamski et al. 2020). Similarly, our preliminary analysis shows that applying multiple analytical approaches to the study of Santonian-Lower Campanian plant remains from the Kristianstad Basin yields information on plant organ associations, palaeophytodiversity and palaeoecology that cannot be obtained from any single investigative method. We note that the Åsen flora, in particular, represents one of the largest resources of Santonian-Campanian fossil plants in northern Europe and has great potential to expand upon the known Late Cretaceous terrestrial biota of Sweden (Table S1). We urge a major investigation of the Kristianstad Basin plant fossil assemblages (cuticular, lignitized, charcoalified, permineralized and palynofloral remains) using a broad battery of methodologies to fully document the extensive upper Santonian-lower Campanian floras and place them within a more robust palaeobiogeographic, phylogenetic, and palaeoecological context.
Development and Feasibility of an App to Decrease Risk Factors for Type 2 Diabetes in Hispanic Women With Recent Gestational Diabetes (Hola Bebé, Adiós Diabetes): Pilot Pre-Post Study
Background Hispanic women have increased risk of gestational diabetes mellitus (GDM), which carries an increased risk for future type 2 diabetes, compared to non-Hispanic women. In addition, Hispanic women are less likely to engage in healthy eating and physical activity, which are both risk factors for type 2 diabetes. Supporting patients to engage in healthy lifestyle behaviors through mobile health (mHealth) interventions is increasingly recognized as a viable, underused tool for disease prevention, as they reduce barriers to access frequently experienced in face-to-face interventions. Despite the high percentage of smartphone ownership among Hispanics, mHealth programs to reduce risk factors for type 2 diabetes in Hispanic women with prior GDM are lacking. Objective This study aimed to (1) develop a mobile app (¡Hola Bebé, Adiós Diabetes!) to pilot test a culturally tailored, bilingual (Spanish/English) lifestyle program to reduce risk factors for type 2 diabetes in Hispanic women with GDM in the prior 5 years; (2) examine the acceptability and usability of the app; and (3) assess the short-term effectiveness of the app in increasing self-efficacy for both healthy eating and physical activity, and in decreasing weight. Methods Social cognitive theory provided the framework for the study. A prototype app was developed based on prior research and cultural tailoring of content. Features included educational audiovisual modules on healthy eating and physical activity; personal action plans; motivational text messages; weight tracking; user-friendly, easy-to-follow recipes; directions on building a balanced plate; and tiered badges to reward achievements. Perceptions of the app’s acceptability and usability were explored through four focus groups. Short-term effectiveness of the app was tested in an 8-week single group pilot study. Results In total, 11 Hispanic women, receiving care at a federally qualified community health center, aged 18-45 years, and with GDM in the last 5 years, participated in four focus groups to evaluate the app’s acceptability and usability. Participants found the following sections most useful: audiovisual modules, badges for completion of activities, weight-tracking graphics, and recipes. Suggested modifications included adjustments in phrasing, graphics, and a tiering system of badges. After app modifications, we conducted usability testing with 4 Hispanic women, with the key result being the suggestion for a “how-to tutorial.” To assess short-term effectiveness, 21 Hispanic women with prior GDM participated in the pilot. There was a statistically significant improvement in both self-efficacy for physical activity (P=.003) and self-efficacy for healthy eating (P=.007). Weight decreased but not significantly. Backend process data revealed a high level of user engagement. Conclusions These data support the app’s acceptability, usability, and short-term effectiveness, suggesting that this mHealth program has the potential to fill the gap in care experienced by Hispanic women with prior GDM following pregnancy. Future studies are needed to determine the effectiveness of an enhanced app in a randomized controlled trial. Trial Registration ClinicalTrials.gov NCT04149054; https://clinicaltrials.gov/ct2/show/NCT04149054
Introduction
Gestational diabetes mellitus (GDM), defined as glucose intolerance diagnosed after the first trimester of pregnancy [1], occurs in 3%-7% of pregnancies in the United States. Due, in part, to the fact that 40% of Hispanic women of child-bearing age in the United States are obese, and 51% experience excessive weight gain during pregnancy [2][3][4][5], this group has 1.5 times the risk of GDM compared to non-Hispanic White women [6]. GDM carries an overall increased risk as high as 60% for the development of type 2 diabetes mellitus (T2DM) [7], placing Hispanic women with prior GDM at high risk for future T2DM. Furthermore, obesity, a major risk factor for GDM and the strongest modifiable risk factor for T2DM, is more prevalent among Hispanic than non-Hispanic White women [8,9].
It is widely acknowledged that Hispanic women in the United States experience disparities in health care access and utilization compared to non-Hispanic women [10]. Cultural, social, and economic barriers also lead to disparities in healthy lifestyle behaviors [10]. Hispanic women face sociocultural barriers to healthy eating (eg, cost of healthy food, knowledge about nutritional values of some foods, and family food preferences) [11], as well as structural barriers (eg, food deserts) to obtaining healthy foods [12]. Hispanic women are also less likely to be physically active compared to non-Hispanic White women [13][14][15]. While some barriers to physical activity are comparable to those for non-Hispanic women (eg, lack of time, lack of childcare, being tired, and having limited self-discipline) [13], other barriers may be culturally influenced, such as being discouraged by family members and friends, or environmental, such as not having a safe place to exercise [16].
The Diabetes Prevention Program (DPP), which was delivered as a face-to-face intervention, demonstrated that T2DM can be prevented by lifestyle changes focused on healthy eating and physical activity in women with a remote history of self-reported GDM [17]. The scalability of face-to-face DPP-based programs has been a challenge, due to the costs involved in implementing an in-person intervention and the difficulties encountered in attending face-to-face programs [18,19], particularly in postpartum women with recent GDM [20]. A potential approach to overcome barriers to face-to-face implementation is through mobile health (mHealth) technologies that can enable greater patient access.
According to the Pew Research Center, approximately 80% of the Hispanic population owns a smartphone, which is comparable to White and Black populations [21], with Hispanics more likely to use their smartphone to seek health information than their White counterparts [22]. Hispanic people in the United States spend more time using apps than the general population [23]. These data suggest that mobile apps are a viable, underused tool for T2DM prevention in minority populations including Hispanic women with recent GDM.
These findings led us to develop and pilot test a culturally tailored, bilingual (Spanish/English), mobile app-based lifestyle program, ¡Hola Bebé, Adiós Diabetes! (hereafter referred to as Hola Bebé), to reduce risk factors for T2DM in Hispanic women who have had GDM in the prior 5 years. The years after childbirth are well recognized as representing a "window of opportunity" to improve the future health of women who have had GDM, as demonstrated in our previous work [24,25] and by other studies [26,27]. The goal of the Hola Bebé pilot was to determine the feasibility, acceptability, and short-term effectiveness of an mHealth approach to increasing self-efficacy for healthy eating and increased physical activity, and promoting weight loss, in a population of Hispanic women with recent GDM.
Overview
Social cognitive theory (SCT) provided the framework for the Hola Bebé intervention. Self-efficacy, the belief in one's own capabilities to adopt and maintain behavior change [28,29], is a core component of SCT. For the intervention, we developed educational and motivational messages delivered through texts and videos to increase self-efficacy for healthy diet and physical activity. The focus on healthy eating and physical activity for the app was based on the DPP, which demonstrated that lifestyle change targeting healthy eating and increased physical activity led to a decrease in the development of T2DM in individuals at high risk for this condition including women with prior history of GDM [17]. Motivational messages were developed to target self-efficacy, which is associated with initiation and adherence to physical activity and other health-promoting activities [30,31]. Participants chose the times of day and frequency of the text messages. Cultural tailoring involved the development of the app first in Spanish, followed by translation into English with input from Hispanic women with a history of GDM who participated in every stage of app development.
The app included six educational audiovisual modules on healthy eating and physical activity; personal action plans for healthy eating and staying active; motivational and educational text messages; weight tracking; user-friendly, easy-to-follow recipes ( Figure 1); directions on how to build a balanced plate; and tiered badges to reward achievements. For the action plans, participants were taught how to identify barriers to individualized goals and ways to overcome the barriers. Healthy eating advice was based on MyPlate [32]. Tiered badges could be earned by the participants with completion of a module, action plan, and/or inputting of weight. The app was developed to meet the 8th-grade literacy level. All content was in plain-language Spanish and English, with Spanish and English audio voiceover.
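To make the badge mechanics concrete, below is a minimal sketch of how tiered, activity-based badges like these could be implemented. The tier names echo the student-achievement scheme suggested by focus group participants (described below), but the thresholds, function names, and data structures are illustrative assumptions; the app's production code has not been published.

```python
# Hypothetical sketch of tiered badge logic: badges reward completed modules,
# action plans, and weight entries. Tier names and thresholds are assumptions.
from dataclasses import dataclass, field

@dataclass
class Participant:
    modules_completed: int = 0
    action_plans_completed: int = 0
    weight_entries: int = 0
    badges: list = field(default_factory=list)

# Highest tier first; a tier is earned once total activity credits reach it.
TIERS = [(12, "outstanding student"), (8, "honor student"), (4, "student")]

def award_badge(p: Participant) -> str | None:
    """Return the highest tier earned (if any) and record it once."""
    credits = p.modules_completed + p.action_plans_completed + p.weight_entries
    for threshold, tier in TIERS:
        if credits >= threshold:
            if tier not in p.badges:
                p.badges.append(tier)
            return tier
    return None

p = Participant(modules_completed=6, action_plans_completed=5, weight_entries=2)
print(award_badge(p))  # -> "outstanding student" (13 credits >= 12)
```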
Formative Phase
In our formative work, we solicited feedback on the acceptability and usability of the mHealth program through four focus groups. Inclusion criteria were: Hispanic women, aged 18-45 years, with GDM in the past 5 years, who received medical care at a federally qualified community health center (CHC), a Level 3 Patient-Centered Medical Home in the Greater Boston area. In total, 11 women participated in the acceptability focus groups. Participants were asked for feedback on the prototype of the app, including what feature(s) they found most useful and their preferences for phrasing, wording, graphics, colors, and type of badge tier system. The sections they found most useful were (1) the audiovisual modules, especially those about how to make healthy choices when eating out; (2) the badges for completion of activities; (3) the weight tracking graphic; and (4) the recipes. They also recommended a color scheme from a menu of options and offered suggestions for certain adjustments in phrasing and graphics. Finally, they suggested that the tiering system of badges be based on a system of student achievement, such as "outstanding student" or "honor student." After modifications to the prototype app were made based on participants' input, we conducted usability testing with 4 participants. Participants were given access to the app and asked to perform a number of tasks (eg, click on a tab, complete a module, and open an action plan tab), as well as to explore as they wished. The key result from the usability testing was that participants requested a "how-to tutorial" to make the app easier to use. Some, in fact, offered specific suggestions, such as "click here to add your weight." Participants also asked for more tabs to better label and access specific sections of the app. In addition, for the action plan completion section, participants suggested that, instead of only providing a free-text box, there be an additional drop-down menu of prepopulated action plans to choose from (eg, "I will balance my plate at dinner," "I will eat fruit and/or vegetables with every meal"). Feedback on the acceptability and usability was incorporated into the app prior to the pilot (Table 1).
Pilot Trial
A nurse and a medical assistant from the health center identified potential participants who were Hispanic and who had had GDM in the past 5 years from a list generated from the CHC database using the same inclusion criteria as in the formative phase. In addition, participants had to have or be willing to use an Android mobile phone for the study. For women with other smartphones, Android phones were offered on loan for the duration of the study. A member of the study team contacted those women who expressed interest in participating to provide additional information about the study. For those women who agreed to participate, the research assistant scheduled the first study visit. At the first study visit, informed written consent was obtained, and the participant's weight and height were determined by the research assistant. Weight was measured with the DR400C/Detecto Portable Home Health Care Scale, which was zeroed prior to each weight determination, with the participant wearing light clothes. Participants were asked to complete self-efficacy questionnaires for healthy eating (20 items) and physical activity (12 items) developed by Sallis et al [33], scored on a scale of 1-5 with 5 being the most self-efficacious. These questionnaires have been widely used in research in both Spanish [34] and English. A research assistant helped the participants download (via the Google Play store) and open the app on their phone, and review the "how to use the app" tutorial.
Participants were asked to watch one module weekly for the 8 weeks of study duration, complete the corresponding action plan, weigh themselves, and enter their weight into the app. At the end of the 8 weeks, baseline measures were repeated, and structured exit interviews, which focused on what participants liked best and areas for improvement, were performed. Primary outcomes included self-efficacy for healthy eating and self-efficacy for physical activity, with weight as a secondary outcome.
The study was approved by the Pearl Institutional Review Board and the board of the CHC. All participants signed written informed consent.
Statistical Analysis
Descriptive statistics were presented as mean (SD) and frequency (%). For pre-post comparisons from the pilot study, paired t tests were conducted with a 5% significance level.
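As a concrete illustration of this pre-post analysis, the sketch below runs a paired t test at the 5% level. It uses SciPy rather than the study's actual analysis code, and the scores are fabricated placeholders, not study data.

```python
# Illustrative paired t test for pre-post self-efficacy scores (5% level).
# The data below are fabricated placeholders, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre = rng.uniform(2.0, 3.5, size=18)        # baseline scores on a 1-5 scale
post = pre + rng.normal(0.4, 0.3, size=18)  # scores after the 8-week program

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean change = {(post - pre).mean():.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
print("significant at the 5% level" if p_value < 0.05 else "not significant")
```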
Results
In total, 30 eligible women were identified from the CHC database; 4 women could not be contacted. Of these, 26 women were successfully notified about the study and 21 (88%) consented to participate. Reasons given for not participating included not being interested in participating (n=3), moving out of state (n=1), and not wanting to use a study-provided Android phone (n=1) (Table 2). At baseline, 21 participants were assessed and 18 completed the 8-week study.
Engagement
Backend process data revealed a high level of user engagement. In total, 91% (19/21) of participants viewed audiovisual modules and created action plans. There was also a high level of engagement in earning badges, with 95% (20/21) of participants earning badges by completing a learning module and/or an action plan or entering a weight. Participants posted tips on the community forum for other participants, such as a family recipe, and asked questions that other participants answered. One woman did not participate in any of the trackable app features.
Exit Interviews
We conducted exit interviews at the conclusion of the pilot study. The following quotes are representative of the participants' experiences of using the app: No cambiaría nada de la aplicación, me gusta todo (I would not change anything from the app, I like everything).
Los videos de los módulos me han ayudado a entender la clase de alimentos que son buenos para mi. He comenzado a cambiar los granos por granos integrales y ahora me siento más saludable (The module videos helped me understand what kind of foods were good for me. I started changing my grains for whole wheat grains and now I feel healthier).
Cuando voy a comer con mi familia a un restaurante, ya sé que clase de comida puedo ordenar y no sentirme culpable después (Whenever I go to eat with my family to a restaurant, I know which kind of food I can order and not feel guilty afterwards).
Participants especially liked the personalized action plans, the motivational text messages, the at-home exercise videos, and the recipes. Women commented that they found the "how to use the app" tutorial to be helpful.
Participants had suggestions for incorporation in a future version of the app. They requested more exercise videos, including Zumba and videos with added music; expansion of the recipe section to include more Latin American dishes and vegetarian options; and an explanation of portion sizes for each recipe that aligns with MyPlate. Women also requested videos for recipe preparation. Participants asked for an "ask the expert" option to submit specific exercise and diet questions on the community forum. The 7 iPhone users asked that an app be developed for use on an iPhone.
Principal Findings
The ¡Hola Bebé, Adiós Diabetes! mHealth program was designed to overcome access barriers to T2DM prevention support among Hispanic women with prior GDM. Pilot testing indicated that it was well accepted, usable, and showed preliminary effectiveness at increasing self-efficacy for both physical activity and healthy eating. Weight decreased over the 8-week period but not significantly.
Interventions delivered through apps have great potential to fill the gap experienced by individuals seeking care across a range of conditions. A classification scheme for analyzing apps for preventing and managing disease proposes three dimensions for analysis: health condition (physical versus mental); prevention versus management; and, according to Green and Kreuter's [35] Precede-Proceed Model, predisposing, enabling, and motivating factors [36]. Using this classification scheme, Hola Bebé addresses a physical condition, that of GDM, for the prevention of type 2 diabetes, and includes factors related to all aspects of the model: predisposing (eg, educational audiovisual modules, healthy recipes), enabling (eg, MyPlate demonstrations, action plans, weight tracking, and badges), and motivational (motivational text messages, sharing experiences/recipes, and asking questions through a community forum).
Hola Bebé has the potential to fill the gap in care experienced by women with GDM following pregnancy. Over 86% of women with GDM have no contact with primary care in the first year post delivery, and close to 60% have no contact at 3 years post delivery [37]. This is despite recommendations from the American College of Obstetricians and Gynecologists [38] for referral to primary care and counseling for lifestyle modification in nutrition and exercise for women with a prior pregnancy complicated by GDM. Some have characterized women with prior GDM as falling into a "healthcare chasm" [39]; alternatively, others have referred to this more positively as "a fixable gap in women's preventative healthcare" [37], which app technology can potentially address. A major advantage of an app-delivered program for Hispanic women is the widespread use of apps by this population [40], which experiences significant disparities in health care [10]. Additional strengths of using an app for behavior change include easy access; potential for integration with other apps that commonly come with smartphones (eg, pedometer and music apps); faster speed, as data are stored on the smartphone; and the ease of receiving notifications.
Importantly, Hola Bebé takes advantage of the "window of opportunity" following a complicated pregnancy by bridging the gap in care through lifestyle counseling without dependency on visits to the health center and clinician [37]. This app also overcomes many barriers experienced by women who have young children at home and competing priorities for time; it can be used at home or work or while traveling, day or night, and in small doses whenever users have a few minutes. In addition, this app was culturally and linguistically tailored for Hispanic women and was developed first in Spanish. Finally, the app was designed through an iterative approach incorporating feedback from Hispanic women with recent gestational diabetes at several stages of development.
Limitations
Given the nature of the pilot study, we were limited by a small sample size, lack of a control group, and short study duration. A further limitation was the unavailability of the app for iOS users.
Conclusions
The widespread use of apps among Hispanic women of childbearing age holds promise for this particularly high-risk and underserved population to reduce risk factors for diabetes. This app-delivered program should be tested in a randomized controlled trial and be developed for iOS users.
Does Cesarean Section Increase the Risk of Postpartum Depression? A Systematic Literature Review
Received: 4 August 2021 Final Revision: 27 November 2021 Available Online: 27 December 2021 Background: Postpartum depression (PPD) is a psychological disorder experienced by mothers at 4 weeks to 6 months postpartum. One of the risk factors for postpartum depression is the mode of delivery. The effect of the mode of delivery on postpartum depression has been studied extensively, with conflicting results as to whether vaginal delivery or cesarean section (CS) affects postpartum depression (Rauh et al., 2012). Methods: This is a systematic literature review with a research question framed using the PICO standard, namely "What is the correlation between mode of delivery and postpartum depression?". A total of 325 records were obtained from five different databases. Screening was carried out according to the PRISMA flowchart, yielding a total of 21 studies for review. Results: Most of the studies reported that mothers with CS have higher EPDS scores than mothers with vaginal delivery. One study reported that mothers with CS were more protected and less prone to PPD, and other studies reported that the method of delivery was not associated with PPD. Conclusion: Mode of delivery is associated with postpartum depression in most studies. Mothers with CS are at greater risk of PPD than mothers with vaginal delivery.
I. INTRODUCTION
Becoming a mother is a historic moment for every woman. The transition from pregnancy through delivery to the postpartum period brings physiological, emotional, and social changes. Rubin, as cited in Padila (2014), described three postpartum phases: taking in, taking hold, and letting go. Women who are unable to pass through these phases are at high risk of mental health disorders such as postpartum depression (PPD).
The Diagnostic and Statistical Manual of Mental Disorders describes postpartum depression as major depression experienced by mothers beginning at 4 weeks after delivery. PPD has been found in 1.9%-82.1% of women in high-income countries and 5.2%-74.0% in lower-middle-income countries (Norhayati et al., 2015). A study in Puskesmas Morokrembangan Surabaya reported that more than half (53%) of postpartum mothers had mild depression (Indriasari, 2017). Similar findings were reported from Puskesmas Lubuk Alung and Puskesmas Andalas Padang, where depressive symptoms occurred in 62.5% of multiparas and 60% of primiparas (Syafrianti, 2018). There are several risk factors related to PPD; among them, Mansur (2009) lists obstetric factors, including pregnancy experience and mode of delivery. The role of the delivery mode in the development of postpartum depression has been studied extensively, with conflicting results on whether vaginal delivery (VD) or cesarean section (CS) can affect postpartum depression (Rauh et al., 2012).
This study aims to determine the correlation between mode of delivery and postpartum depression based on a review of representative published studies. We hope this research will provide scientific information for developing midwifery knowledge, especially in terms of postpartum care.
II. METHODS
A systematic literature review was performed to identify relevant studies regarding mode of delivery and postpartum depression. The research question was framed using the PICO standard, namely "What is the correlation between mode of delivery and postpartum depression?". Five databases (PubMed, Science Direct, SCOPUS, SAGE, and Google Scholar) were used, and a Medical Subject Headings (MeSH) strategy (Table 1) was applied for the keyword search. Full-text, open-access studies in English or Indonesian, published between 2011 and 2021, that examined the association between mode of delivery and postpartum depression were collected. Non-research, interventional, qualitative, and systematic review studies were excluded.
Articles were screened and reported according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) flowchart and assessed using the Quality Assessment Tool for Quantitative Studies from the Effective Public Health Practice Project (EPHPP).
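The screening arithmetic implied by the PRISMA flow (reported in the Results below) can be tallied in a few lines. This is purely an illustrative check, not part of the review's methods; the duplicate count is inferred from the reported totals.

```python
# Illustrative tally of the PRISMA flow; counts are taken from the Results,
# and the number of duplicates is inferred as 325 - 312.
identified = 325   # records retrieved from the five databases
screened = 312     # records screened by title and abstract
included = 21      # studies meeting inclusion/exclusion criteria

duplicates = identified - screened
print(f"duplicates removed: {duplicates}")
print(f"included/screened: {included}/{screened} = {included / screened:.1%}")
```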
III. RESULT
A total of 325 articles were identified; after duplicates were excluded, 312 articles were screened by title and abstract, generating 21 eligible articles that met the inclusion and exclusion criteria (Figure 1). The 21 included studies consist of eight articles with a final rating of strong and 13 rated moderate on the Effective Public Health Practice Project (EPHPP) Quality Assessment Tool. In total, 28,933 women from Asia, Europe, America, South America, and Africa were investigated. Socio-demographic characteristics such as age, education, parity, and mode of delivery were reviewed. The average age of the women was in the range of 17-42 years. Education levels were diverse, from middle and high school up to university graduates and higher; most women were high school graduates or below, and multiparous. 62% of the women gave birth by vaginal delivery and 38% by cesarean section. Summary details of the studies are presented in Table 2. [Table 2 excerpt: one row reports that nulliparity, pluriparity, and delivery mode had no effect on increasing depression in postpartum mothers (rated moderate); the row for Mathisen et al. (2013) lists the EPDS and Pearson's correlation, a cross-sectional design with 86 postpartum mothers in Argentina, and the finding that postpartum depressive symptoms are associated with cesarean section, multiparity, complications during pregnancy and delivery, and incomplete breastfeeding.]
IV. DISCUSSION
Correlation between Mode of Delivery and Postpartum Depression
Seven of the 13 studies specifically reported that cesarean section (CS) is associated with postpartum depression. The EPDS score, which reflects symptoms of postpartum depression, was found to be higher in mothers who delivered by cesarean section than in those with vaginal delivery (VD) (Al Nasr et al., 2020; Kim & Dee, 2018; Mathisen et al., 2013; Yokoyama et al., 2021; Nelson et al., 2013). The stress experienced by mothers after CS delivery is more likely to cause mild to severe postpartum depression than in mothers with VD. This is attributed to the high release of the hormone cortisol, which triggers stress, during the CS procedure. Mothers who deliver by CS also suffer more intense postsurgical pain, whereas mothers with VD tend to feel perineal/pelvic pain of only mild to moderate intensity; this pain can inhibit mothers' daily activities and add further stress that can trigger postpartum depression (Meky et al., 2019; Al Nasr et al., 2020).
Mothers who give birth by CS are also more prone to complications during the surgical procedure, postoperative infections, bleeding, and pelvic inflammation, which are exacerbated by wound care, longer postnatal care, and a less cooperative environment at the birthplace (Al Nasr et al., 2020; Meky et al., 2019). Jadoon et al. (2020) added that prolonged labor and poor pregnancy outcomes can leave a lasting imprint on the mother's memory and affect her psychology. In addition, mothers who give birth by CS may experience feelings of failure, low self-esteem, and disappointment. Mothers who have undergone CS with preterm delivery tend to be more depressed owing to anxiety about their babies needing to be incubated because of prematurity. These factors play an important role in increasing the risk of postpartum depression. Sylvén et al. (2017) showed different results, with a negative correlation between delivery method and postpartum depression. The study involved primiparous women without a history of psychiatric contact and divided the method of delivery into two variables: spontaneous vaginal delivery, and cesarean section or assisted vaginal delivery. The results showed that mothers who gave birth by CS or assisted VD were more protected and less susceptible to postpartum depression symptoms, since they received much more moral support from close relatives during the early days after surgery. The EPDS score was assessed twice, at 5 days and 6 weeks postpartum, allowing detection of the greater social support from close relatives during the early postpartum period.
Eight studies reported that delivery method is not associated with postpartum depression (Alharbi et al., 2014; Suhitaran et al., 2019; Eckerdal et al., 2018; Duma et al., 2020; Sadat et al., 2014; Cirik et al., 2016; Kaya et al., 2019; Habibzadeh et al., 2016). This is because the mode of delivery does not directly affect postpartum depression; rather, other psychological factors of the mother, such as a history of depression, depression during pregnancy, and a family history of depression, have an impact on postpartum depression (Suhitaran et al., 2019). Three of the eight studies that found no association between method of delivery and postpartum depression reported that emergency cesarean section (emCS) individually played a role in increasing the risk of postpartum depression. Eckerdal et al. (2018) stated that negative delivery experience, complications, and physical symptoms of depression were the mediating variables that bridged the relationship between emCS, vaginal delivery, and postpartum depression. Duma et al. (2020), based on logistic multiple regression analysis, reported that there was no relationship between the method of delivery and postpartum depression, but that emCS individually was a risk factor for postpartum depression. Cirik et al. (2016) similarly found that mode of delivery and postpartum depression were not significantly related, but that a history of depression and suspected fetal distress requiring an emergency cesarean section significantly increased the risk of postpartum depression.
Effect of Elective Cesarean Section and Emergency Cesarean Section on the Incidence of Postpartum Depression
Seven studies specifically divided cesarean sections into elective cesarean sections (elCS) and emergency cesarean sections (emCS). Six of the seven stated that emCS had an effect on postpartum depression, and one stated that elCS had an effect. Xie et al. (2011) describe elective cesarean section as a planned cesarean section with medical or social indications. In that study, most such deliveries were performed under social indications. Mothers with an indication of a history of depression, fear of childbirth, or socioeconomic vulnerability (low education, not working, history of domestic violence) are associated with postpartum depression, because these variables act as triggers for mothers to choose elCS delivery. On the other hand, elCS is also associated with a positive birth experience, because mothers feel satisfaction at having delivered by the mode of delivery they wanted (Eckerdal et al., 2018).
Mothers who gave birth by emCS had a higher risk of developing postpartum depression compared with those who gave birth by elCS or spontaneous vaginal delivery. EmCS is performed on mothers who experience obstetric distress that threatens the lives of mothers and their babies. This can leave the impression of a negative experience for those who are not familiar with the cesarean section surgical procedure (Meky et al., 2019; Duma et al., 2020; Eckerdal et al., 2018). Yokoyama et al. (2021) explained that cesarean section can be a traumatic experience for mothers, especially those who undergo an emCS. This is due to the emergence of medical indications or severe complications at the last moment before delivery, such as hypertension, placenta previa, fetal distress, or preterm labor, which force the mother to go through a cesarean section and affect her psychological condition, which can in turn increase the risk of postpartum depression. Cirik et al. (2016) supported this statement and explained that mothers who received news that an emergency CS had to be performed due to fetal distress experienced excessive anxiety and were afraid of losing their babies. Smorti et al. (2019) stated that nulliparous women undergoing the psychological changes of the transition to motherhood, who during pregnancy had a strong desire to give birth vaginally but had to give birth by cesarean section because of an obstetric emergency, were more susceptible to the risk of postpartum depression.
V. CONCLUSION
Most studies reported that mode of delivery is associated with postpartum depression, specifically that mothers who gave birth by CS were at higher risk than those with VD. An emergency CS performed during an obstetric emergency, which may violate the mother's preferred mode of delivery formed during pregnancy, can lead to negative and traumatic experiences. On the other hand, an elective CS performed as planned gives mothers satisfaction and a positive impression. One study stated that mothers with CS were less prone to, and more protected from, postpartum depression owing to generous support from family during the early postpartum days.
Midwives need to provide holistic care from pre-pregnancy onwards, making sure mothers are well prepared for their pregnancy, delivery, and postpartum period so that adverse physical and psychological outcomes can be avoided. Selecting the mode of delivery is important, and midwives are obligated to educate mothers about every type of delivery and respect their choice, in the hope of fostering a strong mother-infant bond.
Assessing the impact of simplified HCV care on linkage to care amongst high-risk patients at primary healthcare clinics in Malaysia: a prospective observational study
Introduction To achieve the elimination of hepatitis C virus (HCV), substantial scale-up in access to testing and treatment is needed. This will require innovation and simplification of the care pathway, through decentralisation of testing and treatment to primary care settings and task-shifting to non-specialists. The objective of this study was to evaluate the feasibility and effectiveness of decentralisation of HCV testing and treatment using rapid diagnostic tests (RDTs) in primary healthcare clinics (PHCs) among high-risk populations, with referral of seropositive patients for confirmatory viral load testing and treatment. Methods This observational study was conducted between December 2018 and October 2019 at 25 PHCs in three regions in Malaysia. Each PHC was linked to one or more hospitals, for referral of seropositive participants for confirmatory testing and pretreatment evaluation. Treatment was provided in PHCs for non-cirrhotic patients and at hospitals for cirrhotic patients. Results During the study period, a total of 15 366 adults were screened at the 25 PHCs, using RDTs for HCV antibodies. Of the 2020 (13.2%) HCV antibody-positive participants, 1481/2020 (73.3%) had a confirmatory viral load test, 1241/1481 (83.8%) were HCV RNA-positive, 991/1241 (79.9%) completed pretreatment assessment, 632/991 (63.8%) initiated treatment, 518/632 (82.0%) completed treatment, 352/518 (68.0%) were eligible for a sustained virological response (SVR) cure assessment, 209/352 (59.4%) had an SVR cure assessment, and SVR was achieved in 202/209 (96.7%) patients. A significantly higher proportion of patients referred to PHCs initiated treatment compared with those who had treatment initiated at hospitals (71.0% vs 48.8%, p<0.001). Conclusions This study demonstrated the effectiveness and feasibility of a simplified decentralised HCV testing and treatment model in primary healthcare settings, targeting high-risk groups in Malaysia. There were good outcomes across most steps of the cascade of care when treatment was provided at PHCs compared with hospitals.
INTRODUCTION
Hepatitis C virus (HCV) is a major cause of chronic liver disease globally, with an estimated 58 million individuals chronically infected and 290 000 HCV-related deaths each year. [1][2][3] In 2016, the WHO launched the Global Health Sector Strategy on Hepatitis 2016-2021, 4 with the goal of eliminating viral hepatitis as a public health threat by 2030. However, as of 2019, just 21% of individuals with HCV infection worldwide had been tested and approximately one-quarter of diagnosed individuals had been treated. 3

Strengths and limitations of this study
► A strength of this study is the ability to assess associations between hepatitis C virus (HCV) positivity and demographic factors and risk factors.
► A strength of this study is the comparison of retention in the care cascade between participants initiated on treatment at district hospitals and those initiated at primary healthcare clinics.
► A strength of this study is that it is a pragmatic study of the feasibility of decentralised HCV care integrated into the existing health system in Malaysia, which led to national scale-up of aspects of the study model.
► A limitation of this study is that rates of treatment initiation were not as high as targeted, impacted in large part by COVID-19-related disruptions.

The global response for the elimination of HCV infection has been transformed by recent advances in treatment and diagnostics, as well as reductions in costs. These advances include direct-acting antiviral (DAA) therapy and the availability of point-of-care serological and nucleic acid testing for HCV. The development of evidence-based WHO guidelines on who and how to test has provided further support for the scale-up of testing and treatment. 2 5

Malaysia is an upper middle-income country of more than 32 million people, with an estimated HCV seroprevalence in the general population between 0.3% and 2.5%. 6 7 People who inject drugs (PWID) represent just 0.24% (75 000) of the adult population; however, they have an HCV prevalence of 67.5%-89.9%. 7 Other key populations in Malaysia at higher risk of HCV include 77 903 people living with HIV (PLHIV) (0.24% of the population), 8 221 698 men who have sex with men (MSM) (0.69%), 22 000 female sex workers (FSWs) (0.069%) and 15 000 transgender sex workers (TGSWs) (0.047%). 9 In 2017, it was estimated that only 6.1% (23 258) of people infected with HCV were diagnosed. [10][11][12] A likely cause of this low rate of diagnosis was the highly complex and centralised testing model used. Prior to the commencement of this project in Malaysia, to screen for HCV antibodies, staff at primary healthcare clinics (PHCs) sent samples to a central laboratory, leading to long turnaround times and loss to follow-up. An overall goal of the national programme is to expand HCV services to the 1027 PHCs nationally. 13 14

The objective of this study was to demonstrate the feasibility and effectiveness of decentralisation of HCV testing using rapid diagnostic tests (RDTs) at PHCs among high-risk populations, with referral of seropositive patients for confirmatory viral load testing and treatment. Effectiveness was evaluated through retention across the HCV care cascade. A further objective was to derive lessons learnt and to inform scale-up of HCV national and regional strategies.
Study design and settings
This was an observational, prospective cohort study (figure 1 and online supplemental figure 1), with enrolment conducted between December 2018 and October 2019 in three regions of Malaysia: (a) the state of Kedah, (b) the state of Kelantan and (c) the region of Kuala Lumpur/Putrajaya/state of Selangor. This observational study was designed to evolve with the national HCV programme and therefore included several protocol changes during the study duration, outlined below. This enabled the possibility to carry out several subanalyses within the study that were not initially part of the study outcomes. This study was also designed to feed eligible RNA-positive participants into a clinical trial entitled 'Open label phase II/III, multicentre trial to assess the efficacy, safety, tolerance and pharmacokinetics of sofosbuvir plus ravidasvir in HCV (+/−HIV) chronically infected adults with no or compensated cirrhosis in Thailand and Malaysia' (Malaysian Medical Research Ethics Committee, approval number NMRR-16-747-29183, coordinated by DNDi, hereafter called the DNDi trial). 15 16

Study outcomes
The outcomes of the study were the proportion of patients with a positive anti-HCV RDT who had a confirmatory HCV RNA test done, and the proportion of patients with a positive HCV RNA test result who initiated hepatitis C treatment. Additional outcomes included: the proportion of patients who tested positive when screened for anti-HCV using RDTs; the time required to progress from anti-HCV screening through the steps of the HCV care cascade; and the primary cost and resource use of the HCV care cascade services, including screening, confirmatory testing, pretreatment assessment, monitoring and treatment.
Site selection
Twenty-five PHCs were selected for enrolment and screening of participants (online supplemental figure 1). Site feasibility assessments were conducted for 31 PHCs recommended by the Ministry of Health (MOH), based on the existence of a methadone maintenance therapy programme, presence of a family medicine specialist, sufficient staffing and proximity to the catchment area of five selected hospitals (<100 km) (online supplemental table 1). Sites were selected using a points-based system coupled with a laboratory assessment. Each PHC was linked to one or more hospitals for referral of seropositive participants (Hospital Sultanah Bahiyah in the state of Kedah, Hospital Raja Perempuan Zainab II in the state of Kelantan, and Hospital Selayang, Hospital Ampang, and Hospital Sungai Buloh in the region of Kuala Lumpur/ Putrajaya/state of Selangor). The median distance from selected PHCs to the selected hospitals was 29.6 km.
Study participants
Adult participants were enrolled consecutively at the 25 PHCs, based on routine clinical indications for an HCV test as per the Malaysian national guidelines 17 and according to one of the following HCV risk factors (obtained either based on routine triage and/or clinical indications as per national guidelines, self-reported or obtained from medical records): a history of invasive medical procedures (eg, surgery, biopsy, endoscopy, solid organ donation); long-term haemodialysis; received blood/blood products/clotting factor concentrates/organ transplant prior to 1994; a needle-stick injury or mucosal exposure to HCV-infected blood; chronic liver disease and/or hepatitis; tattoos; body piercing; born to an HCV-infected mother; has an HCV-infected partner; is an MSM; is transgender; is an SW; was previously in prison; is HIV-positive; injects drugs; uses illicit intranasal drugs; has any other or undisclosed risk of HCV. Patients already diagnosed as HCV RNA-positive or already initiated on treatment for the management of HCV infection were excluded from the study.
Study procedures
HCV screening
Eligible study participants were enrolled at PHCs and, following pretest counselling, offered anti-HCV screening using finger-stick capillary or venous blood, tested with an SD Bioline HCV RDT (Standard Diagnostics, Korea). If the result was positive, participants were referred to one of the five selected hospitals for confirmatory testing by appointment (2-4 weeks after screening).
Confirmatory testing and pretreatment evaluations
At the selected hospital, a 10 mL venous blood sample was drawn into EDTA-containing tubes and plasma was prepared within 72 hours, then referred to a reference laboratory in Kuala Lumpur (Institute of Medical Research, IMR) for HCV RNA testing using the Roche cobas 4800 HCV assay. The HCV RNA results were returned to the hospital and communicated to patients at a subsequent visit, at which a second venous blood sample (5 mL) was obtained from patients who were HCV RNA-positive for pretreatment evaluations. This sample was tested at the hospital laboratory, with a full blood evaluation and liver function tests measuring alanine aminotransferase, aspartate aminotransferase, bilirubin (direct/indirect), alkaline phosphatase and serum creatinine. At the selected hospitals, patients also received a FibroScan to assess the presence of cirrhosis (cirrhosis: >12.5 kPa with an M probe or >10 kPa with an XL probe; absence of cirrhosis: ≤12.5 kPa with an M probe or ≤10 kPa with an XL probe). Venous blood (10 mL) was obtained for genotyping, which was carried out at IMR using the Roche cobas 4800 HCV GT assay.
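The cirrhosis cut-offs above translate directly into a simple decision rule. The sketch below encodes them for illustration; the function name and structure are hypothetical, not taken from the study's tooling.

```python
# Study cut-offs: cirrhosis if >12.5 kPa (M probe) or >10 kPa (XL probe).
def is_cirrhotic(stiffness_kpa: float, probe: str) -> bool:
    if probe == "M":
        return stiffness_kpa > 12.5
    if probe == "XL":
        return stiffness_kpa > 10.0
    raise ValueError(f"unknown FibroScan probe: {probe!r}")

# Cirrhotic status determined the treatment duration (12 vs 24 weeks) and,
# after the protocol change described below, where treatment could be given.
print(is_cirrhotic(13.1, "M"))   # True
print(is_cirrhotic(9.8, "XL"))   # False
```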
Treatment and evaluation of cure
Following a clinical evaluation, participants were referred for enrolment into the DNDi trial. 15 16 If patients were eligible and gave their written informed consent to take part in the DNDi trial, they were initiated on treatment and managed as per the DNDi trial. Patients who were not eligible or who did not give consent to participate in the DNDi trial were referred to the standard of care, the MOH national programme, under which participants were treated for 12 weeks or 24 weeks depending on cirrhotic status.

[Displaced table footnotes: treatment outcome figures do not include the 67 patients enrolled in the DNDi trial; refer to figure 3 for the flowchart of patients who initiated treatment at PHCs and hospitals; sustained virological response (SVR) testing was done in a 12-week to 24-week window after the end of treatment; as the study period ended on 31 October 2020, some patients had completed treatment by this date but were not yet eligible for SVR testing, so these data were not collected for the purposes of the study, although these patients were offered SVR testing through the Ministry of Health programme.]
At 12-24 weeks after the end of treatment, patients were requested to return to the treatment centre for a final venous blood sample (5 mL) to be collected for sustained virological response (SVR) HCV viral load testing, with plasma referred for testing at a designated MOH hospital or central laboratory. Patients with treatment failure were referred for further management by a gastroenterology or hepatology specialist, in accordance with the consensus of the 2019 National Clinical Practice Guidelines Development Group. 18 Treatment outcomes for the 67 RNA-positive DNDi trial enrolees were embargoed until published separately and have therefore been excluded from the treatment outcomes of this publication. The screened population, however, could not be identified for exclusion from the final analysis.
Protocol changes during the study
At the commencement of the study, HCV treatment in Malaysia was delivered through the MOH national programme in hospitals. However, during the study (quarter 3, 2019), the national guidelines were changed to recommend treatment of non-cirrhotic patients (including those coinfected with HIV) at PHCs under the care of family medicine specialists. This enabled a subanalysis of treatment outcomes for patients who received treatment at hospitals versus those who received treatment at PHCs. Cirrhotic patients continued to be treated at hospitals; however, a subgroup of compensated cirrhotic patients (n=59) were treated (using sofosbuvir/daclatasvir without ribavirin 18 19) at six PHCs in the state of Kedah.
Patient and public involvement
Public involvement, via civil society groups, included sharing the protocol design for review and input during the conception phase as well as active participation of civil society groups in results dissemination activities.
Data collection and analysis
Data were collected from primary source documents (screening registers, patient medical records, laboratory registers and laboratory reports) using paper case report forms (pCRFs) at PHCs by PHC study staff. These pCRFs were then manually transcribed into electronic case report forms (eCRFs) by research assistants using OpenClinica enterprise version 3.14 open-source software. At the hospitals, data were directly collected using eCRFs.
To ensure data quality, regular site monitoring visits were carried out (one visit per month), with every CRF checked for completeness and general errors. In addition, both manual and automated data cleaning were carried out on completed database exports.
Data were analysed using R V.3.6.1 to provide descriptive and inferential statistics. Characteristics of HCV antibody-positive and HCV antibody-negative individuals were summarised according to demographic, clinical, laboratory and treatment categories, with median and IQRs for quantitative data and frequencies and percentages for qualitative data. Associations between demographic characteristics and the frequency of HCV-positive patients were assessed using simple and multiple logistic regression (S/MLR). With MLR, all other demographic factors were accounted for by including them in the model as covariates. Variables examined included age, sex, ethnicity, antenatal status, reported risk factors and the total number of confirmed risk factors for each patient. Resulting p values were adjusted for multiple hypotheses using the Benjamini-Hochberg method.
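The sketch below mirrors the analysis described above, multiple logistic regression for HCV seropositivity followed by Benjamini-Hochberg adjustment, but in Python with statsmodels rather than the study's R code, and on a synthetic data frame rather than study data.

```python
# Illustrative MLR of HCV seropositivity on demographic covariates, with
# Benjamini-Hochberg adjustment of the covariate p values. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "age": rng.integers(18, 70, n).astype(float),
    "male": rng.integers(0, 2, n).astype(float),
    "pwid": rng.integers(0, 2, n).astype(float),  # injects drugs (0/1)
})
logit = -3 + 0.02 * df["age"] + 0.5 * df["male"] + 2.0 * df["pwid"]
df["anti_hcv"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(df[["age", "male", "pwid"]])
fit = sm.Logit(df["anti_hcv"], X).fit(disp=0)

# Adjust the covariate p values (intercept excluded) for multiple hypotheses.
_, p_adj, _, _ = multipletests(fit.pvalues.iloc[1:], method="fdr_bh")
print(np.exp(fit.params.iloc[1:]).round(2))  # odds ratios
print(p_adj.round(4))                        # BH-adjusted p values
```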
Outcomes across the cascade of care were reported as numbers and percentages for each step, for the total population and separately for cirrhotic/non-cirrhotic, hospital/PHC and key population (PWID/non-PWID, PLHIV/non-PLHIV) subgroups. Similarly, the times between HCV care cascade steps were reported as median and IQR values. Subgroup outputs at each step of the care cascade and turnaround time analyses were compared using Pearson's χ² test. Multiple hypothesis adjustment for subgroup comparisons was performed using the Bonferroni correction. 20 Associations between SVR output and demographic characteristics of treated patients were assessed using Pearson's χ² test.
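For a concrete example of these subgroup comparisons, the sketch below applies Pearson's χ² test to a 2×2 table of treatment initiation by site and then applies a Bonferroni correction. The counts are invented to roughly mirror the reported 71.0% vs 48.8% initiation rates; they are not the study's exact tables, and the number of comparisons is an assumption.

```python
# Pearson's chi-squared test on a 2x2 table (initiated vs not, PHC vs hospital)
# with Bonferroni correction over several subgroup comparisons. Counts are
# illustrative approximations of the reported 71.0% vs 48.8% rates.
from scipy.stats import chi2_contingency

table = [
    [423, 160],  # initiated treatment:   PHC, hospital
    [173, 168],  # did not initiate:      PHC, hospital
]
chi2, p, dof, _ = chi2_contingency(table)

n_comparisons = 4  # e.g., site, cirrhosis status, PWID, PLHIV (assumed)
p_bonf = min(p * n_comparisons, 1.0)
print(f"chi2 = {chi2:.1f} (dof={dof}), raw p = {p:.2e}, Bonferroni p = {p_bonf:.2e}")
```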
Assessment of costs
Estimates of the costs associated with testing were collected from the study sites. An ingredients-based approach was used to estimate the average cost per person of an antibody test and an RNA test. Unit costs included the costs of diagnostic tests and other consumables used; staff time, recorded as minutes of healthcare worker, administrative staff and laboratory technician time (with costs assigned by multiplying average minutes spent by salary); and overheads, including a proportion of the costs of utilities, phones, computers and other equipment (with costs assigned by dividing the annual or one-off cost of each item by the estimated number of appointments in a year or its estimated lifetime). Estimates of costs associated with treatment and auxiliary tests, such as liver function tests, were provided by the MOH. To assess the relative cost-effectiveness of the testing and care pathways, we used a state-transition model, MATCH (Markov-based Analyses of Treatments for Chronic Hepatitis C), which simulates HCV disease progression. Natural history outcomes from this model have been validated previously. [21][22][23] We adapted this model to simulate the epidemiology of HCV in Malaysia (MATCH-Malaysia) and extended the model to evaluate the cost-effectiveness of three different care pathways: the total cohort, the treatment pathway at the PHCs and the treatment pathway at the hospitals. The model was developed following the principles of economic analyses with respect to viral hepatitis recommended by WHO. 24

RESULTS
In terms of self-reported risk factors for HCV exposure, a significant proportion (38.2%) did not disclose any specific risk factors. The most common risk factors reported were body piercings (21.1%), a history of invasive medical procedures (18.2%), injection drug use (13.0%), intranasal illicit drug use (14.7%) and previous imprisonment (12.4%).
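To give a flavour of the state-transition approach, here is a toy Markov cohort sketch in the spirit of the MATCH model described under 'Assessment of costs' above. The states and annual transition probabilities are invented for illustration; the actual MATCH-Malaysia model is far more detailed and calibrated.

```python
# Toy Markov cohort model of untreated HCV progression (annual cycles).
# States and probabilities are illustrative assumptions, not MATCH parameters.
import numpy as np

states = ["chronic HCV", "cirrhosis", "decompensated", "dead"]
P = np.array([
    [0.95, 0.03, 0.00, 0.02],  # chronic HCV
    [0.00, 0.90, 0.06, 0.04],  # cirrhosis
    [0.00, 0.00, 0.85, 0.15],  # decompensated cirrhosis
    [0.00, 0.00, 0.00, 1.00],  # dead (absorbing state)
])
assert np.allclose(P.sum(axis=1), 1.0)  # rows are probability distributions

cohort = np.array([1.0, 0.0, 0.0, 0.0])  # all start chronically infected
for _ in range(20):                      # simulate 20 annual cycles
    cohort = cohort @ P

for state, frac in zip(states, cohort):
    print(f"{state:>15}: {frac:.3f}")
```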
Of the 991 patients who were HCV RNA positive and completed the pretreatment assessments, 660 (66.6%) were non-cirrhotic and 331 (33.4%) were cirrhotic. Of the 660 non-cirrhotic patients, 596 (90.3%) were referred to a PHC for treatment and 64 (9.7%) were referred to a hospital for treatment. Seven serious adverse events were reported during the study period; none was assessed to have been caused by the study procedures/interventions. Two participants were hospitalised due to injuries associated with accidents, and five participants died (one accident, one stroke, one heart failure, one pneumonia and one chronic liver disease; this last death occurred prior to HCV confirmatory testing).
Cascade outcomes for non-cirrhotic and cirrhotic patients referred to PHCs or hospitals for treatment were similar except for the following: the treatment initiation rate among cirrhotic patients referred to PHCs was significantly higher (80.8%) than those referred to hospitals (46.9%, p<0.001); the treatment completion rate among non-cirrhotic patients referred to PHCs was also significantly higher (94.0%) than those referred to hospitals (69.4%, p<0.001).
Among those who returned for SVR testing, there were 7/209 (3.3%) participants who experienced treatment failure, all of whom were non-cirrhotic.
There were 983/15 299 (6.4%) participants who were PLHIV and, of these, 298/983 (30.3%) were HCV seropositive compared with 1722/14 316 (12.0%) in the non-PLHIV group (p<0.001). There was no significant difference in the uptake of positive HCV confirmatory RNA testing according to HIV status. Slightly lower proportions of RNA-positive PLHIV completed pretreatment assessment compared with the non-PLHIV group: 145/197 (73.6%) and 846/1250 (81.0%), respectively (p=0.02). Similar proportions of RNA-positive PLHIV initiated treatment, completed treatment, received SVR testing and achieved SVR compared with non-PLHIV. There was also no difference in the outcomes between PLHIV and non-PLHIV when stratified by treatment site (PHC vs hospital).
Turnaround time between HCV care cascade steps
The total median time (IQR) from HCV serological testing to treatment initiation was 214 (148-269) days (table 2). The turnaround time from completion of pretreatment assessment to treatment initiation was longest, at 103 (47-149) days, followed by the time from return of RNA results to the patient to completion of pretreatment assessment (33 (13-58) days). There were some differences in the times from HCV serological testing to treatment initiation between those referred for treatment at PHCs and at hospitals: 217 (167-277) days versus 191 (124-249) days, respectively (p=0.004). In addition, there was no difference in the turnaround times along the HCV care cascade between PLHIV and non-PLHIV; the only significant difference between PWID and non-PWID was in the time from HCV serological testing to treatment initiation.
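A minimal sketch of how one of these turnaround-time summaries could be computed in R is shown below; the data frame and date column names are assumptions.

```r
# Sketch: days from serological testing to treatment initiation, summarised
# as the median with IQR bounds; 'df' and its date columns are assumed names.
tat_days <- as.numeric(df$date_treatment_start - df$date_serology_test)
quantile(tat_days, probs = c(0.25, 0.5, 0.75), na.rm = TRUE)
```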
DISCUSSION
Overall, this project demonstrated the effectiveness and feasibility of a simplified, decentralised HCV testing and treatment model in primary care settings that targeted high-risk groups in Malaysia. Good outcomes were attained across most steps of the cascade of care when patients were offered treatment at decentralised sites compared with centralised hospitals; however, considerable attrition was reported for linkage to care.
A distinctive feature of this model was its demonstration that HCV case-finding at PHCs using RDTs is feasible (reflected by the marked increase in testing uptake, with more than 15 000 individuals tested in 10 months) and achieves a high yield of HCV-positive cases (antibody prevalence 13.2%). This was accomplished through a targeted case-finding strategy of identifying high-risk patients within the PHC catchment population. The resultant enrolled population included high proportions of key populations, including PWID, PLHIV, MSM, SW, intranasal illicit drug users and those previously in prison, as well as individuals with chronic liver disease or a history of invasive medical procedures.
While overall retention was good for most steps in the cascade, our findings also highlight some cascade steps with significant attrition and suboptimal linkage following a positive HCV antibody RDT, which provide opportunities for improvement. There were 539 patients (26.7%) who did not have a confirmatory viral load test and a further 250 (20.1%) who had a positive HCV viral load test but did not receive pretreatment assessment (ie, a total of 789 (39.1%) of the HCV antibody-positive individuals). In addition, there was attrition of RNA-positive patients at treatment initiation, markedly more so among patients who received treatment at hospitals than among those treated at PHCs (48.8% vs 71.0%, p<0.001).
It is likely that at both points in the cascade a key driver of attrition was the provision of services at hospitals. All seropositive participants were referred up from PHCs to designated hospital sites for viral load testing and subsequent pretreatment assessment, rather than having on-site blood sample collection for HCV viral load and pretreatment assessment; this was due to the pre-enrolment requirements of the DNDi clinical trial (viral load and pretreatment assessment). During follow-up calls, study staff logged the following reasons for attrition from hospital visits: the distance participants had to travel to the hospital (online supplemental table 1) and high transportation costs. In addition, follow-up of patients was likely not carried out to the highest standard because of limited information exchange between hospital and PHC staff and poor tracking and tracing of patients lost to follow-up. Limited appointment availability and long lead times for blood collection at the hospitals, and a reluctance to attend hospital appointments for fear of stigmatisation (in contrast to the 'high-risk population-friendly' PHCs), were also likely causes of attrition of patients throughout the cascade.
These findings are consistent with evidence from a recent systematic review, which reported lower rates of linkage to care and treatment uptake in partially decentralised models of care (29 studies) compared with fully decentralised models of care (ie, all testing and treatment provided at a single site), although this finding was only reported for key populations, whereas results in the general population were heterogeneous; in addition, only 49% of studies included in this review were from low- and middle-income countries. 25 Other studies with similar models of partially decentralised HCV care assessed within existing public health systems have reported poorer retention between diagnosis of HCV and treatment: 59.3% in the Cherokee Nation HCV elimination programme and 73.7% in the partially decentralised arm of the HEAD-Start Project Delhi. 26 27 By contrast, high rates of retention have been reported in several studies where fully decentralised care was provided. 28 29

Preliminary analyses of loss to follow-up led to changes in the national programme to allow for the provision of HCV treatment at PHCs for non-cirrhotic patients (including patients who were coinfected with HIV) from quarter 3 of 2019. This change enabled a subanalysis of patients according to where their treatment was provided. We observed a significant improvement in retention across the cascade of care when patients had treatment provided at a PHC compared with receiving treatment at a hospital. In addition, at six PHCs (in the state of Kedah), treatment of a small cohort of compensated cirrhotic patients (n=59, using sofosbuvir/daclatasvir without ribavirin) 18 19 was successful.
There were several key limitations in this study. First, the study is observational in nature, with inherent challenges in accounting for the many confounders affecting outcomes across the cascade of care, particularly when comparing PHCs versus hospitals. Second, standard-of-care HCV practices evolved during the conduct of the study: decentralisation of HCV treatment under the MOH national programme to PHCs for non-cirrhotic patients, including patients who were coinfected with HIV and a small cohort of compensated cirrhotic patients, commenced in quarter 3 of 2019. Third, the costing estimates did not take into account the differential costs of the HCV care pathway between PHCs and hospitals, nor the costs to the patients.
The major challenges encountered were due to the COVID-19 pandemic, which resulted in delays to treatment initiation (65 (24-112) days pre-COVID-19 vs 154 (127-211) days post-COVID-19, p<0.001). These delays may have contributed to increased numbers of patients lost to follow-up. In particular, loss to follow-up at treatment initiation, follow-up treatment visits and SVR testing during the COVID-19 outbreak may have been higher among key populations, such as PWID, who were reportedly less willing to travel to treatment sites amid increased police surveillance during COVID-19 lockdowns for fear of arrest. Indeed, others have also reported that access to healthcare and other services for PWID was affected by COVID-19. 30 In addition, during the COVID-19 outbreak HCV screening at PHCs was reduced, laboratory turnaround times increased and more SVR samples were reported misplaced, owing to the prioritisation of laboratory resources for processing COVID-19 samples.
Initiatives by the MOH after the study started have led to a tremendous expansion in screening and treatment for HCV. [31][32][33][34] The evidence from this study has catalysed plans for the MOH to roll out decentralised HCV care from the 25 sites to all PHCs nationwide, in a stepwise manner from quarter 1 of 2020. 34 The model for scale-up builds on key aspects that were integral to this study design, including the use of RDTs for point-of-care results, optimal turnaround times and the use of DAA therapies at PHCs for non-cirrhotic and uncomplicated cases of HCV, where there is capacity on-site. In addition, the MOH has recently further decentralised services by ensuring that venous blood collection for HCV viral load confirmation and reflex biochemistry blood tests for pretreatment assessments, including AST to Platelet Ratio Index scores, will be carried out at PHCs rather than referring these patients to hospitals. 18 This includes sending samples for either HCVcAg or HCV RNA testing to a designated hospital or laboratory, or using on-site GeneXpert testing, making the model a fully decentralised care package.
In addition to the nationwide scale-up of this fully decentralised model of HCV care at PHCs, this study has provided evidence, with regard to the yield of HCV RDT-positive results, for the MOH to continue the successful approach of targeting high-risk groups of individuals for HCV screening. To this end, the MOH has also started programmes in prisons 35 and drug rehabilitation centres and is developing plans for novel screening strategies, including self-testing, to further target key populations including MSM, transgender people and SW. To ensure successful HCV programmes can be implemented within these populations, the MOH has begun to drive the coordination of different government departments, including primary care, public health and the Ministry of Home Affairs, and relevant non-governmental organisations (NGOs). Further lessons from this study being adopted for scale-up include an emphasis on developing robust monitoring, evaluation and data collection systems at every step of the HCV care cascade. The central database used to capture data systematically during this study can be translated for use in the national programme in simplified and practical ways, as can strategies to raise community awareness, outreach activities and engagement with NGOs.
CONCLUSIONS
In conclusion, using an innovative model of partially decentralised care, this study demonstrated a high rate of case-finding for HCV-positive individuals. There were significantly higher levels of retention in the care cascade when patients were treated at PHCs compared with hospitals, supporting existing evidence of improved outcomes using decentralised care. Several improvements were made during the study and in the national programme to address the limitations identified, including the decentralisation of confirmatory testing and pretreatment assessment and the provision of HCV treatment at PHCs. This optimised model of fully decentralised HCV care is now being adopted by the MOH as part of a nationwide scale-up and serves as a good model for implementation in other settings.

Contributors JM designed data collection tools, implemented the study, monitored data collection for the study, cleaned and analysed the data, and drafted and revised the paper. She is guarantor. SShilton initiated the collaborative project, designed the study and data collection tools, implemented the study, monitored data collection for the study and drafted and revised the paper. She is guarantor. XHS designed data collection tools, implemented the study, monitored data collection for the study, cleaned and analysed the data, and revised the draft paper. CHK revised the draft paper. RMS, ZZ, NAB, HO, SK and RH implemented the study and monitored data collection for the study. SSiva designed the study and data collection tools, implemented the study, monitored data collection for the study and revised the draft paper. RJR implemented the study and monitored data collection for the study. MG and AT wrote the statistical analysis plan, cleaned and analysed the data, and revised the draft paper. MA and JC analysed the data and revised the draft paper. J-MP initiated the collaborative project, implemented the study, and revised the draft paper. RMZ initiated the collaborative project, designed and implemented the study, and monitored data collection for the study. CM implemented the study, monitored data collection for the study, and revised the draft paper. FY, NHN, FI and RZ implemented the study. IA-M initiated the collaborative project, designed and implemented the study, and revised the draft paper. SM initiated the collaborative project and implemented the study. PE drafted and revised the paper. MRAH initiated the collaborative project, designed and implemented the study, monitored data collection for the study and revised the draft paper. All authors revised the paper critically for intellectual content and approved the final version.
Funding This study was funded by Unitaid as part of HEAD-Start (Hepatitis Elimination through Access to Diagnostics).
Map disclaimer The inclusion of any map (including the depiction of any boundaries therein), or of any geographic or locational reference, does not imply the expression of any opinion whatsoever on the part of BMJ concerning the legal status of any country, territory, jurisdiction or area or of its authorities. Any such expression remains solely that of the relevant source and is not endorsed by BMJ. Maps are provided without any warranty of any kind, either express or implied.
Competing interests None declared.
Patient consent for publication Not applicable.
Ethics approval This study involves human participants and the ethics committee that approved this study is the Malaysian Medical Research and Ethics Committee (approval number: NMRR-18-2282-43132). Participants gave informed consent to participate in the study before taking part.
Provenance and peer review Not commissioned; externally peer reviewed.
Data availability statement Data are available upon reasonable request.
Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
Open access This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http:// creativecommons. org/ licenses/ by-nc/ 4. 0/.
Cardioprotective effects of sinomenine in myocardial ischemia/reperfusion injury in a rat model
Background Ischemia/reperfusion (I/R) plays an imperative role in the development of cardiovascular disease. Sinomenine (SM) has been shown to possess antioxidant, anticancer, anti-inflammatory, antiviral and anticarcinogenic properties. The aim of this study was to scrutinize the cardioprotective effect of SM against I/R injury in rats. Methods Rats were randomly divided into normal control (NC), I/R control and I/R + SM (5, 10 and 20 mg/kg) groups, respectively. Ventricular arrhythmias, body weight and heart weight were estimated. Antioxidant parameters, inflammatory cytokines, inflammatory mediators and plasmin system indicators were assessed. Results Rats in the SM pre-treated groups exhibited reductions in the duration and incidence of ventricular fibrillation, ventricular ectopic beats (VEB) and ventricular tachycardia, along with suppression of the arrhythmia score, during ischemia (30 and 120 min). SM-treated rats showed significantly (P < 0.001) altered levels of antioxidant parameters. SM treatment significantly (P < 0.001) repressed the levels of creatine kinase-MB (CK-MB), creatine kinase (CK) and troponin I (TnI). SM-treated rats also showed significantly (P < 0.001) repressed tissue factor (TF), thromboxane B2 (TXB2), plasminogen activator inhibitor 1 (PAI-1) and plasma fibrinogen (Fbg), as well as inflammatory cytokines and inflammatory mediators. Conclusion Our results clearly indicate that SM exerts an anti-arrhythmic effect in I/R injury in rats via alteration of oxidative stress and the inflammatory reaction.
Introduction
As per World Health Organization (WHO) reports, almost 2.3 million deaths are related to ischemic heart disease every year (Badalzadeh et al. 2014; Tang et al. 2020). Ischemia/reperfusion (I/R) contributes to mortality, tissue injury and morbidity in various types of cardiovascular disease, especially myocardial infarction (Badalzadeh et al. 2014). Tissue injury occurs as a result of the initial ischemic insult, which is dictated by the length and magnitude of the blood supply disruption, and the subsequent injury caused by reperfusion (Granier et al. 2013; Tse et al. 2016a). The accumulation of lactate and anaerobic metabolism cause a decrease in intracellular adenosine triphosphate (ATP) and pH during prolonged ischemia (Tse et al. 2016b). Furthermore, the ATPase-dependent ion transport mechanisms become impaired, contributing to enhanced calcium overload (increased intra-mitochondrial and intracellular calcium levels), cell swelling and rupture, and cell death via apoptotic, necrotic, autophagic and necroptotic mechanisms (Najafi et al. 2018; Williams et al. 2020). During reperfusion, the restoration of oxygen results in the creation of reactive oxygen species (ROS) (Najafi et al. 2018). Inflammatory cytokines also promote neutrophil infiltration into ischemic tissue, speeding up the I/R damage (Wu et al. 2017). Arrhythmias, transitory mechanical impairment of the heart, microvascular damage and the "no-reflow" phenomenon, as well as an inflammatory reaction, are all caused by I/R injury. During the reperfusion phase of I/R damage, autophagy, apoptosis and necrosis all cause cell death. Recent years have seen significant enhancements in protective strategies to suppress all features of post-ischemic injury in cardiovascular diseases (Gatzke et al. 2018; Liu et al. 2019). Due to the scarcity of therapy options, a safer and more effective approach to developing cardiovascular drugs is urgently needed (Badalzadeh et al. 2014; Geldi et al. 2018).
Myocardial ischemic injury (MII) causes the greatest number of deaths and disabilities in the world (Yang et al. 2018). I/R damage causes myocardial injury, which is a pathological state of coronary artery disease (Badalzadeh et al. 2014). The most common alterations associated with ischemic heart disease (IHD) include metabolite deposition, decreased intracellular [K+] and pH, irreversible cellular injury, Ca2+ overload and increased oxidative stress through enhanced generation of ROS (Badalzadeh et al. 2014). During I/R injury, an imbalance between endogenous antioxidants and ROS occurs (Chang et al. 2002; Vilskersts et al. 2009), driven by the continuous generation of free radicals (Badalzadeh et al. 2014). The main therapy available for ischemia is reperfusion, but it has adverse aspects that can offset the protective effect of myocardial reperfusion, such as myocardial stunning, remodelling of the left ventricular extracellular matrix, microvascular impairment, progressive cell death and ventricular arrhythmias, which may finally cause death (Badalzadeh et al. 2014; Tang et al. 2020).
Ventricular arrhythmias are split into three distinct phases during ischemia: phase 1a arrhythmias occur during the first 10 min, phase 1b arrhythmias occur between 15 and 60 min after the beginning of ischemia, and phase 2 arrhythmias occur after 90 min. According to research, arrhythmias are the most common complication of myocardial I/R damage. During ischemia, enhanced ROS production alters the H+ gradient, contributing to an influx of Na+ and an increase in [Ca2+]i via the 2Na+/Ca2+ exchanger, which results in the accumulation of [Ca2+]i and the depletion of ATP (Badalzadeh et al. 2014; Han et al. 2019). Because of this, increased [Ca2+]i is considered a potential target in reperfusion arrhythmogenesis. Clinically, arrhythmias are a serious problem in I/R injury, affecting 80% of patients with acute myocardial infarction. Additionally, free radicals and the inflammatory reaction have been implicated in the pathophysiology of cardiac cell death, electrophysiological dysregulation and post-ischemic contractile impairment. According to previous studies, the inflammatory response plays a crucial role in I/R damage (Badalzadeh et al. 2014; Han et al. 2019). Indeed, the incidence of arrhythmias in myocardial reperfusion might be directly affected by the enhanced inflammatory reaction and production of ROS during myocardial I/R (Han et al. 2019; Liu et al. 2019). During reperfusion injury, the increased inflammatory reaction activates NF-κB, resulting in increased expression of chemokine genes and inflammatory cytokines and boosting the myocardial injury (Badalzadeh et al. 2014; Qiao et al. 2019).
Sinomenine (IUPAC name 7,8-didehydro-4-hydroxy-3,7-dimethoxy-17-methyl-9α,13α,14α-morphinan-6-one) is isolated from Sinomenium acutum (a Chinese herb) (Zhou et al., 2020). Sinomenine is a very popular herb among Chinese doctors for treating various inflammatory diseases such as rheumatic conditions (Lin et al., 2008; Zhou et al., 2020). Pharmacological investigations have shown that it has remarkable anti-inflammatory, antiarthritic and analgesic effects (Geng et al., 2021; Li et al., 2021). Over the last few decades, sinomenine has been widely used for the treatment of chronic glomerulonephritis, allograft rejection, autoimmune nephritis and mesangial proliferative nephritis (Lin et al., 2008; Zhang et al., 2012; Zhou et al., 2020). Recent investigations have shown that sinomenine suppresses synovial fibroblast and lymphocyte proliferation, macrophage infiltration and the production of inflammatory cytokines (Geng et al., 2021; Yang et al., 2017). To the best of our knowledge, the myocardial protective effect of sinomenine against I/R-induced ventricular arrhythmias has not been explored. In this experimental study, we explored the cardioprotective effect of sinomenine in a rat I/R injury model and investigated the underlying mechanism.
Drugs and chemical compounds
Sinomenine was purchased from the Sigma Aldrich (St. Louis, USA).
Experimental animal
Wistar rats (250 ± 50 g; both sexes) were used in this protocol. The rats received a standard controlled diet (Table 1) and water ad libitum and were kept under controlled laboratory conditions (temperature 22 ± 5°C; 65% relative humidity; 12/12 h light/dark cycle). The experimental study was carried out according to the international standard animal protocol (QFCH2021A0901).
Myocardial ischemia/reperfusion
After an intratracheal cannula was placed, the rats were kept under intermittent positive-pressure ventilation with room air. Myocardial ischemia was induced by externalising the heart through a left thoracic incision and placing a slipknot (5-0 silk) around the left anterior descending coronary artery (LAD) (Yang et al. 2018).
Experimental protocol
The rats were divided into the groups presented in Table 1 and were acclimated for 7 days before the experimental protocol. All experimental and surgical procedures followed international animal guidelines.
Langendorff heart perfusion
All experimental rats were heparinized (500 IU) and then anesthetized using a mixture of ketamine (60 mg/kg) and xylazine (10 mg/kg); the hearts were then isolated from all rats and immediately mounted on the Langendorff apparatus. The heart tissues were perfused with Krebs-Henseleit (K-H) solution, and a mixture of CO2 (5%) and O2 (95%) was bubbled through the perfusate to maintain pH 7.4. A thermostatically controlled water circulator was used to maintain the temperature of the perfusate at 37°C (Badalzadeh et al. 2014).

Table 1. List of experimental groups.
Ventricular arrhythmias
The Lambeth Conventions were used to classify the ventricular arrhythmias. A ventricular ectopic beat (VEB) was classified as an identifiable premature QRS complex. Ventricular tachycardia (VT) was defined as the occurrence of 4 or more consecutive VEBs at a rate faster than the resting sinus rate. Ventricular fibrillation (VF) was defined as low-voltage, unidentifiable QRS complexes. VEB patterns such as couplets, salvos and bigeminy were analysed.
Arrhythmia score
A previously reported method, with minor modification, was used to estimate the arrhythmia score. The arrhythmias were scrutinized using the Lambeth Conventions, and arrhythmia severity was graded on the basis of the Walker and Curtis criteria. A 5-grade evaluation system was used for arrhythmia scoring, as presented in Table 2 (Yang et al. 2018); a hypothetical sketch of such a scoring rule is shown below.
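The sketch below illustrates a 5-grade scoring rule in the spirit of the Walker and Curtis criteria; the actual grade definitions are those of Table 2, and the thresholds used here are assumptions, not the published criteria.

```r
# Hypothetical 5-grade arrhythmia score (0-4); thresholds are assumptions.
arrhythmia_score <- function(n_veb, had_vt, had_vf) {
  if (had_vf)      return(4)  # ventricular fibrillation: most severe grade
  if (had_vt)      return(3)  # ventricular tachycardia
  if (n_veb >= 50) return(2)  # frequent ventricular ectopic beats
  if (n_veb > 0)   return(1)  # occasional ectopic beats
  0                           # no arrhythmia observed
}
arrhythmia_score(n_veb = 12, had_vt = FALSE, had_vf = FALSE)  # returns 1
```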
Oxidative stress parameter
Standard commercially available kits were used for the determination of antioxidant parameters, including malondialdehyde (MDA), glutathione peroxidase (GPx), catalase (CAT) and superoxide dismutase (SOD), following the manufacturer's protocols (Nanjing Jiancheng Biological Product, Nanjing, China).
Hepatic and heart parameters
The hepatic parameter aspartate aminotransferase (AST) and the cardiac parameters TnI, CK, LDH and CK-MB were analysed using available kits following the given instructions (Beyotime Biotechnology, Shanghai, China).
Fibrinolytic enzyme and coagulation system indicators
Coagulation system indicators and fibrinolytic enzymes, including TXB2, TF, PAI-1 and Fbg, were estimated using ELISA kits following the manufacturer's instructions (Beijing Expand Biotech Ltd, Beijing, China).
Statistical analysis
The data were presented as mean ± standard deviation (SD) and analysed using one-way ANOVA followed by the Tukey test in GraphPad Prism 8.0 software. P < 0.05 was considered significant.
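For reproducibility, the same analysis can be expressed in R as the minimal sketch below (the study itself used GraphPad Prism); 'rats', 'mda' and 'group' are hypothetical names.

```r
# One-way ANOVA followed by Tukey's post-hoc test on a measured outcome.
rats$group <- factor(rats$group)       # NC, I/R and I/R + SM dose groups
fit <- aov(mda ~ group, data = rats)   # one-way ANOVA
summary(fit)                           # overall F test
TukeyHSD(fit)                          # pairwise comparisons between groups
```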
Ventricular arrhythmias
The effect of sinomenine on the number of VT + VF episodes during 30 min of ischemia is shown in Fig. 1. After 30 min of ischemia, normal rats showed no signs of VT + VF episodes, whereas VT + VF episodes, duration and incidence (VF and VT) were higher in the I/R group rats. I/R-induced rats had a higher arrhythmia score, while sinomenine-treated rats had lower VT + VF episodes, duration, incidence (VF and VT) and arrhythmia score. During 30 min of ischemia, the sinomenine (20 mg/kg) group showed significantly (P < 0.001) reduced VT + VF episodes, duration and incidence (VF and VT) (Fig. 2).

A similar trend was observed after 120 min of ischemia: the I/R group exhibited increased VT + VF episodes, duration and VT and VF incidence, along with an enhanced arrhythmia score, and sinomenine therapy reduced the episodes, incidence and duration of VT + VF as well as the arrhythmia score (Fig. 2).
Ventricular ectopic beat
I/R-induced rats demonstrated an increase in bigeminy, couplets and salvos compared with normal group rats. I/R-induced rats treated with sinomenine showed significantly (P < 0.001) suppressed bigeminy, couplets and salvos (Fig. 3). The sinomenine (20 mg/kg) group exhibited the maximum reduction in bigeminy, couplets and salvos compared with the sinomenine (5 and 10 mg/kg) groups.
Myocardial infarct area
Infarct area is commonly used to assess myocardial disease; myocardial injury increases the size of the infarct area. No infarct area was observed in the normal rats. I/R-induced rats exhibited an enlarged infarct area, suggesting the induction of cardiac disease, and sinomenine-treated rats showed a significantly (P < 0.001) suppressed infarct area (Fig. 4), demonstrating a cardioprotective effect.
Cardiac parameters
The myocardial enzymes TnI, CK and CK-MB are significant markers used to estimate the degree of myocardial injury, and their levels increase during myocardial injury. Levels of these myocardial parameters within the normal range were observed in the normal group, whereas the I/R group exhibited enhanced levels of CK-MB (Fig. 6a), CK (Fig. 6b) and TnI (Fig. 6c); sinomenine treatment significantly (P < 0.001) repressed these cardiac parameters.
3.6. Antioxidant parameters

SOD, CAT and GPx are significant antioxidant enzymes, and MDA reflects the level of lipid peroxides and is used for the estimation of oxidative stress. Oxidative stress is a major contributor to the progression of heart disease and is known to exacerbate I/R injury. In this investigation, I/R-induced group rats had an enhanced level of MDA (Fig. 7a) and lower levels of SOD (Fig. 7b), CAT (Fig. 7c) and GPx (Fig. 7d). Sinomenine therapy considerably (P < 0.001) increased SOD, GPx and CAT and lowered MDA levels.
LDH and AST
LDH and AST are considered significant markers of myocardial injury, and both reflect the degree of injury. The levels of LDH (Fig. 8a) and AST (Fig. 8b) were higher in the I/R group, and sinomenine treatment (5, 10 and 20 mg/kg) significantly (P < 0.001) suppressed the levels of LDH and AST.
Hs-CRP and MCP-1
MCP-1 is a monocytokine commonly used for the prediction of coronary heart disease. Hs-CRP is a marker of thrombosis, which accelerates the instability and formation of atheromatous plaques. Both parameters are used for the estimation of cardiovascular events. In this study, the levels of hs-CRP (Fig. 9a) and MCP-1 (Fig. 9b) were boosted in the I/R injury group rats, and sinomenine treatment significantly (P < 0.001) suppressed the levels of hs-CRP and MCP-1.
Inflammatory cytokines
Inflammation plays a key part in the progression of MIRI. Inflammatory factors can increase platelet adhesion, vascular endothelial damage, collagen exposure and platelet activation. I/R-induced injury rats showed enhanced levels of TNF-α (Fig. 10a), IL-1β (Fig. 10b) and IL-6 (Fig. 10c), and sinomenine treatment significantly (P < 0.001) repressed these inflammatory cytokines.
Discussion
In this experimental protocol, we used the classical method of myocardial I/R injury to scrutinize the protective effect of sinomenine. In recent years, sinomenine has gained popularity for improving cardiac qualities. Sinomenine suppressed inflammatory mediators, resulting in an anti-inflammatory action against STZ-induced diabetes. Additionally, sinomenine demonstrated anti-oxidative and hypolipidemic effects in high-fat-diet-induced atherosclerosis (Feng et al. 2019). According to this study, sinomenine may be a useful compound for managing hypercholesterolemia, a key cause of cardiovascular disease, by reducing oxidative stress and improving lipid markers (Zhang et al. 2012; Yuan et al. 2018; Zhou et al. 2020). Li et al. reported a protective effect of sinomenine against isoproterenol-induced myocardial infarction in an experimental study via anti-inflammatory and antioxidant effects (Li et al. 2013). In this experimental study, sinomenine exhibited an anti-arrhythmic effect in the isolated heart: the sinomenine-treated group exhibited suppression of the incidence, number and duration of VF and VT and of the arrhythmia severity compared with the control group. These findings suggest that the cardioprotective and anti-arrhythmic properties of sinomenine may be attributable to its antioxidant and anti-inflammatory properties; however, the underlying mechanism of sinomenine's cardioprotective action has not been extensively investigated. Primary percutaneous coronary intervention (PCI) and systemic thrombolysis are the most commonly used approaches for reperfusion (Badalzadeh et al. 2014; Tang et al. 2020). PCI is the most successful approach because it allows re-establishment of blood flow in the cardiac area affected by obstruction of a branch of the coronary artery. The ischemic area is re-perfused during this process, triggering the ischemia/reperfusion event, which starts the production of ROS (Han et al. 2019; Wang et al. 2020) and increases the tissue injury (lethal reperfusion). Effective drug treatment could be applied during I/R to protect the tissue from lethal reperfusion (Han et al. 2019; Tang et al. 2020). ROS production occurs at low levels under physiological conditions and is thought to be a significant mediator of cell apoptosis, expansion, differentiation, adhesion and senescence (Badalzadeh et al. 2014). Overproduction of oxidative stress during pathologic conditions such as I/R induces cell injury, which leads to DNA oxidation, enhanced lipid peroxidation membrane chain reactions and altered membrane fluidity (Han et al. 2019; Wang et al. 2020). Antioxidant substances are crucial in countering the damage caused by free radicals. It is widely known that during I/R injury the antioxidant capability is suppressed, and an imbalance of oxidative/antioxidative molecules contributes to the oxidative imbalance in myocardial ischemia patients (Geldi et al. 2018).

Fig. 3. Ventricular ectopic beats. Data were presented as mean ± SEM. Treated groups compared with I/R, where *P < 0.05, **P < 0.01 and ***P < 0.001. I/R = ischemia reperfusion, SM = sinomenine, NC = normal control, NS = non-significant.

Fig. 4. Myocardial infarct area. Data were presented as mean ± SEM. Treated groups compared with I/R, where *P < 0.05, **P < 0.01 and ***P < 0.001. I/R = ischemia reperfusion, SM = sinomenine, NC = normal control, NS = non-significant.
A similar result was observed in the I/R group, and the sinomenine-treated groups exhibited improved antioxidant levels and suppressed production of free radicals. During the development and pathogenesis of cardiac I/R, blood flow is blocked, activating coagulation platelet factors and vascular endothelial cells, which boosts the conversion of Fbg to fibrin (Han et al. 2019; Qiao et al. 2019; Tang et al. 2020). Thereafter, the balance between the fibrinolysis system and body coagulation is destroyed, reducing fibrinolytic activity and coagulation, which favours the generation of thrombus on the blood vessel wall via fibrin accumulation (Najafi et al. 2018). The results showed the development of acute myocardial infarction in the I/R group, and sinomenine treatment considerably altered the levels of the platelet parameters.
Reperfusion of ischemic myocardium further aggravates the tissue injury induced by ischemia despite providing cells with oxygen and trophic substances (Yi et al. 2019). This injury occurs due to neutrophil infiltration from the tissue vasculature and ROS production (Najafi et al. 2018). Superoxide is a significant marker of vascular tissue I/R, originating from NADPH oxidase catalysis in neutrophils or from electron leakage in the mitochondrial electron transport chain (Wang et al. 2020). It is widely known that heart tissue is prone to oxidative destruction. I/R-induced oxidative stress causes injured cardiac tissue to undergo cellular apoptosis, which can be reduced by scavenging the free radicals (Han et al. 2019; Liu et al. 2019). I/R damage is the most common cause of cardiac dysfunction, indicating that reperfusion is a key trigger for a number of processes that contribute to cardiac dysfunction caused by I/R injury (Han et al. 2019; Tang et al. 2020).

Fig. 9. hs-CRP and MCP-1 parameters. a: hs-CRP; b: MCP-1. Data were presented as mean ± SEM. Treated groups compared with I/R, where *P < 0.05, **P < 0.01 and ***P < 0.001. I/R = ischemia reperfusion, SM = sinomenine, NC = normal control, MCP-1 = monocyte chemoattractant protein-1, hs-CRP = C-reactive protein.

Fig. 10. Inflammatory parameters. a: TNF-α; b: IL-1β; c: IL-6. Data were presented as mean ± SEM. Treated groups compared with I/R, where *P < 0.05, **P < 0.01 and ***P < 0.001. I/R = ischemia reperfusion, SM = sinomenine, NC = normal control, IL-1β = interleukin-1β, IL-6 = interleukin-6, TNF-α = tumor necrosis factor-α.
It is well documented that ROS are generated upon reperfusion of the ischemic organ rather than during ischemia (Najafi et al. 2018). The resulting oxidative stress plays a crucial role in the I/R damage that disrupts cardiac function (Han et al. 2019; Liu et al. 2019). ROS cause DNA oxidation and oxidation of membranous phospholipid proteins, which are linked to I/R pathogenesis, carcinogenesis, aging and degenerative disease (Qu et al. 2019; Rinaldi et al. 2019). During I/R injury, ROS initiate dysfunction in endothelial cells and cardiac myocytes and trigger a cascade of chemical reactions. Ischemic cardiac tissue shows ROS production during reperfusion, which could be related to the reversible myocardial stunning observed after I/R injury. ROS production during I/R injury also damages mitochondrial DNA, which leads to further ROS generation and possibly a burst of ROS production. Furthermore, myocardial stunning (dysfunction) may help to regulate the massive amount of ROS produced in myocytes following an I/R injury (Najafi et al. 2018). CAT, along with SOD and GPx, plays a significant role in protection against lipid peroxidation (LPO) (Gatzke et al. 2018; Han et al. 2019). According to a recent study, the erythrocyte reduction of CAT and SOD in acute myocardial infarction patients is caused by inactivation/alteration of these antioxidant enzymes through cross-linking or by their exhaustion through LPO (Qu et al. 2019). Under normal conditions, GPx catalyses peroxide reduction utilising GSH as a substrate, converting it into GSSG. GSH plays a dual role, serving as a substrate in the scavenging reaction catalysed by GPx and also scavenging vitamin C and E radicals (Zheng et al. 2020). GSH deficit has been linked to coronary restenosis following percutaneous coronary intervention and to significant post-reperfusion syndrome (Jing et al. 2020). A reduced level of GSH may contribute to diminished GPx activity, because GSH is the substrate of GPx. During I/R injury, boosted ROS production can further deplete the endogenous antioxidant enzymes. GPx and SOD are important enzymes that serve as free radical scavengers and may help to reduce ROS levels: SOD catalyses the dismutation of the superoxide anion radical (O2·−) to H2O2, which is then scavenged to water by GPx at the expense of GSH. The findings revealed that SM has a protective effect against free radicals by increasing the levels of GPx and SOD (Zhang et al. 2017; Zheng et al. 2020). Sinomenine treatment considerably suppressed the MDA level and boosted SOD and GPx, indicating that the cardioprotective effect may be due to attenuation of lipid peroxidation following myocardial I/R. Based on these findings, we can deduce that SM protects against I/R injury via reducing oxidative stress.
Reperfusion of the heart after an ischemic period can cause dangerous arrhythmias. VT and VF are the most common causes of sudden death after spontaneous restoration of integrated flow. According to previous studies, oxygen-derived free radicals play a key role in the development of ventricular arrhythmias (Badalzadeh et al. 2014). Sinomenine treatment considerably reduced the duration and number of VT + VF during ischemia (30 min), as well as the frequency of VF and the number of VT + VF during reperfusion (120 min). The VT + VF duration after reperfusion (120 min) and the myocardial infarction area were considerably suppressed after sinomenine treatment. An underlying reason may be stress, which might have contributed to this abnormal heart rhythm; ventricular arrhythmias in the 120 min group had lower values than in the 30 min group, and reports are available that stress can lead to ventricular arrhythmias (Adameova et al. 2020). To probe the underlying mechanism of sinomenine, we determined its protective effect against myocardial I/R-induced ventricular arrhythmias.
I/R injury leads to the induction of arrhythmias, microvascular injury, myocardial dysfunction and the "no-reflow" phenomenon (Najafi et al. 2018). Previous research has suggested that necrosis, autophagy and apoptosis are important factors in inducing cell death during the reperfusion phase of I/R injury (Geldi et al. 2018; Qiao et al. 2019). Normally, weakness in impulse conduction or dysfunction in impulse generation occurs due to hypoxia and lack of ATP, resulting in mitochondrial dysfunction, which is considered the main parameter for inducing ischemia-induced arrhythmias (Han et al. 2019; Qiao et al. 2019). Still, the main cause of the induction of arrhythmias remains unexplored, although a few studies suggest ionic alteration and disturbance of electrolyte levels across the mitochondrial and sarcolemmal membranes, particularly enhanced concentrations of Na+ and Ca2+ in the circulation (Badalzadeh et al. 2014; Wang et al. 2020). Previous research suggests that antagonizing sarcolemmal calcium channels has a preventive effect against reperfusion-induced arrhythmias in rats (Wang et al. 2020). I/R-induced rats exhibited increased concentrations of Na+ and Ca2+, and SM treatment considerably suppressed the concentrations of Na+ and Ca2+.
The huge amounts of oxygen-derived free radicals and the intracellular pH alteration during the initial stage of reperfusion undermine the potential benefit of reperfusion for the ischemic heart (Ito et al. 2003; Hadi and Al-Amran, 2019). The production of inflammatory cytokines and inflammatory reactions can be triggered by the excessive generation of free radicals and increased oxidative stress. Therefore, the overproduction of ROS and the inflammatory reaction would be significant pathophysiological mediators and mechanisms responsible for the alteration in ionic distributions and thereby for reperfusion-induced arrhythmias (Bi et al. 2020; Xin et al. 2020). The inflammatory response plays an important role in cardiac reperfusion, increasing platelet adhesion, vascular endothelial injury, collagen exposure and platelet activation (Yi et al. 2019; Bi et al. 2020; Zhang et al. 2020). TNF-α is a potent inflammatory cytokine that contributes significantly to myocardial injury: because of the increased TNF-α level, leukocytes and endothelial cells begin to adhere and interact, and granulocyte infiltration into the I/R area increases. IL-6 and IL-1β levels are increased during I/R injury, which also increases myocardial damage by increasing endothelial cell and neutrophil adhesion (Xin et al. 2020; Zhang et al. 2020). The levels of inflammatory cytokines increased after the I/R injury, and a similar result was seen in the I/R injury group rats, while SM therapy significantly reduced the levels of cytokines, demonstrating an anti-inflammatory impact.
MCP-1 (a monocytokine) is commonly observed in myocardial tissue; it increases monocyte/macrophage migration, and these cells aggregate under the intima of blood vessels where, after becoming activated macrophages, their movement and chemotaxis are suppressed (Hadi and Al-Amran, 2019; Yi et al. 2019). During I/R injury, the hs-CRP level is boosted; hs-CRP is closely related to the prognosis, severity and occurrence of atherosclerosis and acute cerebral infarction and is considered an important biomarker of cardiovascular disease (Yi et al. 2019; Feng et al. 2020). During I/R injury, secretion of hs-CRP into the circulation begins, which further increases atheromatous plaques and their instability (Ito et al. 2003; Hadi et al. 2013). In this study, I/R injury rats exhibited boosted levels of MCP-1 and hs-CRP, and sinomenine considerably suppressed these levels.
Conclusion
In short, sinomenine can suppress the myocardial infarct size along with reducing myocardial enzyme levels. The mechanism of myocardial protection by sinomenine is closely related to maintaining the balance between endogenous antioxidant enzymes and oxidation, suppressing oxidative stress along with the inflammatory response and thrombosis, and altering platelet function. However, the existing experimental evidence on the exact interaction between inflammation, platelet function and oxidative stress is insufficient, and more investigation is required to fully comprehend the mechanism of sinomenine in heart protection. In future work, we will select a larger number of rodents to scrutinize the cardioprotective effect and explore the underlying mechanism.
Quantum dynamics of elliptic curves
We calculate the $K$-theory of a crossed product $C^*$-algebra $\mathscr{A}_{RM}\rtimes\mathscr{E}(K)$, where $\mathscr{A}_{RM}$ is the noncommutative torus with real multiplication and $\mathscr{E}(K)$ is an elliptic curve over the number field $K$. We use this result to evaluate the rank and the Shafarevich-Tate group of $\mathscr{E}(K)$.
Introduction
The noncommutative torus A_θ is a C*-algebra on the generators u and v satisfying the relation vu = e^{2πiθ}uv for a real constant θ. The algebra A_θ is said to have real multiplication (RM) if θ is an irrational quadratic number. We shall denote such an algebra by A_RM.
Let K be a number field and let E (K) be an elliptic curve over K. Here we consider a functor F between elliptic curves E (K) and the C * -algebras A RM , see [10,Section 1.3] for the details. Such a functor maps K-isomorphic elliptic curves E (K) and E ′ (K) to isomorphic C * -algebras A RM and A ′ RM , respectively. It is useful to think of the A RM as a non-commutative analog of the coordinate ring of E (K).
Recall that E(K) is an algebraic group over K; such a group is compact and thus abelian. The Mordell-Weil Theorem says that E(K) ≅ Z^r ⊕ E_tors(K), where r = rk E(K) ≥ 0 is the rank of E(K) and E_tors(K) is a finite abelian group. The group operation E(K) × E(K) → E(K) defines an action of the group E(K) by the K-automorphisms of E(K). Recall that each K-automorphism of E(K) gives rise to an automorphism of A_RM. Thus one gets an action of E(K) on A_RM by automorphisms of the algebra A_RM. The object of our study is a crossed product C*-algebra coming from such an action, i.e. the C*-algebra:

A_RM ⋊ E(K).   (1.1)

Denote by τ the canonical tracial state on the crossed product A_RM ⋊ E(K); existence of τ follows from [Phillips 2005] [13, Theorem 3.4]. It is well known that K_0(A_θ) ≅ Z². Moreover, τ defines an embedding K_0(A_θ) ↪ R given by the formula τ(K_0(A_θ)) = Z + θZ ⊂ R [Blackadar 1986] [1, Exercise 10.11.6]. Following Yu. I. Manin, we shall call Z + θZ a "pseudo-lattice". By Λ we understand the ring of endomorphisms of the pseudo-lattice τ(K_0(A_RM)). Since θ is an irrational quadratic number, the ring Λ is an order in the real quadratic field k = Q(θ). Thus Λ ≅ Z + fO_k, where O_k is the ring of integers of k and f ≥ 1 is a conductor of the order. We shall write Cl(Λ) to denote the class group of the ring Λ. By h_Λ = |Cl(Λ)| we understand the class number of Λ. Denote by K^ab the maximal abelian extension of the field k modulo conductor f. If f = 1, then the extension K^ab is unramified, i.e. the Hilbert class field of k. It is known that Gal(K^ab | k) ≅ Cl(Λ), where Gal(K^ab | k) is the Galois group of the extension k ⊆ K^ab. Let {α_i | 1 ≤ i ≤ h_Λ} be generators of the field K^ab, such that the α_i are conjugate algebraic numbers. Consider a normalization of the α_i; an explicit formula is given in [10, Theorem 6.4.1]. Our main result can be formulated as follows.
Theorem 1.1. The K-theory of the crossed product C * -algebra (1.1) is described by the following formulas: Denote by rk E (K) the rank of elliptic curve E (K). By X(E (K)) we understand the Shafarevich-Tate group of E (K). Theorem 1.1 implies the following formulas.
Remark 1.4. For the sake of simplicity, we treat the case of elliptic curves only. However, the results of 1.1-1.3 can be extended to any abelian variety over a number field K.

Remark 1.5. It follows from 1.2 and 1.3 that rk E(K) and |X(E(K))| satisfy the relation (1.3). It is hard to verify (1.3) directly, since the group X(E(K)) is unknown for a single E(K) [Tate 1974] [16, p. 193]. Indirectly, one can predict "analytic" values of rk E(K) and |X(E(K))| assuming the BSD Conjecture [Swinnerton-Dyer 1967] [15]. While many such values satisfy (1.3), others do not [Cremona et al. 2017] [4]. We do not know an exact relation between the analytic values and those described by formula (1.3).
The article is organized as follows. The preliminary facts are introduced in Section 2. The proofs of theorem 1.1 and corollaries 1.2 and 1.3 can be found in Section 3.
2.1.1. C * -algebras. A C * -algebra A is an algebra over C with a norm a → ||a|| and an involution a → a * such that it is complete with respect to the norm and ||ab|| ≤ ||a|| ||b|| and ||a * a|| = ||a|| 2 for all a, b ∈ A . Any commutative C *algebra is isomorphic to the algebra C 0 (X) of continuous complex-valued functions on some locally compact Hausdorff space X; otherwise, A can be thought of as a noncommutative topological space.
2.1.2. K-theory of C * -algebras. For a unital C * -algebra A , let V (A ) be the union over n of projections in the n × n matrix C * -algebra with entries in A ; projections p, q ∈ V (A ) are equivalent if there exists a partial isometry u such that p = u * u and q = uu * . The equivalence class of projection p is denoted by [p]; the equivalence classes of orthogonal projections can be made to a semigroup by putting [p] + [q] = [p+q]. The Grothendieck completion of this semigroup to an abelian group is called the K 0 -group of the algebra A . The functor A → K 0 (A ) maps the category of unital C * -algebras into the category of abelian groups, so that projections in the algebra A correspond to a positive cone K + 0 ⊂ K 0 (A ) and the unit element 1 ∈ A corresponds to an order unit u ∈ K 0 (A ). The ordered abelian group (K 0 , K + 0 , u) with an order unit is called a dimension group; an order-isomorphism class of the latter we denote by (G, G + ).
2.1.3. Crossed products. Let A be a C*-algebra and G a locally compact group. We shall consider a continuous homomorphism α from G to the group Aut A of *-automorphisms of A endowed with the topology of pointwise norm-convergence. Roughly speaking, the idea of the crossed product construction is to embed A into a larger C*-algebra in which the automorphism becomes an inner automorphism. A covariant representation of the triple (A, G, α) is a pair of representations (π, ρ) of A and G on the same Hilbert space H, such that ρ(g)π(a)ρ(g)* = π(α_g(a)) for all a ∈ A and g ∈ G. Each covariant representation of (A, G, α) gives rise to a convolution algebra C(G, A) of continuous functions from G to A; the completion of C(G, A) in the norm topology is a C*-algebra A ⋊_α G called a crossed product of A by G. If α is a single automorphism of A, one gets an action of Z on A; the crossed product in this case is called simply the crossed product of A by α.
2.1.4. AF-algebras. An AF-algebra (approximately finite C*-algebra) is defined to be the norm closure of an ascending sequence of finite dimensional C*-algebras M_n, where M_n is the C*-algebra of the n × n matrices with entries in C. Here the index n = (n_1, ..., n_k) represents the semi-simple matrix algebra M_n = M_{n_1} ⊕ ... ⊕ M_{n_k}. The ascending sequence mentioned above can be written as

M_1 →^{φ_1} M_2 →^{φ_2} M_3 → ...,

where the M_i are the finite dimensional C*-algebras and the φ_i are the homomorphisms between such algebras. If A is an AF-algebra, then its dimension group is an invariant of the Morita equivalence of the algebra A, i.e. an isomorphism class in the category of finitely generated projective modules over A.
2.2. Abelian extensions of quadratic fields. Let D be a square-free integer and let k = Q(√D) be a quadratic number field, i.e. an extension of degree two of the field of rationals. Denote by O_k the ring of integers of k and by Λ an order in O_k, i.e. a subring of the ring O_k containing 1. The order Λ can be written in the form Λ = Z + fO_k, where the integer f ≥ 1 is the conductor of Λ. Denote by Cl(Λ) the ideal class group and by h_Λ = |Cl(Λ)| the class number of the ring Λ. If Λ ≅ O_k, then h_Λ coincides with the class number h of the field k. The integer h ≤ h_Λ is always a divisor of h_Λ; the latter is given by the formula:

h_Λ = (h f / e_f) ∏_{p | f} (1 − (D/p) 1/p),   (2.1)

where e_f is the index of the group of units of Λ in the group of units of O_k, p is a prime number and (D/p) is the Legendre symbol. Let K^ab be the maximal abelian extension of the field k modulo conductor f ≥ 1. The class field theory says that

Gal(K^ab | k) ≅ Cl(Λ),

where Gal(K^ab | k) is the Galois group of the extension (K^ab | k). The K^ab is the Hilbert class field (i.e. a maximal unramified abelian extension) of k if and only if f = 1.
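As a quick sanity check of formula (2.1) (this specialisation is ours, not the paper's): for f = 1 the product over p | f is empty and e_f = 1, so the formula collapses to h_Λ = h, consistent with Λ ≅ O_k.

```latex
% Specialising the class number formula (2.1) to conductor f = 1:
% the empty product equals 1 and e_1 = 1, hence h_Lambda reduces to h.
h_{\Lambda} \;=\; \frac{h\cdot f}{e_f}\,\prod_{p\,\mid\,f}\Bigl(1-\Bigl(\tfrac{D}{p}\Bigr)\tfrac{1}{p}\Bigr)
\;\overset{f=1}{=}\; h .
```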
For D < 0 an explicit construction of generators of the field K ab is realized by elliptic curves with complex multiplication, see e.g. [Neukirch 1999] [9, Theorem 6.10]. For D > 0 an explicit construction of generators of the field K ab is realized by noncommutative tori with real multiplication [10, Theorem 6.4.1].
2.3. Shafarevich-Tate group of elliptic curve. The Shafarevich-Tate group X(E (K)) is a measure of failure of the Hasse principle for the elliptic curve E (K). Recall that if E (K) has a K-rational point, then it has also a K v -point for every completion K v of the number field K. A converse of this statement is called the Hasse principle. In general, the Hasse principle fails for the elliptic curve E (K).
Denote by H 1 (K, E ) the first Galois cohomology group of E (K) [Silverman 1985] [14, Appendix B]. There exists a natural homomorphism
\[
\omega : H^1(K, \mathcal{E}) \longrightarrow \prod_v H^1(K_v, \mathcal{E}), \tag{2.3}
\]
where H 1 (K v , E ) is the first Galois cohomology over the field K v . The Shafarevich-Tate group of an elliptic curve E (K) is
\[
X(\mathcal{E}(K)) \;:=\; \mathrm{Ker}\ \omega. \tag{2.4}
\]
The group X(E (K)) is trivial if and only if the elliptic curve E (K) satisfies the Hasse principle.
Remark 2.1. The Shafarevich-Tate group X(A(K)) of an abelian variety A(K) over the number field K is defined similarly and has the same properties as X(E (K)).
3. Proofs
3.1. Proof of theorem 1.1. For the sake of clarity, let us outline the main ideas. Our proof is based on a "rigidity principle" for extensions of the pseudo-lattice Z + θZ corresponding to the algebra A RM . Such a rigidity follows from the class field theory for the real quadratic field k = Q(θ). Namely, composing the canonical embedding K 0 (A RM ) ⊆ K 0 (A RM ⋊ E (K)) with the canonical tracial state τ on A RM ⋊ E (K), one gets an inclusion
\[
\mathbb{Z} + \theta\mathbb{Z} \;\subseteq\; \lambda_1\mathbb{Z} + \cdots + \lambda_m\mathbb{Z}, \tag{3.1}
\]
where the λ i are generators of the pseudo-lattice τ (K 0 (A RM ⋊ E (K))). It is easy to see that each λ i ∈ R is an algebraic integer. But the crossed product (1.1) depends solely on the algebra A RM , see formula (3.5). Therefore the extension (3.1) satisfies a "rigidity principle". In other words, the arithmetic of the number field k(λ i ) must be controlled by the arithmetic of the field k. It is well known that this happens if and only if k(λ i ) ∼ = K ab , where K ab is the maximal abelian extension of the field k modulo conductor f ≥ 1. Thus m = h Λ , where h Λ is the class number of the order Λ ⊆ O k ; we refer the reader to (2.1) for an explicit formula. We pass to a detailed argument by splitting the proof in a series of lemmas and corollaries.

Lemma 3.1. Each generator λ i of the pseudo-lattice τ (K 0 (A RM ⋊ E (K))) ⊂ R is an algebraic integer.

Proof. Recall that the endomorphism ring Λ of the pseudo-lattice Z + θZ is an order Z + f O k in the number field k. In particular, since f ≠ 0 we conclude that Λ is a non-trivial ring, i.e. Λ is not isomorphic to Z.
Recall that the endomorphisms of the pseudo-lattice λ 1 Z + · · · + λ m Z ⊂ R coincide with multiplication by real numbers. In other words, the ring End (λ 1 Z + · · · + λ m Z) is the coefficient ring of the Z-module λ 1 Z + · · · + λ m Z ⊂ R [Borevich & Shafarevich 1966] [3, p. 87]. Up to a multiple, any such ring must be an order in a real number field K. Thus we have a field extension K | k and the following inclusions:
\[
\Lambda \;\subseteq\; \mathrm{End}\,(\lambda_1\mathbb{Z} + \cdots + \lambda_m\mathbb{Z}) \;\subseteq\; O_K,
\]
where O K is the ring of integers of the field K.
On the other hand, it is known that the full Z-module λ 1 Z + · · · + λ m Z is contained in its coefficient ring O K [Borevich & Shafarevich 1966] [3, Lemma 1, p. 88]. In particular, each λ i is an algebraic integer. Lemma 3.1 is proved.
It is useful to scale the RHS of the inclusion (3.1), dividing it by the real number λ m , so that λ m = 1. Such a normalization is always possible, since the embedding τ : K 0 (A RM ⋊ E (K)) → R is defined up to a scalar multiple. Thus we can rewrite the inclusion (3.1) in the following form.

Remark 3.2. After a normalization making λ m = 1, the inclusion (3.1) takes the form
\[
\mathbb{Z} + \theta\mathbb{Z} \;\subseteq\; \mathbb{Z} + \theta\mathbb{Z} + \lambda_1\mathbb{Z} + \cdots + \lambda_{m-1}\mathbb{Z}.
\]

Lemma 3.3. K ∼ = K ab , where K ab is the maximal abelian extension of the field k modulo conductor f ≥ 1.

Proof. Let E (K) be an elliptic curve over the number field K and let A RM = F (E (K)) be the corresponding noncommutative torus with real multiplication [10, Section 1.3]. The functor F is faithful on the category of K-rational elliptic curves and therefore F has a correctly defined inverse F −1 . Thus E (K) = F −1 (A RM ) and one can write the crossed product (1.1) in the form
\[
\mathcal{A}_{RM} \rtimes \mathcal{E}(K) \;=\; \mathcal{A}_{RM} \rtimes F^{-1}(\mathcal{A}_{RM}). \tag{3.5}
\]
Consider the endomorphism ring
\[
M \;=\; \mathrm{End}\,(\lambda_1\mathbb{Z} + \cdots + \lambda_m\mathbb{Z}) \;\subset\; \mathbb{R}. \tag{3.6}
\]
On the other hand, it follows from the formula (3.5) that the crossed product A RM ⋊ E (K) depends only on the inner structure of the algebra A RM . The same is true for the inclusion of groups K 0 (A RM ) ⊆ K 0 (A RM ⋊ E (K)), the inclusion of pseudo-lattices τ (K 0 (A RM )) ⊆ τ (K 0 (A RM ⋊ E (K))) ⊂ R and the inclusion of rings End (τ (K 0 (A RM ))) ⊆ End (τ (K 0 (A RM ⋊ E (K)))). In particular, the last inclusion says that the arithmetic of the number field K in formula (3.6) is controlled by the arithmetic of the field k. In other words, there exists an isomorphism
\[
\mathrm{Gal}\,(K\,|\,k) \;\cong\; Cl\,(\Lambda),
\]
where Gal (K|k) is the Galois group of the extension k ⊆ K. Therefore K is the maximal abelian extension of the field k modulo conductor f ≥ 1, see Section 2. Lemma 3.3 is proved.

Corollary 3.4. m = h Λ .

Remark 3.5. To prove our results, we do not need an explicit formula for the values of the generators λ i in terms of θ ∈ k; however, we refer an interested reader to [10, Theorem 6.4.1] for such a formula.
Corollary 3.6. τ (K 0 (A RM ⋊ E (K))) = Z + θZ + λ 1 Z + · · · + λ hΛ−1 Z.

Proof. The formula follows from remark 3.2 and corollary 3.4.
Corollary 3.7. The rank of the abelian group K 0 (A RM ⋊ E (K)) is equal to h Λ + 1.

Proof. Indeed, the rank of the abelian group K 0 (A RM ⋊ E (K)) is equal to the number of generators of the pseudo-lattice τ (K 0 (A RM ⋊ E (K))) ⊂ R. It follows from corollary 3.6 that such a number is equal to h Λ + 1. Corollary 3.7 follows.

Theorem 1.1 follows from the corollaries 3.6 and 3.7.
3.2. Proof of corollary 1.2. Let E (K) be an elliptic curve over the number field K. The Mordell-Weil Theorem says that E (K) ∼ = Z r ⊕ E tors (K), where r = rk E (K) and E tors (K) is a finite abelian group.
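A standard example illustrating the statement, added for the reader's convenience: for the curve E : y² = x³ − x over Q one has r = 0 and
\[
\mathcal{E}(\mathbb{Q}) \;\cong\; \mathbb{Z}/2\mathbb{Z} \oplus \mathbb{Z}/2\mathbb{Z},
\]
the torsion subgroup being generated by the 2-torsion points (0, 0) and (±1, 0).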
Consider again the pseudo-lattice τ (K 0 (A RM ⋊ E (K))) ⊂ R and substitute E (K) ∼ = Z r ⊕ E tors (K) into the crossed product; denote the resulting decomposition by (3.8). In the last line of (3.8) we have the following two terms, (i) and (ii). For r ≥ 2 the formula is proved by induction. Namely, it is verified directly that the case i + 1 adds an extra generator λ i+1 to the pseudo-lattice Z + θZ + λ 1 Z + · · · + λ i Z corresponding to the case i.
It follows from (i) and (ii) that, after a scaling, one gets an inclusion of the pseudo-lattices (3.9). From (3.8) and (3.9) we get an equality (3.10). Using formula (1.2) and the calculations of item (ii), one obtains from (3.10) the following equation:
\[
\mathbb{Z} + \theta\mathbb{Z} + \lambda_1\mathbb{Z} + \cdots + \lambda_{h_\Lambda-1}\mathbb{Z} \;=\; \mathbb{Z} + \theta\mathbb{Z} + \lambda_1\mathbb{Z} + \cdots + \lambda_{r}\mathbb{Z}.
\]

3.3. Proof of corollary 1.3. A relation between quadratic number fields and ranks of elliptic curves has been known for a while [Goldfeld 1976] [7]. In fact, the famous Birch and Swinnerton-Dyer Conjecture uses this relation to compare (special values of) the Dirichlet L-functions of a number field with the Hasse-Weil L-function of an elliptic curve [Swinnerton-Dyer 1967] [15]. Let us mention a recent generalization of this idea by [Bloch & Kato 1990] [2].
Our idea is to show that there exists a natural correspondence between the arithmetic of ideals of the real quadratic fields and the Hasse principle for elliptic curves. Namely, denote by A (i) RM the companion noncommutative tori of the algebra A RM , indexed by the ideal classes i ∈ Cl (Λ). Since the companion algebras A (i) RM have the same endomorphisms, so will their "quantum dynamics", i.e. the crossed products A (i) RM ⋊ E (K), see lemma 3.8. On the other hand, we establish a natural isomorphism between the abelian groups K 0 (A RM ) ∼ = H 1 (K, E ) and K 0 (A RM ⋊ E (K)) ∼ = ∏ v H 1 (K v , E ), see lemma 3.9. In view of formula (2.3), this means that the preimage of each cocycle in ∏ v H 1 (K v , E ) under the homomorphism ω consists of the h Λ ≥ 1 distinct cocycles of the H 1 (K, E ). In other words, we get an inclusion Cl (Λ) ⊂ X(E (K)).
A precise formula is derived from Atiyah's pairing between the K-theory and the K-homology of C * -algebras, see e.g. [10, Section 10.2]. Namely, it is known that K 0 (A θ ) ∼ = K^0 (A θ ), where K^0 (A θ ) is the zero K-homology group of the noncommutative torus A θ [Hadfield 2004] [8, Proposition 4]. Repeating the argument for the group K^0 (A RM ), we get another subgroup Cl (Λ) ⊂ X(E (K)). In view of the Atiyah pairing, one gets X(E (K)) ∼ = Cl (Λ) ⊕ Cl (Λ). We pass to a detailed argument by splitting the proof in a series of lemmas.
Lemma 3.8. Let A (i) RM be the companion noncommutative tori of the algebra A RM . In this case we have
\[
\mathcal{A}^{(i)}_{RM} \rtimes \mathcal{E}(K) \;\cong\; \mathcal{A}_{RM} \rtimes \mathcal{E}(K) \quad \text{for all}\ i \in Cl\,(\Lambda).
\]

Lemma 3.9. Let H 1 (K, E ) and H 1 (K v , E ) be the first Galois cohomology over the field K and over the completion K v of K, respectively. There exists a natural isomorphism between the following groups:
\[
H^1(K, \mathcal{E}) \;\cong\; K_0(\mathcal{A}_{RM}), \qquad \prod_v H^1(K_v, \mathcal{E}) \;\cong\; K_0(\mathcal{A}_{RM} \rtimes \mathcal{E}(K)). \tag{3.14}
\]

Proof. (i) Let us show that H 1 (K, E ) ∼ = K 0 (A RM ). Indeed, such an isomorphism is a special case of [11, Theorem 1.1], saying that H 1 (Gal (C|K), Aut ab C (V )) ∼ = K 0 (A V ) for a variety V. For that, one has to restrict to the case V = E (K) and notice that A V = A RM . On the other hand, since E (K) is an algebraic group, one gets Aut ab C (E (K)) ∼ = E (K). The rest of the formula follows from the definition of the group H 1 (K, E ).
(ii) Let us show that ∏ v H 1 (K v , E ) ∼ = K 0 (A RM ⋊ E (K)). An idea of the proof is to construct an AF-algebra, A, connected to the profinite group ∏ v H 1 (K v , E ); we refer the reader to Section 2.1.4 or [Blackadar 1986] [1, Chapter 7] for the definition of an AF-algebra. Next we show that the crossed product A RM ⋊ E (K) embeds into A, so that K 0 (A RM ⋊ E (K)) ∼ = K 0 (A). The rest of the proof will follow from the properties of the AF-algebra A. We pass to a detailed argument. Recall that the profinite group ∏ v H 1 (K v , E ) is the inverse limit of a system of finite groups G k :
\[
\prod_v H^1(K_v, \mathcal{E}) \;=\; \varprojlim\, G_k, \tag{3.15}
\]
and denote by C[G k ] the group C * -algebra corresponding to G k . Notice that the C[G k ] is a finite-dimensional C * -algebra. The inverse limit (3.15) defines an ascending sequence of the finite-dimensional C * -algebras
\[
C[G_1] \longrightarrow C[G_2] \longrightarrow C[G_3] \longrightarrow \cdots
\]
In other words, the limit A is an AF-algebra, such that K 0 (A) ∼ = ∏ v H 1 (K v , E ). To prove that K 0 (A RM ⋊ E (K)) ∼ = K 0 (A), we shall use the "rigidity principle" described in Section 3.1. Namely, the extension H 1 (K, E ) ⊂ ∏ v H 1 (K v , E ) is defined solely by the group H 1 (K, E ) [Silverman 1985] [14, Appendix B]. Since H 1 (K, E ) ∼ = K 0 (A RM ) and ∏ v H 1 (K v , E ) ∼ = K 0 (A), we conclude that the extension K 0 (A RM ) ⊂ K 0 (A) is defined by the group K 0 (A RM ) alone. But the extension K 0 (A RM ) ⊂ K 0 (A RM ⋊ E (K)) is the only extension with such a property. Thus K 0 (A) ∼ = K 0 (A RM ⋊ E (K)) and the crossed product A RM ⋊ E (K) embeds into the AF-algebra A.
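Recall why each C[G k ] is finite-dimensional (a standard fact, recorded here for convenience): for a finite abelian group G the group C * -algebra splits over the |G| characters of G, so that
\[
C[G] \;\cong\; \underbrace{\mathbb{C} \oplus \cdots \oplus \mathbb{C}}_{|G|\ \text{copies}}, \qquad K_0\bigl(C[G]\bigr) \;\cong\; \mathbb{Z}^{\,|G|}.
\]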
To finish the proof of lemma 3.9, we recall that the A (i) RM , i ∈ Cl (Λ), are the companion noncommutative tori of the A RM . Consider the group homomorphism h : K 0 (A RM ) → K 0 (A RM ⋊ E (K)) induced by the natural embedding; since by lemma 3.8 all companion algebras give one and the same crossed product, the h Λ corresponding classes are identified under h. In other words, one gets Ker h ∼ = Cl (Λ), where Cl (Λ) is the class group of the order Λ in the real quadratic field Q(θ).
Lemma 3.10. X(E (K)) ∼ = Cl (Λ) ⊕ Cl (Λ).

Proof. Recall that H 1 (K, E ) ∼ = K 0 (A RM ) and ∏ v H 1 (K v , E ) ∼ = K 0 (A RM ⋊ E (K)), see lemma 3.9. Therefore, in view of the formulas (2.3) and (2.4), the abelian group Cl (Λ) is an obstacle to the Hasse principle for the elliptic curve E (K). In other words, Cl (Λ) ⊂ X(E (K)).
To calculate an exact relation between the groups Cl (Λ) and X(E (K)), recall that the K-homology is the dual theory to the K-theory, see e.g. [Blackadar 1986] [1,Section 16.3]. Roughly speaking, cocycles in K-theory are represented by vector bundles. Atiyah proposed using elliptic operators to represent the K-homology cycles. An elliptic operator can be twisted by a vector bundle, and the Fredholm index of the twisted operator defines a pairing between the K-homology and the K-theory with values in Z.
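In symbols, the pairing just described takes the form
\[
\langle\,\cdot\,,\,\cdot\,\rangle : K^0(\mathcal{A}) \times K_0(\mathcal{A}) \longrightarrow \mathbb{Z}, \qquad \langle [D], [E] \rangle \;=\; \operatorname{Index}(D_E),
\]
where D_E denotes the elliptic operator D twisted by the vector bundle (finitely generated projective module) E; this is the standard formulation and is added here only to fix notation.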
In particular, it is known that for the algebra A θ it holds K 0 (A θ ) ∼ = K^0 (A θ ), where K^0 (A θ ) is the zero K-homology group of A θ [Hadfield 2004] [8, Proposition 4]. Repeating the argument for the group K^0 (A RM ), one can prove an analog of theorem 1.1 for such a group. In other words, we get another subgroup Cl (Λ) ⊂ X(E (K)). Since there are no other duals to the K-theory of C * -algebras, we conclude from the Atiyah pairing that X(E (K)) ∼ = Cl (Λ) ⊕ Cl (Λ). Lemma 3.10 is proved. Corollary 1.3 follows from lemma 3.10.
Remark 3.11. The reader can observe that the construction of a generator of E (K) is similar to the construction of an "ideal number" (i.e. a principal ideal) of the number field k. Namely, it is well known that not every ideal of the ring Λ ⊂ O k is principal; the obstruction is a non-trivial group Cl (Λ). However, this can be repaired in a bigger field K = K ab : there exists a finite extension k ⊆ K such that every ideal of Λ is principal in the ring O K . Likewise, one cannot in general construct a generator of E (K) by a finite descent; the obstruction is a non-trivial group X(E (K)). However, in an extension A RM ⋊ E (K) of the coordinate ring A RM of E (K), the descent will always be finite and give a generator of the E (K). Such an analogy explains the formula X(E (K)) ∼ = Cl (Λ) ⊕ Cl (Λ) on an intuitive level. Notice also that the A RM ⋊ E (K) is the coordinate ring of an abelian variety A(K), which is related to the Euler variety V E coming from the continued fraction of θ [10, Section 6.2.1].
β-Cell Replication Is Increased in Donor Organs From Young Patients After Prolonged Life Support
OBJECTIVE This study assesses β-cell replication in human donor organs and examines possible influences of the preterminal clinical conditions. RESEARCH DESIGN AND METHODS β-Cell replication was quantified in a consecutive series of n = 363 human organ donors using double immunohistochemistry for Ki67 and insulin. Uni- and multivariate analysis was used to correlate replication levels to clinical donor characteristics and histopathologic findings. RESULTS β-Cell replication was virtually absent in most donors, with ≤0.1% Ki67-positive β-cells in 72% of donors. A subpopulation of donors, however, showed markedly elevated levels of replication of up to 7.0% Ki67-positive β-cells. β-Cell replication was accompanied by the increased replication of glucagon-, somatostatin-, and CA19.9-positive cells. Prolonged life support, kidney dysfunction, relatively young donor age, inflammatory infiltration, and prolonged brain death before organ retrieval were all found to be significantly associated with an increased level (≥90th percentile) of β-cell replication, with the first three risk factors being independent predictors. Increased β-cell replication was most often noted in relatively young donors (≤25 years) who received prolonged (≥3 days) life support (68%); in contrast, it was rare in donors with a short duration of life support regardless of age (1%). Prolonged life support was accompanied by increased levels of CD68+ and LCA/CD45+ infiltration in the pancreatic parenchyma. CONCLUSION These results indicate that preterminal clinical conditions in (young) organ donors can lead to increased inflammatory infiltration of the pancreas and to increased β-cell replication.
Human diabetes is a heterogeneous group of disorders with increased glycemia levels and a decreased functional β-cell mass in common. Type 1A diabetes is characterized by a T-cell-mediated autoimmune destruction of 50-70% of β-cell mass at clinical onset, whereas type 2 diabetes is characterized by a smaller decrease in β-cell mass in association with insulin resistance and loss of β-cell function (1,2). Clinical interventions aimed at restoring a functionally adequate endogenous β-cell mass are therefore of considerable interest, but they are hampered by a relative lack of knowledge about the in vivo conditions that stimulate β-cell replication and neoformation in the adult pancreas (3). Quantification of β-cell replication in the developing human pancreas shows that replication is high in the early fetal pancreas, but decreases rapidly after birth and is only rarely observed in the adult pancreas (4-7). Interestingly, several cases have been described in which patients with a variety of diseases, including lobar pneumonia, hemochromatosis, or acute liver disease, were reported to display prominent mitotic activity in adult islet tissue (8-10). Such chance observations indicate that although replication in the adult pancreas is normally low, adult human islet cells apparently do retain a capacity for replication that can be activated under selected clinical conditions. To characterize such conditions we investigated β-cell replication in a large consecutive series of human organ donors and correlated our findings to the preterminal clinical characteristics of the patients involved.
RESEARCH DESIGN AND METHODS
Collection of pancreatic tissue. Pancreas biopsy specimens were obtained from the Beta Cell Bank in Brussels, which operates for a clinical trial on islet cell transplantation in Belgium (11). The biopsy specimens were taken as part of a quality control procedure that was approved by the medical ethics committee of our university. A single biopsy specimen of ≈0.5 cm³ was taken from the body region of the cold-preserved (University of Wisconsin preservation solution flushed) donor pancreas immediately before the remaining tissue was digested for islet isolation. Biopsy specimens were fixed in 4% (v/v) phosphate-buffered formaldehyde, pH 7.4, and embedded in paraffin for routine histopathologic examination. Tissue blocks from 363 of 500 consecutive donors fulfilled all inclusion criteria (minimal biopsy surface area >0.25 cm²; minimal clinical data including age, sex, BMI, time in hospital, cause of death; and availability of a serum sample) and were analyzed by immunohistochemistry.

Immunohistochemistry. Consecutive 4-μm paraffin sections were immunohistochemically double stained for the replication marker Ki67 (mouse anti-Ki67; Dako, Glostrup, Denmark) and insulin (guinea pig anti-insulin; a gift of Dr. Van Schravendijk, Brussels Free University, Brussels, Belgium), glucagon (rabbit anti-glucagon; Dr. Van Schravendijk), somatostatin (rabbit anti-SRIF; a gift of Dr. De Mey, Brussels Free University) or synaptophysin (rabbit anti-synaptophysin; Dako). Rabbit anti-Ki67 (Acris Antibodies, Hiddenhausen, Germany) was used in conjunction with mouse anti-carbohydrate antigen-19.9 (Novocastra Laboratories, Newcastle upon Tyne, U.K.) and with mouse anti-LCA/CD45 (Clones 2B11 plus PD7/26; Dako). Double stainings were also performed using rabbit anti-phosphohistone H3 (Upstate Biotechnology, Lake Placid, NY), mouse anti-LCA (Dako), mouse anti-CD68 (clone KP1; Dako) or mouse anti-CD3 (Novocastra Laboratories), and guinea pig anti-insulin. Binding of primary antibodies was detected with biotinylated anti-mouse or anti-rabbit Ig (Amersham, Little Chalfont, U.K.) or biotinylated anti-guinea pig Ig (Vector Laboratories, Burlingame, CA) in combination with streptavidin horseradish peroxidase or alkaline phosphatase complex (both from Dako). For immunofluorescence microscopy the following second antibodies were used: FITC anti-rabbit Ig, AMCA anti-guinea pig Ig, FITC anti-guinea pig Ig, Cy3 anti-mouse Ig, Cy3 anti-rabbit Ig (all from Jackson ImmunoResearch Laboratories, West Grove, PA), Alexa Fluor 488 anti-guinea pig and anti-rabbit Ig, and Alexa Fluor 647 anti-rabbit and anti-mouse Ig (all from Invitrogen, Carlsbad, CA).

Quantification of replication and relative β-cell area. Islet cell replication was assessed in slides double stained for the replication marker Ki67 and for insulin, glucagon, somatostatin, and the panendocrine marker synaptophysin. Ductal cell replication was assessed in slides double stained for the replication marker Ki67 and for the ductal marker carbohydrate antigen-19.9. All quantitative analyses were performed by transmitted light microscopy on coded slides at a final magnification of ×400 by two independent observers. Minimally 1,000 cells per case were evaluated, except for glucagon and somatostatin, for which 400 and 100 cells per case were evaluated, respectively. Relative β-cell area was determined according to Rahier et al. (12) on immunostained sections using a 180-point counting grid at a final magnification of ×140. The number of points overlaying insulin-immunoreactive cells (Ni) and parenchyma (Np) was counted in 20 random microscope fields per case. Grid points overlaying the lumen of ducts, arteries, connective tissue, or fatty tissue were not included in the analysis. Relative β-cell area was calculated as (Ni/Np) × 100 and expressed as a percentage. All morphometric analyses were carried out in a blinded fashion on coded slides by two independent observers.

Quantification of leukocytic infiltration. Leukocytic infiltration was assessed in slides double stained for insulin and leukocyte common antigen (LCA/CD45), CD68, or CD3. The number of infiltrating CD68+ and LCA/CD45+ cells was quantified at a final magnification of ×400 and is expressed as mean ± SE of cell numbers per 10 high-power microscope fields (corresponding to a total area of 2.83 mm²).
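Both quantitative read-outs described above are simple ratios; the following sketch (ours, with hypothetical variable names, not the study's actual analysis code) makes the two computations explicit.

def percent_ki67_positive(n_ki67_pos: int, n_cells_counted: int) -> float:
    # Percent Ki67-positive beta-cells; at least 1,000 cells were scored per case.
    return 100.0 * n_ki67_pos / n_cells_counted

def relative_beta_cell_area(n_insulin_points: int, n_parenchyma_points: int) -> float:
    # Relative beta-cell area (Ni/Np) x 100 from the 180-point grid counts,
    # summed over the 20 random microscope fields scored per case.
    return 100.0 * n_insulin_points / n_parenchyma_points

# Example with made-up counts: 2 Ki67+ cells among 1,000 beta-cells gives 0.2%,
# and 23 insulin-positive grid points over 1,800 parenchyma points gives ~1.3%.
print(percent_ki67_positive(2, 1000))      # -> 0.2
print(relative_beta_cell_area(23, 1800))   # -> 1.277...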
RESULTS
Increased β-cell replication is found in a subset of organ donors. In a consecutive series of n = 363 donor pancreata (donor age 2-75 years), immunohistochemical double staining for the replication marker Ki67 and insulin indicated for most organs (72%) a low level of replication (≤0.1%) in the 1,000 β-cells that were evaluated for each organ. The remaining donors (28%) presented levels of β-cell replication between 0.2 and 7.0% (Table 1). Donors with a high level of replication (defined as ≥90th percentile; n = 36 patients) were between 6 and 66 years of age, with different causes of death (Table 2). The replicating β-cells were found scattered throughout the islets (Fig. 1A-D); they were only infrequently found to be associated with ducts. Donors who showed increased levels of Ki67-positive β-cells also presented with mitotic figures and cells that costained for insulin and the G2-to-M transition marker phosphohistone H3 (Fig. 1E and F). When the n = 36 donors with a high level of β-cell replication were compared with the n = 327 donors with a lower level of β-cell replication, they were found to be of significantly younger median age (29 vs. 48 years; P < 0.0001; Table 2), whereas BMI in the two groups was comparable. A high male-to-female ratio (24 male vs. 12 female donors) in the group of donors with high β-cell replication was most probably caused by the high number of male donors in the youngest age group (37 male vs. 14 female donors in the age-group ≤25 years vs. 189 male and 172 female donors in the total study population; sex data on two patients were missing).

Relative β-cell area in organ donors with a high level of β-cell replication. When the relative β-cell area was determined in the group with high replication (≥90th percentile; 36 patients), a mean relative β-cell area of 1.78 ± 0.18% was found. This mean relative area was higher than, but not statistically different from, that found in the n = 36 matched (age, sex, and BMI) controls from the group with a low level of β-cell replication (1.28 ± 0.05%; P = 0.085). To assess whether severe β-cell degranulation could have influenced quantification of β-cell area, we performed double labeling for the β-cell transcription factor Nkx6.1 and insulin in a subset of donors from both groups. It was observed that most (>90%) Nkx6.1-positive cells also showed cytoplasmic insulin immunoreactivity, indicating that the majority of β-cells were insulin-positive and that the fraction of severely degranulated "hidden" β-cells was relatively low (Fig. 1G and H).

Increased β-cell replication is accompanied by an increase in both endocrine and exocrine cell replication. Immunohistochemical double labeling for Ki67 and the islet cell markers glucagon and somatostatin showed that a 20-fold higher mean level of β-cell replication in the ≥90th percentile group was accompanied by a 52-fold higher level of α-cell replication and a fivefold higher level of δ-cell replication (Figs. 1I-L and 2). Virtually all replicating islet cells showed positivity for the panendocrine marker synaptophysin (Fig. 1M and N). Replication was not only observed in islet cells but also throughout the pancreatic parenchyma outside the islets, where it was observed in acinar cells and ductal cells (Fig. 1O and P).

Ductal replication was quantified on Ki67/CA19.9 co-immunostained slides, and a 3.7-fold higher level of ductal cell replication was found in the group with high β-cell replication (Fig. 2).

Increased β-cell replication is associated with prolonged duration of life support and young donor age.
Multi- and univariate analysis was performed to test 13 clinical parameters (risk factors) for their association with β-cell replication. Thirty-six donors with a high level of β-cell replication (≥90th percentile) were compared with the n = 327 patients with low replication (<90th percentile). Seven risk factors showed a significant association in univariate analysis: prolonged duration of mechanical respiration (OR 31.1), prolonged duration of stay in an intensive care unit (24.5), kidney dysfunction (7.1), young donor age (6.7), increased CD68+ monocytic infiltration (6.2), increased LCA/CD45+ leukocytic infiltration (3.7), and a prolonged duration of brain death (3.0). Three of these parameters were found to be independent predictors of β-cell replication in multivariate analysis: young donor age, prolonged duration of stay in an intensive care unit, and kidney dysfunction (Table 3). High β-cell replication levels were observed in 15 of 22 (68%) donors who combined prolonged life support with young donor age (≤25 years), which is significantly more frequent than in donors of the same age with shorter duration of life support (1 of 28 or 4%; P < 0.001, χ² test) (Table 4).
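The statistical workflow described above can be sketched in a few lines; the snippet below (ours, run on a synthetic table, not the study's data; the column names are invented) shows how such odds ratios are obtained with logistic regression.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 363
df = pd.DataFrame({
    "prolonged_icu": rng.integers(0, 2, n),       # >= 3 days in intensive care
    "kidney_dysfunction": rng.integers(0, 2, n),  # serum creatinine >= 150 umol/l
    "young_donor": rng.integers(0, 2, n),         # donor age <= 25 years
})
# Synthetic outcome: high beta-cell replication (>= 90th percentile).
logit_true = -3 + 2.5 * df["prolonged_icu"] + 1.5 * df["young_donor"]
df["high_replication"] = (rng.random(n) < 1 / (1 + np.exp(-logit_true))).astype(float)

X = sm.add_constant(df[["prolonged_icu", "kidney_dysfunction", "young_donor"]])
fit = sm.Logit(df["high_replication"], X).fit(disp=0)
print(np.exp(fit.params))  # odds ratios for each risk factor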
Comparison with the older age-group shows that in older patients, the prevalence of high replication levels is lower and occurs after a more prolonged stay in an intensive care unit. Regardless of donor age, the average frequency of high replication was 1% in donors with <3 days stay in an intensive care unit. Average duration of mechanical ventilation was 6 ± 3 days in the ≥90th percentile group vs. 2 ± 2 days for the <90th percentile group (P < 0.001).

Six clinical parameters, including hyperglycemia and use of steroid hormones, showed no significant association with increased β-cell replication. As blood glucose values may change rapidly, we also tested serum fructosamine as a surrogate marker for prolonged hyperglycemia and compared levels of circulating C-peptide. We found no significant difference in the level of serum fructosamine between the 36 donors with high replication (152 ± 31 μmol/l) and their 36 matched controls (175 ± 24 μmol/l), nor did we find significant differences in the level of circulating C-peptide (1.75 ± 3.22 vs. 1.35 ± 2.43 μg/l).
Prolonged duration of life support is associated with increased inflammatory infiltration.
Immunohistochemical staining for CD68 showed a diffuse infiltration of positive cells throughout the pancreatic parenchyma. The infiltration was variable between patients but was most pronounced in the group with high replication and in patients on prolonged life support (Fig. 3A and B). Small numbers of CD68+ cells were observed in the islet interstitium, but no apparent colocalization with islet cell replication was found. Immunohistochemical staining for LCA/CD45 and CD3 showed focal areas of infiltration around the vasculature, in the interstitial connective tissue, and in the parenchyma, but was rare in islets (Fig. 3C-F). Quantification showed that prolonged life support was accompanied by a significantly increased infiltration of CD68+ and LCA/CD45+ cells in both young and older donors, with the increase in CD68 positivity preceding the increase in LCA/CD45 positivity (Table 5). Donors with high β-cell replication (≥90th percentile; n = 36 patients) showed a significant (P < 0.001) 1.7-fold increase in CD68 positivity (274 ± 26 vs. 161 ± 18), and a significant (P < 0.001) 2.0-fold increase in LCA/CD45 positivity (85 ± 12 vs. 43 ± 6) when compared with n = 36 matched patients with low replication.
DISCUSSION
In the present study we investigated β-cell replication levels in the normal human pancreas. We identified a subgroup of organ donors who presented with increased levels of β-cell replication and correlated our findings to the periterminal clinical conditions of the patients involved. We report that a prolonged period on life support (≥3 days), kidney dysfunction, a relatively young donor age (≤25 years), inflammatory infiltration, and a prolonged period of brain death before organ retrieval were all found to be significantly associated with an increased level (≥90th percentile) of β-cell replication. The effect was most pronounced in young donors who received prolonged life support; in contrast, it was rare in donors with a short duration of life support regardless of age. The increase in replicative rate was not limited to β-cells but was also observed in α-cells, δ-cells, and ductal cells. We found that prolonged life support was associated with increased pancreatic infiltration of both CD68+ monocytic cells and LCA/CD45+ leukocytes. The increase in CD68 positivity appeared to precede the increase in LCA/CD45 positivity by several days. These results indicate that preterminal clinical conditions in organ donors can lead to both increased inflammatory infiltration of the pancreatic parenchyma and to an activation of β-cell replication that is most pronounced in patients in the younger age category.
Replication of adult human β-cells is a rare finding in histopathologic studies of human pancreas. In a study of 327 autopsy cases, only 14 patients were found to express one or more mitotic figures in the islets of Langerhans (9). In a similar study of 174 autopsy cases, only 18 patients showed one or more islet cell mitoses (10). Both studies were limited by the lack of specific immunohistochemical techniques to identify islet cell types and by the lack of sensitive techniques for the detection of islet cell replication. More recent studies, using immunohistochemistry for the nuclear marker Ki67 that is expressed during the late G1, S, G2, and M phases of the cell cycle, found that β-cell replication decreased progressively from 3.2% at 17-32 weeks of gestation to 1.1% perinatally (5). After birth, the degree of β-cell replication was found to drop further, with initial levels being sufficient to account for the expansion of β-cell mass from birth to adulthood, but with β-cell replication levels decreasing hyperbolically with age to reach levels that are generally <0.1% in young adults (6,7). The low level of β-cell replication in the adult pancreas is supported by our present studies, in which 72% of donors show a replication level in this range. However, the remaining donors presented with replication levels that were significantly higher (0.2-7.0%), reaching levels normally found only in early fetal pancreas (4-6).

Table 3 legend. Uni- and multivariate analysis were performed to test the association between 13 clinical and histopathologic parameters (risk factors) and high levels of β-cell replication (≥90th percentile), including prolonged duration of mechanical respiration (≥3 days), prolonged time in the intensive care unit (≥3 days), kidney dysfunction (serum creatinine ≥150 μmol/l), young donor age (≤25 years), increased CD68+ cell infiltration (≥90th percentile), increased LCA/CD45+ cell infiltration (≥90th percentile), prolonged duration of brain death (≥12 h to start of cold perfusion), the use of steroid hormones (yes/no), high BMI (>30 kg/m²), hyperglycemia (glucose ≥200 mg/dl), hypotensive periods (systemic blood pressure <100 mmHg), liver damage (bilirubin ≥2 mg/dl combined with aspartate aminotransferase ≥25 units/l), and pancreas damage (amylase >200 units/l). Logistic regression analysis was performed with β-cell replication level as a dependent variable, with inclusion of all variables with P ≤ 0.10 in univariate analysis. Duration of mechanical ventilation was not included as a variable because of the number of missing data and the good correlation with time in the intensive care unit.
In the studied donor organs, high levels of β-cell replication were found to be accompanied by an increased replication of α-cells, δ-cells, and ductal cells. The increase in replication thus appears to be a generalized phenomenon, with virtually all pancreatic cell types being induced into a replicative state. In patients with a high replicative activity, we also noted the presence of mitotic figures inside the islets of Langerhans and positivity for the G2-to-M transition marker phosphohistone H3, albeit at a much lower frequency than that of Ki67 positivity. These observations indicate that Ki67-positive cells are driven toward a proliferative pathway, rather than toward a polyploid state that is relatively frequent in normal human pancreas (14).

When the 10% of patients with the highest replication level (P90) were correlated with the available clinical data, a significant association was found with a prolonged duration of life support, kidney dysfunction, relatively young donor age, inflammatory infiltration, and a prolonged period of brain death. A total of 68% of patients with both a prolonged life support and young donor age were found to present with high levels of β-cell replication, in contrast with 1% of patients with a shorter duration of life support, irrespective of age. These observations suggest that β-cell replication was induced only after admission to the hospital and took several days to develop: donors with high levels of replication had a duration of mechanical respiration that on average exceeded 6 days, whereas donors with low replication were on average mechanically respirated for 2 days. The mechanism behind this induction is unknown, but several hypotheses can be proposed.
A first possibility is that it is caused by prolonged treatment with drugs that are known to induce β-cell replication: patients in the intensive care unit often receive treatment with high doses of steroids, which were shown to induce a marked elevation of plasma insulin levels and 20- to 30-fold higher levels of islet cell replication in primates (15). However, we did not find any evidence for this possibility: treatment with steroids was not significantly associated with high β-cell replication, although the duration of drug treatment was not always known and the use of steroids may not always have been registered in the donor file. We therefore also tested for differences in circulating C-peptide levels, as steroid treatment was reported to result in increased circulating insulin levels (15), but no significant differences were observed.

A second possibility is that a prolonged period of hyperglycemia may contribute to the induction of β-cell replication in these patients. Although no significant differences in plasma glycemia could be found between the two groups, it cannot be excluded that such differences did exist before the time point of blood sampling just before death. We therefore measured fructosamine levels as a surrogate marker for prolonged hyperglycemia but did not find any evidence for a significant difference between the two groups.

A third possibility is that β-cell replication is activated by a prolonged period of hypoxia leading to cellular damage, as might be the case in the subpopulation of patients with extended life support. The presence of higher numbers of CD68+ macrophages and LCA/CD45+ leukocytes seen dispersed throughout the pancreatic parenchyma in such patients may be indicative of cellular damage, and the macrophages may be involved in clearing cellular debris. It is so far not known which signals are responsible for the proliferative stimulus. Cytokines may be released by pancreatic cells such as ductal cells (16), as well as by the infiltrating monocytes and leukocytes. Release of proinflammatory cytokines and/or leukocytic infiltration has been described in both human donor kidney and liver (17,18) and in rodent islets after brain death (19). Several recent studies suggested that β-cell replication and neogenesis are stimulated by inflammatory lesions induced by autoimmunity (20-23) or injury (24). Follow-up studies will be necessary to dissect the mechanism of the replicative response that is described in the present report. Gene expression analysis of human pancreas samples and isolated islet fractions collected during the present study may indicate the nature of the factors involved. Exposure of isolated human islets to proinflammatory cytokines in vitro may help establish their stimulatory effect on β-cell replication. Our observation of increased inflammatory infiltration throughout the pancreatic parenchyma in organ donors with extended life support is also relevant in the context of islet transplantation. It cannot be excluded that islets isolated from such donors may either contain higher numbers of passenger leukocytes or that islet cells are activated by cytokine exposure leading to changed allograft reactivity. It also warrants caution in the interpretation of histopathologic changes in postmortem pancreas in the context of presumed autoimmune lesions and stresses the importance of obtaining control groups that are adequately matched in terms of clinical history.
The presence of replicating β-cells in adult organ donors indicates that although such cells are rare under normal circumstances, they have retained their potential for growth and can be induced to enter the mitotic cycle upon activation by preterminal clinical conditions. Alternatively, replicating cells may be derived from progenitor cells (24) or from existing adult cells, such as peripheral blood monocytes (25), by a process of transdifferentiation.

In summary, we have quantified β-cell replication in a large consecutive series of human organ donors and found evidence that a subgroup of donors present with high levels of replication in pancreatic endocrine cells, including islet β-cells. Multivariate analysis of clinical data showed that high levels of replication were significantly and independently associated with extended life support, kidney damage, and young donor age. These patients were also characterized by significantly increased levels of inflammatory infiltration in the pancreatic parenchyma.

The results indicate that preterminal clinical conditions in organ donors can activate β-cell replication. Elucidation of the cellular and molecular pathways involved in this process may help researchers devise new strategies for stimulating β-cell growth in vivo.
Other Iatrogenic Immunodeficiency-Associated Lymphoproliferative Disorders with a T- or NK-cell phenotype
Other iatrogenic immunodeficiency-associated lymphoproliferative disorders (OIIA-LPDs) with a T- or NK-cell phenotype are markedly rare, with only a limited number of cases having been reported thus far. Methotrexate (MTX) is the most common agent used in OIIA-LPD patients, and 43 cases of MTX-associated T-LPDs (MTX T-LPDs) and five cases of MTX-associated NK/T-LPDs (MTX NK/T-LPDs) have been described. In addition to MTX T-LPDs and MTX NK/T-LPDs, T-LPDs and NK/T-LPDs have been reported in patients receiving other immunosuppressive agents such as thiopurines, TNF antagonists, and cyclosporine. Hepatosplenic T-cell lymphoma (HSTL) is specifically associated with iatrogenic immunodeficiency, and 10% of HSTL cases develop in patients receiving thiopurines and/or TNF antagonists for inflammatory bowel disease (IBD). In this review, we focused on MTX T-LPD, MTX NK/T-LPD, and HSTL in patients with IBD. These T- and NK/T-cell-associated OIIA-LPDs are the ones most commonly encountered in daily medical practice.
INTRODUCTION
Immunodeficiency plays a key role in the pathogenesis of some lymphoproliferative disorders (LPDs). Immunodeficiency is caused by aging, primary immune disorders, HIV infection, and immunosuppressive drugs. 1 In the 2017 WHO classification, 1 LPDs that are associated with immunosuppressive agents are termed post-transplant LPDs (PTLDs) or other iatrogenic immunodeficiency-associated LPDs (OIIA-LPDs). OIIA-LPDs are defined as lymphoid proliferations or lymphomas that develop in patients receiving immunosuppressive drugs for an autoimmune disease or conditions other than post-transplantation. OIIA-LPDs are a heterogeneous group mainly consisting of polymorphic B-cell LPDs (B-LPDs), monomorphic LPDs, and Hodgkin lymphoma (HL). These OIIA-LPDs are often diagnostically and therapeutically challenging for both pathologists and clinicians. Monomorphic LPDs include cases that fulfill the criteria of diffuse large cell lymphoma, follicular lymphoma, peripheral T-cell lymphoma (PTCL), or extranodal natural killer (NK)/T-cell lymphoma, nasal type. Cases of Epstein-Barr virus (EBV)-positive (EBV+) mucocutaneous ulcer in patients receiving immunosuppressive drugs are also considered OIIA-LPDs.
Keywords: other iatrogenic immunodeficiency-associated lymphoproliferative disorder, T or NK/T-cell lymphoma, methotrexate, inflammatory bowel disease, hepatosplenic T-cell lymphoma

We focused on MTX T-LPDs, MTX NK/T-LPDs, and HSTL in patients with IBD. These T- and NK/T-cell-associated OIIA-LPDs are the ones most commonly encountered in daily medical practice.
MTX-ASSOCIATED LPDS WITH A T- OR NK-CELL PHENOTYPE
MTX is an anti-rheumatic drug that is administered to patients with autoimmune diseases, particularly rheumatoid arthritis (RA). MTX suppresses the hyper-immune state of RA patients and is an excellent inhibitor of articular destruction. Therefore, MTX is currently used as a first-line anchor drug for RA therapy. 22 However, the immunosuppressive state induced by MTX can lead to the development of LPDs and is considered the cause of MTX-associated LPDs, although the mechanism of their development is unclear. In addition, patients with RA develop LPDs 2.0- to 5.5-times more often than the general population. [23][24][25] The hyper-immune state of RA may play a role in the tumorigenesis of LPDs. Therefore, how MTX influences the development of LPDs remains controversial. On the other hand, a significant proportion of patients with MTX-associated LPDs, particularly EBV+ patients, have presented spontaneous regression (SR) after MTX cessation. [2][3][4][5]26 This phenomenon is characteristic of MTX-associated LPDs and is regarded as strong evidence for a potential tumorigenic role of MTX.
Histological and immunohistochemical findings in MTX T-LPD
As stated above, the largest category of MTX T-LPDs includes cases that are classified as the AITL type (MTX-AITL). The histological and immunohistochemical features of MTX-AITL are almost the same as or similar to those of AITL in immunocompetent patients. MTX-AITL patients exhibited diffuse and polymorphous infiltration of small- to medium-sized lymphocytes intermingled with plasma cells, histiocytes, and eosinophils, accompanied by the proliferation of high endothelial venules (Figure 1A). The small- to medium-sized lymphocytes were characterized by clear cytoplasm (Figure 1B). Immunohistochemically, most cases were positive for CD3, CD4, and the follicular helper T-cell markers, such as PD-1 (Figure 1C) and CXCL13, but negative for CD8.

The second largest category is the CD8+ T-LPDs, including both EBV+ and EBV− cases. Histologically, CD8+ T-LPDs are mostly characterized by infiltration of medium-sized atypical lymphocytes (Figure 2A). Immunohistochemically, all cases were positive for CD3 and CD8 (Figure 2B). In addition, all of the cases examined were positive for a cytotoxic molecule (Figure 2C), but were negative for CD56 regardless of EBV positivity.

The PTCL-NOS type is a heterogeneous group that includes cases that are difficult to precisely categorize. Previously reported cases exhibited diffuse infiltration of medium- to large-sized atypical lymphocytes. The one case of ALCL was characterized by the proliferation of CD30+ atypical large lymphocytes that were positive for CD3, TIA-1, and CD8, but negative for CD4. Only one case of the ATLL type has been reported thus far. This case was morphologically characterized by the diffuse proliferation of medium-sized abnormal lymphocytes. The tumor cells were positive for CD3 and CD4, but negative for CD20, CD8, and CD30. Southern blotting for the HTLV-1 provirus revealed monoclonal proliferation of HTLV-1-infected cells.
Clinical characteristics of MTX T-LPD
The 43 MTX T-LPD patients consisted of 30 men and 13 women with a median age of 66 years (range, 31-85). Among them, 42 were treated for RA and one for polymyalgia rheumatica. Information on the duration of MTX use was available for 33 patients, and the median duration was 4 years (range, 0.5-21). In 6 of the 16 patients with available data, iguratimod, mizoribine, salazosulfapyridine, etanercept, and bucillamine were used as immunosuppressive agents in addition to MTX. At the time of diagnosis, 38 patients (88%) had lymphadenopathy, including 12 who also had extranodal involvement. The remaining 5 patients (12%) had only extranodal lesions, as follows: skin (n=2), subcutis (n=1), subcutis and abdominal cavity (n=1), and oral cavity (n=1). T-cell and NK-cell PTLDs were reported to have a late onset after organ transplantation. Satou et al., however, recently reported that there was no significant difference in the duration of MTX use between MTX T- and B-LPDs. 6
Therapy and prognosis for MTX T-LPDs
After the diagnosis of MTX T-LPD, MTX was immediately withdrawn in 40 patients, and 33 (83%) presented with SR (complete remission [CR] or partial response [PR]) after cessation. Ten patients received cytotoxic chemotherapy as the initial treatment, including two patients who developed SR after MTX cessation. Eventually, 38 patients achieved CR and two achieved PR. Nine of the 40 patients had relapse or progression. Notably, all 10 CD8+ T-LPD patients achieved CR after cessation of MTX regardless of EBV positivity. Furthermore, none of these 10 patients relapsed or required cytotoxic chemotherapy during their entire clinical course.
EBV infection in MTX T-LPDs
The EBV status was assessed in all of the previous cases. Five of 43 cases had EBV+ tumor cells. Notably, all of the EBV+ cases were exclusively CD8+ T-LPDs (Figure 2D). A recent paper revealed that patients with MTX T-LPDs had a significantly lower proportion of EBV+ tumor cells than those with MTX B-LPDs. Among the 38 EBV− cases in the present series, 32 (84%) had scattered EBV-infected B cells in the background (Figure 1D). The reactivation of EBV in the background B cells is suggestive of the immunodeficient status of the patients.
MTX-associated NK/T-LPDs
Only five cases of MTX NK/T-LPD have been described thus far. 2,7,16,17 The clinical features are summarized in Table 1. The five cases consisted of one male and four female patients with a median age of 73 years (range, 55-85). The primary sites were the nasal cavity (n=2), nose (n=1), gingiva (n=1), and both lungs (n=1). The histological and immunohistochemical features are summarized in Table 2. The features of MTX NK/T-LPD were identical to those of extranodal NK/T-cell lymphoma. The size of the lymphoma cells varied from medium to large. One case with a detailed description of histological features was accompanied by necrosis. Immunohistochemically, all of the cases with data were positive for CD3 and a cytotoxic molecule (TIA-1 and/or granzyme B). Two of three cases were positive for CD8, one of three was positive for CD56, and all cases were EBV+. These CD8+ MTX NK/T-LPDs may overlap with EBV+ CD8+ MTX T-LPDs, and it was difficult to draw a clear line between the two. For that reason, we followed the diagnoses given in the cited papers.
MTX was immediately withdrawn in all patients, and all presented with SR after cessation except for one patient (case no. 4), who received radiotherapy and achieved CR. Case no. 5 first presented PR, but the tumor subsequently progressed. The patient received SMILE therapy (a combination of steroid, methotrexate, ifosfamide, L-asparaginase, and etoposide) as additional treatment. The disease was evaluated as PR after two courses of SMILE therapy, but there were no follow-up data.
How MTX-associated T- and NK/T-LPDs should be treated

As stated above, although some cases may relapse or progress later, SR may be expected in the majority of MTX T-LPD and NK/T-LPD cases. In particular, all of the CD8+ T-LPD patients achieved CR after MTX cessation without relapse. It is well known that a significant proportion of patients with the MTX B-LPD and HL types present SR after MTX cessation. [2][3][4][5]26 Therefore, chemotherapy should not be started immediately after the diagnosis of the MTX B-LPD and HL types. Likewise, withdrawal of MTX should be the initial management after the diagnosis of MTX T-LPDs and NK/T-LPDs, although a definite conclusion cannot be made yet. For patients who do not exhibit SR, or who develop relapse or progression after SR, an aggressive therapy suitable for each lymphoma subtype is needed.
Pathogenesis of MTX T- and NK/T-LPDs

The genetic and molecular characteristics of MTX T- and NK/T-LPDs are unclear, and their pathogenesis remains to be elucidated. Possible mechanisms of their tumorigenesis are described below.
In general, regardless of cell lineage and EBV status, the immunosuppressive state induced by MTX is considered a common cause of MTX-associated LPDs. Feng et al. 28 suggested another potential cause of EBV+ MTX-associated LPDs. They indicated that, in contrast to other causes of immunodeficiency, MTX may directly reactivate latent EBV, leading to the development of LPDs. However, most MTX T-LPDs are negative for EBV in the proliferating T- and NK-cells, meaning that this model cannot be applied directly. The majority of EBV− MTX T-LPDs had scattered EBV-infected B cells in the background; therefore, the function of EBV-specific cytotoxic T lymphocytes may be suppressed due to immunodeficiency. 29 Thus, the reactivation of EBV implies that the patients have an immune disorder that may also suppress the immune responses inhibiting tumor growth.

In cases of EBV− CD8+ T-LPD, the proliferation of CD8+ cytotoxic T-cell lymphoma cells may be induced by the growth of EBV-infected B cells. The CD8+ T-cells act to prevent the transformation of EBV-infected B cells into a B-cell malignancy. 30 In addition, Sandhu et al. 31 reported that MTX preferentially affects subsets of CD8+ T lymphocytes. They revealed that, after treatment with MTX, there was a significant decline in CD8+ IFNγ+ T-cells and an increase in CD8+ IL17+ T-cells. The ability of MTX to increase a subset of CD8+ T-cells may also aid in the development of the CD8+ T-LPD type. Future studies are expected to clarify these issues.
HSTL IN PATIENTS WITH INFLAMMATORY BOWEL DISEASE
Farcet et al. first described HSTL in 1990 as a PTCL of the γδ phenotype with tumor cells localized in the liver and spleen. 32 Subsequently, HSTL with an αβ phenotype was also reported. 33 The majority of HSTL cases express γδ T-cell receptors, and only a minority of cases are of the αβ type. HSTL is a rare and fatal extranodal lymphoma that primarily affects young men. It accounts for <1% of all non-Hodgkin lymphomas and 1-2% of all PTCL cases. Patients typically present with splenomegaly and hepatomegaly. Bone marrow involvement is detected in almost all patients. It is known that 20% of HSTL cases develop during chronic immune suppression, and 10% occur in individuals receiving thiopurines and/or TNF antagonists for IBD. 1,19,34 Thiopurines (e.g. azathioprine and 6-mercaptopurine) and TNF antagonists (e.g. infliximab, adalimumab, and etanercept) are now widely used for IBD patients, including those with Crohn's disease (CD) and ulcerative colitis (UC). Herrinton et al. 34 reported that IBD alone is not associated with a risk of lymphoma. However, previous reports suggested that the use of thiopurines and TNF antagonists, alone or in combination, was associated with an increased risk of lymphoma in IBD patients. [34][35][36][37][38] According to the report of Herrinton et al., the majority of lymphomas developing in IBD patients were B-cell lymphomas and HL. The T-cell lymphomas in IBD patients were HSTL and mycosis fungoides, which accounted for 5% and 2% of the lymphomas arising in IBD, respectively. 34 Considering that HSTL accounts for only 1-2% of all PTCLs, the use of these immunosuppressive agents may increase the risk of developing this rare lymphoma.
Clinicopathological findings of HSTL in IBD patients
The 52 cases of HSTL were in 46 male and 6 female IBD patients with a median age of 23 years (range, 12-79). Among the 52 patients, 45 were treated for CD and seven for UC. All of the patients received thiopurines and/or TNF antagonists, or other immunosuppressive agents. Eighteen patients received only thiopurines, one received only a TNF antagonist, and 32 received both.
Histologically, all cases were consistent with the histopathological findings documented in HSTL. Namely, the tumor cells were small to medium in size with pale cytoplasm (Figure 3A). In the spleen, the neoplastic cells involved the cords and sinuses of expanded red pulp. The liver demonstrated predominant sinusoidal infiltration (Figure 3B). The bone marrow contained neoplastic cells in most cases, and exhibited an interstitial and sinusoidal pattern of involvement. Immunohistochemically, most cases were positive for CD3 (Figure 3C), CD8, and TIA-1 (Figure 3D). TCR expression was evaluated in 24 cases: γδ phenotype in 18 and αβ phenotype in 6 cases.
Overall, the clinicopathological findings of HSTL in IBD patients were not notably different from those of HSTL in immunocompetent patients.
Therapy and prognosis for HSTL in IBD patients
Information on the treatment and outcome was available for 47 of the 52 previously reported cases. Most patients received chemotherapy such as CHOP (a combination of cyclophosphamide, vincristine, Adriamycin, and prednisolone), hyper-CVAD (a combination of cyclophosphamide, vincristine, doxorubicin, and dexamethasone), IVAC (a combination of ifosfamide, etoposide, and cytarabine), and ICE (a combination of ifosfamide, carboplatin, and etoposide). The outcomes of the patients were markedly poor. Only eight of the 47 patients were alive at the time of the last follow-up. Information on transplantation was available for 27 cases. Eight patients received allogeneic stem cell transplantation (SCT), five received autologous SCT, three received allogeneic bone marrow transplantation, and 11 did not undergo transplantation. Among the 14 patients who received transplantation, seven were alive at the time of the last follow-up. On the other hand, two of the 11 patients who did not receive transplantation were alive. In some cases, immunosuppressive drugs were withdrawn as the initial management after diagnosis. However, the tumors did not exhibit SR and the patients received chemotherapy. 40,42 As mentioned above, HSTL in IBD patients was characterized by a markedly poor prognosis. Although the number of reported cases is small, SR after cessation of immunosuppressive agents was not expected. Therefore, the patients should receive chemotherapy as first-line treatment. Transplantation should also be considered because it may improve the prognosis. Indeed, although the enrolled patients were not limited to those with IBD, recent reports revealed that long-term survival may be expected in HSTL patients who receive SCT, particularly allogeneic SCT. 48,51,52 The graft-versus-lymphoma effect conferred by allogeneic SCT was considered beneficial for the patients. In addition, Yabe et al. reported that hyper-CVAD, an intensive chemotherapy, may result in better survival than a non-hyper-CVAD regimen in HSTL patients. 19
CONCLUSION
In this review, we mainly focused on summarizing the clinicopathological characteristics of MTX T-LPDs, MTX NK/T-LPDs, and HSTL in patients with IBD. The MTX T-LPD cases mainly consisted of three types: AITL, PTCL-NOS, and CD8 + T-LPD. The EBV + rate of MTX T-LPDs was 12%, which is significantly lower than that for MTX B-LPDs. Notably, all of the EBV + cases were CD8 + T-LPDs. SR may be expected in the majority of MTX T-LPD and NK/ T-LPD cases. In particular, all of the CD8 + T-LPD patients achieved CR after MTX cessation and none of the patients experienced relapse. As for the MTX B-LPD and HL type, withdrawal of MTX should be the initial management for MTX T-LPD and NK/T-LPD patients after diagnosis. In patients with IBD, the use of thiopurines and/or TNF antagonist may increase the risk for developing HSTL. The clinical and pathological features of HSTL in IBD overlap with those of HSTL in immunocompetent patients.
Due to the rarity of T-cell and NK-cell OIIA-LPDs, the number of previously reported cases is limited; therefore, more cases are needed to further clarify their features. Moreover, the molecular features and the mechanism of pathogenesis remain to be elucidated.
Preparation and Dielectric Properties of SiC/LSR Nanocomposites for Insulation of High Voltage Direct Current Cable Accessories
The conductivity mismatch in the composite insulation of high voltage direct current (HVDC) cable accessories causes electric field distribution distortion and even insulation breakdown. Therefore, a liquid silicone rubber (LSR) filled with SiC nanoparticles is prepared for the insulation of cable accessories. The micro-morphology of the SiC/LSR nanocomposites is observed by scanning electron microscopy, and their trap parameters are characterized using thermal stimulated current (TSC) tests. Moreover, the dielectric properties of SiC/LSR nanocomposites with different SiC concentrations are tested. The results show that the 3 wt % SiC/LSR sample has the best nonlinear conductivity, more than one order of magnitude higher than that of pure LSR, with improved temperature and nonlinear conductivity coefficients. Its relative permittivity increased by 0.2 and its dielectric loss factor by 0.003, while its breakdown strength decreased by 5 kV/mm compared to pure LSR. Moreover, the TSC results indicate that the introduction of SiC nanoparticles reduced the trap level and trap density. Furthermore, SiC nanoparticle filling significantly increased the sensitivity of LSR to electric field stress and temperature changes, improving the conductivity matching and the electric field distribution within HVDC cable accessories, and thus their reliability.
Introduction
High voltage direct current (HVDC) transmissions have attracted increasing attention because of their many advantages [1][2][3], such as large capacity, long distance, fast and flexible power regulation, high transient stability, and low line loss. HVDC cables are indispensable to HVDC systems and have been widely used in asynchronous networks, underground power grids, and submarine transmission cables recently [4][5][6].
The operational safety and reliability of HVDC cables is very important for the stability of the HVDC transmission network [7]. Cable accessories have always been the weakest part of HVDC cables because of their complex insulation structure, where most failures occur [8,9]. The composite insulation of cable accessories generally comprises cross-linked polyethylene (XLPE; cable insulation) and silicone rubber (SR; reinforced insulation of accessories). Under DC voltage, the electric field distribution of cable accessories depends only on the conductivity of the composite insulation and the interface space charge [10]. Moreover, the conductivity is severely affected by temperature and electric field strength, and this dependence differs between the two materials [11][12][13]. In general, the conductivity of XLPE is one or two orders of magnitude higher than that of SR insulation, which easily leads to electric field distortion or even insulation breakdown [14,15]. Further, it is difficult to realize a uniform electric field distribution in cable accessories by varying the temperature and electric field stress. Nevertheless, the application of nanocomposite dielectrics provides a solution for the conductivity matching problems in HVDC cable accessories.
MgO/XLPE nanocomposites as main insulators for ±500 kV HVDC cables have been successfully developed in Japan [16,17]. ABB Company has added a nonlinear transitional layer between the main insulation of the cable and the reinforced insulation of the cable accessories to match the conductivity of XLPE/SR [18,19]. This additional layer realizes a uniform electric field distribution within the cable accessories and reduces the space charge on the interface. Nano modification has also been widely used to improve various properties of materials, such as thermal conductivity, flame retardancy, and dielectric and mechanical properties [20][21][22][23]. Numerous related experimental studies have been reported globally. For example, preparing XLPE composite dielectrics by adding a small amount of nanoparticles (such as SiO2, ZnO, Al2O3, and MgO) has achieved a certain degree of success in improving the insulation properties of these dielectrics [24][25][26] and suppressing space charge. These nano modification studies mainly focus on cable insulation (XLPE), and research on nano modification of the reinforced insulation of cable accessories (SR) has been relatively scarce. In this study, a nanocomposite dielectric was prepared for the insulation of cable accessories using SiC nanoparticles as the filler and liquid silicone rubber (LSR) as the matrix, to achieve conductivity matching with XLPE and promote a uniform electric field distribution within the DC cable accessories, thereby improving the safety and reliability of HVDC cable accessories.
Sample Preparation
POWERSIL ® 737 manufactured by Wacker Chemical Co., Ltd. (Munich, Germany), was used for sample preparation. This is an A/B two-component addition type LSR and is different from traditionally used high temperature vulcanization or room temperature vulcanization SR because of its low viscosity, excellent mechanical properties, and outstanding dielectric behavior. SiC nanoparticles were obtained from Hefei Kelvin Energy Technology Co., Ltd. (Hefei, China) (purity ≥ 99.9%, average size 60 nm).
The flow chart of the preparation of the SiC/LSR nanocomposites is shown in Figure 1. The sample preparation process was as follows: a weighed quantity of A/B rubber and a certain amount of SiC nanoparticles were taken in a beaker and mixed using a multifunctional agitating machine and evacuated repeatedly until no bubbles were generated. Then, the rubber was molded under a flat vulcanizing machine for 10 min at 393 K and 15 MPa; the secondary vulcanization was performed in a circulating air oven with fresh air supply at 473 K for 4 h. Finally, the samples were placed in a drying oven for 24 h to eliminate impurities and moisture. The samples had a radius of 50 mm and a thickness of 0.3 mm.
Dielectric Properties
The dielectric properties of the pure LSR and SiC/LSR nanocomposites were tested according to the following aspects (see the conversion sketch after this list): (a) Conductivity: The DC conductivity of the LSR and SiC/LSR samples was measured at room temperature using a three-electrode system, as shown previously [27]. A high-voltage electrode was connected to a DC source through a series resistance. The DC source had an output voltage of 0-10 kV, giving DC electric fields varying from 0.1 to 30 kV/mm. The stable current (I) was recorded after applying the DC voltage for 10 min. For accuracy, multiple samples were employed to ensure repeatability, and the average values were used.
(b) Dielectric spectrum: The dielectric spectrum of the pure LSR and SiC/LSR samples was tested using a broadband dielectric/impedance spectrometer (Concept 80, Novocontrol Technologies, Montabaur, Germany). The test frequency range was 10^-1 to 10^7 Hz. Gold electrodes were deposited on both sides of the samples, and the diameter of the test specimen was 20 mm.
(c) Dielectric strength: The DC breakdown electric field strength of the samples was measured with a two-electrode system, and the entire testing system was placed in an epoxy resin drum filled with transformer oil (as shown in Figure 2). The test was based on Standardization Administration of China (SAC) Publication No. GB/T 1408, which is equivalent to standard IEC 60243. The Weibull distribution was used to characterize the DC breakdown strength after discarding the maximum and minimum values.
(d) Thermal stimulated current (TSC) test: The SiC/LSR samples were polarized under an electric field of 10 kV/mm at 333 K for 10 min to characterize their trap parameters. Afterwards, the temperature was decreased rapidly to 273 K using liquid nitrogen until the depolarization current of the sample was less than 1 pA. Then, the temperature was linearly increased to 393 K at 3 K/min and the TSC of the sample was measured. The TSC measurement system included a Keithley 6517B electrometer, a DC high-voltage generator, vacuum equipment, and heating and cooling systems. The setup of the TSC measurement system and the test conditions are shown in Figures 3 and 4, respectively.
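As a concrete illustration of test (a), the sketch below converts a measured steady-state current into a DC volume conductivity. The helper name and the example numbers are ours (chosen to match the sample dimensions quoted above); they are illustrative assumptions, not the paper's exact electrode layout.

```python
# Hypothetical helper, not from the paper: gamma = I * d / (V * A).
import math

def dc_conductivity(current_a, voltage_v, thickness_m, electrode_diameter_m):
    """DC volume conductivity in S/m from a guarded-electrode measurement."""
    area = math.pi * (electrode_diameter_m / 2.0) ** 2
    return current_a * thickness_m / (voltage_v * area)

# Example: 10 pA at 6 kV across a 0.3 mm thick, 50 mm diameter sample.
gamma = dc_conductivity(10e-12, 6e3, 0.3e-3, 50e-3)
print(f"gamma = {gamma:.2e} S/m")   # ~2.5e-16 S/m
```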
Microstructure
The micromorphology of the pure LSR and SiC/LSR nanocomposites with different SiC concentrations was observed using scanning electron microscopy (SEM, SU8020, Hitachi High Technologies Corp., Tokyo, Japan). The central area of observation and the cross-sectional SEM photographs are shown in Figure 5. In the figure, we can observe that the dispersion of the SiC nanoparticles in the matrix decreased with increasing SiC concentration, because high concentrations made the nanocomposite prone to agglomeration.
Conductivity
The curve of conductivity vs. temperature of the pure LSR and SiC/LSR nanocomposites with different SiC concentrations at 20 kV/mm is shown in Figure 6. The relationship between the conductivity γ and the temperature T of polymers can be expressed by the following equation:

$$\gamma = \gamma_0 \exp\left(-\frac{\alpha}{kT}\right) \qquad (1)$$

where γ0 is a polymer-related constant, α is the temperature coefficient, and k is the Boltzmann constant.
Using Equation (1) for the experimental data fitting calculations, the temperature coefficient of pure LSR was 0.12, indicating that the conductivity of pure LSR shows little change with increasing temperature, while the conductivity of XLPE increases by two orders of magnitude with increasing temperature, which would lead to severe electric field distortion and interface space charge accumulation. The temperature coefficient of the SiC/LSR nanocomposites was 0.6. The SEM results showed that agglomeration could occur at higher SiC concentrations; accordingly, the 3 wt % SiC/LSR nanocomposite showed the highest conductivity, and its value increased by more than one order of magnitude with increasing temperature. This indicates that SiC nanoparticle doping can effectively improve the nonlinear conductivity of LSR with respect to temperature. Thus, the electric field distribution in cable accessories using SiC/LSR nanocomposites is better than that using pure LSR.
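The fitting step can be sketched as follows. We read Equation (1) as an Arrhenius-type law (an assumption on our part, consistent with the quoted coefficients of 0.12 and 0.6 in eV-like units), and the conductivity data below are invented for illustration.

```python
# Linearized fit of Eq. (1): ln(gamma) vs 1/T has slope -alpha/k.
# Data points are illustrative, not measured values from the paper.
import numpy as np

k_B = 8.617e-5  # Boltzmann constant, eV/K

T = np.array([303.0, 313.0, 323.0, 333.0, 343.0])                # K
gamma = np.array([1.0e-14, 2.1e-14, 4.2e-14, 8.1e-14, 1.5e-13])  # S/m

slope, intercept = np.polyfit(1.0 / T, np.log(gamma), 1)
alpha = -slope * k_B
print(f"temperature coefficient alpha ~ {alpha:.2f}")  # ~0.6 for this data
```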
The plot of conductivity vs. electric field strength at 70 °C for the pure LSR and SiC/LSR samples with different SiC concentrations is shown in Figure 7.
The relationship between conductivity and electric field stress can be described by the following equation:

$$\gamma = A E^{\beta} \qquad (2)$$

By logarithmic transformation of Equation (2), we get

$$\lg \gamma = \lg A + \beta \lg E \qquad (3)$$

where A is a constant related to the properties of the material and β is the nonlinear coefficient. Thus, there is a linear relationship between lg γ and lg E, and the slope β of the fitted line represents the degree of nonlinearity [27]. Using linear fitting for the two segments, the threshold electric field Ec (the electric field stress at which nonlinear conduction is first observed) and the nonlinear coefficient β are shown in Table 1.

As shown in Table 1 and Figure 7, the nonlinear coefficient of pure LSR was small, and its conductivity showed little change with increasing electric field stress. In contrast, the nonlinear coefficient of the SiC/LSR samples was several times greater than that of pure LSR, with the 3 wt % SiC sample showing the highest value. As shown in Figure 7, the conductivity of the SiC/LSR samples reached an inflection point at which nonlinear conduction set in; the electric field stress at this inflection point is called the threshold electric field. The SiC/LSR samples with different SiC concentrations showed different threshold electric fields. The 3 wt % SiC/LSR sample exhibited the best nonlinear conductivity and the lowest threshold electric field (about 7 kV/mm). The nonlinear conductivity of pure LSR could be considered insignificant, and its threshold electric field was 12 kV/mm. This indicates that SiC nanoparticle doping can improve the nonlinear conductivity of LSR and lower its threshold electric field.
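The two-segment fit of Equation (3) can be sketched as below. The breakpoint search and the sample data are our own illustrative choices; the paper does not specify how the segments were selected.

```python
# Two-segment linear fit of lg(gamma) vs lg(E); the breakpoint that
# minimizes the combined residual is taken as the threshold field Ec,
# and the high-field slope as the nonlinear coefficient beta.
import numpy as np

E = np.array([1, 2, 4, 6, 8, 10, 14, 18, 22, 26, 30], dtype=float)  # kV/mm
gamma = np.array([1.0e-14, 1.1e-14, 1.2e-14, 1.3e-14, 1.5e-14,
                  3.0e-14, 1.0e-13, 4.0e-13, 1.5e-12, 5.0e-12, 1.5e-11])
lgE, lgG = np.log10(E), np.log10(gamma)

best = None
for k in range(3, len(E) - 2):           # breakpoint with >= 3 points per side
    p1, r1 = np.polyfit(lgE[:k], lgG[:k], 1, full=True)[:2]
    p2, r2 = np.polyfit(lgE[k:], lgG[k:], 1, full=True)[:2]
    res = float(r1[0]) + float(r2[0])
    if best is None or res < best[0]:
        best = (res, E[k], p2[0])        # (residual, Ec in kV/mm, beta)

_, Ec, beta = best
print(f"threshold field Ec ~ {Ec:.0f} kV/mm, nonlinear coefficient beta ~ {beta:.1f}")
```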
The conductivity of an insulating material depends on the concentration of charge carriers, the charge of the carriers, and the charge mobility. Nanoparticle doping has a great influence on the charge mobility, while the charge of the carriers is essentially unchanged and their concentration is determined by the matrix material. The charge mobility is related to the temperature and to the jump barrier height that must be overcome. An external electric field lowers this jump barrier; when the threshold electric field is reached, the barrier height drops markedly and the number of mobile carriers increases greatly and rapidly [28,29]. The conductivity therefore increases substantially, and the polymer shows a nonlinear relationship with the electric field. In addition, SiC nanoparticle doping leads to overlapping of the interfaces between adjacent nanoparticles. The interface is generally considered to have a higher conductivity than either the nanoparticles or the matrix, so under an external electric field many charge carriers absorb sufficient energy to cross the potential barrier and participate in conduction.
The conductivity of the SiC/LSR nanocomposites was higher than that of pure LSR by more than one order of magnitude under the same electric field stress and this difference in conductivity increased with increasing electric field stress. This showed that the addition of the SiC nanoparticles could increase the sensitivity of LSR to electric field stress and reduce the conductivity mismatch between the insulation of the cables and of the cable accessories caused by electric field stress changes, thus improving the electric field distribution within the cable accessories.
Relative Permittivity and Dielectric Loss Factor
The relationship between relative permittivity/dielectric loss factor and frequency of the LSR and SiC/LSR nanocomposites with different SiC concentrations is shown in Figure 8.
In Figure 8, the relative permittivity of the SiC/LSR nanocomposites was higher than that of pure LSR over the given frequency range, and samples with higher SiC concentrations showed larger relative permittivity. For both the pure LSR and the SiC/LSR samples, the relative permittivity remained essentially constant as the frequency increased.

The dielectric loss factor of the pure LSR and SiC/LSR nanocomposites showed the same trend over the given frequency range, and the loss increased with increasing SiC concentration. Both displacement and relaxation polarization can be inferred from these results. The dielectric loss factor was high at low frequency (<1 Hz). Because relaxation polarization is difficult to establish at higher frequencies, the loss factor decreased and gradually stabilized with increasing frequency (10 Hz < f < 10^4 Hz). Then, because of increased displacement polarization, the dielectric loss factor rose again at high frequency (10^5 Hz < f < 10^7 Hz).
DC Breakdown Strength
The Weibull distribution of the DC breakdown strength of the pure LSR and SiC/LSR samples is shown in Figure 9. It can be seen that the breakdown strength of pure LSR was higher than that of the 3 wt % SiC/LSR sample. Because the SiC nanoparticles are semi-conductive, the increased conductivity due to increased carrier mobility leads to the formation of an internal discharge path under an external electric field, which in turn reduces the breakdown strength of the polymer dielectric.
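For reference, a two-parameter Weibull fit of breakdown data in the spirit of Figure 9 can be done with SciPy as below; the breakdown values are invented for illustration, and the GB/T 1408 / IEC 60243 test procedure itself is not reproduced.

```python
# Illustrative two-parameter Weibull fit for DC breakdown strengths.
import numpy as np
from scipy.stats import weibull_min

breakdown = np.array([58, 61, 63, 64, 66, 67, 69, 71])  # kV/mm, example data

# Fix the location parameter at 0 for the standard 2-parameter form.
shape, loc, scale = weibull_min.fit(breakdown, floc=0)
print(f"Weibull shape = {shape:.1f}, scale (63.2% prob.) = {scale:.1f} kV/mm")
```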
TSC
TSC tests were conducted to characterize the trap parameters of the LSR nanocomposites. The TSC curves of the pure LSR and 3 wt % SiC/LSR nanocomposites are shown in Figure 10.
Under high temperature and a DC electric field, migrating carriers in the sample are easily trapped by the polymer. On decreasing the temperature rapidly to 273 K using liquid nitrogen, the trapped carriers were "frozen". During the subsequent slow warming process, the trapped carriers "escaped" through thermal excitation, and the resulting weak currents were recorded by the 6517B electrometer (Keithley, Cleveland, OH, USA). By analyzing the TSC curve in Figure 10, the trapped charge Q could be obtained by the following equation:

$$Q = \frac{1}{\beta} \int_{T_1}^{T_2} I(T)\, dT$$

where I(T) is the TSC current value; T1 and T2 are the initial and end temperatures, respectively; and β is the heating rate (3 K/min). Meanwhile, the trap level could be calculated according to the half-width method by the following equation:

$$E_t = \frac{2.52\, k\, T_m^2}{\Delta T}$$

where Tm is the temperature corresponding to the peak current, ΔT is the temperature difference between the two half-peak values, and k is the Boltzmann constant [30]. The trap parameters of the pure LSR and SiC/LSR samples are shown in Table 2.
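A rough sketch of the TSC post-processing is given below: trapped charge from integrating I(T) over the heating ramp, and a half-width trap-level estimate. The synthetic current trace and the 2.52 kTm^2/ΔT half-width form are assumptions on our part (one common first-order approximation), not values taken from the paper.

```python
import numpy as np

k_B = 8.617e-5                  # Boltzmann constant, eV/K
beta = 3.0 / 60.0               # heating rate, K/s (3 K/min)

T = np.linspace(273, 393, 600)                # temperature, K
I = 8e-12 * np.exp(-((T - 340) / 15) ** 2)    # synthetic TSC peak, A

# trapped charge: Q = (1/beta) * integral of I(T) dT (trapezoidal rule)
Q = np.sum((I[1:] + I[:-1]) / 2 * np.diff(T)) / beta

# half-width trap level (assumed first-order form: 2.52*k*Tm^2 / dT)
Tm = T[np.argmax(I)]
above = T[I >= I.max() / 2]
dT = above[-1] - above[0]                     # full width at half maximum
E_trap = 2.52 * k_B * Tm ** 2 / dT
print(f"Q = {Q:.2e} C, Tm = {Tm:.0f} K, trap level ~ {E_trap:.2f} eV")
```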
As shown in Table 2 and Figure 10, the trap charge quantity and trap level of the SiC/LSR nanocomposites were lower than those of pure LSR. The trap density and charge trap depth also decreased, because the SiC nanoparticles provide shallow traps. The trap parameters of a polymer are closely related to its macroscopic dielectric properties [31]. The charge trap depth, which controls the charge mobility, represents the energy required for carriers to jump from the trap energy level to an energy level at which they can participate in conduction in the nanocomposite. The probability of charge carrier trapping in the SiC/LSR nanocomposites was therefore reduced, allowing more carriers to participate in conduction and thereby increasing the charge mobility. Thus, the SiC/LSR nanocomposites have better nonlinear conductivity, and the reduction in required energy corresponds to their lower threshold electric field. Specifically, the DC conductivity was inversely proportional to the trapped charge, and the breakdown strength was directly proportional to the trap level. These results are in complete agreement with previous findings [32]. In summary, SiC nanoparticle doping improves the sensitivity of LSR to electric field stress and temperature changes while lowering the trap level and decreasing the number of trapped carriers. As a result, the number of charge carriers involved in conduction increases substantially at the microscopic level, and the conductivity of the SiC/LSR nanocomposites increases significantly at the macroscopic level. This is beneficial for conductivity matching with XLPE cable insulation and makes the electric field distribution in cable accessories more uniform, thus improving their safety and reliability.
Conclusions
Based on the experimental study of the dielectric properties of SiC/LSR nanocomposites, the following conclusions can be drawn:
(1) SiC nanoparticle doping decreases the breakdown strength of LSR, greatly increases its conductivity, and increases its relative permittivity and dielectric loss factor.
(2) SiC/LSR nanocomposites have better nonlinear conductivity characteristics than pure LSR, as their temperature and nonlinear coefficients are greatly improved, which in turn makes the electric field distribution in HVDC cable accessories more uniform.
AER: Auto-Encoder with Regression for Time Series Anomaly Detection
Anomaly detection on time series data is increasingly common across various industrial domains that monitor metrics in order to prevent potential accidents and economic losses. However, a scarcity of labeled data and ambiguous definitions of anomalies can complicate these efforts. Recent unsupervised machine learning methods have made remarkable progress in tackling this problem using either single-timestamp predictions or time series reconstructions. While traditionally considered separately, these methods are not mutually exclusive and can offer complementary perspectives on anomaly detection. This paper first highlights the successes and limitations of prediction-based and reconstruction-based methods with visualized time series signals and anomaly scores. We then propose AER (Auto-encoder with Regression), a joint model that combines a vanilla auto-encoder and an LSTM regressor to incorporate the successes and address the limitations of each method. Our model can produce bi-directional predictions while simultaneously reconstructing the original time series by optimizing a joint objective function. Furthermore, we propose several ways of combining the prediction and reconstruction errors through a series of ablation studies. Finally, we compare the performance of the AER architecture against two prediction-based methods and three reconstruction-based methods on 12 well-known univariate time series datasets from NASA, Yahoo, Numenta, and UCR. The results show that AER has the highest averaged F1 score across all datasets (a 23.5% improvement compared to ARIMA) while retaining a runtime similar to its vanilla auto-encoder and regressor components. Our model is available in Orion, an open-source benchmarking tool for time series anomaly detection.
I. INTRODUCTION
Time series data is consistently generated and collected across various industries - examples include stock prices in finance, vital signs in healthcare, and retail sales in business. Effective monitoring and use of time series data are essential for increasing efficiency and productivity. In addition, analysis of time series data can extrapolate recurring patterns to predict future occurrences. Anomaly detection, an important task within time series analysis, explicitly aims to identify unexpected events. This research is increasingly relevant due to its broad applications in detecting crucial issues, such as financial fraud in trading networks [8], medical problems in electrocardiograms [5], [16], and ecosystem disturbances in satellite signals [21].

1 The AER model is available in Orion: https://github.com/sintel-dev/Orion
[Fig. 1: Prediction-based methods assume that anomalous values cannot be predicted as well as normal ones; reconstruction-based methods assume that anomalies cannot be effectively reconstructed, since information is lost in the mapping to the latent dimensions.]

While the criteria differ across domains, anomalies in time series typically exhibit one of three identifiable patterns: point, contextual, or collective [4]. Point anomalies are singular data points that suddenly deviate from the normal range of the series; a sensor malfunction is one common cause. Collective anomalies are a series of consecutive data points that are considered anomalous as a whole. Finally, contextual anomalies are groups of data points that fall within the series' normal range but do not follow expected temporal patterns.
Time series also exhibit unique properties that complicate anomaly detection. First, the temporality of time series implies a correlation or dependence between consecutive observations [11]. Second, the dimensionality of each observation influences the computational cost, imposing limitations on the modeling method. For example, modeling methods for multivariate datasets with more than one channel face the curse of dimensionality, since they need to capture correlations between observations on top of temporal dependencies [3]. Third, noise due to minor sensor fluctuations during the process of capturing the signal can impact performance [20]. The pre-processing stages must minimize noise to prevent models from confusing it with anomalies. Finally, time series are often non-stationary. They have statistical properties that change over time, like seasonality, concept drift, and change points, which can easily be mistaken for anomalies.
Existing machine learning methods for anomaly detection on time series can be either prediction-based or reconstruction-based (Fig. 1). Prediction-based methods train a model to learn previous patterns in order to forecast future observations [6]. An observation is anomalous when the predicted value deviates significantly from the actual value. Prediction-based methods are good at revealing point anomalies but tend to produce more false detections [13]. On the other hand, reconstruction-based methods learn a latent low-dimensional representation to reconstruct the original input [6]. This method assumes that anomalies are rare events that are lost in the mapping to the latent space. Hence, regions that cannot be effectively reconstructed are considered anomalous. In our experiments, we observed that reconstruction-based methods tend to be more effective than prediction-based methods at identifying contextual and collective anomalies.
This paper proposes a new architecture - an auto-encoder with regression (AER) model - that leverages the successes and addresses the limitations faced by each method type. This architecture trains a reconstruction-based auto-encoder with a prediction-based regression component using a joint objective function. As a result, the model can produce both reconstruction-based and prediction-based anomaly scores (likelihood of an abnormal observation). This paper also explores several ways to calculate and combine scores to address several limitations of existing methods. Briefly, the contributions of this paper are as follows:
• We identified several successes and limitations of prediction-based and reconstruction-based methods using visualized examples.
• We propose a novel architecture - auto-encoder with regression (AER) - that leverages the successes of prediction-based and reconstruction-based methods for anomaly detection on time series data.
• We introduce the idea of masking anomaly scores created from the smoothing function to reduce start-of-sequence false-positive predictions. We applied masking to every baseline method and compared each method's performance to that of its unmasked counterpart.
• We present bi-directional anomaly scores, which combine prediction-based anomaly scores in the forward and reverse directions. This method addresses the limitation of missing forecasts faced by prediction-based methods.
• We demonstrate that AER outperformed five other baseline methods in anomaly detection on 12 time series datasets 2 . In addition, ablation studies show that the AER model achieved a 23.5% improvement in averaged F1 score compared to the baseline ARIMA model while retaining a runtime similar to its vanilla auto-encoder and LSTM regressor components.
The structure of the paper is as follows: Section II provides an overview of the existing pipeline and approaches for time series anomaly detection. Section III formally defines the problem, and Section IV documents the successes and limitations of existing methods. Section V introduces our solution, including the AER framework, smoothing-function masking, and bi-directional scoring. Finally, Sections VI and VII evaluate the proposed framework, discuss the results, and summarize the key findings.
A. Anomaly Detection Pipelines
Anomaly detection aims to find a set of anomalous intervals from either univariate or multivariate time series data. It is usually an unsupervised task due to the lack of labeled data. Recent work by Sintel [1] formalized this task as an end-to-end pipeline consisting of pre-processing, modeling, and post-processing stages. The pre-processing stage first transforms the raw data into suitable inputs for the models. The modeling stage then predicts or reconstructs the input to get the expected output. Finally, the post-processing stage finds discrepancies between the expected and real inputs. The methodology for finding these discrepancies significantly impacts the anomalies identified by this stage. Hence, our work focuses on the limitations of the post-processing stage for prediction-based and reconstruction-based methods. Understanding these limitations also enables us to make appropriate changes to the modeling stage.
B. Machine Learning-Based Approaches
Prediction-based approaches generally use the deviation between the predicted and actual values to identify anomalies. Autoregressive Integrated Moving Average (ARIMA) [15] and Long Short-Term Memory Recurrent Neural Network with Non-parametric Dynamic Thresholding (LSTM-DT) [9] are well-known examples of prediction-based approaches. ARIMA uses lags and lagged forecast errors to predict future values. Statistical models like ARIMA require the user to have extensive domain knowledge about the time series data in order to adjust the parameters appropriately. Machine learning-based methods like LSTM-DT tend to require less domain knowledge. In the modeling stage, the method uses a separate LSTM neural network to model each channel in order to facilitate granular system control and mitigate errors from high-dimensional outputs. In the post-processing stage, the method combines an exponentially weighted average function with a non-parametric dynamic thresholding technique to detect anomalous intervals. Our work examines the limitations of the post-processing stage in the LSTM-DT pipeline.
Reconstruction-based approaches learn a latent low-dimensional representation to reconstruct the original input. These methods assume that the latent space prioritizes capturing common patterns within the dataset; rare events like anomalies are not captured in the latent representation and are less likely to be accurately reconstructed. Principal Component Analysis (PCA) [19], LSTM Auto-Encoders (LSTM-AE) [7], and LSTM Variational Auto-Encoders (LSTM-VAE) [14] are examples of reconstruction-based approaches. PCA is a dimensionality-reduction technique that is limited to linear reconstructions and fails to leverage spatial-temporal correlation in multivariate settings. LSTM-AE is an auto-encoder built from LSTM layers that learns a latent space representation of the input. The size of the latent space needs to be calibrated to capture generalizable patterns while avoiding noise and anomalies. LSTM-VAE introduces regularization in the latent space using a probabilistic encoder and decoder. However, these methods tend to overfit the training data, which results in decreased performance [6].
Generative Adversarial Network (GAN) is another reconstruction-based approach to address the overfitting issue. This form of adversarial learning offers regularization to the reconstruction errors. An early example is MAD-GAN [12], which uses spatial-temporal correlation and other dependencies among multiple variables to capture non-linear latent interactions. TadGAN [6] is another GAN-based approach trained with cycle consistency loss to address model instability issues and allow for better reconstruction of time series data. It also proposes several methods in the post-processing stage to calculate reconstruction-based anomaly scores. Similar to prediction-based methods, our work examines post-processing steps presented by TadGAN for reconstruction-based approaches.
Zhao et al. propose MTAD-GAT, a multivariate anomaly detection model that optimizes a joint loss of forecasting- and reconstruction-based models [23]. The architecture of MTAD-GAT differs from that of AER (our work): MTAD-GAT is a graph attention network, whereas AER is built around a bidirectional LSTM network. Moreover, Zhao et al. apply additional pre-processing steps to clean the data; specifically, they apply the Spectral Residual (SR) anomaly detection method [17] to filter out anomalous regions. In this work, we limit pre-processing to data scaling, imputing, and detrending. Furthermore, our approach still operates in an unsupervised setting where there is no prior knowledge about the anomalies in the dataset and no hyperparameter tuning, preventing information leakage. Lastly, we provide analysis to understand why the combination of prediction-based and reconstruction-based anomaly scores can be beneficial in predicting point and collective anomalies.
III. ML-BASED ANOMALY DETECTION PIPELINE
Unsupervised time series anomaly detection aims to find a set of anomalous intervals given a time series with one or more channels. Ideally, each interval captures an unexpected behavior that deviates from the expected patterns in the signal. This section first formulates the anomaly detection task as a sequence of steps (Fig. 2), similar to Alnegheimish et al.'s work [1], and then critically analyzes existing methods to learn their strengths and weaknesses.
A. Pre-processing Stage
[Fig. 2: The pipeline for anomaly detection on time series data consists of pre-processing, modeling, and post-processing stages. Our work focuses on the models, anomaly scores, and smoothing function steps of the pipeline.]

The time series signal is pre-processed into inputs suitable for the models, similar to Geiger et al.'s work [6]. The time series with $m$ channels is divided into train and test splits. The train split is used to learn the parameters for the subsequent transformations. Both splits are detrended, as necessary, by fitting and subtracting a least-squares fit. Then, the values of each split are min-max normalized to the range [-1, 1]. Finally, any missing values are imputed with the mean. Let $T$ be the total number of observations in the split, without loss of generality. A rolling window with window size $w$ and step size 1 creates $T - w$ inputs $\mathbf{x}_i = \{x_i, x_{i+1}, \dots, x_{i+w-1}\}$ such that $i$ represents the index of the first observation in the window. It is worth noting that pre-processing varies based on application scenarios, and the above summary only covers the most common steps.
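A minimal sketch of this pre-processing stage follows, assuming mean imputation is done first so the least-squares detrend sees no missing values; the function names are ours, not Orion's API.

```python
import numpy as np
from scipy.signal import detrend

def preprocess(y, w=100):
    y = np.asarray(y, dtype=float).copy()
    y[np.isnan(y)] = np.nanmean(y)          # mean imputation (done first here)
    y = detrend(y)                          # subtract least-squares linear fit
    lo, hi = y.min(), y.max()
    y = 2 * (y - lo) / (hi - lo) - 1        # min-max normalize to [-1, 1]
    X = np.lib.stride_tricks.sliding_window_view(y, w)[:-1]  # T - w windows
    targets = y[w:]                         # one-step-ahead targets
    return X, targets

X, t = preprocess(np.sin(np.linspace(0, 20, 500)), w=100)
print(X.shape, t.shape)                     # (400, 100) (400,)
```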
B. Modeling Stage
The input and output depend on the type of anomaly detection model. Each input $\mathbf{x}_i \in \mathbb{R}^{w \times m}$ has $w$ observations based on the window size (default $w = 100$ for reconstruction-based models and $w = 250$ for prediction-based models) with $m$ channels. In the case of multivariate inputs, separate models are trained for each channel to ensure traceability [9]. Usually, one channel is selected as the model's target channel. For example, many-to-one prediction-based models produce single-timestep predictions $\hat{x}_{i+w} \in \mathbb{R}$ for index $i+w$ of the target channel. On the other hand, many-to-one reconstruction-based models reconstruct the entire target channel and produce a sequence $\hat{x}_{i:i+w-1} \in \mathbb{R}^{w}$ with the same starting index as the input $\mathbf{x}_i$.
C. Post-processing Stage - Computing Anomaly Scores
The computation of anomaly scores differs between prediction-based and reconstruction-based models since they produce different outputs.
Prediction-based models produce a one-step forecast in the forward direction $\hat{x}_{i+w}$ at index $i+w$ given the input $\mathbf{x}_i$ starting at index $i$. Only forecasts for indices $t \in [w+1, T]$ can be computed, since prediction-based models require at least $w$ observations to forecast the first value at index $w+1$. The absolute error between the sequence of forecasts in the forward direction $\hat{y}$ and the time series $y$ creates the prediction-based anomaly score, as defined in Eq. (1):

$$a^{pred}_t = |y_t - \hat{y}_t| \qquad (1)$$
Reconstruction-based models reconstruct a sequence of values $\hat{x}_{i:i+w-1}$ of one channel given the input $\mathbf{x}_i$ starting at index $i$. Each index $t$ in the time series signal has multiple reconstructed values, since that index occurs in multiple sequences $\hat{x}_{i:i+w-1}$. The median of the collection of reconstructed values is used as the final value $\hat{y}_t$ for index $t$, since using the median achieves better performance than using the mean [6]. Unlike prediction-based anomaly scores, reconstruction-based anomaly scores can be calculated for every index. Given the sequences $y$ and $\hat{y}$, the reconstruction-based anomaly scores can be calculated in three ways: point-wise differencing, area differencing, or dynamic time warping.

Point-wise differencing (PD). The reconstruction-based PD anomaly score, defined in Eq. (2), takes the absolute error between the time series and the reconstructed value at every index $t$:

$$a^{PD}_t = |y_t - \hat{y}_t| \qquad (2)$$

Area differencing (AD). The reconstruction-based AD anomaly score, defined in Eq. (3), is created using a fixed-length window that measures the similarity between local regions. The similarity is measured as the average difference between the areas beneath two curves of length $2l$, calculated using the trapezoidal rule ($l = 10$ by default):

$$a^{AD}_t = \frac{1}{2l} \left| \int_{t-l}^{t+l} y\, dt - \int_{t-l}^{t+l} \hat{y}\, dt \right| \qquad (3)$$

Dynamic Time Warping (DTW). The reconstruction-based DTW anomaly score, defined in Eq. (4), is created with dynamic time warping, which allows a many-to-many mapping between two sequences that are locally out of phase [2]. DTW creates a cost matrix $W \in \mathbb{R}^{2l \times 2l}$ such that each $(p, q)$ coordinate represents the distance between $y_p$ and $\hat{y}_q$. Dynamic programming solves for the optimal warp path $W^*$ with the minimum warp distance between $y$ and $\hat{y}$:

$$a^{DTW}_t = \min_{W} \frac{1}{K} \sqrt{\sum_{k=1}^{K} w_k} \qquad (4)$$

where $K$ is the length of the warp path and $w_k$ is its $k$-th element. An exponentially weighted moving average (EWMA) [10] with a smoothing window of 0.1 is applied to both prediction-based and reconstruction-based anomaly scores to reduce noise.
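To make the three reconstruction error types concrete, here is a small Python sketch of PD, AD, and DTW scoring over local windows of length $2l$. The helper names and the plain O(n^2) DTW dynamic program are ours; an optimized DTW library would normally be used.

```python
import numpy as np

def _trapz(v):
    """Trapezoidal-rule area under a sampled curve (unit spacing)."""
    return float(((v[1:] + v[:-1]) / 2).sum())

def pointwise_error(y, y_hat):
    """Eq. (2): absolute error at every index."""
    return np.abs(y - y_hat)

def area_error(y, y_hat, l=10):
    """Eq. (3): average difference of areas under local curves of length 2l."""
    scores = np.zeros(len(y))
    for t in range(len(y)):
        a, b = max(0, t - l), min(len(y), t + l)
        scores[t] = abs(_trapz(y[a:b]) - _trapz(y_hat[a:b])) / (2 * l)
    return scores

def _dtw(a, b):
    """Classic O(n*m) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_error(y, y_hat, l=10):
    """Eq. (4) in spirit: DTW distance between local regions of length 2l."""
    scores = np.zeros(len(y))
    for t in range(len(y)):
        a, b = max(0, t - l), min(len(y), t + l)
        scores[t] = _dtw(y[a:b], y_hat[a:b])
    return scores
```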
D. Post-processing Stage - Identifying Anomalous Sequences
Hundman et al. [9] used a locally adaptive thresholding function to identify anomalous intervals from the anomaly scores. This function uses a sliding window to compute local thresholds, merges continuous observations to create anomalous sequences, and mitigates false positives by pruning anomalies.
Let $a$ be the sequence of anomaly scores with a maximum length of $T$ (one score for each observation). The window size defaults to $T/3$ with a step size of $T/(3 \cdot 10)$ to optimally identify anomalies. The adaptive threshold for each sliding window is four standard deviations above the window's mean. Observations with scores that exceed that threshold are identified as anomalous, and consecutive anomalous time steps are joined together to create anomalous sequences. Hundman et al. [9] additionally employed a pruning method to reduce the number of false positives. Let $m^{(j)}$ represent the maximum anomaly score in each anomalous sequence $a^{(j)}$. The maxima are sorted in descending order, and the percentage decrease $d^{(j)}$ is calculated between $m^{(j)}$ and $m^{(j+1)}$. At the sequence $a^{(j)}$ whose percentage change $d^{(j)}$ does not exceed an empirically defined threshold (default 0.13), that sequence and all subsequent sequences are reclassified as normal, i.e., all sequences in $[j, K]$.
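A condensed sketch of this locally adaptive thresholding and pruning procedure is shown below, with the defaults quoted above (4-sigma threshold, pruning threshold 0.13); the window and step defaults are our reading of the text, and the input is assumed to be a NumPy array of scores.

```python
import numpy as np

def find_anomalies(scores, window=None, step=None, p=0.13):
    """Locally adaptive 4-sigma thresholding with pruning (after [9])."""
    T = len(scores)
    window = window or max(1, T // 3)
    step = step or max(1, T // 30)
    flagged = np.zeros(T, dtype=bool)
    for start in range(0, T, step):
        seg = scores[start:start + window]
        thresh = seg.mean() + 4 * seg.std()
        flagged[start:start + window] |= seg > thresh
    # merge consecutive flagged indices into (start, end) sequences
    seqs, s = [], None
    for i, f in enumerate(flagged):
        if f and s is None:
            s = i
        elif not f and s is not None:
            seqs.append((s, i - 1))
            s = None
    if s is not None:
        seqs.append((s, T - 1))
    # pruning: sort sequence maxima descending; once the percent decrease
    # between consecutive maxima falls below p, drop that and all later ones
    maxima = sorted(((scores[a:b + 1].max(), (a, b)) for a, b in seqs),
                    key=lambda t: t[0], reverse=True)
    kept = []
    for j, (m, seq) in enumerate(maxima):
        if j > 0 and (maxima[j - 1][0] - m) / maxima[j - 1][0] < p:
            break
        kept.append(seq)
    return kept
```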
IV. CRITICAL ANALYSIS OF EXISTING METHODS
Despite some minor differences, most prediction-based (P) and reconstruction-based (R) methods follow the same course presented in Fig. 2. Both method types generally have their successes (S) and limitations (L), which are summarized in Table I.

TABLE I: Successes (S) and limitations (L) of prediction-based (P) and reconstruction-based (R) anomaly scores.

| Type | ID | Description |
|------|-----|-------------|
| P | PL1 | High anomaly scores at the early indices often result in false-positive predictions. |
| P | PL2 | Low prediction-based anomaly scores for contextual anomalies with simple patterns result in false-negative predictions. |
| P | PL3 | Missing prediction-based anomaly scores at the early indices result in false-negative predictions. |
| P | PS1 | Prediction-based anomaly scores are better at capturing point anomalies than reconstruction-based anomaly scores. |
| R | RS1 | Reconstruction-based anomaly scores are better at capturing contextual and collective anomalies. |
| R | RL1 | Reconstruction-based anomaly scores reducing peaks for point anomalies result in false-negative predictions. |
| R | RS2 | Reconstruction-based DTW anomaly scores are better at capturing anomalies than AD and PD anomaly scores. |

[Figs. 3 and 4 show time series signals with anomaly scores for the LSTM-AE (green) and LSTM-VAE (purple) models.]
PL1: High anomaly scores at the early indices often result in false-positive predictions. This error is likely a byproduct of using the exponentially weighted moving average function to smooth the anomaly scores. The function requires at least as many observations as the size of the smoothing window before it can produce stable anomaly scores. While this limitation occurs in many signals, an example is seen in the prediction-based anomaly scores from the art_daily_flatmiddle signal (see PL1 in Fig. 3(b)).
PL2: Low prediction-based anomaly scores for contextual anomalies with simple patterns result in false-negative predictions. The cyclic pattern in prediction-based anomaly scores suggests that the models could not fully capture the structure, especially at the change point in the time series. However, in this case, the contextual anomaly is a simple pattern. Therefore, the models can easily forecast the pattern, resulting in nearly zero anomaly scores at the interval. Hence, the adaptive threshold failed to find the contextual anomaly (see PL2 in Fig. 3(b)).
PL3: Missing prediction-based anomaly scores at the early indices result in false-negative predictions (see PL3 in Fig. 4(b)). This limitation occurs only in prediction-based models, since they require at least $w$ observations to forecast the first value at index $w+1$. This behavior usually results in false-negative predictions for signals with anomalies occurring at the beginning, mainly from datasets like YAHOOA3.

PS1: Prediction-based anomaly scores are better at capturing point anomalies than reconstruction-based anomaly scores. For example, prediction-based anomaly scores showed more prominent peaks at anomalies than reconstruction-based anomaly scores for the A3Benchmark-TS11 signal from the YAHOOA3 dataset. As a result, the locally adaptive thresholding function can quickly identify anomalies using prediction-based anomaly scores, resulting in higher F1 scores for datasets like YAHOOA3 with more point anomalies (see PS1 in Fig. 4(b)).
RL1: Reconstruction-based anomaly scores reducing peaks for point anomalies result in false-negative predictions. The reconstruction-based anomaly scores are calculated from the median of all reconstructed values for index $t$. Since some reconstructed outputs are better at capturing point anomalies than others, the median value is closer to the true value at index $t$. This calculation lowers the anomaly scores such that the window-based threshold no longer captures those point anomalies, since the scores are now closer to the window's mean (see RL1 in Fig. 4(c)).
RS1: Reconstruction-based anomaly scores are better at capturing contextual and collective anomalies. For example, reconstruction-based anomaly scores from the art_daily_flatmiddle signal spiked while predictionbased anomaly scores remained close to zero at the contextual anomaly (see RS1 in Fig. 3(d)). This behavior occurs for prediction-based anomaly scores since the contextual anomaly pattern was easy to model. On the other hand, reconstruction-based models struggled to recreate the entire interval, since the model tries to reconstruct values from simple anomalous intervals and complex non-anomalous intervals. The sudden shift from an intricate cyclic pattern to a simple pattern results in high reconstruction-based anomaly scores.
RS2: Reconstruction-based DTW anomaly scores are better at capturing anomalies than AD and PD anomaly scores. Reconstruction-based anomaly scores for the A3Benchmark-TS11 signal show that reconstruction-based DTW anomaly scores are less noisy than reconstruction-based PD anomaly scores (see RS2 in Fig. 4(d)). The success of DTW scores is attributed to the method's ability to handle shifts in the alignment of two series. The ablation study by Geiger et al. [6] also reports that DTW slightly outperforms the other two reconstruction error types.
Our observations show that prediction-based and reconstruction-based anomaly scores have successes and limitations that complement one another. For example, we observe from our experiments that prediction-based anomaly scores have an easier time identifying point anomalies but produce relatively more false positives. On the other hand, reconstruction-based anomaly scores have an easier time identifying contextual and collective anomalies but produce relatively more false negatives. Therefore, our method strives to address these limitations and leverage strengths from both types of models as an alternative solution for anomaly detection in time series.
V. AER: AUTO-ENCODER WITH REGRESSION
Our solution has three components targeting the models, anomaly scores, and smoothing function steps in the anomaly detection pipeline, as summarized in Fig. 5.
A. Modeling Stage
The AER model borrows ideas from LSTM-AE and LSTM-DT to produce prediction-based and reconstruction-based anomaly scores simultaneously. The goal is to combine the strengths of both types of methods while overcoming some of their limitations.
The input to the model is $x \in \mathbb{R}^{d \times c}$ with $d$ observations and $c$ channels. Like other auto-encoder architectures, AER consists of an encoder and a decoder. While AER uses a regular encoder, the decoder reconstructs $d + 2$ instead of $d$ observations by increasing the number of units of the repeated vector layer by two. This minor change allows the model to create an output consisting of three components: the one-step reverse prediction $\hat{x}_{i-1} \in \mathbb{R}$, the reconstructed sequence $\hat{x}_{i:i+d-1} \in \mathbb{R}^{d}$, and the one-step-ahead prediction $\hat{x}_{i+d} \in \mathbb{R}$. The loss function (Eq. 5) is divided into prediction and reconstruction portions. The prediction loss is the average of the mean squared error between the pairs of true and predicted values in the reverse $(x_{i-1}, \hat{x}_{i-1})$ and forward $(x_{i+d}, \hat{x}_{i+d})$ directions. Likewise, the reconstruction loss is the mean squared error between the time series $x_{i:i+d-1}$ and the reconstructed sequence $\hat{x}_{i:i+d-1}$. The contribution of the prediction and reconstruction loss is determined by $\alpha \in [0, 1]$. The full objective function is defined as follows:

$$\mathcal{L} = \frac{\alpha}{2}\left[(x_{i-1} - \hat{x}_{i-1})^2 + (x_{i+d} - \hat{x}_{i+d})^2\right] + (1 - \alpha)\,\frac{1}{d}\sum_{t=i}^{i+d-1}(x_t - \hat{x}_t)^2 \tag{5}$$

By default, the hyperparameters are $d = 100$ observations per input and $\alpha = 0.5$ to give equal importance to the prediction and reconstruction losses. One biLSTM layer with $h = 30$ units is used for both the encoder and decoder. The latent space has the same dimension as the last hidden state of the bidirectional LSTM layer, which is $2h$.
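For concreteness, the following is a minimal Keras sketch of this architecture and objective under the stated defaults ($d = 100$, $\alpha = 0.5$, $h = 30$). Variable names and scaffolding are ours; the authors' released implementation lives in the Orion repository linked in the conclusion.

```python
# A minimal sketch of the AER architecture and loss described above.
import tensorflow as tf
from tensorflow.keras import layers, Model

d, c, h, alpha = 100, 1, 30, 0.5

inputs = tf.keras.Input(shape=(d, c))
# Encoder: one biLSTM; its final hidden state is the latent vector (size 2h).
latent = layers.Bidirectional(layers.LSTM(h))(inputs)
# Decoder: repeat the latent vector d + 2 times so the output covers the
# reverse prediction, the reconstruction, and the forward prediction.
x = layers.RepeatVector(d + 2)(latent)
x = layers.Bidirectional(layers.LSTM(h, return_sequences=True))(x)
outputs = layers.TimeDistributed(layers.Dense(c))(x)  # shape (batch, d + 2, c)
model = Model(inputs, outputs)

def aer_loss(y_true, y_pred):
    """y_true has shape (batch, d + 2, c): [x_{i-1}, x_i .. x_{i+d-1}, x_{i+d}]."""
    pred_loss = 0.5 * (
        tf.reduce_mean(tf.square(y_true[:, 0] - y_pred[:, 0]))      # reverse step
        + tf.reduce_mean(tf.square(y_true[:, -1] - y_pred[:, -1]))  # forward step
    )
    rec_loss = tf.reduce_mean(tf.square(y_true[:, 1:-1] - y_pred[:, 1:-1]))
    return alpha * pred_loss + (1 - alpha) * rec_loss

model.compile(optimizer="adam", loss=aer_loss)
```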
B. Post-processing Stage: Masking
To overcome the false-positive predictions created from the exponential weighted moving average smoothing function (PL1), we introduce masking. The proposed solution is to mask the first $m$ indices of the score sequence with some value. Our observations show that using the minimum anomaly score as the masking value produced the best results. By default, $m$ is equal to $0.01\,T$ (the size of the smoothing window), where $T$ is the time series length.
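A minimal sketch of this step, assuming `scores` holds the smoothed anomaly scores of one signal; the function name is ours:

```python
import numpy as np

def mask_scores(scores: np.ndarray) -> np.ndarray:
    m = int(0.01 * len(scores))   # mask length = smoothing-window size
    masked = scores.copy()
    masked[:m] = scores.min()     # replace early, unstable scores with the minimum
    return masked
```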
C. Post-processing Stage: Bi-Directional Scoring
Bi-directional anomaly scores target the missing start-of-sequence anomaly scores, since prediction-based methods require at least $d$ observations to make the first forecast (PL3). A solution is to produce anomaly scores using the sequence of predictions in the forward direction $\hat{x}^{f}$ and in the reverse direction $\hat{x}^{r}$. The anomaly scores created using $\hat{x}^{r}$ can fill in the missing prediction-based anomaly scores produced by $\hat{x}^{f}$. Again, let $f$ denote the function to calculate prediction-based anomaly scores. Prediction-based anomaly scores are calculated in the forward direction $f(x, \hat{x}^{f})$ for indices $i \in [d+1, T]$ and in the reverse direction $f(x, \hat{x}^{r})$ for indices $i \in [1, T-d]$. If masking is used, then the first $m$ values of $f(x, \hat{x}^{f})$ are replaced with zeros, and the first $m$ values of $f(x, \hat{x}^{r})$ are replaced with $\min\big(f(x, \hat{x}^{r})\big)$. Then, the scores $f(x, \hat{x}^{f})$ are padded with zeros in the beginning while $f(x, \hat{x}^{r})$ are padded with zeros at the end to align the anomaly scores.
The bi-directional anomaly scores defined in Eq. (6) consist of the average of both scores in the overlapping interval and the max between both scores in the non-overlapping intervals:

$$s_i = \begin{cases} \dfrac{1}{2}\Big[f(x, \hat{x}^{f})_i + f(x, \hat{x}^{r})_i\Big] & \text{if } d+1 \le i \le T-d \\[2pt] \max\Big(f(x, \hat{x}^{f})_i,\; f(x, \hat{x}^{r})_i\Big) & \text{otherwise} \end{cases} \tag{6}$$
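A minimal sketch of Eq. (6), assuming the forward and reverse score arrays have already been zero-padded to a common length $T$ as described above; variable names are ours:

```python
import numpy as np

def bidirectional_scores(fwd: np.ndarray, rev: np.ndarray, d: int) -> np.ndarray:
    scores = np.maximum(fwd, rev)        # non-overlapping ends: take the max
    overlap = slice(d, len(fwd) - d)     # indices covered by both directions
    scores[overlap] = (fwd[overlap] + rev[overlap]) / 2.0
    return scores
```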
D. Post-processing Stage: Combination Scores
The bi-directional prediction-based anomaly scores and the reconstruction-based anomaly errors can be used to create the combined anomaly scores.
1) Prediction-based Only (PRED): The combined anomaly scores are calculated using only the bi-directional prediction-based anomaly scores.
2) Reconstruction-based Only (REC): The combined anomaly scores are calculated using only the reconstruction-based anomaly scores. The calculation of reconstruction-based anomaly scores defaults to using DTW since it outperforms reconstruction-based PD and AD (RS2).
3) Convex (SUM):
The combined anomaly scores are calculated using a convex combination with a parameter weight $\beta$ that controls the two errors' relative importance (by default $\beta = 0.5$). Both prediction-based and reconstruction-based anomaly scores are min-max scaled to $[0, 1]$ before the combination (see the sketch after this list).
4) Product (MULT):
The combined anomaly scores are calculated using a point-wise product between the two scores to emphasize both scores' high values. A parameter $\gamma$ controls the relative importance of the two errors (by default $\gamma = 1$). Both prediction-based and reconstruction-based anomaly scores are min-max scaled to $[1, 2]$ before the combination.
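A minimal sketch of the SUM and MULT combinations; the helper names are ours, and placing the MULT weight $\gamma$ as an exponent is our assumption, since the text only states that a parameter with default 1 controls the relative importance:

```python
import numpy as np

def minmax(x, lo, hi):
    x = (x - x.min()) / (x.max() - x.min() + 1e-12)
    return lo + (hi - lo) * x

def combine_sum(pred, rec, beta=0.5):
    p, r = minmax(pred, 0, 1), minmax(rec, 0, 1)
    return beta * p + (1 - beta) * r       # convex combination

def combine_mult(pred, rec, gamma=1.0):
    p, r = minmax(pred, 1, 2), minmax(rec, 1, 2)
    return (p ** gamma) * r                # emphasizes jointly high scores
```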
VI. EXPERIMENTAL RESULTS
The three main points we seek to validate in our experimental study are as follows:
• RQ1: Does the AER framework enable us to discover anomalies more efficiently than we can through other approaches?
• RQ2: What is the impact of smoothing function masking and bi-directional scoring on anomaly detection?
• RQ3: Do mixture anomaly scores offer additional information compared to using either a prediction-based or reconstruction-based anomaly score on its own?
A. Data Sources

We use 12 datasets (742 signals) spanning various domains to evaluate the models' generalizability and adaptability. The National Aeronautics and Space Administration (NASA) provided two spacecraft telemetry datasets: Soil Moisture Active Passive (SMAP) and Mars Science Laboratory (MSL), acquired from a satellite and a rover, respectively [9]. Each numeric measurement in the target channel is accompanied by one-hot encoded information about commands sent or received by specific spacecraft modules in a given time window. The Yahoo Webscope Program provided the S5 datasets, consisting of one set of real production traffic to Yahoo properties (A1) and three synthetic datasets (A2, A3, A4) with varying trends, noise, and pre-specified or random seasonality. The A2 and A3 datasets only contain outliers inserted at random positions, while A4 has outliers and change points. The Numenta Anomaly Benchmark (NAB) provided several datasets from various domains: artificialWithAnomaly (Art), realAdExchange (AdEx), realAWSCloudwatch (AWS), realTraffic (Traffic), realTweets (Tweets). The UCR Time Series Anomaly Archive is a dataset created to address flaws like triviality, unrealistic anomaly density, mislabeled ground truth, and run-to-failure bias faced by popular datasets [22]. Similar to Geiger et al. [6], Table II summarizes basic information about each dataset. It differentiates between real and synthetic datasets and provides the number of signals and anomalies for each dataset. Each anomaly is classified as either point or collective, depending on the length of the anomaly. Lastly, the total number of anomalous and overall data points are provided for each dataset.
B. Evaluation Metrics
Like Hundman et al. [9] and Geiger et al. [6], the metric used in this study is the unweighted contextual F1 score for each dataset. The motivation is that anomalies are rare and window-based in many real-world application scenarios. The end user's goal is to detect timely true alarms without receiving many false positives. Hence, this evaluation metric is preferable since it prioritizes finding any part of the anomalies. Anomaly scoring is based on overlapping segments: a true positive (TP) if a known anomalous window overlaps any detected windows, a false negative (FN) if a known anomalous window does not overlap any detected windows, and a false positive (FP) if a detected window does not overlap any known anomalous region.
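To make the overlap rule concrete, here is a minimal sketch of contextual scoring over (start, end) windows; function names are ours:

```python
def overlaps(a, b):
    """True if windows a = (start, end) and b = (start, end) overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

def contextual_f1(true_windows, detected_windows):
    tp = sum(any(overlaps(t, det) for det in detected_windows) for t in true_windows)
    fn = len(true_windows) - tp
    fp = sum(not any(overlaps(det, t) for t in true_windows) for det in detected_windows)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```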
We performed all experiments in an instance of MIT Supercloud [18] with an Intel Xeon Gold 6249 processor, 10 CPU cores, 9 GB RAM per core, and 1 Nvidia Volta V100 GPU. The environment is created using the anaconda/2022a module, which includes TensorFlow 2.0. All models are implemented as primitives and benchmarked using Orion [6].
C. Baseline Models
We compare our solution against the following five state-of-the-art methods:

ARIMA (Prediction-based): Autoregressive Integrated Moving Average [15] is implemented with the StatsModels library. The hyperparameters are empirically set to p=1, d=0, q=0.

LSTM-DT (Prediction-based): the LSTM with non-parametric dynamic thresholding of Hundman et al. [9] forecasts future values with an LSTM and scores each point by its prediction error.
LSTM-AE (Reconstruction-based): LSTM auto-encoders [7] use one LSTM layer with 60 units for the encoder and decoder. A time-distributed layer with a dense one-unit layer is used to create the output.
LSTM-VAE (Reconstruction-based): LSTM variational auto-encoders [14] consist of an encoder and a decoder. The encoder uses one shared LSTM layer with 60 units and separate dense layers, each with 60 units, to create the mean and standard deviation vector. The decoder uses a repeat vector layer, an LSTM layer with 60 units, and a time-distributed layer with a dense one-unit layer.
TadGAN (Reconstruction-based): TadGAN [6] consists of an encoder and generator that use bi-directional LSTM layers, and critics that use 1D convolution layers. The reconstruction-based anomaly scores can be used in combination with the critic scores to create the final anomaly scores. Geiger et al. [6] reported an ablation study merging these scores using summation, product, critic-only, and reconstruction-only combinations.
D. Benchmarking Results
AER outperforms baseline models based on averaged F1 scores (RQ1). Table III shows that AER has an averaged F1 score of 0.683, which is 23.5% higher than the score of the standard ARIMA model. The flexibility of combining prediction-based and reconstruction-based anomaly scores leads to an improvement in F1 scores across the datasets. The graph in Fig. 6 shows the runtime of AER scales in the same order as LSTM-DT, LSTM-AE, and LSTM-VAE. While the runtime is slightly higher for AER than for those models, this is a very reasonable computation cost considering the performance increase.
AER vs. TadGAN (RQ1). Similarly, Table III shows that AER outperforms TadGAN by 24.9% in terms of averaged F1 scores while requiring less execution time (see Fig. 6). This result suggests that combining prediction-based with reconstruction-based anomaly scores could lead to better F1 scores than combining critic-based with reconstruction-based anomaly scores.
Masking improves averaged F1 scores slightly (RQ2). Table IV-A shows that masking scores improved averaged F1 scores by 4.3%, on average, for prediction-based methods and 2.6%, on average, for reconstruction-based methods. Masking anomaly scores benefited prediction-based methods more than reconstruction-based methods since those methods tend to make more false-positive predictions. However, masking may remove anomalies at the start of the signal and hurt model performance on datasets like YAHOOA3 and YAHOOA4.
Bi-directional scoring greatly improves F1 scores on some datasets (RQ2). LSTM-DT (M, Bi) consists of two separate LSTM-DT models trained on the sequence in the forward and reversed direction respectively. Table III-A shows that using bi-directional scoring with LSTM-DT (M, Bi) improved F1 scores by 20.3% for the YAHOOA3 dataset and 24.1% for the YAHOOA4 dataset compared to LSTM-DT. These datasets have signals with point anomalies at the beginning that unidirectional prediction-based models cannot predict. However, bi-directional scoring may negatively impact the performance of models on other datasets. Since prediction-based methods tend to produce false-positive predictions, filling in anomaly scores missed by prediction-based anomaly scores allows for more opportunities to produce false positives.
E. Ablation Study
The product (MULT) combination of anomaly scores has the highest averaged F1 score across all combination methods (RQ3). The product (MULT) combination of prediction-based and reconstruction-based anomaly scores produced the highest F1 scores on 6 of 12 datasets (see Table IV-B). Most of these datasets were non-synthetic, including MSL, SMAP, YAHOOA1, and Tweets. This combination method outperformed the convex (SUM) combination by 3.7%, the reconstruction-based only (REC) combination by 10.6%, and the prediction-based only (PRED) combination by 1.5% in terms of averaged F1 scores. Additionally, excluding the YAHOOA3 and YAHOOA4 synthetic datasets with many point anomalies results in an averaged F1 score of 0.658 for the product (MULT) combination, a 6.6% increase compared to 0.617 for the prediction-based only (PRED) combination. These results support the idea that mixture anomaly scores offer more information than reconstruction-based anomaly scores in general and prediction-based anomaly scores in cases other than identifying point anomalies.
Prediction-based only (PRED) anomaly scores perform better on datasets with mostly point anomalies. Bi-directional scoring produced the highest F1 scores on datasets like YAHOOA3 and YAHOOA4 with mostly point anomalies (see Table IV-B). This finding is consistent with our findings in the LSTM-DT (M, Bi) model.
The selection of the combination method for each dataset is based on the use case. We recommend that users default to using product (MULT) anomaly scores and use prediction-based only (PRED) scores when they primarily want to identify point anomalies. The AER model reports the F1 scores of AER (PRED) for the YAHOOA3 and YAHOOA4 datasets with mostly point anomalies and AER (MULT) for the other datasets, even though they might not be the best combination method according to the ablation study. In practice, datasets come without labels since anomaly detection is an unsupervised problem. Hence, it is impossible to retroactively tune the best method to calculate anomaly scores for each dataset.
F. Limitations and Discussion
While product mixture scores offer unique insights for anomaly detection, several ways exist to improve the AER framework. For example, the model architecture could be improved, since our study uses a vanilla auto-encoder architecture with one biLSTM layer for both the encoder and decoder. Our framework is designed to easily extend to any reconstruction-based method with minimal changes to the objective function. Another improvement involves experimenting with $\alpha$ (defaults to 0.5), which controls the contribution of the prediction and reconstruction losses to the objective function. An optimal $\alpha$ could lead to more accurate prediction-based and reconstruction-based anomaly scores that ultimately improve F1 scores. Lastly, the findings in our analysis of existing methods in section IV are for the datasets we are currently investigating. The identified constraints may not always hold in other datasets.
Although researchers pay increasing attention to building more powerful models to improve the accuracy of predictionbased and reconstruction-based methods, we would like to call for more attention to the post-processing stage. Our study demonstrated that changes in the post-processing stage could significantly improve performances in addition to our proposed model. Future exploration directions could include additional methods to create mixture scores and better heuristics for the selection of such methods (e.g., between PRED and MULT) for each signal.
VII. CONCLUSION
This study analyzed the successes and limitations of existing reconstruction-based and prediction-based methods. We proposed a threefold solution to address existing limitations: (1) the AER framework that leverages the successes of prediction-based and reconstruction-based methods, (2) masking anomaly scores to reduce start-of-sequence false-positive predictions, and (3) bi-directional scoring to address missing forecast issues. In addition, we conducted an ablation study to test several ways of combining prediction-based and reconstruction-based anomaly scores. Our results showed that (1) AER has the highest F1 score averaged across 12 datasets, (2) masking and bi-directional scoring improve F1 scores given the right conditions, (3) the product combination (MULT) of bi-directional and reconstruction-based anomaly scores produces better results, on average, for datasets with mostly collective anomalies. Finally, the code is available at https://github.com/sintel-dev/Orion.
The cost-effectiveness of early noninvasive ventilation for ALS patients
Background

Optimal timing of noninvasive positive pressure ventilation (NIPPV) initiation in patients with amyotrophic lateral sclerosis (ALS) is unknown, but NIPPV appears to benefit ALS patients who are symptomatic from pulmonary insufficiency. This has prompted research proposals of earlier NIPPV initiation in the ALS disease course in an attempt to further improve ALS patient quality of life and perhaps survival. We therefore used a cost-utility analysis to determine a priori what magnitude of health-related quality of life (HRQL) improvement early NIPPV initiation would need to achieve to be cost-effective in a future clinical trial.

Methods

Using a Markov decision analytic model we calculated the benefit in health-state utility that NIPPV initiated at ALS diagnosis must achieve to be cost-effective. The primary outcome was the percent utility gained through NIPPV in relation to two common willingness-to-pay thresholds: $50,000 and $100,000 per quality-adjusted life year (QALY).

Results

Our results indicate that if NIPPV begun at the time of diagnosis improves ALS patient HRQL as little as 13.5%, it would be a cost-effective treatment. Tolerance of NIPPV (assuming a 20% improvement in HRQL) would only need to exceed 18% in our model for treatment to remain cost-effective using a conservative willingness-to-pay threshold of $50,000 per QALY.

Conclusion

If early use of NIPPV in ALS patients is shown to improve HRQL in future studies, it is likely to be a cost-effective treatment. Clinical trials of NIPPV begun at the time of ALS diagnosis are therefore warranted from a cost-effectiveness standpoint.
Background
Respiratory failure is the most common cause of ALS patient death [1]. Prior to respiratory failure, respiratory muscle weakness can be measured by standard pulmonary function tests including forced vital capacity (FVC) [2]. Treatment of ALS patients with noninvasive positive pressure ventilation (NIPPV) when FVC is less than 50% appears to improve ALS patient survival [3,4] and quality of life [5][6][7]. Improved survival of ALS patients with NIPPV may be explained by a slower rate of pulmonary function decline [4,5].
NIPPV initiated early in the ALS disease course may offload respiratory muscle work and thereby attenuate the progressive decrease in pulmonary compliance seen in ALS [8]. This treatment may also improve quality of life as ALS patients early in their course may experience non-specific symptoms of fatigue and lethargy, related to subtle respiratory muscle weakness, which goes unrecognized or is attributed to impaired mobility [2]. Whether initiation of NIPPV at diagnosis, when FVC is typically reduced but greater than 50%, slows the rate of pulmonary function decline and improves quality of life and survival remains to be studied in a clinical trial.
Paralleling proposed studies of feasibility and effectiveness of early NIPPV [9] is a need to determine what magnitude of health-related quality of life (HRQL) improvement this proposed treatment needs to achieve to be cost-effective. Quality of life improvement is an essential aspect of ALS treatment as curative treatments are not available [10]. The possible improvement in HRQL from early NIPPV treatment in ALS patients can be analyzed in conjunction with the costs of early NIPPV with a cost-utility analysis by using quality-adjusted life years (QALY) as a measure of effectiveness. Cost-utility analyses traditionally are used to determine which proven therapies are cost-effective by determining a treatment's incremental cost-effectiveness, that is, the cost per QALY gained relative to alternative treatments. In the "traditional" cost-utility analysis, effectiveness has been proven, and an estimate of benefit in health-state utility is already known. As society's willingness-to-pay costs per QALY have been reported, [11] the incremental cost-effectiveness can be compared to this standard to determine whether a newly proposed treatment is cost-effective. We applied this same process to determine a priori, how much benefit early NIPPV treatment in ALS patients would need to provide for this treatment to be cost-effective. We reasoned that should the degree of improvement determined in this analysis seem plausible, future clinical trials testing early NIPPV would be warranted from an economic perspective. If, on the other hand, the analysis showed that an impractical degree of improvement would be necessary for the treatment to be cost-effective, future clinical trials of early NIPPV for ALS would be less worthwhile.
Methods
We calculated the benefit in health-state utility that early NIPPV treatment of ALS patients must achieve to be costeffective. The primary outcome was the percent utility gained through NIPPV in relation to two common willingness-to-pay thresholds: $50,000 and $100,000 per QALY [11].
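For reference, the incremental cost-effectiveness ratio (ICER) compared against these thresholds takes the standard form

$$\mathrm{ICER} = \frac{C_{\text{NIPPV}} - C_{\text{no NIPPV}}}{E_{\text{NIPPV}} - E_{\text{no NIPPV}}},$$

with costs in US dollars and effectiveness measured in QALYs.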
Model
A decision tree modeled two alternative strategies: NIPPV starting at the time of diagnosis versus no NIPPV at the time of diagnosis, for a hypothetical cohort of patients with a recent diagnosis of ALS. Eighty percent of ALS patients have some evidence of respiratory muscle weakness at the time of initial diagnosis [12], while half of patients demonstrate a reduction in FVC to less than 80% (approximately two standard deviations below the normal range) at initial presentation [13]. It was assumed that if early NIPPV is effective in preventing respiratory insufficiency, it should therefore be started at the time of diagnosis.
Patients were allowed to shift through disease states (mild, moderate, severe, terminal, or death) through Markov processes. The probabilities of patients progressing through these disease states over time were obtained from the literature [14]. All patients were modeled to begin in the mild stage, given their recent diagnosis. The Markov models used the average amount of time patients spend in each disease state, the probability of transitioning into a more severe stage of ALS, along with the utility associated with the time spent in each health state, to estimate the clinical and economic disease events over time. Per practice guidelines recommending the initiation of NIPPV based on an FVC < 50%, it was assumed that both groups would be treated with NIPPV when these criteria were met, and thus the analysis modeled only until this point. The time horizon used was 1 year as this is the average time period between diagnosis and meeting the NIPPV treatment criteria [3]. The reference case used a benefit in health-state utility of 20% in the NIPPV group compared with the non-NIPPV group. This is similar to the improvement in patient QOL demonstrated for NIPPV treatment in those who had respiratory muscle weakness, hypoventilation, or sleep-disordered breathing [7]. The improvement in health-state utility associated with NIPPV use was allowed to vary in sensitivity analysis, where one variable is allowed to vary over a plausible range. One-way sensitivity analyses were conducted for each variable across the ranges of values found in Table 1.
To account for patients entering the model at varying rates of disease progression, the time horizon was adjusted. The time horizon was varied between 6 months and 2 years, in a one-way sensitivity analysis. As variations in FVC at entry may relate to ALS disease stage at entry, we also conducted a one-way sensitivity analysis on the probability of entering the model in the mild stage, as opposed to the moderate stage. Given that all patients are assumed to have recently been diagnosed with ALS and have an FVC > 50%, it was assumed that no one would enter the model in a severe or terminal state. The decision tree was analyzed by Data 4.0 (TreeAge Inc, Williamstown, MA).
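To illustrate the mechanics of such a Markov cohort model, the following is a minimal Python sketch. The transition matrix and utilities below are placeholders, not the literature-derived values used in the study; only the $1,773 NIPPV cost and the 20% HRQL benefit come from the text.

```python
# A minimal sketch of a Markov cohort model computing QALYs and an ICER.
import numpy as np

states = ["mild", "moderate", "severe", "terminal", "dead"]
P = np.array([  # hypothetical monthly transition matrix (rows sum to 1)
    [0.85, 0.12, 0.02, 0.01, 0.00],
    [0.00, 0.80, 0.15, 0.04, 0.01],
    [0.00, 0.00, 0.80, 0.15, 0.05],
    [0.00, 0.00, 0.00, 0.85, 0.15],
    [0.00, 0.00, 0.00, 0.00, 1.00],
])
utility = np.array([0.60, 0.45, 0.30, 0.15, 0.00])  # hypothetical state utilities

def qalys(months: int, hrql_gain: float = 0.0) -> float:
    """Accumulate QALYs over the horizon; both arms share transitions,
    and the NIPPV arm multiplies each state utility by (1 + hrql_gain)."""
    cohort = np.array([1.0, 0, 0, 0, 0])  # everyone starts in "mild"
    total = 0.0
    for _ in range(months):
        total += cohort @ (utility * (1 + hrql_gain)) / 12.0  # monthly QALYs
        cohort = cohort @ P
    return total

# ICER over a 12-month horizon with a 20% HRQL benefit and $1,773 NIPPV cost:
icer = 1773.0 / (qalys(12, 0.20) - qalys(12, 0.0))
```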
Utilities
Assessing health-state utilities in a patient population allows assignment of a numerical value to patient reports of HRQL or health state "utility" for different stages of disease. Health utilities were determined by assessment of patient's health state at each level of disease by a preference-based method and have been reported previously in control arms of large clinical trials [15]. These measurements were aggregated across individuals to determine utility scores for each health state, ranging from death (0), to perfect health (1). Utilities for each ALS stage, measured by the EuroQol EQ-5D visual analogue scale, were obtained from the literature [15].
Costs
Costs were estimated from the Medicare fee schedule for 2004 (in US dollars) for NIPPV and NIPPV accessories. Medicare reimbursement was selected given the societal perspective [16]. Costs of one month of NIPPV rental and accessory costs were included for those intolerant to NIPPV. No discounting of costs or utilities was needed given the time horizon. Other costs related to ALS patient care were considered equal in both treatment groups given the identical probabilities of transitioning through health states and were therefore not entered into the model.
Results
The average patient receiving NIPPV experienced 0.59 QALYs at a cost of $1,773; a patient not receiving NIPPV experienced 0.54 QALYs at a cost of $0, resulting in an incremental cost-effectiveness ratio of $33,801. Sensitivity analysis performed on the utilities of ALS states demonstrated NIPPV has an incremental cost-effectiveness ratio lower than $50,000 as long as the utility for ALS patients receiving NIPPV is at least 13.5% higher at each stage than those without NIPPV, meaning that early NIPPV is cost-effective as long as the treatment of ALS patients with NIPPV beginning at the time of diagnosis improves HRQL by at least 13.5%. For a willingness-to-pay threshold of $100,000 per QALY, the increase in HRQL with NIPPV would only need to be 6.8% or greater to be cost-effective.
The cost-effectiveness did not exceed the $50,000 willingness-to-pay threshold in any of the cost, transition probability, tolerance, or utility sensitivity analyses, meaning that alterations of each variable across a plausible range (Table 1) did not cause the incremental cost-effectiveness of NIPPV to exceed $50,000 per QALY. Altering the tolerance of NIPPV below 18%, however, would cause the cost-effectiveness to exceed $50,000 per QALY. No alteration of the probability of entering the model in the mild disease stage caused the incremental cost-effectiveness of NIPPV to exceed the $50,000 per QALY threshold. Shorter time horizons were associated with a lower cost-effectiveness ratio. A time horizon of 6 months was associated with an incremental cost-effectiveness of $76,909, while an 8 month time horizon was associated with incremental cost-effectiveness of $53,001. Time horizons of 10 months or above were associated with an incremental cost-effectiveness less than $50,000.
Discussion
The benefit of early NIPPV use in ALS patients has not yet been studied. However, our cost-effectiveness model suggests that NIPPV begun at the time of diagnosis would be cost-effective if NIPPV were shown to improve HRQL by just 7-14%.
The 7-14% range of HRQL improvement that would be necessary in our model for early NIPPV to be cost-effective may be an overestimate. The $50,000 per QALY threshold for assessing cost-effectiveness is quite conservative. Given more recent estimates of the appropriate cost-effectiveness threshold [11], the improvement that would be necessary for NIPPV to be cost-effective is likely less than 7%. The possibility that NIPPV could slow the transition from less severe to more severe disease states was not taken into account in the model. Should early NIPPV be demonstrated to slow the progression of ALS [9], it would be even more cost-effective than this model suggests.
Tolerability of NIPPV by ALS patients with early disease is unknown. Tolerance of NIPPV (assuming a 20% improvement in HRQL) would only need to exceed 18% in our model for treatment to remain cost-effective using a conservative willingness-to-pay threshold of $50,000 per QALY. This threshold is well below the NIPPV compliance seen in other studies [3]; the base case used a much more conservative estimate of 49% tolerance [3].
The current analysis was limited by the validity of the estimates used in the model. Tolerance of NIPPV administered early in the course of ALS is unknown, but this value was allowed to vary in sensitivity analysis. Utility values were ascertained from estimates in the literature, but previous studies on this topic are limited. The utility values were also allowed to vary in sensitivity analysis. In our model, early NIPPV remained cost-effective in all of the sensitivity analyses, supporting the robustness of the model.
Conclusion
If early use of NIPPV in ALS patients is shown to improve HRQL in future studies, it is likely to be a cost-effective treatment. Further trials of early NIPPV initiation in ALS patients are warranted, and supported from a cost-effectiveness perspective.
An Improved Infrared and Visible Image Fusion Algorithm Based on Curvelet Transform
The fusion of infrared and visible images combines complementary information, allowing a scene to be described more completely, which is helpful for tasks such as target detection, target localization and environment recognition. In this paper, we decompose infrared images and grayscale visible images with the Second Generation Curvelet Transform (SGCT) and propose a new image fusion algorithm. The algorithm applies different fusion rules to the components of the multi-resolution decomposition. The simulation results show that, compared with existing algorithms, the proposed algorithm improves the objective evaluation metrics of the fused images to some extent.
INTRODUCTION
Wavelet Transform (WT) offers localized analysis in both the time and frequency domains, and its optimal approximation of one-dimensional bounded functions makes it an important tool for analyzing and processing one-dimensional non-stationary signals [1]. However, WT cannot make full use of the geometric regularity of an image itself. To overcome this deficiency in image analysis, E. J. Candès and D. L. Donoho [2] proposed the Curvelet transform. The Curvelet transform is anisotropic, but its mathematical realization is complex. In view of this, Candès and Donoho proposed a simpler, more easily understood Curvelet theory in 2004, known as the Second Generation Curvelet Transform (SGCT) [3][4]. As a new multi-resolution analysis (MRA) tool [5][6][7], the Curvelet transform is, compared with WT, better suited to portraying geometric image features such as curves and lines: it uses a "wedge base" to approximate $C^2$ singularities, fully considers the geometry of the singular points, and provides directionality at any angle (anisotropy), making it well suited to image processing applications. In addition, the Curvelet transform represents the geometric characteristics of images (curves, straight lines) more sparsely: they can be expressed through a small number of large Curvelet coefficients, and the energy is more concentrated after the transformation, which is of great importance for image feature extraction and analysis.
II. INFRARED AND VISIBLE IMAGE FUSION ALGORITHM BASED ON THE SECOND GENERATION CURVELET TRANSFORM (SGCT)
The characteristics of infrared images are low spatial resolution, serious mixing phenomena, and easy loss of high-frequency detail, while visible images have the opposite characteristics apart from containing less low-frequency information [8-9]. Thus, the fusion of infrared and visible images has been widely applied to obtain complementary image information, with which we can better describe a scene and complete tasks such as target detection and localization, and environment identification [10-11].
A. Steps and framework
In this paper, we decompose the infrared and gray visible images with SGCT and propose an image fusion algorithm based on a significance level and regional matching rule. The steps are as follows:

1. Convert infrared image A and visible image B with SGCT respectively, then obtain their low frequency and high frequency components;
2. Process the above low and high frequency components with their own fusion strategies to obtain the fused components;
3. Process the fused components with the SGCT inverse transform to obtain the final fusion image.

The specific framework of the fusion algorithm is shown in Fig. 1.

B. Realization process of fusion algorithm

The image fusion rule directly determines the ultimate fusion effect; it is the core of a fusion algorithm and also a key issue not yet solved effectively.

1) Directional contrast.
The concepts of directional contrast and weighted absolute value were proposed in [12] and [13] to measure the visual brightness of images at different scales and directions. In SGCT, the image can be decomposed into low and high frequency sub-bands, and the directional contrast [13] is defined in Eq. (1), where X = (A, B).
2) Weighted activity. Weighted activity can be used to measure the information quality of a pixel and represents how much information it contains. Research based on the human visual system points out that the human eye is sensitive to areas of drastic change, such as edge and contour information; these areas usually contain more detail than smoother areas. Weighted activity is defined in Eq. (2) and is calculated in an S×T smoothing window.
3) Significance level measure. The significance level measure reflects salient features of the multiresolution coefficients of the source images. Taking the directional contrast and weighted activity defined above as the significance measure, the measure at the highest SGCT decomposition level is defined in Eq. (3), and for the other decomposition levels in Eq. (4), where X = (A, B) and j = 1, 2, ..., L−1, with L the number of decomposition levels. A matching measure, defined in Eq. (5), represents the similarity of the multiresolution coefficients between the two source images; a high matching measure indicates more similarity between the source images.

4) Fusion algorithm. The fusion rule based on the weighted average method is given in Eq. (6), where the terms denote the coefficient values of the two source images at point (x, y) and the fused value at the same point. According to the fusion rule, the fused components are processed with the inverse SGCT to obtain the fusion image F.
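To illustrate the overall structure of such a rule set, the sketch below uses hypothetical `sgct`/`isgct` transform functions (no standard Python Curvelet API is implied) and substitutes a simple absolute-value selection for the significance and matching measures of Eqs. (3)-(5):

```python
# A structural sketch of the fusion rule: weighted average for the low-frequency
# component (Eq. 6) and per-coefficient selection for the high-frequency bands.
import numpy as np

def fuse(img_a, img_b, sgct, isgct, levels=5):
    low_a, highs_a = sgct(img_a, levels)   # hypothetical forward transform
    low_b, highs_b = sgct(img_b, levels)
    low_f = 0.5 * (low_a + low_b)          # weighted-average low-frequency rule
    highs_f = [np.where(np.abs(ha) >= np.abs(hb), ha, hb)   # keep the more
               for ha, hb in zip(highs_a, highs_b)]         # salient coefficient
    return isgct(low_f, highs_f)           # hypothetical inverse transform
```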
III. SIMULATION EXPERIMENTS AND RESULTS ANALYSIS
The experiment selects two groups of infrared and visible light images for simulation, which are used for comparison to verify the effectiveness and correctness of the proposed algorithm. The first group of source images are infrared and visible light surveillance images shot at the same scene. In the infrared image (Figure 2(a)), a standing person can be clearly distinguished; compared with the hot objects in the scene, the other objects are low-contrast, fuzzy and difficult to distinguish. In the visible light image of the same scene (Figure 2(b)), the roads, fences, roof and chimney have high contrast, but because of the dim light the person cannot be seen. To verify the validity and correctness of the proposed fusion algorithm, the experiments compare it with three typical fusion algorithms: the first is an LPT-based image fusion algorithm with a 5-level decomposition, fusing the top LPT sub-band coefficients with a weighted average; the second is a DWT-based fusion algorithm using the "DB4" wavelet filters with a 5-level decomposition; the third is a simple SGCT-based fusion algorithm (SGCT-simple) with a 5-level decomposition. To enable comparability between the fusion algorithms, the second and third algorithms fuse the low-frequency coefficients with a weighted average and the high-frequency coefficients with a regional-energy fusion rule. Figure 4(a), (b) gives the first set of infrared and visible light images to be fused, and (c)-(f) show the various fusion results.
From the visual point of view, in Figure 4 the proposed fusion algorithm combines the hot-target and location characteristics of the infrared image with the background information of the visible light image. Regarding observation of the target, as shown in the red-line-marked region in Figure 4, comparison shows that Figure 4(d) has an obvious "block" effect and the person's edges are blurred into virtual shadows, owing to DWT's lack of translation invariance, which produces many irregular waves and edges in the reconstructed image. By comparison, thanks to SGCT's excellent directional characteristics and its better ability to describe the edge details of image features, the "block" effect and the virtual shadows are eliminated in Figures 4(e) and (f).
To evaluate the performance of the different fusion algorithms on infrared and visible image fusion, that is, how fully they integrate the infrared target characteristics and the background edge details of the source images, this paper uses entropy (H), correlation coefficient (CC), average gradient (G) and spatial frequency (SF) as objective evaluation criteria. The results are shown in Table 1.
From the evaluation of the fusion results it can be seen that, compared with the other algorithms in the simulation experiments, the proposed algorithm improves every performance index to some degree. Its entropy and correlation coefficient are higher than those of the other three algorithms, showing that the proposed fusion algorithm retains more important feature information from the original images. Its average gradient and spatial frequency are significantly higher than those of the other algorithms, showing that the algorithm keeps the feature information of the visible image and the target information of the infrared image while effectively preserving the edge details of the source images, which agrees with the subjective visual comparison.
IV. CONCLUSION
Image fusion based on multi-resolution analysis rests on three elements: the multi-resolution analysis algorithm is the basis of image fusion, the fusion rule is its core, and the evaluation of the fusion effect is its key. In view of this, this paper introduces the second generation Curvelet transform as the multi-resolution tool for image fusion, combines it with human visual characteristics and the prior imaging characteristics of the multiple sensors, and proposes a new algorithm, which is applied to infrared and gray-level image fusion. The simulation experiments verify the effectiveness of the algorithm.
Figure 1. The framework of the infrared and gray visible image fusion algorithm based on SGCT.

Figure 2. The source images: (a) infrared image; (b) visible image.

After SGCT decomposition, the sums over all scales of the infrared, visible and fused images are shown in Fig. 3(a), (b) and (c), together with the corresponding edge-detection results. This analysis shows that SGCT gives a sparser representation of the geometric features of the source images, such as curves and lines: they can be expressed through a few large Curvelet coefficients, and the energy is more concentrated after the transformation, which is of great significance for the extraction and analysis of image characteristics. The fusion algorithm based on SGCT and the significance measure combines the features of the infrared and visible images, effectively retaining the background information of the ground objects in the scene and the hot-target information, with clearly distinguishable edges.

Figure 3. The sum of all scales results and detected areas: (a) sum of infrared all scales; (b) sum of visible all scales; (c) sum of fusion all scales; (d) edge area of infrared; (e) edge area of visible; (f) edge area of fusion result.

Table 1. Evaluation of the fusion algorithms.
Factors associated with suicidal ideation and suicidal attempts among adolescent students in Nepal: Findings from Global School-based Students Health Survey
Introduction

Suicide has been recognized as a major public health problem with a high burden in low and middle income countries. Suicide causes long-lasting psychological trauma to friends and relatives in addition to loss of economic productivity. Although high quality evidence is essential for designing suicide prevention programs, Nepal lacks reliable evidence from nationally representative data. This study aimed to estimate the prevalence of suicidal ideation and attempt among adolescent students and identify the factors associated with them.

Materials and methods

A total of 6,531 students of grade 7 to 11 from 74 schools representing all three ecological belts and five development regions participated in this cross sectional study. To select a representative sample from the study population, a two stage cluster sampling method was used. Standardized self-administered questionnaires were completed by participants. Multivariable logistic regression was done to identify the factors associated with suicidal ideation and attempt.

Results

Nearly 13.59% of the participants had considered suicide while 10.33% had attempted it. Food insecurity (OR = 2.32, CI = 1.62–3.32), anxiety (OR = 2.54, CI = 1.49–4.30), loneliness (OR = 2.51, CI = 1.44–4.36) and gender (OR = 1.39, CI = 1.03–1.89) were identified as risk factors of suicidal ideation. Anxiety (OR = 3.02, CI = 1.18–7.74), loneliness (OR = 2.19, CI = 1.28–3.73), truancy (OR = 1.99, CI = 1.40–2.82), cigarette use (OR = 3.13, CI = 1.36–7.23) and gender (OR = 1.60, CI = 1.07–2.39) were identified as risk factors of suicidal attempt. Having 3 or more close friends was found to have a protective effect (OR = 0.35, CI = 0.16–0.75) against suicidal attempt.

Conclusion

The study reveals a relatively high prevalence of suicidal ideation and suicidal attempt among school-going adolescents in Nepal. Appropriate coping strategies for factors like anxiety and loneliness could be useful for preventing both suicidal ideation and attempt.
Introduction
Suicide is one of the major causes of death and disability worldwide. A person dies by suicide every 40 seconds, with almost 800,000 deaths annually [1]. The estimated age-standardized rate for suicide was 10.67 per 100,000 population in 2015 [2]. Suicide is the second leading cause of death among 15-29-year-olds [3]. It largely affects low and middle-income countries, which account for almost 78% of all suicide deaths globally [3]. The suicide rate in Nepal is 7.2 per 100,000 population overall, 8.2 in males and 6.3 in females [2]. Suicide in adolescence is often underreported, with the cause of death classified as undetermined or accidental to protect families from the stigma associated with it [4].
Because of the sensitivity of the issue, suicide has received attention in the global public health arena in recent years. In addition to the loss of life and economic productivity for society, there is long-lasting psychological trauma for friends and relatives. Prevention of suicide is the best option, given that most suicide cases neither receive treatment nor can be treated [5]. Target 3.2 of the Mental Health Action Plan 2013-2020 envisions reducing the suicide rate in WHO member countries by 10% by 2020, which requires a clear understanding of the factors leading to suicide [6].
Although epidemiological evidence is essential for designing interventions, suicidal behavior has a large number of underlying causes that are complex to understand and differ from one country to another, making preventive efforts more complex and diverse [7,8]. Suicidal ideation and suicidal attempt share common risk factors such as hopelessness, social isolation, anxiety, depression, impulsivity and substance abuse, among others. Several other adverse stimuli are also likely to be associated with the development of suicidal ideation and the progression from ideation to attempts [9]. Evidence suggests that adolescents of both genders who have suicidal ideation and attempts are significantly more likely to commit suicide than those without such ideation and attempts [10].
Delivery of mental health services, including treatment of suicidal attempts, is limited especially in urban areas and is further constrained by limited human and financial resources. The Government of Nepal spends less than 1% of the total healthcare budget on mental health [11]. Thus, suicide prevention programs are crucial in the context of Nepal. Suicide prevention activities need to be tailored to the context of the country and require a deeper understanding of the determinants of suicide. However, Nepal lacks reliable and representative data for multiple reasons, such as a poor registration system, mis-categorization of suicide cases by hospitals, and underreporting of suicide incidents in police data because of the stigma attached to suicide [11]. Although there are some data from hospital-based studies confined to specific settings and from small-scale cross-sectional studies, Nepal lacks large-scale nationwide studies to guide the policy-making process [12,13]. Identification of the factors associated with suicidal ideation and attempt among adolescent students can help in designing appropriate interventions and reducing the burden of the condition [14]. In this context, this study was designed to assess the determinants of suicidal ideation and attempts among adolescents of Nepal.
Materials and methods
Data used in this study were obtained from the Global School-based Students Health Survey (GSHS) 2015, a nationwide study using a globally standardized methodology. The research was approved by the Ethical Review Board of the Nepal Health Research Council. Prior to students' participation in the survey, administrative permission from the respective schools and written informed consent from students and their parents were obtained, ensuring voluntary participation, privacy and confidentiality. Students were also informed beforehand about the objectives of the study, its potential risks and benefits, and the confidentiality and privacy of the information provided.
Two-stage cluster sampling was used in this study to select a representative sample from the study population, which comprised students of grades 7 to 11. In the first stage, 74 schools were selected with probability proportional to school enrolment size from the list of 20,304 schools containing any of grades 7 to 11. In the second stage, a sampling frame comprising the list of classrooms in each selected school was prepared, and intact classrooms were selected based on predetermined random numbers. Each student in the selected classrooms was eligible to participate in the study. The probabilities of selection of each participant and the non-response rate were adjusted by applying appropriate weighting factors. Out of the 74 schools selected for the study, 68 (92%) participated. Four schools were closed even on the third visit, one school could not be reached because of road disruption due to a flood, and one refused to participate.
All students in the selected classes were told about the study and were provided with an information paper about the GSHS and a consent form to be signed by parents on the first day; they were requested to share the information sheet with their parents before parental consent for participation was obtained. The following day, the questionnaire was administered to students who had parental consent and were willing to participate in the study. Of the 8,670 students selected for the study, 6,531 participated, a response rate of 75%. The age of the participants ranged from 11 to 18 years. Of the completed questionnaires received back from students, 6,529 were usable.
Data were collected through self-administration of the standardized Nepali version of the questionnaire. The GSHS questionnaire used for this study contained a total of 91 questions, with 58 core and 33 expanded questions covering the demographics of the students, dietary behaviors, hygiene, violence and unintentional injury, tobacco use, mental health, alcohol and drug use, sexual behaviors and physical activity. The variables used in this study, the questions used and their coding schemes are presented in Table 1.
Data analysis was performed using STATA software version 15.0 (Stata Corporation, College Station, TX, USA). For all analyses, complex survey analysis was carried out. Weighted percentages are reported in the descriptive analysis. The chi-square test and odds ratios (OR) were used in bivariate analysis to assess the association between suicidal ideation and attempt and the independent variables. Multivariable analysis was used to evaluate the effect of the explanatory variables on suicidal ideation and attempt in the past 12 months (binary dependent variables) after adjusting for probable confounding variables. Two-sided 95% confidence intervals are reported in the results. The variable parental intrusion of privacy was omitted from the multivariable logistic regression model because of multicollinearity.
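As an illustration of this kind of weighted multivariable logistic regression, here is a minimal Python sketch (the study used Stata). The file and column names are hypothetical, and the cluster design is ignored here, so design-based standard errors from svy-style estimators would differ.

```python
# A minimal sketch of weighted logistic regression producing ORs and 95% CIs.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("gshs_nepal.csv")  # hypothetical data file
X = sm.add_constant(df[["food_insecurity", "anxiety", "loneliness", "female"]])
fit = sm.GLM(df["suicidal_ideation"], X,
             family=sm.families.Binomial(),
             freq_weights=df["weight"]).fit()   # survey weights

odds_ratios = np.exp(fit.params)   # exponentiated coefficients = odds ratios
ci = np.exp(fit.conf_int())        # 95% confidence intervals on the OR scale
```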
Results
Around 4.57% of research participants had faced food insecurity. Similarly, 4.36% had anxiety and 6.27% had felt lonely. Almost two thirds (65.77%) of the research participants had at least 3 friends. Slightly more than half (50.72%) had experienced bullying in school, 40.09% had experienced physical attack and 39.15% had been involved in physical fighting. Similarly, 6.17% had been using tobacco products and 8.58% had already initiated drug use. Around 13.59% had considered suicide while 10.33% had attempted suicide. Around 23.72% of research participants were aged 16 years or older (Table 2).
Table 1 (continued). Variables, questions and coding:

Close friends: How many close friends do you have? (0 = 0 close friends, 1 = 1 close friend, 2 = 2 close friends, 3 = 3 or more close friends)
Truancy: During the past 30 days, on how many days did you miss classes or school without permission? (0 = 0 to 2 times, 1 = 3 or more times)
Bullied: During the past 30 days, on how many days were you bullied? (0 = 0 times, 1 = 1 or more times)
Physically attacked: During the past 12 months, how many times were you physically attacked? (0 = 0 times, 1 = 1 or more times)
Physical fighting: During the past 12 months, how many times were you in a physical fight? (0 = 0 times, 1 = 1 or more times)
Current cigarette use: During the past 30 days, on how many days did you smoke cigarettes? (0 = 0 times, 1 = 1 or more times)
Initiation of drug use: How old were you when you first used drugs? (0 = I have never used drugs, 1 = any other response)
Really drunk: During your life, how many times did you drink so much alcohol that you were really drunk? (0 = 0 times, 1 = 1 or more times)
Parents check homework: During the past 30 days, how often did your parents or guardians check to see if your homework was done? (1 = most of the time/always, 0 = never/rarely/sometimes)
Parents understand problems: During the past 30 days, how often did your parents or guardians understand your problems and worries?

Adolescents facing food insecurity had more than twofold higher odds (OR = 2.32, CI = 1.62-3.32) of suicidal ideation compared to those who were food secure. Similarly, adolescents with anxiety had higher odds (OR = 2.54, CI = 1.49-4.30) of suicidal ideation than their counterparts, as did children who felt lonely (OR = 2.51, CI = 1.44-4.36). Children who had initiated drug use had 1.6-fold higher odds (OR = 1.60, CI = 1.14-2.23) of suicidal ideation. Females had higher odds (OR = 1.39, CI = 1.03-1.89) of suicidal ideation compared to males.
Other variables like having parental support, having 3 or more close friends, truancy, physical fighting, current cigarette use, getting really drunk, and having parents who understand problems and check homework were not found to have a statistically significant association with suicidal ideation (Table 3).

Adolescents with anxiety were about three times as likely (OR = 3.02, CI = 1.18-7.74) to attempt suicide. Those who felt lonely most of the time were about twice as likely (OR = 2.19, CI = 1.28-3.73) to attempt suicide as their counterparts. Having 3 or more close friends had a protective effect (OR = 0.35, CI = 0.16-0.75) against suicidal attempts, although having one or two close friends did not differ significantly from having none. Truancy nearly doubled the risk of suicidal attempt (OR = 1.99, CI = 1.40-2.82). Similarly, current cigarette users were about three times as likely (OR = 3.13, CI = 1.36-7.23) to attempt suicide, and girls were more likely (OR = 1.60, CI = 1.07-2.39) to attempt suicide than boys.
Other variables, including parental support, food insecurity, physical fighting, initiation of drug use, getting really drunk, and having parents who check homework and understand problems, were not found to have any statistically significant association with suicide attempt (Table 4).
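To make the derivation of these estimates concrete, the sketch below fits a multivariable logistic regression and exponentiates the coefficients to obtain adjusted odds ratios with 95% confidence intervals. It is a minimal illustration on simulated data with hypothetical column names; the actual GSHS variables, complex survey design and sampling weights used in the study are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

np.random.seed(0)
n = 1000

# Simulated stand-in for the analysis frame; every variable is binary
# (0/1), matching the recoding scheme described above.
df = pd.DataFrame({
    "suicidal_ideation": np.random.binomial(1, 0.14, n),
    "food_insecurity":   np.random.binomial(1, 0.05, n),
    "anxiety":           np.random.binomial(1, 0.04, n),
    "lonely":            np.random.binomial(1, 0.06, n),
    "female":            np.random.binomial(1, 0.50, n),
})

X = sm.add_constant(df[["food_insecurity", "anxiety", "lonely", "female"]])
fit = sm.Logit(df["suicidal_ideation"], X).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios; exponentiated
# confidence bounds give the 95% CIs quoted in the text.
summary = pd.concat([np.exp(fit.params).rename("OR"),
                     np.exp(fit.conf_int())], axis=1)
print(summary)
```

The same call with suicide attempt as the outcome would yield the second set of odds ratios.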
Discussion
This study identified a high burden of suicidal ideation and suicide attempts among adolescent students in Nepal. The study has also disentangled the influence of gender, loneliness, having close friends, anxiety, getting drunk, substance abuse, physical fighting and parental support on suicidal ideation and suicide attempts. This is the first nationally representative study to estimate suicidal ideation and suicide attempts in Nepalese adolescent students. Moreover, the GSHS is conducted in multiple countries among nationally representative samples using a globally standardized methodology, making data comparable from one country to another and providing an important opportunity to rectify gaps in data on suicide [15]. However, the original study was not designed to determine the factors associated with suicidal ideation and suicide attempts. Therefore, the factors identified in the study might not fully explain suicidal ideation and suicide attempts in the study population because information on key explanatory variables, such as socio-economic status and psychological comorbidities, was lacking. Around 13.59% of the research participants had suicidal ideation and 10.33% had attempted suicide in our study. A previous study done in four cities in China (Wuhan, Urumqi, Beijing, Hangzhou) reported that 17.4% of research participants had suicidal ideation and 8.1% had attempted suicide [8]. However, another study in rural parts of China found that 19% of the participants had suicidal ideation and 7% had attempted suicide in the past year [16]. The prevalence of suicidal ideation was 4.9% in Bangladesh, 11.6% in Bhutan, 13.1% in the Maldives, 9.4% in Myanmar, 5.4% in Indonesia, 9.4% in Sri Lanka, 12.5% in Thailand and 9.3% in Timor-Leste [2,17,18,19,20,21,22,23,24]. The GSHS has revealed that the prevalence of suicide attempts was 6.7% in Bangladesh, 11.3% in Bhutan, 12.7% in the Maldives, 8.8% in Myanmar, 3.9% in Indonesia, 6.8% in Sri Lanka, 13.3% in Thailand and 9.5% in Timor-Leste [17,18,19,20,21,22,23,24]. Rates of suicidal ideation and attempt thus appear to differ across countries. Suicidal behavior has a large number of underlying causes, and the factors that place individuals at risk for suicide are complex and interactive [8]. Furthermore, differences in access to means of committing suicide might have contributed to these differences in suicidal ideation and suicide attempts across countries.
Our study revealed that girls are at higher risk of suicidal ideation and suicide attempt compared to boys. Findings from other studies regarding gender differences in suicidal ideation and attempt are not consistent. Most previous studies have shown gender differences in suicidal ideation; however, findings differ on whether boys or girls are at higher risk of ideation. Studies conducted in Canada and Uganda have shown higher rates of suicidal ideation among boys, while other studies in Malaysia, China and Guyana have shown higher rates among girls [8,25,26,27,28,29]. On the other hand, studies conducted in Lebanon, Tanzania and Thailand have not shown any statistically significant difference in suicidal ideation between girls and boys [15,30,31]. Inconsistencies in gender differences in suicidal ideation and attempt might be due to the social and cultural context of each country, which defines the status of girls in society. In Nepal, a male-dominated society, the problems of girls might receive less attention, which may have motivated them to consider suicide. Furthermore, in some of the above studies suicidal ideation might have been underreported by girls, thereby showing lower rates [15].
Our study revealed that children who felt lonely most of the time or always are more likely to have suicidal ideation and to attempt suicide compared to their counterparts. Findings on the association of loneliness with suicide attempt and ideation are largely consistent across most previous studies. Previous studies done in Lebanon, Uganda, Tanzania and Sub-Saharan Africa have also revealed that children are more likely to have suicidal ideation when they feel lonely [15,26,30,32]. Feeling lonely could exacerbate the ill effects of other problems associated with suicidal behavior, as lonely adolescents find no one with whom to share their problems, which could substantially alleviate the agony. This is further supported by another finding of our study: having 3 or more close friends had a protective effect against suicide attempts, although having one or two close friends did not differ significantly from having none. However, it is equally important to note that our study revealed no statistically significant association between having close friends and suicidal ideation. A previous study from China also revealed a statistically significant association between having close friends and suicide attempt but failed to demonstrate any association with suicidal ideation [33]. Another study in Guyana revealed a lower risk of suicidal ideation among research participants with close friends [29]. This reinforces the importance of social and peer support in maintaining mental well-being.
Similarly, children who had anxiety had higher odds of suicidal ideation and suicide attempt. These findings are consistent with most other studies done in different countries. Higher odds of suicidal ideation were reported in previous studies done in Uganda, Lebanon, Thailand and the Republic of Benin when children were worried [15,26,31,34]. These findings are unsurprising, as adolescents often consider suicide as a means to overcome anxiety or distress in life.
There was no significant association between truancy, or missing school without permission, and suicidal ideation in our study. However, children who missed school at least 3 times without permission were found to have an almost twofold higher risk of attempting suicide. A previous similar study done in the Republic of Benin had not found any significant association of truancy with suicidal ideation and attempts [34]. The variable should therefore be considered in conjunction with other factors such as feeling lonely, having close friends and being worried.
Cigarette smoking seems to increase the risk of suicide attempt almost threefold in our study. Some previous studies have also demonstrated an association between suicidal behavior and smoking [35,36,37,38,39]. There are some possible explanations for this association. Suicide has been linked with depression, as people might tend to self-medicate with nicotine by smoking cigarettes, or smoking may reflect personality characteristics associated with low self-esteem [40,41]. Depression might have acted as a residual confounder in this study, distorting the association between smoking and suicide. A previous study also suggests that the association between suicidal behavior and smoking might be due to unobserved background variables such as life circumstances: in that study, the use of fixed-effects regression models controlling for unobserved sources of confounding substantially reduced the magnitude of the associations between suicidal behavior and smoking [42]. Given the lack of any direct plausible causative mechanism, the interpretation of the link between smoking and suicide requires caution [43].
There are some limitations to this study. The stigma surrounding suicidal ideation and attempts in culturally diverse Nepalese society might have caused underreporting of these conditions, leading to social desirability bias. The study does not cover adolescents who did not attend school. Furthermore, the study did not collect information on socioeconomic status, religious affiliation, social participation or psychological co-morbidities, which could be important in characterizing suicidal ideation and attempt; further research could be useful in this regard. Despite these limitations, this is the first study in Nepal of a large, nationally representative sample intended to determine the risk factors of suicidal ideation and attempt. Since the study used the globally standardized GSHS methodology, its findings are comparable to those of other countries adopting the same or a similar methodology. The study provides a nationwide estimate of suicidal ideation and attempt among school-going adolescents. The findings could be useful for policy makers in designing appropriate strategies for suicide prevention. Adopting appropriate preventive strategies could be very useful in the context of Nepal, considering the limited availability of treatment services in rural areas constrained by a lack of financial and human resources for mental health services.
Conclusions
This study reveals a high rate of suicidal ideation and suicide attempt among Nepalese school-going adolescents. Food insecurity, anxiety, loneliness and gender were found to be associated with suicidal ideation, while anxiety, loneliness, truancy, cigarette use and gender were found to be associated with suicide attempt.
Clinical associations and genetic interactions of oncogenic BRAF alleles
BRAF is a serine/threonine-specific protein kinase that regulates the MAPK/ERK signaling pathway, and mutations in the BRAF gene are considered oncogenic drivers in diverse types of cancer. Based on the signaling mechanism, oncogenic BRAF mutations can be assigned to three different classes: class 1 mutations constitutively activate the kinase domain and lead to RAS-independent signaling, class 2 mutations induce artificial dimerization of BRAF and RAS-independent signaling and class 3 mutations display reduced or abolished kinase function and require upstream signals. Despite the importance of BRAF mutations in cancer, the clinical associations, genetic interactions and therapeutic implications of non-V600 BRAF mutations have not yet been explored comprehensively. In this study, the author analyzed publicly available data from the AACR Project GENIE to further understand clinical associations and genetic interactions of oncogenic BRAF mutations. The analyses identified 93 recurrent BRAF mutations, out of which 50 could be assigned to a functional class based on literature review. The author could show that the frequency of BRAF mutations varies across cancer types and subtypes, and that the BRAF mutation classes are unequally distributed across cancer types and subtypes. Using permutation testing-based co-occurrence analyses, the author defined the genetic interactions of BRAF mutations in multiple cancer types and revealed unexplored genetic interactions that might define clinically relevant subgroups. With non-small cell lung cancer as an example, the author further showed that the genetic interactions are BRAF mutation class-specific. The presented analyses explore the properties of oncogenic BRAF mutations and will help to further delineate the complex role of BRAF in cancer.
INTRODUCTION
The BRAF gene encodes a serine/threonine-specific protein kinase that regulates the MAPK/ERK signaling pathway and affects cell division, differentiation, and secretion. The BRAF protein is composed of three conserved regions (CR1-3) that are characteristic for the Raf family protein kinases: CR1 contains the Ras-GTP-binding domain, CR2 is a flexible serine-rich linker and CR3 contains the catalytic protein kinase domain (Daum et al., 1994). Binding of Ras-GTP to B-Raf induces a conformational change that leads to autophosphorylation and activation of the kinase domain (Cutler et al., 1998;Cook & Cook, 2021).
Mutations in BRAF gene are found in diverse types of cancer including melanoma, colorectal adenocarcinoma, non-small-cell lung cancer, papillary thyroid carcinoma and hairy cell leukemia and are considered oncogenic drivers (Davies et al., 2002;Paik et al., 2011;Cardarella et al., 2013;Holderfield et al., 2014). In hairy cell leukemia the BRAF Val600Glu (V600E) mutation is found in nearly all cases at diagnosis and considered as the causal genetic event (Tiacci et al., 2011). In other types of cancer, the frequency of BRAF mutations ranges from 80% in malignant melanoma to 1-5% in lung adenocarcinoma and colorectal cancer.
Based on the molecular mechanism, a classification of BRAF mutations identified in cancer samples has been proposed (Dankner et al., 2018): class 1 mutations mimic the phosphorylation of the activation loop and thereby lead to aberrant activation of the kinase domain. BRAF molecules with class 1 mutations can signal as monomers and independently of upstream RAS. The most frequently observed class 1 mutation is V600E, but other mutations at the same amino acid position, namely V600M, V600R, V600K and V600D, are also considered class 1 mutations. Class 2 mutations lead to artificial dimerization of BRAF and activation of the kinase domain. Similar to class 1 mutations, class 2 mutations signal independently of RAS; however, they activate BRAF significantly more weakly than class 1 mutations. The most frequent class 2 mutations are located at amino acid positions 469, 597 and 601. BRAF fusions, which have been identified in different types of cancer at low frequencies, also promote dimerization and signal similarly to BRAF with class 2 point mutations (Jones et al., 2008; Ross et al., 2016). In contrast to class 1 and 2 mutations, class 3 mutations have low or absent kinase activity and require upstream RAS signaling to realize their oncogenic potential (Wan et al., 2004; Heidorn et al., 2010; Yao et al., 2015).
The high prevalence of BRAF mutations in cancer has spurred interest in the development of specific BRAF inhibitors: Vemurafenib was the first BRAF inhibitor approved for the treatment of metastasized melanoma harboring BRAF V600E mutations in 2011 (Bollag et al., 2010;Chapman et al., 2011). However, single agent treatment of melanoma patients with vemurafenib led to rapid development of resistance. Treatment strategies combining BRAF and MEK inhibitors can delay the development of resistance. In the following years additional BRAF inhibitors including dabrafenib and encorafenib have been developed and introduced into the clinic.
At present, BRAF inhibitors have been approved for the treatment of various types of cancer harboring BRAF class 1 mutations: For patients with advanced melanoma the BRAF and MEK inhibitor combinations dabrafenib/trametinib (Flaherty et al., 2012;Robert et al., 2015), encorafenib/binimetinib (Dummer et al., 2018) and vemurafenib/ cobimetinib (Ascierto et al., 2016) can be employed. More recently it was demonstrated that the addition of the immune checkpoint inhibitor atezolizumab to vemurafenib and cobimetinib can improve the progression free survival of melanoma patients with BRAF V600E mutations (Sullivan et al., 2019;Gutzmer et al., 2020).
In colorectal cancer the combination of the BRAF inhibitor encorafenib with the EGFR monoclonal antibody cetuximab has shown promising results and was approved for the treatment of patients with BRAF V600E mutations (Tabernero et al., 2021). The combination of dabrafenib and trametinib has also been approved for the treatment of non-small cell lung cancer with BRAF V600 mutations (Planchard et al., 2016(Planchard et al., , 2017. In addition, the BRAF inhibitor vemurafenib has been approved for the treatment of patients with Erdheim-Chester disease with BRAF V600 mutations and the combination of dabrafenib and trametinib is approved for treatment of anaplastic thyroid cancer with BRAF V600E mutations. To date, there has been no regulatory approval for a targeted therapy in patients with non-V600 mutations. Different clinical trials have tried to explore treatment strategies for patients with non-V600 BRAF mutations with varying degrees of success (Kotani et al., 2020). In the case of BRAF class 2 mutations, multiple smaller clinical trials and case reports suggest that MEK inhibition might be an active treatment strategy (Dagogo-Jack, 2020).
Clinical evidence for patients with inactivating class 3 BRAF mutations is still largely missing: It has been suggested that tumors with class 3 BRAF mutations are sensitive to the inhibition of activated RAS (Yao et al., 2017). There is also preclinical evidence demonstrating activity of pan-RAF and CRAF inhibitors in tumors with class 2 and inactivating class 3 mutations (Smalley et al., 2009;Kordes et al., 2016;Hoefflin et al., 2018). Currently, multiple clinical trials are testing the efficacy of RAF inhibitors alone or in combination with MEK inhibitors in patients with inactivating BRAF mutations.
Recent large scale tumor sequencing efforts have greatly expanded the knowledge about genetic alterations in cancer. The AACR Project GENIE is an international data-sharing effort for clinical-grade, high-throughput sequencing (NGS) data. The data is collected at 18 cancer centers in the United States and Europe (Sweeney et al., 2017). In this study, the author surveyed oncogenic BRAF mutations across cancer types using data from the AACR Project GENIE. The presented analyses highlight that different oncogenic BRAF mutations are associated with distinct clinical features and genetic interactions.
Data
For the analyses of clinical associations and genetic interactions of oncogenic BRAF mutations the AACR Project GENIE 11.0 public data set containing gene panel sequencing data from over 136,000 cancer samples from over 121,000 patients was downloaded from Sage Bionetworks Synapse (Synapse ID: syn26706564, DOI: https://doi.org/10.7303/syn26706564).
Statistical analysis
All analyses were performed using the R software environment for statistical computing and graphics. Visualizations were created using the ggplot2 and ggpubr packages for the R software environment. All source code and data are available as supplemental data (Data S1).
Identification of reoccurring BRAF mutations and classification of BRAF mutations
BRAF mutations were defined as reoccurring if they were found in ≥5 samples in the AACR Project GENIE dataset. The author attempted to assign all reoccurring BRAF mutations to their respective BRAF mutation class based on an extensive literature review (Yao et al., 2017; Dankner et al., 2018; Schirripa et al., 2019; Lokhandwala et al., 2019; Lin et al., 2019; Johnson et al., 2019; Yaeger et al., 2019; Lei et al., 2020; Owsley et al., 2021; Sahin & Klostergaard, 2021). Reoccurring BRAF mutations not previously described in the reviewed literature were marked as of unknown significance (?).
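A first step of this kind, counting how many distinct samples carry each BRAF protein change and keeping those seen in at least five, can be sketched in a few lines of pandas. The column names below are the standard MAF fields distributed with GENIE releases, the file path is hypothetical, and the study itself performed its analyses in R rather than Python.

```python
import pandas as pd

# Hypothetical path to a GENIE-style mutation (MAF) table.
muts = pd.read_csv("genie_mutations.tsv", sep="\t", low_memory=False)
braf = muts[muts["Hugo_Symbol"] == "BRAF"]

# Count distinct samples per protein change and keep the mutations
# observed in >=5 samples ("reoccurring" in the terminology above).
counts = (braf.groupby("HGVSp_Short")["Tumor_Sample_Barcode"]
              .nunique()
              .sort_values(ascending=False))
recurrent = counts[counts >= 5]
print(recurrent.head())  # p.V600E should dominate by a wide margin
```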
Frequency of BRAF mutations across cancer types
For calculation of BRAF mutation and BRAF mutation class frequencies across cancer types only samples with reoccurring BRAF mutations were counted. When multiple BRAF mutations were identified in one sample only the mutation with the highest clinical significance was considered. The clinical significance was defined as class 1 > class 2 > class 3 > unknown significance.
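The stated precedence rule can be applied per sample as in the following sketch; variant_class is a hypothetical field assumed to hold the literature-derived class labels ("1", "2", "3" or "?" for unknown significance).

```python
import pandas as pd

# Toy per-sample BRAF calls after the classification step.
braf = pd.DataFrame({
    "Tumor_Sample_Barcode": ["S1", "S1", "S2", "S3"],
    "HGVSp_Short":          ["p.V600E", "p.G466V", "p.G469A", "p.D594G"],
    "variant_class":        ["1", "3", "2", "3"],
})

# Clinical significance ordering from the text: 1 > 2 > 3 > unknown.
priority = {"1": 0, "2": 1, "3": 2, "?": 3}
braf["rank"] = braf["variant_class"].map(priority)

# Keep only the most clinically significant BRAF mutation per sample.
per_sample = (braf.sort_values("rank")
                  .drop_duplicates("Tumor_Sample_Barcode", keep="first"))
print(per_sample[["Tumor_Sample_Barcode", "HGVSp_Short", "variant_class"]])
```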
Co-mutation analyses
For co-mutation analyses a permutation test was employed: initially, samples were filtered according to cancer type and BRAF mutation class, and samples with reoccurring BRAF mutations were labeled. For each investigated subset the absolute and relative frequencies of co-mutation of BRAF with other genes were calculated (observed absolute and relative frequency). Next, the sample labels were randomly permuted and the co-mutation frequencies were recalculated. To generate a robust null hypothesis, random permutation was performed one million times for each investigated subset. Left and right p values were calculated for each gene by counting the permutations with a co-mutation frequency lower or higher than the observed absolute frequency.
In the figures observed relative frequency and expected relative frequency were visualized as scatter plot. Only genes with an absolute difference between observed and expected relative frequency ≥2% and a p value corrected for multiple hypothesis testing ≤0.001 are shown.
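The permutation scheme can be written compactly for a single candidate gene, as in the sketch below. It assumes boolean per-sample mutation indicators; for speed the default uses fewer than the one million permutations per subset employed in the study, and the multiple-testing correction is omitted.

```python
import numpy as np

def comutation_test(braf_mut, other_mut, n_perm=100_000, seed=0):
    """Permutation test for co-mutation of BRAF with one other gene.

    braf_mut, other_mut: boolean arrays with one entry per sample.
    Returns the observed co-mutation count, its expected value under
    the null, and left/right p values.
    """
    rng = np.random.default_rng(seed)
    observed = int(np.sum(braf_mut & other_mut))
    null = np.empty(n_perm, dtype=np.int64)
    for i in range(n_perm):
        # Shuffle the BRAF labels; the other gene's labels stay fixed.
        null[i] = np.sum(rng.permutation(braf_mut) & other_mut)
    p_left = np.mean(null <= observed)    # small => mutual exclusivity
    p_right = np.mean(null >= observed)   # small => co-occurrence
    return observed, float(null.mean()), p_left, p_right
```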
Overview of oncogenic BRAF alleles
Recent large scale tumor sequencing efforts have greatly expanded the knowledge about genetic alterations in cancer. To investigate clinical associations and genetic interactions of oncogenic BRAF alleles, the AACR Project GENIE 11.0 public dataset containing gene panel sequencing data from over 136,000 cancer samples from over 121,000 patients was downloaded.
Initially, a list of all BRAF alleles in the dataset was compiled. On the protein level, 914 unique BRAF mutations were identified. The BRAF V600E mutation was the most frequently occurring mutation and was detected in 3,905 cancer samples. A total of 93 BRAF mutations could be identified in ≥5 samples and were considered reoccurring mutations (Figs. 1A and 1B). A total of 821 mutations were detected in fewer than five samples, and out of these, 585 mutations could be identified in only one sample (Fig. 1A).

Next, the author conducted an extensive literature review to assign the identified BRAF mutations to their respective functional classes: six mutations (0.66%) at amino acid position 600 were considered class 1 mutations. These mutations included V600E as well as V600D, V600M, V600R and V600K. A total of 41 mutations (4.49%) were considered class 2 mutations and included, among others, mutations at amino acid positions 469, 601, 597 and 464. A total of 26 mutations (2.84%) were considered class 3 mutations. Mutations at amino acid positions 594 and 466 were the most frequently occurring class 3 mutations in the AACR Project GENIE dataset. In total, 50 (53.76%) of 93 reoccurring mutations could be assigned to a functional class based on the literature review. A large fraction (841, 92.01%) of the BRAF variants in the dataset have not been studied in detail and could not be assigned to a specific class (Fig. 1C). However, these variants of unknown significance were observed at a lower frequency and were only present in 21% (1,581) of the samples with BRAF mutations (Fig. 1D).
Relative frequency of BRAF mutations and BRAF mutation classes across cancer types
Next, the author investigated the relative frequency of BRAF mutations in the different cancer types. For these analyses only reoccurring BRAF mutations identified in ≥5 samples were considered. The relative frequency of BRAF mutations varied greatly across the different cancer types (Fig. 2A): in thyroid cancer 39.53% (742/1,877) of the samples contained a reoccurring BRAF mutation, in melanoma 32.91% (1,809/5,496) of the samples contained a BRAF mutation, in colorectal cancer 10.38% (1,337/12,880) contained a BRAF mutation and in NSCLC 4.4% (850/19,319) contained a BRAF mutation. In other cancer types such as pancreatic cancer, prostate cancer and breast cancer, BRAF mutations were only found at low relative frequencies (1.62%, 1.54% and 0.63%, respectively). The relative frequency of BRAF mutations also varies among different cancer subtypes (Fig. 2B).
Next, the author asked if BRAF mutation classes are equally represented in different cancer types. To this end, the author plotted the relative frequency of the different BRAF mutation classes for all cancer types in the project GENIE dataset (Fig. 3). These data clearly demonstrated that BRAF mutation classes are not equally distributed across cancer types but show a distinct cancer type-specific distribution. In thyroid cancer, almost all identified BRAF mutations could be assigned to class 1. Also, in melanoma and colorectal cancer, the majority of BRAF mutations could be assigned to class 1. In non-small cell lung cancer, class 1, 2 and 3 BRAF mutations were found at similar frequencies. Notably, in prostate cancer, a majority of the BRAF mutations could be assigned to class 2. In small cell lung cancer and cervical cancer no BRAF class 1 mutation could be identified, but the overall numbers of BRAF mutations in these cancer types were low (13 and 10, respectively).
Association of BRAF variant class and mutant allele fraction
The mutant allele fraction (MAF) of genetic variants in bulk tumor sequencing data is a complex parameter reflecting the tumor cell content, the clonal architecture of the tumor and the ploidy of the tumor genome. Oncogenic drivers generally show higher mutant allele fractions compared to passenger mutations. The author set out to investigate the mutant allele fraction of BRAF variants assigned to classes 1, 2 and 3 as well as variants that could not be assigned to a specific variant class based on literature mining. The data revealed that BRAF variants assigned to class 1 showed the highest median MAF across all samples in the dataset (Fig. 4A). Class 2 variants displayed a significantly lower MAF than class 1 variants. Class 3 variants and reoccurring variants of unknown significance showed an even lower MAF.
These observations, however, might be partially attributable to differences in mutant allele fractions across cancer types. For example, in the case of non-small cell lung cancer, the median MAF was significantly lower for class 1 variants compared to class 2 variants (Fig. 4B).
Genetic interactions of BRAF mutations are cancer type-specific
To investigate the genetic interactions of oncogenic BRAF variants, the author performed a co-mutation analysis. The dataset was filtered for the indicated cancer types and random permutation was performed to generate a null hypothesis. Based on the null hypothesis, the expected frequency and expected relative frequency of co-occurrence of mutations in BRAF with mutations in other genes were calculated. The employed approach can account for the heterogeneous cancer samples and sequencing approaches included in the dataset.
To account for the biological differences between cancer types, the analysis was performed separately for non-small cell lung cancer, melanoma, colorectal cancer and thyroid cancer.
In the thyroid cancer samples BRAF mutations co-occurred less frequently than expected with mutations in NRAS, HRAS, RET, PTEN, NF1 and KRAS. In contrast, mutations in BRAF and PIK3CA co-occurred more frequently than expected (Fig. 5A).
In melanoma samples, NRAS mutations co-occurred much less frequently with BRAF mutations than expected. Also, mutations in GNA11, GNAQ, KIT, SF3B1, NF1 and TP53 were found less frequently in BRAF-mutated samples than expected. In contrast, PTEN was found slightly more frequently than expected in BRAF-mutated samples (Fig. 5B).
In colorectal cancer samples, the author observed a trend towards mutual exclusivity for BRAF and KRAS mutations. Also, APC and TP53 mutations were found less frequently in BRAF-mutated samples than expected. In contrast, the author observed an enrichment of mutations in ARID1A, CREBBP, FAT1, KMT2A, KMT2D, NOTCH3, PTEN and RNF43 in BRAF-mutated samples (Fig. 5C).

Non-small cell lung cancer samples with BRAF variants were less likely than expected to also carry alterations in EGFR and KRAS. The author observed a similar trend for TP53. In contrast, variants in BRAF and ATR, FAT1, KEAP1, SETBP1, SETD2 and STK11 co-occurred more frequently than expected.
The BRAF variant class defines genetic interactions
Next, the author wanted to investigate if the BRAF variant classes define the genetic interactions. To avoid interference by cancer type-specific mutations, only the non-small cell lung cancer samples in the GENIE dataset were used for these analyses. The null hypothesis was generated by randomly permuting variant class labels, and relative expected and observed co-mutation frequencies were plotted for each variant class (Figs. 6A-6C).

The conducted analyses revealed that the genetic interactions of mutant BRAF alleles are highly dependent on the variant class: BRAF class 1 mutations co-occurred significantly more frequently than expected with AKT1 and SETD2 mutations. In contrast, BRAF class 1 mutations were found less frequently than expected in samples with EGFR, ERBB4, KEAP1, KRAS, STK11 and TP53 mutations (Fig. 6A). BRAF class 2 mutations co-occurred more frequently than expected with ATR, EPHA3, NRAS, STK11 and WHSC1 mutations. In contrast, the author observed that EGFR and KRAS mutations are found less frequently than expected in samples with BRAF class 2 mutations (Fig. 6B). For BRAF class 3 mutations, the author found a strong association with KEAP1 and STK11 mutations. As in the case of BRAF class 1 and 2 mutations, KRAS and EGFR were co-mutated less frequently than expected in samples with BRAF class 3 mutations (Fig. 6C). The author also created a heatmap displaying the ratio of observed relative frequency to expected relative frequency for all BRAF mutation classes to highlight the mutation class-specific genetic interactions (Fig. 6D).
DISCUSSION
BRAF mutations are frequently found in cancer and considered oncogenic drivers. In this study, the author analyzed publicly available data from the AACR Project GENIE to further understand clinical associations and genetic interactions of oncogenic BRAF alleles. Initially, the author defined a subset of BRAF mutations that were found in ≥5 samples in the AACR Project GENIE dataset. 53.76% of these reoccurring mutations were previously reported in the literature and could be assigned to a functional mutation class. However, a large fraction of the identified BRAF mutations (92.01%) remained variants of unknown significance, highlighting the urgent need for further functional investigations. Notably, these variants of unknown significance accounted for only a small fraction of the samples (21%).
Next, the frequencies of BRAF mutations across cancer types were determined. The frequencies were similar to previously reported frequencies from other cohorts, with minor deviations (Holderfield et al., 2014). However, due to the large number of samples and the diverse cancer types included in the AACR Project GENIE dataset, the author was also able to report BRAF mutation frequencies for rarer cancer types and subtypes.
The author also showed that the ratio of class 1, 2 and 3 variants varies across cancer types and subtypes and might reflect the oncogenic signaling in these cancer types. In prostate cancer, BRAF mutations occur at a relatively low frequency (1.54%) and previous studies have suggested that BRAF mutations in prostate cancer might be targetable (Santos et al., 2020). Notably, a majority of the BRAF mutations identified in prostate cancer could be assigned to class 2. Unfortunately, the clinical annotations in the project GENIE dataset were not sufficient to further clinically define this interesting subgroup of prostate cancer cases.
In cancer types with low frequencies of BRAF mutations, such as leukemia, breast cancer, myeloproliferative neoplasms, head and neck cancer and esophagogastric cancer, the author found high proportions of BRAF variants of unknown significance. The author speculates that these variants of unknown significance are passenger mutations and that the frequency of oncogenic and potentially targetable BRAF alterations in these cancer types is very low.
The author also attempted to correlate BRAF mutation class with mutant allele fraction; however, the results from these analyses might be biased and were not consistent across cancer types.
Relying on the AACR Project GENIE dataset, the author was able to systematically investigate genetic interactions of oncogenic BRAF alleles in multiple cancer types. The analysis identified several interesting interactions, some of which have not been previously mentioned in literature. For all investigated cancer types, the author observed that mutations in upstream components of the MAPK/ERK signaling pathway, namely EGFR and RAS family proteins, co-occur much less frequently than expected with oncogenic mutations in BRAF. These finding had been previously reported in multiple studies (Sensi et al., 2006;Li et al., 2014).
For thyroid cancer the author found an interaction between BRAF and PIK3CA mutations which has been previously reported (Charles et al., 2014). For colorectal cancer, the author observed co-occurrence of oncogenic BRAF mutations with mutations in the ubiquitin ligase RNF43. Recent papers have implicated oncogenic mutations in BRAF and RNF43 in the serrated neoplasia pathway of right-sided colorectal cancer (Eto et al., 2018; Matsumoto et al., 2020). The author also observed that in colorectal cancer samples, KMT2D mutations co-occur more frequently than expected with BRAF mutations. The author could not find a report describing this co-occurrence in clinical samples; however, a recent mouse study demonstrated that KMT2D functions as a tumor suppressor in BRAF V600E mutant melanomas (Maitituoheti et al., 2020).
Finally, the author investigated if the genetic interactions of BRAF alleles depend on the mutation class. For this analysis, only non-small cell lung cancer samples were used because this subgroup showed a balanced distribution of BRAF class 1, 2 and 3 mutations. This analysis confirmed that the BRAF mutation classes exhibit different genetic interactions that might reflect their signaling mechanisms. Interestingly, the author found that in NSCLC BRAF class 1 mutations co-occur with alterations in SETD2. This interaction had already been described in a study focusing on non-small cell lung cancer (Sheikine et al., 2018). The author also observed an interaction of BRAF class 2 and 3 mutations with ATR, hinting at a possible connection between oncogenic BRAF and DNA damage response and replication stress signaling.
Taken together, the author employed publicly available data from the AACR Project GENIE to explore the complex role of BRAF in cancer. The author believes that the presented analyses and data will be a valuable resource for researchers and clinicians focusing on BRAF biology and precision oncology.
CONCLUSIONS
BRAF mutations are frequently found in cancer and considered oncogenic drivers. In this study, the author used data from the AACR Project GENIE to investigate clinical associations and genetic interactions of oncogenic BRAF alleles. The analyses demonstrate that the frequency of BRAF mutations varies greatly across cancer types and subtypes. Also, the distribution of BRAF mutation classes is highly unequal across cancer types and subtypes. Because of the large number of samples in the AACR Project GENIE dataset, the author was also able to systematically assess the co-occurrence of BRAF mutations with mutations in other genes in multiple cancer types. These analyses identified several interesting molecular subgroups (e.g., colorectal cancer with BRAF and RNF43 mutations); however, the limited clinical annotations in the dataset did not allow these subgroups to be further defined clinically. The author believes that the presented analyses and data will be a valuable resource for researchers and clinicians focusing on BRAF biology and precision oncology.
Data Availability
The following information was supplied regarding data availability: All source code and data tables are available in the Supplemental Files.
Supplemental Information
Supplemental information for this article can be found online at http://dx.doi.org/10.7717/peerj.14126#supplemental-information.
A Chandra Observation of the Nearby Sculptor Group Sd Galaxy NGC 7793
(Abridged) We conducted a Chandra ACIS observation of the nearby Sculptor Group Sd galaxy NGC 7793. At the assumed distance to NGC 7793 of 3.91 Mpc, the limiting unabsorbed luminosity of the detected discrete X-ray sources (0.2-10.0 keV) is approximately 3x10^36 ergs s^-1. A total of 22 discrete sources were detected at the 3-sigma level or greater including one ultra-luminous X-ray source (ULX). Based on multiwavelength comparisons, we identify X-ray sources coincident with one SNR, the candidate microquasar N7793-S26, one HII region and two foreground Galactic stars. We also find that the X-ray counterpart to the candidate radio SNR R3 is time-variable in its X-ray emission: we therefore rule out the possibility that this source is a single SNR. A marked asymmetry is seen in the distribution of the discrete sources with the majority lying in the eastern half of this galaxy. All of the sources were analyzed using quantiles to estimate spectral properties and spectra of the four brightest sources (including the ULX) were extracted and analyzed. We searched for time-variability in the X-ray emission of the detected discrete sources using our measured fluxes along with fluxes measured from prior Einstein and ROSAT observations. From this study, three discrete X-ray sources are established to be significantly variable. A spectral analysis of the galaxy's diffuse emission is characterized by a temperature of kT = 0.19-0.25 keV. The luminosity function of the discrete sources shows a slope with an absolute value of Gamma = -0.65+/-0.11 if we exclude the ULX. If the ULX is included, the luminosity function has a long tail to high L_X with a poor-fitting slope of Gamma = -0.62+/-0.2. The ULX-less slope is comparable to the slopes measured for the distributions of NGC 6946 and NGC 2403 but much shallower than the slopes measured for the distributions of IC 5332 and M83.
Introduction
Based on its superior angular resolution capabilities, namely an on-axis point spread function (PSF) with a half-power diameter of ∼1″, the Chandra X-ray Observatory (Weisskopf et al. 2002) is an ideal instrument for surveying populations of discrete X-ray sources in nearby spiral galaxies. To date, numerous nearby spiral galaxies have been the subjects of deep Chandra observations which have sampled their resident X-ray source populations in unprecedented detail. Prominent examples of galaxies that have been the subject of such studies include M33 (Plucinsky et al. 2008; Long et al. 2010; Tuellmann et al. 2011), M51 (Terashima & Wilson 2001), M81 (Swartz et al. 2003), M83 (Soria & Wu 2003), M101 (Pence et al. 2001; Mukai et al. 2003), NGC 1637 (Immler et al. 2003), NGC 2403 (Schlegel & Pannuti 2003), NGC 3184 (Kilgard et al. 2002), and NGC 6946 and IC 5332 (Kilgard et al. 2002). In each case, the Chandra observations have dramatically increased the numbers of known discrete X-ray sources in each galaxy. The applications of these observations include sampling a robust number of discrete X-ray sources for statistically significant analyses, spatially resolving the discrete sources from a component of diffuse X-ray emission detected from some galaxies, spectral analyses of the brightest discrete sources, time variability analyses (often incorporating observations made with previous X-ray observatories) and measurement with high accuracy of the positions of X-ray sources for the purposes of identifying counterparts at other wavelengths.
Typically, the classes of X-ray objects detected by these surveys include X-ray sources associated with active galactic nuclei (AGN), X-ray binaries (XRBs) and supernova remnants (SNRs). Observations of XRBs and SNRs are essential in developing a thorough understanding of stellar evolution. Unfortunately, studies of XRBs and SNRs in our own Galaxy are hampered by observational difficulties, including significant absorption along Galactic lines of sight as well as considerable uncertainties in distances to these sources. In addition, the Galactic population of XRBs and SNRs represents a galaxy of a single mass, metallicity, star formation history and morphological type. Observing XRBs and SNRs located in nearby galaxies minimizes these issues. In prior works (Pannuti et al. 2000; Lacey & Duric 2001; Pannuti et al. 2002, 2007; Filipović et al. 2008), we analyzed high angular resolution observations at multiple wavelengths of several nearby galaxies to both identify SNRs and statistically assess their properties. In the present paper we continue this work by analyzing a Chandra observation of the nearby Sd galaxy NGC 7793. This observation was conducted primarily to study X-ray emission from the SNR population in NGC 7793; here we consider the properties of the discrete X-ray sources detected in this galaxy as well as the accompanying diffuse X-ray emission.
NGC 7793, a member of the nearby Sculptor Group (Puche & Carignan 1988), lies at a distance of 3.91 Mpc (Karachentsev et al. 2003) and at an inclination angle of i ∼ 50° (Tully 1988). General properties of both NGC 7793 and the pointed Chandra observation of this galaxy are listed in Table 1. NGC 7793 has been the subject of prior X-ray observations made with the Einstein Imaging Proportional Counter (IPC) (Fabbiano et al. 1992) and the Röntgensatellit (ROSAT) Position Sensitive Proportional Counter (PSPC) (Read & Pietsch 1999, hereafter RP99). These observations detected seven X-ray sources, including an ultra-luminous X-ray source (ULX) located along the southern edge of the galaxy, within the optical extent of NGC 7793. In addition, prominent diffuse X-ray emission which permeates much of the disk of the galaxy has been detected (RP99). The SNR population of NGC 7793 has been well-studied by both optical and radio surveys (Blair & Long 1997; Pannuti et al. 2002); based on these searches, a total of 32 resident SNRs have been identified in this galaxy. In the present paper, we will concentrate on searching for X-ray counterparts to these 32 sources. N7793-S26, an additional source that was initially classified as a SNR by Blair & Long (1997) and detected in the radio by Read & Pietsch (1999) and Pannuti et al. (2002), has recently been classified as a microquasar candidate by Pakull et al. (2010) and Soria et al. (2010). We exclude this source from the investigation presented here of the properties of SNRs in NGC 7793 and we will discuss this source (particularly its X-ray, optical and radio properties when compared to extragalactic superbubbles) in more detail in a future paper.
The organization of this paper is as follows: the observations and data reduction are described in Section 2. Properties of the discrete X-ray sources detected by this observation -including spectral analysis of four of the most luminous sources as well as searches for multi-wavelength counterparts and time-variable emission from all sources -are discussed in Section 3. Next we discuss the diffuse X-ray emission from NGC 7793 as sampled by this observation (Section 4), the luminosity function of the discrete sources (Section 5) and the properties of the SNR population of this galaxy (Section 6). Finally, we summarize our results in Section 7.
Observations and Data Reduction
We used the Advanced Charge-Coupled Device (CCD) Imaging Spectrometer (ACIS) (Garmire et al. 2003) onboard Chandra to observe NGC 7793. The observation was obtained in Very Faint mode on 2003 September 6-7 using an aim point approximately two arcminutes west of the nucleus, ensuring the maximum coverage of NGC 7793 across the (back-illuminated) ACIS-S3 chip. The exposure lasted approximately 49724 seconds and, after correcting for the deadtime, the effective exposure time was 49094 seconds.
We accumulated source-free background areas offset from the galaxy (namely from the back-illuminated ACIS-S1 chip) and extracted a light curve using 50-second bins to test for the presence of soft background flares. No flaring behavior of any kind was detected.
We re-filtered the Level 1 data, correcting for the induced charge-transfer inefficiency following the prescription of Townsley et al. (2000). This approach permits using a single event redistribution matrix in the spectral fitting, altering the response matrix for the off-axis-dependent effective area. The data were reduced with standard tools in the software application package "Chandra Interactive Analysis of Observations" (CIAO) Version 3.4 (CALDB version 3.3.0). Point sources were identified using wavdetect at 1″, 2″, and 4″ scales (Freeman et al. 2002). The detected sources were merged into a final source list after eliminating duplicate detections. Source counts were extracted using apertures that increased with off-axis angle to ensure the inclusion of an approximately constant fraction of the PSF. The minimum aperture was 2″ in diameter and enclosed >95% of the PSF. These same apertures were used to extract spectra (see Section 3.1) or counts for quantiles (see Section 3.3), depending upon the count rate. A background spectrum was obtained from a region on the ACIS-S3 chip but outside of the galaxy and southeast of the nucleus.
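The quantile technique referenced here condenses a source's event energies into normalized quantiles instead of binned spectra, which remains informative at low counts. The sketch below follows the general approach of Hong, Schlegel & Grindlay (2004); the band limits are illustrative parameters rather than values taken from this paper.

```python
import numpy as np

def energy_quantile(energies_kev, frac, e_lo=0.3, e_hi=8.0):
    """Normalized energy quantile Q_frac for one source's event list.

    Q_x = (E_x - e_lo) / (e_hi - e_lo), where E_x is the energy below
    which a fraction `frac` of the in-band counts fall.
    """
    e = np.asarray(energies_kev)
    e = e[(e >= e_lo) & (e <= e_hi)]
    e_x = np.quantile(e, frac)
    return (e_x - e_lo) / (e_hi - e_lo)

# The median quantile Q50 together with the ratio Q25/Q75 places a
# source in a quantile diagram, separating soft from hard sources.
```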
The Discrete X-ray Source Population
We detected 22 discrete X-ray sources within the optical extent of NGC 7793 at the ∼3σ level or greater to a limiting unabsorbed luminosity of approximately L_X ≈ 3 × 10^36 ergs s^-1 over the energy range of 0.2 through 10.0 keV, assuming a foreground column density of N_H = 1.15×10^20 cm^-2 and a power law model with a photon index Γ = 1.5. Table 2 lists the properties of these sources, including position (in J2000.0 coordinates), absorbed and unabsorbed fluxes, unabsorbed luminosities and the significance (in σ) of the detection of the source. In Figure 1, we present an R-band image² of NGC 7793 with the positions of the detected X-ray sources indicated with ellipses representing the 90% confidence contours of their measured positions. The sizes of these ellipses are related to the errors in the determination of the source positions. In comparison, the ROSAT PSPC revealed seven discrete sources (RP99). Similar to the previous X-ray observations (Fabbiano et al. 1992, RP99), we have not detected a central X-ray source associated with the nucleus of this galaxy to the stated limiting unabsorbed luminosity. We expect that the majority of the detected sources are resident XRBs; other possibilities include background AGNs, foreground stars, and X-ray luminous SNRs. Table 3 lists identified counterparts to the X-ray sources as detected at multiple wavelengths: we discuss each of these associations in detail in Sections 3.1 and 3.2. Lastly, Table 4 contains the spectral properties of the entire population of discrete X-ray sources based on quantiles; these properties will be discussed in more detail in Section 3.3.

Inspection of Figure 1 reveals a remarkable asymmetry in the distribution of the discrete X-ray sources, with the large majority of the discrete sources located in the eastern half of NGC 7793. Only four of the 22 discrete X-ray sources are found in the western half and there is a stark absence of discrete sources in the northwestern quadrant. Previous Einstein and ROSAT observations (Fabbiano et al. 1992, RP99) also did not detect any discrete sources in this quadrant. We discuss this asymmetry more thoroughly in Section 3.5.
2 The R-band image has been kindly provided to us by Annette M. N. Ferguson: for details on the observations of this galaxy that produced this image, the reader is referred to Ferguson et al. (1996).
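For orientation, converting an unabsorbed flux into the luminosities quoted above is a simple inverse-square computation at the adopted distance of 3.91 Mpc. The sketch below uses astropy for unit handling; the flux value is illustrative, chosen to reproduce the quoted survey limit.

```python
import numpy as np
from astropy import units as u

d = 3.91 * u.Mpc                          # adopted distance to NGC 7793
flux = 1.6e-15 * u.erg / u.cm**2 / u.s    # illustrative unabsorbed flux

# Isotropic emission assumed: L_X = 4 * pi * d^2 * F.
lum = (4.0 * np.pi * d.to(u.cm) ** 2 * flux).to(u.erg / u.s)
print(lum)  # ~3e36 erg/s, the limiting luminosity quoted in the text
```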
The Most Luminous Discrete Sources
For four of the 22 discrete X-ray sources, there were sufficient counts above our arbitrary limit of 200 counts (corresponding to a count rate of ∼4×10^-3 counts per second) to extract spectra and generate spectral fits that yielded parameters measured to <30%. We now discuss general properties of each of these four sources; properties of the other discrete sources are presented in Section 3.2. In each case, the spectra were fit using the software package XSPEC Version 11.3.1 (Arnaud 1996): the individual spectra were grouped to a minimum of 25 counts per bin and each spectrum was fit over the energy range where there were a sufficient number of counts. We have not performed any fits to the X-ray spectrum of the source CXOU J235752.7−323309, even though the number of counts detected from this source exceeds the threshold stated above, because this source is physically associated with a foreground star (see Section 3.2).
We used four basic models to fit each extracted spectrum: a simple power law model, a bremsstrahlung model (Karzas & Latter 1961; Kellogg et al. 1975), an optically thin thermal plasma model known as the APEC model (Smith et al. 2001), and finally the DiskBB model (Mitsuda et al. 1984; Makishima et al. 1986). This last model describes the spectrum from an accretion disk consisting of multiple blackbody components and is characterized by the temperature T_in at the inner disk radius. To account for photoelectric absorption along the line of sight we used the Wisconsin cross-section models (Morrison & McCammon 1983). There are two possible choices when fitting the column density: fixing it at the value of the known Galactic column density in the pointing direction (that is, N_H = 1.15×10^20 cm^-2) or treating it as a free parameter. With this in mind, we performed fits to the extracted spectra using the four models, first with the column density frozen to the known Galactic column and then with the column density left as a free parameter. Lastly, a background spectrum was extracted using an aperture 0.7 arcmin in diameter positioned off the optical extent of the galaxy. No point sources were included in this aperture. The resulting spectrum extracted from this aperture was accurately fit using a combination of a power law component and Gaussians. In Table 4, we present a representative summary of our derived best fits, not the results of every fit that we attempted for each model. The background spectrum itself and its best-fit model were then included in the fits to each point source without adjusting the background model fit components.
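The frozen-versus-free column strategies can be sketched with PyXspec, the Python interface to modern XSPEC releases; the paper itself used XSPEC 11.3.1 interactively, and the spectrum file name below is hypothetical. The wabs component implements the Wisconsin (Morrison & McCammon) cross-sections used here.

```python
from xspec import AllData, Spectrum, Model, Fit

AllData.clear()
spec = Spectrum("source_grp25.pi")     # grouped to >=25 counts per bin
spec.ignore("**-0.3 8.0-**")           # restrict to a well-covered band

# Absorbed power law; nH is in units of 10^22 cm^-2, so the Galactic
# column of 1.15e20 cm^-2 toward NGC 7793 enters as 0.0115.
m = Model("wabs*powerlaw")
m.wabs.nH = 0.0115
m.wabs.nH.frozen = True                # first pass: column frozen
Fit.statMethod = "chi"
Fit.perform()

m.wabs.nH.frozen = False               # second pass: column free
Fit.perform()
print(m.powerlaw.PhoIndex.values[0], Fit.statistic, Fit.dof)
```

Swapping the model string for "wabs*bremss", "wabs*apec" or "wabs*diskbb" repeats the exercise for the other three models.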
In general, the basic models return statistically acceptable fits for each source. In some cases, a particular spectrum warranted a more complex model to obtain a satisfactory fit.
Also, in some cases significant statistical differences in the fits were seen when the column density was frozen or thawed; below we discuss specific results for fitting the extracted spectra. The spectra of two of these sources were also analyzed by RP99, and we compare our derived fits with the fits obtained by those authors.

CXOU J235746.7−323607: This source corresponds to the X-ray source P10 identified by RP99 and the candidate radio SNR R3 identified by Pannuti et al. (2002): the offset between CXOU J235746.7−323607 and P10 is ∼6″ while the offset between CXOU J235746.7−323607 and R3 is only ∼1.5″. Pannuti et al. (2002) proposed that this X-ray source and the candidate radio SNR are physically associated based on their positional proximity. RP99 commented on the soft nature of the ROSAT PSPC spectrum of this source. Based on this soft spectrum, a lack of apparent variability in the X-ray emission (between two epochs of ROSAT observations) and its positional association with a portion of NGC 7793 that features numerous SNRs, RP99 speculated that this X-ray source may be a superbubble or the collective X-ray emission from multiple SNRs.
Our fits indicate a moderately hard spectrum for this source. The bremsstrahlung temperature is higher than that of RP99 (kT ∼ 1.6 keV compared to kT ∼ 0.8 keV), although the error bars overlap the fitted values. The fitted temperature obtained from the DiskBB model is softer (kT ∼ 0.7 keV); the use of the DiskBB model in this case, while physically less appropriate for a source identified as an SNR, is justified below. Within the errors, the fitted column density is consistent with the known column toward NGC 7793.
If we fix the column density to the known column, the impact is largely confined to the power law model: the power law index Γ falls from 2.9 to 1.7 with a slight overlap in the error bars; for the other models, the fitted parameters differ by <10%. The fitted normalization then corresponds to 26 (+73, −17) km, a value larger than is generally deemed typical of neutron stars but falling within the inferred radii of other low-mass X-ray binaries (e.g., Church & Balucinska-Church 2001). In Figures 2 and 3 we present the extracted spectrum of CXOU J235746.7−323607 as fit with the power law model (with a variable column density) and a confidence contour plot for this fit, respectively.
We note that we have detected clear variability in the X-ray emission from this source: the prior ROSAT PSPC observations of this galaxy reported by RP99 caught this source in a state brighter than our Chandra observation by a factor of approximately three.
This time-variability, coupled with the high X-ray luminosity and the moderately hard spectrum observed for this source, casts doubt on the classification of this source as a single SNR. Alternatively, CXOU J235746.7−323607 may be an SNR/XRB system analogous to the Galactic source W50/SS 433 (Safi-Harb & Petre 1999), though the observed X-ray luminosity of the former source is several orders of magnitude greater than that of the latter. It is possible that the observed X-ray emission stems from a complex of sources which remain unresolved even with the high angular resolution capabilities of Chandra.
The classification of CXOU J235746.7−323607 is therefore currently uncertain; we will discuss this source again in Section 3.4, when we describe a search for time-variability in the X-ray emission from the detected discrete X-ray sources, and in Section 6, when we describe properties of the SNR population of this galaxy.

CXOU J235750.9−323726: This source, suspected to be a ULX associated with NGC 7793, was identified as P13 by RP99. Those authors presented a detailed history and an analysis of its spectral properties. To summarize, this source was first detected by Einstein (Fabbiano et al. 1992) and subsequently Margon et al. (1985) included it in an atlas of X-ray selected quasi-stellar objects, arguing that the source was associated with a background quasar seen just below the southern edge of NGC 7793. This quasar has been cataloged as 2355−329 and features a redshift of 0.071. It has also been cataloged as 2355−3254 in more recent observations presented by Bowen et al. (1994). The Einstein observation localized the position to within an arcminute; with the better angular resolution of the ROSAT PSPC, RP99 ruled out an association between P13 and the background quasar, arguing instead that the X-ray source is native to NGC 7793. RP99 described time variability in the source's emission by comparing observations made six months apart and speculated that the source may be either a background galaxy or a black hole X-ray binary with an estimated mass of ∼10 M_⊙. The estimated X-ray luminosity of this source (∼10^39 ergs s^-1, assuming that it is in fact associated with NGC 7793) is comparable to ULXs seen in other galaxies.
Our Chandra observations verify that this source is indeed located within the optical extent of NGC 7793. We used the improved positional accuracy to search for a counterpart using optical (Hα and R-band) images and our radio maps of NGC 7793 (Pannuti et al. 2002) but we do not find a clear optical or radio counterpart. Recently, Motch et al. (2011) identified an optical counterpart (a V ∼ 20.5 magnitude star) and suggested this star (a late B-type supergiant with a mass between 10 and 20 M_⊙) to be the companion star to the observed X-ray source.
RP99 derived fits to their extracted ROSAT PSPC spectrum of this X-ray source using either a thermal bremsstrahlung model with a characteristic temperature kT = 3.49 (+4.26, −3.49) keV or a power law model with a photon index Γ ∼ 1.8±0.5. We do not derive a statistically acceptable fit for any model if the column density is fixed to the Galactic value. The derived column densities from our fits were N_H ∼ 10^21 cm^-2, nearly a full order of magnitude greater than the nominal column density toward NGC 7793 itself. Compared with the fit presented by RP99, our fit with a bremsstrahlung model returns a significantly higher effective temperature (kT > 14 keV). A portion of the discrepancy may be explained by the broader energy range sampled by the Chandra spectrum; alternatively, a spectral state change is also possible. The photon index derived by our power law fit is Γ = 1.4 (+0.20).

We also note that Chandra has revealed for the first time a second source located ∼2″ east of CXOU J235750.9−323726. This second source is denoted CXOU J235750.9−323728; the two sources are approximately 5″ and 8″, respectively, from the position given by RP99 for P13. These sources would certainly be blended by the broader PSF of the ROSAT PSPC (∼27″).
CXOU J235806.6−323757: The Chandra detection of this source immediately establishes it as a variable: given its measured luminosity, it should have been detected during the prior ROSAT observations. The spectral models all return equally acceptable fits: an order-of-magnitude higher column density than the known Galactic value is required for fits with the bremsstrahlung model and the power law model, while the DiskBB model only requires a column of N_H < 8×10^20 cm^−2. The bremsstrahlung temperature is kT ∼ 2 keV but rises to kT ∼ 6.4 keV if the column density is fixed at the known value; otherwise, the parameters of the fixed-column models change insignificantly. All models require an unresolved line at 0.83 keV with equivalent widths that range from 45 to 122 eV. The bremsstrahlung and power law models require a second unresolved line at 0.67 keV with an equivalent width of ∼120 eV. The combination of spectral fit values and variability suggests that an XRB classification is the most likely for this source. Figures 6 and 7 present the extracted spectrum of this source as fit with the power law model (with a thawed column density) and a confidence contour plot for this fit, respectively.

CXOU J235808.7−323403: The offset between this source and RP99 source P9 is only ∼2″, so we claim that these sources are in fact the same. This source is the weakest of the four considered in this Section for which a basic spectral analysis is reasonable. The three best-fit models are listed in Table 4; none of them is particularly good, as single high (or low) bins contribute relatively large values to χ². We do not include other models, as those fits are significantly poorer. The spectrum is clearly soft for all three fits, which overlap within the errors. Given the relatively poor fit, the flux is likely an overestimate, and the errors on the normalization appropriately reflect the uncertainty. RP99 note that this source, which corresponds to P9 in their paper, is variable and highly absorbed. If we adopt the best-fit APEC model, we confirm the high absorption: two of the fits yield column densities of N_H ∼ 0.1-0.2 ×10^22 cm^−2. As with RP99, we do not detect a counterpart at optical or radio wavelengths. In Figure 8 we present the extracted spectrum for this source as fit with the APEC model.
Identifications of Other Discrete X-ray Sources
We now briefly comment on the nature of some of the other Chandra-detected X-ray sources. We searched at multiple wavelengths for counterparts to these sources: we also identified sources which may have been confused in prior X-ray observations due to poorer angular resolution. We recount the results of these searches here and present a summary of these counterparts in Table 3.
Foreground Stars: The optical counterparts of CXOU J235748.6−323234 and CXOU J235752.7−323309 are foreground stars. The first source corresponds to the star USNO 0574−1250312 (Monet et al. 2003): it was previously detected with the ROSAT PSPC by RP99 and was labeled by those authors as P6. The offset between P6 and our Chandra position is 4.2 ′′ ; Chandra's improved position yields an offset from USNO 0574−1250312 of only 1.3 ′′ . Davoust & de Vaucouleurs (1980) noted that the star USNO 0574−1250312 was misclassified as an HII region (#18) by Hodge (1969).
The second source matches the source cataloged as P7 by RP99 with a Chandra-ROSAT position offset of ∼2 ′′ . We therefore claim that the two sources are in fact the same. RP99 speculated that this source may be a background object; we instead claim that this source is physically associated with the star USNO 0574−1250339 (Monet et al. 2003), located only 0.8 ′′ from the Chandra position.
HII Regions: Catalogs of HII regions in NGC 7793 have been presented by Hodge (1969) and Davoust & de Vaucouleurs (1980). We used a search radius of 3″ to identify associations between our Chandra sources and the cataloged HII regions. We found one such association: the X-ray source CXOU J235743.8−323633 is offset from the HII region D22 (Davoust & de Vaucouleurs 1980) by 2.5″, and we suggest that these two sources are physically associated. This X-ray source may be an SNR or an XRB associated with the HII region, as we do not generally expect HII regions themselves to be X-ray luminous.
SNRs: As described previously, a total of 32 optically-identified SNRs and candidate radio SNRs have been identified in NGC 7793 based on optical (Blair & Long 1997) and radio searches (Pannuti et al. 2002). We have already discussed the association between the X-ray source CXOU J235746.7−323607 and the candidate radio SNR R3 (§3.1). If we adopt a search radius of 1.5″ (which corresponds to a linear distance of approximately 30 pc at the assumed distance to NGC 7793), we find one other association.⁴ This association (with an offset of 1.1″) occurs between CXOU J235747.2−323523 and the optically-identified SNR S11 (Blair & Long 1997). This source had been originally classified as an HII region (#40) by Davoust & de Vaucouleurs (1980); Pannuti et al. (2002) found a non-thermal radio counterpart, which helped to solidify its classification as an SNR. The Chandra observation clearly reveals an X-ray counterpart for the first time, making it one of a small number of known extragalactic SNRs which have been detected at X-ray, optical and radio wavelengths. We will discuss the multi-wavelength properties of the SNR population of NGC 7793 in more detail in Section 6. For completeness, we also note that within this adopted search radius, our cataloged source CXOU J235800.1−323325 is coincident with the southern component of the candidate microquasar N7793-S26. As mentioned earlier, this source was previously classified as an SNR (Blair & Long 1997; Pannuti et al. 2002) but has recently been re-classified as a microquasar (Soria et al. 2010): CXOU J235800.1−323325 features some spatial extent in the X-ray that mimics the observed extended emission seen at optical and radio wavelengths.

⁴ We note here that the published positions of several candidate radio SNRs detected in NGC 7793 by Pannuti et al. (2002)
Young Massive Star Clusters: We searched for positional coincidences between our sample of X-ray sources and the 20 young massive star clusters identified in this galaxy by Larsen & Richtler (1999) and Larsen (1999). The effective radii of these clusters were given by Larsen (1999) and range from ∼4-60 pc at our assumed distance to NGC 7793 (corresponding to a projected angular scale of 0.3-3 ′′ ). Using these radii, our search found no positional matches.
Background Sources: We also used the NASA/IPAC Extragalactic Database (NED) to search for background source counterparts for the remaining eleven discrete X-ray sources identified in our survey. We adopted a radius of 6″ for this search, but no counterparts were identified for any of the eleven sources. To estimate the number of detected background sources that are seen in projection beyond NGC 7793, we use the relation given by Campana et al. (2001) for the number N of background sources brighter than a flux density S per square degree (Equation (1), expressed in CGS units). If we consider the entire ACIS-S3 chip (with a field of view of 8.3′ × 8.3′) and assume a limiting absorbed flux of 1.15×10^−15 ergs cm^−2 s^−1 for our observation, we estimate that approximately ten background sources lie within those bounds. NGC 7793 covers approximately half of the ACIS-S3 chip, so we adopt a contamination of ∼5 background objects within the optical extent of the galaxy.
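The arithmetic behind this estimate is straightforward once a logN-logS relation is adopted. In the minimal sketch below, the normalization and slope of the assumed power law are placeholders standing in for the Campana et al. (2001) coefficients of Equation (1), so only the structure of the calculation, not the printed numbers, should be taken from it.

# Placeholder logN-logS coefficients; the true values are those of
# Campana et al. (2001), Equation (1) in the text.
K = 1.0e3          # sources per deg^2 brighter than S0 (hypothetical)
S0 = 1.0e-15       # reference flux, erg cm^-2 s^-1 (hypothetical)
gamma = 0.8        # cumulative slope (hypothetical)

def n_per_deg2(S):
    # Cumulative surface density N(>S) for an assumed power law.
    return K * (S / S0) ** (-gamma)

S_lim = 1.15e-15                        # limiting absorbed flux (from text)
fov_deg2 = (8.3 / 60.0) ** 2            # ACIS-S3 field of view, 8.3' x 8.3'
n_chip = n_per_deg2(S_lim) * fov_deg2   # expected background sources on chip
n_galaxy = 0.5 * n_chip                 # NGC 7793 covers ~half the chip
print(f"chip: {n_chip:.1f}, within the galaxy: {n_galaxy:.1f}")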
Sources Previously Blended and Now De-Blended by Chandra: Finally, we describe our search for sources which may have been blended in previous observations but are resolved by Chandra. Besides the resolved emission components of the candidate microquasar N7793-S26, we find two such instances. The first involves the two X-ray sources found within the error circle of the source P13, the ULX discussed previously. The second involves the sources CXOU J235802.8−323614 and CXOU J235803.5−323643, which are located within 20″ and 11″, respectively, of the position of the source P11 identified by RP99. The latter source is only slightly more luminous than the former, and it appears that the combined emission from both was identified as P11 in the PSPC observation.
Quantile Analysis of the Spectral Properties of the Discrete X-ray Sources
We adopted the quantile approach to a color-color diagram (Hong et al. 2004). The quantile method determines the energy below which a fixed percentage of events fall: colors are therefore determined from the ratios or differences of the resulting energies. Briefly, E_X is defined as the energy below which the net counts are X% of the total net counts; E_25, E_50 and E_75 then correspond to the energies below which the counts are 25%, 50% and 75% of the total, respectively. The corresponding quantile Q_X is defined as

Q_X = (E_X − E_low) / (E_high − E_low),

where E_low and E_high are the lower and upper boundary energies, respectively, of the full energy band considered (in the present paper, we have assumed values of E_low = 0.2 keV and E_high = 10.0 keV). The grid is defined by 3*(Q_25/Q_75) versus log(Q_50/(1−Q_50)) to separate the data as much as possible. The appearance of the interpretative grid in model coordinates (e.g., N_H versus kT, N_H versus power law index) has a quashed appearance, reflecting the spectral energy information truly available from the instrument. The reader is referred to Hong et al. (2004) for more information about the quantile approach.
Calculated values for Q_25, Q_50, Q_75, and 3*(Q_25/Q_75) are presented in Table 5. For example, in the case of the first tabulated source CXOU J235743.8−323633, we have measured a value of E_25 = 1.14 keV, and the corresponding value of Q_25 is 0.096. In the resulting quantile diagram, one group of sources, Group (i), separates from the sources with harder spectra (that is, those at higher kT). We suspect that the Group (i) points are mainly SNRs or other emission-line sources; alternatively, these sources may also be XRBs illuminating adjacent clouds of interstellar material in NGC 7793. We note that strong emission-line sources are more difficult to interpret correctly using the quantile approach because strong, low-energy line emission (at CCD spectral resolution) can mimic a source with a continuous spectrum and lower column density.
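For readers who wish to reproduce this bookkeeping, the quantile construction reduces to percentiles of the event energies. The following minimal sketch (plain numpy, with a synthetic event list standing in for real extracted events) computes the quantile colors and the grid coordinates defined above; the final line verifies the tabulated example, E_25 = 1.14 keV giving Q_25 = 0.096.

import numpy as np

E_LOW, E_HIGH = 0.2, 10.0   # full-band boundaries adopted in the text (keV)

def quantile_colors(energies):
    # Energies below which 25%, 50% and 75% of the net counts fall:
    e25, e50, e75 = np.percentile(energies, [25.0, 50.0, 75.0])
    q = lambda e: (e - E_LOW) / (E_HIGH - E_LOW)
    q25, q50, q75 = q(e25), q(e50), q(e75)
    x = np.log10(q50 / (1.0 - q50))   # abscissa of the quantile diagram
    y = 3.0 * q25 / q75               # ordinate of the quantile diagram
    return (q25, q50, q75), (x, y)

# Synthetic, hypothetical event list (keV); real work would use the
# extracted event energies of each detected source.
rng = np.random.default_rng(0)
events = np.clip(rng.exponential(1.5, 200) + E_LOW, E_LOW, E_HIGH)
(q25, q50, q75), (x, y) = quantile_colors(events)

# Check of the tabulated example: E25 = 1.14 keV -> Q25 = 0.096.
print(round((1.14 - E_LOW) / (E_HIGH - E_LOW), 3))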
Time Variability
To investigate time variability in the discrete X-ray source population of NGC 7793, we took two approaches: flux differences between our Chandra observation and the ROSAT PSPC observation of RP99, as well as variability within the Chandra observation itself. For the differences in flux between the two observation epochs, we must consider sources detected by both RP99 and the present work, sources detected by RP99 but not by the present work, and sources detected by the present work but not by RP99. We adopted a common energy range (0.2-2.4 keV) for the flux comparisons as well as a common spectral model (an absorbed power law).
We note that we have recovered all of the RP99 sources. We have found only one Chandra source that should have been detected by RP99 if it had been active during the ROSAT PSPC observation, namely CXOU J235806.6−323757 (see Section 3.1). We considered the seven discrete X-ray sources detected both in this paper and by RP99. Table 6 presents estimates of the ROSAT and Chandra luminosities of these sources using a power law model with a photon index Γ = 1.5 and a column density of N_H = 1.15×10^20 cm^−2. For CXOU J235746.7−323607 we also included a luminosity estimate based upon the Einstein observation of this source, as described by Harris et al. (1994) and designated in that work as 2E 2355.2-3253.
Precise statements about time variability are hampered by the lower-quality PSF of ROSAT relative to Chandra. The broader PSF of ROSAT mixed diffuse emission into the discrete-source emission, raising the overall count rate as well as altering the nature of the spectrum, depending upon the size of the counts aperture used. The presence of this mixing may be seen in the fact that all of our Chandra luminosity estimates are at least 10% lower than the ROSAT values.
Regardless, several sources stand out and merit a brief discussion. CXOU J235806.6-323757 should have been detected, implying at least an increase in flux by a factor of ∼30-50.
CXOU J235748.6−323234 decreased by a factor of ∼20, while CXOU J235800.1−323325 decreased by a factor of ∼6-7. We therefore identify a total of three variable discrete X-ray sources in NGC 7793; the remaining sources decreased by modest amounts (factors of ∼2) or are constant within the errors.
To test for variability within the Chandra observation, we used the standard CIAO tools to first barycenter the data and then extract light curves for each detected point source. The light curves were binned into 60-second intervals and these binned light curves were then run through a Bayesian variability detector (the CIAO tool glvary). None of the sources exhibited statistically significant variability. An examination of each light curve verified that the light curves of all of the discrete sources were constant within the errors.
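Although we used the CIAO tool glvary for the Bayesian test, the first-order question, whether a binned light curve is consistent with a constant rate, can be posed compactly. The sketch below applies a simple chi-square constancy test to a 60-second binned light curve; it is a simplified stand-in for, not a reimplementation of, the Gregory-Loredo algorithm inside glvary, and the counts are synthetic.

import numpy as np
from scipy import stats

def constancy_probability(counts):
    # Chi-square test of a binned light curve against its mean rate.
    # Poisson errors are approximated as sqrt(mean), adequate for bins
    # containing more than a few counts.
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean()
    chi2 = np.sum((counts - mean) ** 2 / max(mean, 1.0))
    dof = counts.size - 1
    return stats.chi2.sf(chi2, dof)   # P(fluctuations | constant source)

# Hypothetical light curve: a steady ~0.05 cts/s source over the
# 49094 s exposure, binned into 60 s intervals (818 bins).
rng = np.random.default_rng(1)
lc = rng.poisson(3.0, size=818)
print(f"P(constant) = {constancy_probability(lc):.3f}")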
Point Source Spatial Asymmetry
We noted above that a majority of the point sources are located in the eastern half of the galaxy and none are located in the northwestern quadrant. Such a peculiar distribution of sources is difficult to reconcile with the general orderly and symmetric optical appearance of the galaxy. Explanations for such a distribution include an excess absorption toward the northwestern quadrant of the galaxy, a random probability in the distribution of the sources, a gravitational interaction between NGC 7793 and another galaxy, dramatically lower effective exposure on this portion of the chip during the observation and "patchy" star formation activity in NGC 7793. We rule out an excess absorption toward the northwest quadrant based on inspection of maps of dust column density toward NGC 7793 as provided by Schlegel et al. (1998). These maps indicate a dust differential measure of only 2% across the face of NGC 7793, which is insufficient to account for the observed asymmetry. We consider the other four explanations here in turn.
We first consider a strictly random interpretation: if most of the sources are X-ray transients, then by chance we may simply have caught the sources in the northwestern quadrant in a low state at the epoch of our observation.

A gravitational interaction with another member (or multiple galaxies) of the Sculptor Group is another possibility for explaining the asymmetry: such an interaction should also trigger enhanced massive star formation within NGC 7793. The signposts of an elevated star formation rate have been revealed by numerous observations of NGC 7793, including copious amounts of diffuse radio continuum emission and diffuse [S II] emission from the galaxy's disk (Harnett 1986; Blair & Long 1997), the elevated infrared and blue luminosities of this galaxy for its Hubble type (Read & Pietsch 1999) and the considerable population of resident OB associations and HII regions (Davoust & de Vaucouleurs 1980; Ferguson et al. 1996).
We have inspected tabulated information and distribution maps of the Sculptor Group member galaxies as provided by Puche & Carignan (1988) and Karachentsev et al. (2003) to identify a galaxy (or galaxies) which may be interacting with NGC 7793. The galaxy closest to the southeastern edge of NGC 7793 is the Sculptor Diffuse Irregular Galaxy (SDIG, also known as ESO349-G031) (Laustsen et al. 1977; Heisler et al. 1997); the other nearby candidate is NGC 55 (Karachentsev et al. 2003). In fact, Karachentsev et al. (2003) argue that NGC 55 is instead associated with a third major Sculptor Group spiral galaxy, NGC 300. This conclusion was also reached by Pietrzynski et al. (2006), who have separately derived a distance of 1.9 Mpc to NGC 55.
We can quantify the likelihood of an interaction between NGC 7793 and either the SDIG or NGC 55 by calculating the tidal index Θ (Karachentsev & Makarov 1999), which may be defined as follows. If M is the mass of the galaxy which is suspected of interacting with a galaxy of interest and D is the three-dimensional separation between that galaxy and the galaxy of interest, then

Θ = log10(M/D^3) + C,

where C is a constant equal to −11.75 when M is expressed in units of solar masses and D is expressed in units of megaparsecs. If Θ is calculated to be less than zero, then it may be safely concluded that the galaxy of interest and the suspected interacting galaxy are not in fact interacting to a significant extent. We calculate values of Θ = −3.14 and Θ = −3.76 for NGC 55 interacting with NGC 7793 and for the SDIG interacting with NGC 7793, respectively. Based on these strongly negative values of the tidal index in both cases, we conclude that a gravitational interaction is an unlikely explanation for the observed asymmetry. Separately, we have also inspected GALEX data for NGC 7793 to search for any obvious asymmetry in the ultraviolet morphology of the galaxy (as might be expected from lopsided star formation) but we find no evidence for an asymmetric appearance at that wavelength.
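The tidal-index calculation itself is a one-line function once M and D are specified. The sketch below evaluates Θ for two hypothetical companions; the masses and separations entered here are placeholders, since the specific values adopted for NGC 55 and the SDIG are not repeated in the text, only the resulting Θ = −3.14 and Θ = −3.76.

import math

def tidal_index(mass_msun, sep_mpc, C=-11.75):
    # Theta = log10(M / D^3) + C (Karachentsev & Makarov 1999), with M in
    # solar masses and D in Mpc; Theta < 0 means no significant interaction.
    return math.log10(mass_msun / sep_mpc ** 3) + C

# Placeholder inputs for illustration only (not the values used in the text):
for name, mass, sep in [("NGC 55", 2.0e10, 1.0), ("SDIG", 1.0e8, 0.4)]:
    theta = tidal_index(mass, sep)
    verdict = "interacting" if theta > 0 else "not significantly interacting"
    print(f"{name}: Theta = {theta:+.2f} ({verdict})")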
Next, we considered the possibility that the effective exposure time of the portion of the ACIS-S3 chip that sampled the northwestern quadrant of NGC 7793 was significantly lower than for the rest of the chip, thereby leading to an observed deficit of discrete X-ray sources in this part of the galaxy. In Figure 11, we present a contour plot showing the effective exposure for the ACIS-S3 chip during the observation: for illustrative purposes, we also include the positions of the detected discrete X-ray sources, the location of the aimpoint for the observation and finally an ellipse that spans the optical extent of the galaxy. We argue that in fact the effective exposure time for this portion of the chip is not significantly lower than for the rest of the chip and that therefore a different effective exposure time cannot account for the observed asymmetry of the detected discrete X-ray sources.
Lastly, regarding the "patchy" star formation scenario, we have also considered the work of Smith et al. (1984), who described how the star formation activity in NGC 7793 is stochastic and occurs only in large irregular "patches"; such "patchy" activity may also explain the observed asymmetric distribution of sources. Smith et al. (1984) accurately modeled the star formation activity in this galaxy using a stochastic self-propagating star formation model without an imposed spiral modulation. Those authors also commented that "flocculent" galaxies such as NGC 7793 (as described by Elmegreen 1981) in general lack spiral modulation to star formation and feature a "patchy" arm structure. The lack of spiral modulation in NGC 7793 is also supported by the absence of prominent emission from the galaxy's nucleus at any wavelength. If the star formation in the northwest quadrant happened to belong to one large "patch," then the lack of X-ray sources not only at the Chandra epoch but also at the epochs of the prior X-ray observations can be explained.
However, this scenario falls short of a full explanation, given that we do not have a method to date such a "patch." Therefore, at the present time we cannot provide a clear explanation for the observed asymmetry of discrete X-ray sources in NGC 7793. Random probability seems more likely than explanations such as a gravitational interaction with another galaxy or stochastic star formation, but additional study and analysis are required.
Diffuse Emission
A spectrum of the diffuse emission was extracted following a previously described method. Point sources were removed by screening out all events within a radius that enclosed >95% of the PSF at the detected position of each point source. The screening radius was increased to match the increase in the PSF with off-axis angle. The resulting holes were filled in by randomly selecting from an annulus surrounding each source the approximate number of events that would have been present based on the count rate in the annulus. (We could ignore the regions surrounding point sources, but our analysis of the diffuse emission is part of an investigation of the spatial distribution of the diffuse emission in face-on spirals for which we do not want holes; Schlegel et al. 2011, in preparation.) This process assumes only spectral and spatial uniformity of the diffuse emission on spatial scales of ∼20-30″. The inner radius of the annulus used for the back-fill was twice the outer diameter of the source screening radius, to reduce the probability that the selected events were point-source events scattered by the wings of the PSF. The annulus was 20″ wide. Annulus overlaps with other point sources were minimal, but were avoided by sampling events from the non-overlapping portions of the annulus. A radial profile of the diffuse emission was obtained from azimuthal sums in 10″-wide annuli centered on the nucleus. The diffuse spectrum was then extracted using an aperture with a radius of ∼3.7′, determined by the point at which the diffuse profile joined the local (non-galaxy) background. An estimate of the impact of the back-fill may be determined by summing the extraction areas of the point sources and dividing by the extraction area of the entire galaxy. For NGC 7793, those values are 1900 arcsec² and 1.55×10^5 arcsec², respectively, for an impact ratio of ∼1.2%.
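The hole back-fill step can be summarized compactly in code. The sketch below shows the core of the procedure for a single excised source, assuming events are held as simple (x, y) coordinate arrays in arcseconds; the 20″ annulus width and the factor-of-two inner-radius rule follow the text, while the event structure, field size, and radii in the example call are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(2)

def backfill_hole(events_xy, src_xy, r_psf, annulus_width=20.0):
    # Remove events inside the screening radius (the hole) and replace them
    # with events sampled from a surrounding annulus, preserving the local
    # diffuse surface brightness. r_psf encloses >95% of the PSF.
    d = np.hypot(*(events_xy - src_xy).T)
    keep = events_xy[d > r_psf]
    # Annulus inner radius: twice the outer *diameter* of the screening radius.
    r_in = 4.0 * r_psf
    r_out = r_in + annulus_width
    ann = events_xy[(d > r_in) & (d < r_out)]
    # Number of events expected in the hole, scaled from the annulus rate:
    area_ratio = r_psf ** 2 / (r_out ** 2 - r_in ** 2)
    n_fill = rng.poisson(len(ann) * area_ratio)
    fill = ann[rng.integers(0, len(ann), n_fill)].copy()
    # Relocate the sampled events uniformly within the hole:
    r = r_psf * np.sqrt(rng.random(n_fill))
    phi = 2.0 * np.pi * rng.random(n_fill)
    fill[:, 0] = src_xy[0] + r * np.cos(phi)
    fill[:, 1] = src_xy[1] + r * np.sin(phi)
    return np.vstack([keep, fill])

# Hypothetical usage: uniform synthetic events over a 100" x 100" field.
events = rng.uniform(0.0, 100.0, size=(5000, 2))
cleaned = backfill_hole(events, np.array([50.0, 50.0]), r_psf=5.0)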
The spectral fit of the diffuse emission spectrum was carried out in two steps. First, the background was fit using a variety of continuum and line features to achieve a good fit, as described previously. Second, the diffuse spectrum plus background was fit simultaneously with the background by adopting for the background the best-fit parameters determined in the first step. A version of the optically-thin thermal plasma model APEC (Smith et al. 2001) known as "VAPEC", which includes variable elemental abundances as determined using recent atomic physics, was used to fit the spectrum. Abundances were allowed to vary, but if an abundance was found to have an error that included unity, the abundance value was reset and fixed at unity. The known absorbing column density toward NGC 7793 was adopted and the corresponding fit parameter was fixed at that value. The background spectrum contained a large fluorescent Si feature at ∼1.78 keV. The background fit easily matched the data, but the result slightly over-corrected the source spectrum because of photon statistics. We excised the data at this location: this excision has a negligible effect on the fit, as the dominant emission from the hot gas occurs in the 0.5-1 keV band. We also included a power law component to account for any hard emission present from unresolved point sources. The results of our fits are summarized in Table 7.
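For concreteness, the bones of such a fit can be sketched in PyXspec. This is a schematic under stated assumptions rather than our actual fitting script: the spectrum file name is hypothetical, the simultaneous background fit is collapsed into ordinary background handling, and phabs stands in for whichever absorption component was actually employed.

import xspec

spec = xspec.Spectrum("diffuse.pi")      # hypothetical extracted spectrum
spec.ignore("**-0.3")                    # drop poorly calibrated channels
spec.ignore("8.0-**")

# Absorbed thermal plasma plus a power law for unresolved point sources.
model = xspec.Model("phabs*(vapec + powerlaw)")
model.phabs.nH = 0.0115                  # Galactic column: 1.15e20 cm^-2 in
model.phabs.nH.frozen = True             # units of 1e22 cm^-2, held fixed
model.vapec.Ne.frozen = False            # thaw the neon abundance
model.powerlaw.PhoIndex = 1.5

xspec.Fit.statMethod = "chi"
xspec.Fit.perform()
print(model.vapec.kT.values[0], model.vapec.Ne.values[0])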
In Figure 12 we present the extracted spectrum of the diffuse emission as fit with the VAPEC model. The fitted temperature of the diffuse spectrum of NGC 7793 was found to be kT = 0.19 (+0.03/−0.02) keV with N_H = 3.6×10^21 cm^−2, kT = 0.22±0.02 keV for the dual APEC fit, or kT = 0.25±0.02 keV if N_H is fixed at the Galactic column density. These values are lower than the temperatures derived by RP99 (namely kT ∼ 0.8-1.1 keV) using typical thermal models such as thermal bremsstrahlung and the Raymond-Smith thermal plasma.
The most likely explanation for the discrepancies in the fitted temperature is the mixing of the diffuse emission with point source emission from the broad wings of the ROSAT PSPC PSF. For the PSPC, a circle enclosing 90% of the flux was 0.9 ′ in diameter at 1 keV but 2.7 ′ in diameter at 1.7 keV. RP99 adopted an aperture of 1 ′ so sources with harder spectra and hence larger PSFs would preferentially contribute flux above a PSF radius of ∼1.1 ′ . In addition, weak unresolved point sources would blend to form a brighter "diffuse" component.
The fitted temperature of the diffuse emission is similar to the results from fits to the diffuse components of other nearby galaxies: in Table 8 we list several values measured for the diffuse emission from other galaxies for comparison. NGC 7793 stands out for the absence of a second, hotter diffuse component. We attempted to fit a second APEC model (i.e., VAPEC + VAPEC + Power Law), but the model normalization of the second APEC component was consistent with zero. A second APEC component was non-zero only if N_H was fixed at the known Galactic column. We expect that a longer exposure would lead to sufficient statistics to separate a hot component from the background emission without the necessity of additional model constraints. The abundance of each element was permitted to vary, but only the abundance of Ne in the fixed-N_H VAPEC model was significantly different from 1.0: in this case, the best-fit value for the abundance was 2.23 (+0.74/−0.66). In Figure 13 we present confidence contour plots for the column density and the fitted temperature for the APEC fit (left) as well as the fitted temperature and neon abundance for the VAPEC fit in which the column density was fixed at the known Galactic value (right).
We also modeled the spectrum as a sum of unresolved Gaussians plus a power law with emission Gaussians at 0.60, 0.75, and 0.89 keV, corresponding approximately to emission lines attributed to O VII, Fe L shell, and Ne IX. The presence of these lines is expected in hot diffuse gas. The model fits (with both N H free and fixed) were consistent with the APEC+Power Law model, but applying Occam's razor, this model is penalized by the extra components and constraints necessary to obtain a good fit. While the multi-Gaussian fit implies the presence of specific emission lines, the VAPEC model provides the statistically acceptable fit and we consider it the best-fit model.
For the adopted Galactic column density and the VAPEC model, we calculate absorbed and unabsorbed fluxes for the diffuse emission of ∼5.4×10^−13 and ∼5.5×10^−13 erg s^−1 cm^−2, respectively, in the 0.5-2 keV band. For the assumed distance to NGC 7793, the unabsorbed flux corresponds to a luminosity of L_X ∼ 3.3×10^38 erg s^−1.
Luminosity Function of Discrete X-ray Sources
A plot of N (defined as the number of sources with luminosities in excess of the luminosity L_X) versus log L_X for the NGC 7793 discrete sources (with luminosities in units of 10^38 erg s^−1) is shown in Figure 14. Several functions are plotted, including the complete luminosity function for NGC 7793, the luminosity function without the two known foreground stars, and the function without the ULX and the two foreground stars. For comparison, the luminosity function of NGC 2403 (Schlegel & Pannuti 2003) and a line of slope Γ = −0.65 are also shown. The excluded ULX is very bright, and its inclusion leads to a long tail in the distribution, potentially requiring a two-component fit. We discuss the complete and ULX-less functions solely for comparison with the luminosity functions of other galaxies.
Note that the slope of Γ = −0.65 is well-defined only over the narrow luminosity range ∼37.1 < log L_X < ∼37.8, in contrast to the slope of NGC 2403 (determined over the range ∼36.3 < log L_X < ∼39.0; see Schlegel & Pannuti 2003). There is a deficit of sources in NGC 7793 at low (<37.0) and high (∼38.3-39.0) log L_X. Both galaxies occupied a similar area on the ACIS CCDs, so a loss of sources at low L_X does not provide an explanation.
If one or two luminous LMXBs were off during the observation epoch, the deficit at log L X ∼ 38.3 to ∼ 39.0 is readily explained. At the present time, we have information on the time variability of only the most luminous sources (see Section 3.4).
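Constructing and fitting such a cumulative function takes only a few lines. The sketch below builds N(>L) from a list of source luminosities and fits the slope over the restricted range quoted above; the luminosity list is synthetic, so only the procedure (not the printed number) is meaningful.

import numpy as np

def cumulative_lf(lums):
    # Return (sorted L, N(>=L)) for a list of source luminosities.
    L = np.sort(np.asarray(lums))
    N = np.arange(L.size, 0, -1)
    return L, N

def fit_slope(L, N, loglo, loghi):
    # Least-squares slope of log N versus log L over a restricted range.
    logL, logN = np.log10(L), np.log10(N)
    m = (logL > loglo) & (logL < loghi)
    slope, _ = np.polyfit(logL[m], logN[m], 1)
    return slope

# Hypothetical luminosities (erg/s) standing in for the NGC 7793 sources.
rng = np.random.default_rng(3)
lums = 10.0 ** rng.uniform(36.5, 38.5, 19)
L, N = cumulative_lf(lums)
print(f"slope = {fit_slope(L, N, 37.1, 37.8):.2f}")   # compare with -0.65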
In previous work, differences between the slopes of the luminosity functions of different galaxies have been reconciled by applying corrections for star formation rates. This was demonstrated by Grimm et al. (2002), who aligned seemingly discrepant luminosity functions after applying corrections for the star formation rates of the galaxies. In the case of NGC 7793, such a correction is small (Storchi-Bergmann et al. 1994). In Table 9, we present estimates of both Γ and the star formation rate (in units of M⊙ yr^−1) for five galaxies, including NGC 7793. In contrast to the work of Grimm et al. (2002), we find no obvious correlation between the values of the slopes and the star formation rates of the five galaxies listed in Table 9.
The Multi-Wavelength Properties of the SNR Population of NGC 7793
Lastly, we comment on the multi-wavelength properties of the SNR population of NGC 7793. As noted previously, prior to the present work a total of 32 SNRs had been identified in this galaxy by surveys conducted at X-ray, optical and radio wavelengths (Blair & Long 1997; Read & Pietsch 1999; Pannuti et al. 2002). In Section 3, we presented a spectral analysis of CXOU J235746.7−323607, the X-ray counterpart to the candidate radio SNR R3 identified by Pannuti et al. (2002), and concluded that the X-ray source is time-variable. We therefore exclude this source from our discussion here of the multi-wavelength properties of the SNR population of NGC 7793, reducing the size of the sample to 31 sources.
By virtue of its superior angular resolution, Chandra potentially yields a significant improvement over ROSAT (which imaged NGC 7793 previously) in studies of the SNR population of an external galaxy. To identify X-ray counterparts to these known SNRs, we have cross-correlated the list of discrete X-ray sources detected in this galaxy (see Table 2) with the positions of the known SNRs (as described in Section 3.2). We have clearly detected only one additional SNR, the optically-identified SNR N7793-S11, which has also been detected in the radio by Pannuti et al. (2002). To investigate the scenario in which the X-ray counterparts to the SNRs may be extended and faint, and thus missed by wavdetect, we extracted counts at the locations of all of the 30 remaining SNRs. For the optically-identified SNRs we used apertures that corresponded to the angular extents of these SNRs (ranging in size from ∼2″ to ∼10″ in radius), while for the candidate radio SNRs we used apertures 3″ in radius. We did not detect any additional SNRs down to a limiting count rate of ∼1×10^−4 cts s^−1; we note that this search is further confused by diffuse X-ray emission from the disk of NGC 7793. To estimate a corresponding limiting luminosity for this count rate, we consider the work of Long et al. (2010), who conducted an X-ray survey of the SNR population of the nearby face-on spiral galaxy M33 with Chandra, known as the Chandra ACIS Survey of M33 (ChASeM33; see Plucinsky et al. 2008). Those authors assumed a soft thermal (kT = 0.6 keV) spectrum with a sub-solar metal abundance of 0.5 for calculating the luminosities of detected X-ray counterparts over the energy range of 0.35-2.0 keV. Assuming the same model, considering the same energy range and adopting the nominal Galactic column density toward NGC 7793 of N_H = 1.15×10^20 cm^−2, we calculate a limiting luminosity for our search of L_X ∼ 7.6×10^35 ergs s^−1. The survey conducted by Long et al. (2010) identified 7 SNRs (out of the 131 observed by the survey) with X-ray luminosities in excess of this limit, for a detection rate of ∼5%; this closely matches our detection rate of ∼3% (one SNR detected out of 31 observed).
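The detection-rate comparison and the flux scale involved are simple to verify. The short computation below, a minimal sketch, reproduces the two percentages and converts the quoted limiting luminosity back to a flux at an assumed distance of 3.9 Mpc to NGC 7793; the distance value is our assumption for illustration, as the text quotes only the resulting luminosity.

import math

# Detection rates: ChASeM33 versus this work.
print(f"M33: {7 / 131:.1%}, NGC 7793: {1 / 31:.1%}")   # ~5.3% vs ~3.2%

# Flux implied by L_X ~ 7.6e35 erg/s at an assumed distance of 3.9 Mpc.
MPC_TO_CM = 3.086e24
d_cm = 3.9 * MPC_TO_CM                  # assumed distance (hypothetical value)
L_lim = 7.6e35                          # limiting luminosity from the text
F_lim = L_lim / (4.0 * math.pi * d_cm ** 2)
print(f"F_lim ~ {F_lim:.1e} erg cm^-2 s^-1")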
We find therefore that we can reconcile the detection rate of X-ray counterparts to SNRs in NGC 7793 with the detection rate of X-ray counterparts to SNRs in M33 as presented by Long et al. (2010). For comparison purposes, we note that over ten Galactic SNRs are known to have unabsorbed X-ray luminosities that exceed the limiting luminosity attained by our Chandra observation; it is possible that the discrepancy between NGC 7793 and the Milky Way is simply due to the lower mass and star formation rate of NGC 7793 compared to the Milky Way. Figure 15 presents a Venn diagram summarizing the overlap of detections of the 31 SNRs identified at multiple wavelengths in NGC 7793, as updated with the results of the present work. Pannuti et al. (2007) present and discuss wavelength-dependent selection effects in searches for SNRs in a sample of nearby galaxies using Chandra observations. Long et al. (2010) also discuss the multi-wavelength properties of the sample of known SNRs in M33: those authors describe the importance of local gas density in dictating the X-ray properties of SNRs and consider how the optical morphology and environment of an SNR may affect its detectability in the X-ray. The reader is referred to both of these works for a more complete discussion of the multi-wavelength properties of SNRs. We note that the detection of X-ray emission from N7793-S11 is significant in that this source is one of only a very few extragalactic SNRs located beyond the Local Group which have been detected in the X-ray, optical and radio bands.
Conclusions
The conclusions of this paper may be summarized as follows:

1) We detected 22 discrete X-ray sources within the optical extent of NGC 7793 in a 49094 s exposure. The sources are significant at the ∼3σ level or greater and correspond to a limiting unabsorbed luminosity of ∼3×10^36 ergs s^−1 over the 0.2-10.0 keV energy range.

2) Four sources had a sufficient number of counts to allow spectral fitting. Acceptable fits were derived using either a power law, a bremsstrahlung model, an APEC model or a disk blackbody model, plus zero-width Gaussians to simulate unresolved lines. Column densities were generally higher than the known Galactic value toward NGC 7793; in fact, fits using the known Galactic value were generally poor. Our fit to the extracted spectrum of the ULX using the DiskBB model returns a value of kT_in of approximately 2.0 keV, consistent with the interpretation provided by RP99 that this source is an XRB featuring a ∼10 M⊙ black hole. All sources were investigated using a quantile color-color plot. Time variability was investigated through comparisons of the fluxes between our Chandra data and the RP99 results: three sources were shown to have varied by factors of ∼6 to 30 or more.

3) We searched for counterparts at multiple wavelengths for the detected X-ray sources. Based on our search, we have identified counterparts to one SNR, one HII region and two foreground stars; the remaining sources are likely to be XRBs and luminous X-ray SNRs native to NGC 7793, plus background galaxies seen through the disk of the galaxy. The detected SNR, N7793-S11, is also detected in the optical and radio, making it one of the few SNRs located outside of the Local Group to be detected at all three wavelengths. We have also ruled out the possibility that the candidate radio SNR R3 is a single SNR.

4) A remarkable asymmetry is seen in the distribution of X-ray sources in this galaxy, with the large majority seen in the eastern half. Possible explanations for this asymmetry include a gravitational interaction with a nearby galaxy or stochastic star formation.

5) The fitted temperature of the diffuse emission is kT = 0.253 (+0.018/−0.015) keV, lower than the temperature measured by RP99. The discrepancy can be explained by the significant mixing of the diffuse emission with point-source emission from the broad wings of the ROSAT PSPC PSF.

6) We constructed the luminosity function for the detected discrete X-ray sources in NGC 7793. If the known ULX in this galaxy is excluded, the fitted slope is Γ = −0.65±0.11, but the shape is linear only over a small range. If the ULX is included, the fitted slope becomes Γ = −0.62±0.2, but the residuals of the fit are much larger.
We thank the referee for many helpful comments that have significantly improved the quality of this paper.

Notes to the tables: H and D denote HII regions cataloged by Hodge (1969) and Davoust & de Vaucouleurs (1980), respectively; S denotes optically-identified SNRs from Blair & Long (1997); P denotes X-ray sources detected with the ROSAT PSPC and presented by RP99; R denotes candidate radio SNRs identified by Pannuti et al. (2002). The identifications of stellar counterparts are made based on the USNO-B catalog (Monet et al. 2003); sources not listed do not have any counterparts at other wavelengths. Spectral model parameters are defined as follows. Power Law ("PowLaw"): photon index Γ, defined such that the spectrum goes as E^−Γ; Bremsstrahlung ("Brems"): temperature kT in keV; DiskBB (multi-color disk model): inner-disk temperature kT_in in keV. The normalizations are defined as follows. Power Law: photons/keV/cm²/s at 1 keV; Bremsstrahlung: 3.02×10^−15/(4πd²) ∫ n_e n_i dV, where d is the distance to the source in cm and n_e and n_i are the electron and ion densities, respectively, in cm^−3; DiskBB: (R_in/D)² cos θ, where R_in is the inner disk radius in km, D is the distance to the source (in units of 10 kpc) and θ is the inclination angle of the disk. The column density N_H is in units of 10^22 cm^−2, and "EqW" is the Gaussian equivalent width in eV. The tables present the best fits derived from the models that we used, not necessarily every model that we used; for details of the spectral analysis of these sources, see Section 3.1. ROSAT count rates were converted to fluxes using a power-law model (with a photon index Γ = 1.5) and assuming a column density N_H = 1.15×10^20 cm^−2; the units of the luminosities are ergs s^−1.
Pregnancy with giant ovarian dysgerminoma
Abstract Rationale: Dysgerminoma is an extraordinarily rare neoplasm arising from the malignant germ cells of the ovary. Early antenatal diagnosis and proper management of the neoplasm to improve maternal-neonatal outcomes are considerable challenges facing the gyne-oncologist. We summarize the clinical features and discuss treatment strategies of ovarian dysgerminoma (OD). In addition, we review the literature on OD in PubMed, Web of Science Core Collection, Library of Congress, and LISTA from 1939 to 2019 to evaluate its clinical characteristics, feto-maternal compromise, management, and fertility outcome. Patient concerns: A 25-year-old pregnant woman reported lower abdominal pain and vomiting. Diagnosis: The patient was diagnosed with right OD. Interventions: She underwent a cesarean section due to severe abdominal pain, delivered a healthy girl at 38+4 weeks of gestation, and accepted fertility-preserving surgery. However, the patient refused chemotherapy postoperatively. Outcomes: The patient was followed up 42 days, 3 months, and 6 months after surgery, and no tumor recurrence was observed. Lessons: The features of OD, including age, symptoms, imaging data, and tumor markers, are non-specific. However, these abnormal indicators may provide some evidence for accurate antenatal diagnosis. Management strategies should be considered comprehensively on an individual basis, and fertility-preserving surgery should be carried out in the second trimester if further pregnancy is desired. Adjuvant chemotherapy needs to be applied in the treatment of OD patients with International Federation of Gynecology and Obstetrics (FIGO) stages II, III, and IV, and timely chemotherapy is suggested if there are several weeks before the expected date of delivery. The overall prognosis of OD patients is excellent.
Introduction
Malignant germ cell tumor (MGCT) is an extraordinarily rare ovarian cancer, accounting for no more than 5% of all ovarian cancers [1][2][3][4] and 18% to 26% of all ovarian cancers in pregnancy. [5,6] MGCT mainly includes the following subtypes: ovarian dysgerminoma (OD) (38.2%), yolk sac tumor (30.4%), and immature teratoma (15.7%). [2] OD is the most common subtype of MGCT and often occurs in adolescence and early adulthood. [1,[7][8][9][10] Among pregnant women, OD patients account for only about 0.0002% to 0.001%, [11] and OD usually has a unilateral onset and is diagnosed at an early stage. It is difficult to achieve a large sample of OD due to its relatively low incidence; thus, more studies are needed to summarize the clinical features and determine the optimal management strategies of OD. Furthermore, OD associated with mental retardation in a pregnant woman is even rarer. Therefore, the purpose of this study is to report our rare case, as well as to review the literature on OD features, differential diagnosis, management strategies, and prognosis of pregnant patients with OD.
Ethics
This case report was approved by the institutional review board of the Second Hospital of Jilin University. Informed written consent was obtained from the patient for publication of this case report and the accompanying images.
Methods
We report a case of OD with mental retardation and review relevant literature in PubMed, Web of Science Core Collection, Library of Congress, and LISTA from 1939 to 2019 (Table 1).
Case report
A 25-year-old pregnant woman with mental retardation who had experienced abdominal pain and vomiting for 7 hours was transferred to our department. Her history was gravida 1, para 0, with no prior surgery. Her initial prenatal examination was performed at 12 weeks of gestation. The ultrasound confirmed the pregnancy and revealed a large mass in the pelvic cavity. Regular ultrasound examinations during pregnancy revealed that the volume of the mass increased gradually. At 30+2 weeks of gestation, ultrasound revealed a cephalic presentation. The biparietal diameter (BPD) was 6.9 cm, the head circumference (HC) was 26.6 cm, the abdominal circumference (AC) was 24.7 cm, and the femur length (FL) was 5.6 cm. The posterior wall of the placenta was grade I and its lower margin was 1.6 cm from the internal cervical os. The amniotic fluid index (AFI) was 16.1. The ultrasound also revealed a hypoechoic mass in the lower part of the posterior wall of the uterus measuring 14.8 cm × 8.5 cm. At 38+4 weeks of gestation, the ultrasound before admission revealed a cephalic presentation. The BPD was 7.8 cm, HC was 31.5 cm, AC was 32.9 cm, and FL was 6.8 cm. The right wall of the placenta was late grade II and the AFI was 12.2 cm. A U-shaped impression was found on the neck of the fetus. The ultrasound also revealed a hypoechoic mass located at the right rear of the uterus measuring 23.0 cm × 12.5 cm (Fig. 1). Several of her tumor markers were elevated: human chorionic gonadotropin (HCG) was 14,333.94 mIU/mL (0-5 mIU/mL), α-fetoprotein (AFP) was 142.59 ng/mL (0-8.78 ng/mL), cancer antigen (CA)-125 was 148.10 U/mL (0-35 U/mL), CA-199 was 610.46 U/mL (0-37 U/mL), CA-50 was 59.10 U/mL (0-20 U/mL), cytokeratin 19 fragment was 4.86 ng/mL (0-2.08 ng/mL), and neuron-specific enolase (NSE) was 76.04 ng/mL (0-15 ng/mL). Conversely, some of her tumor markers were within normal limits, including carcinoembryonic antigen (CEA), CA-153, and squamous cell carcinoma antigen (SCC).
On abdominal examination, the uterine fundal height was 33 cm and the abdominal circumference was 98 cm. Abdominal tenderness was present, especially in the right lower abdomen, and rebound tenderness was also present. The patient could not cooperate with the rest of the examination.
Termination of pregnancy was performed due to severe abdominal pain. She delivered a healthy 2540 g girl by cesarean section (CS), with Apgar scores of 9 at 1 minute and 10 at 10 minutes.
Intraoperatively, we found a large solid mass of 25 cm × 19 cm × 24 cm originating from the right ovary, with a moderate amount of pale-yellow ascites. The tumor was substantially lobulated, its texture was soft, its surface was intact, and the tissue was friable. Large blood vessels were visible, and the boundary between the tumor and adjacent organs (the right side of the uterus, the rectal serosa) was not clear. No abnormalities in the appearance of the ovaries and fallopian tubes were otherwise found. At sectioning (Fig. 2), the mass was grayish-white, grayish-yellow, grayish-red, and homogeneous. The tumor was almost entirely solid, while some areas were soft, with a density similar to brain medulla. No enlarged lymph nodes were found in the pelvic or abdominal cavity. The right fallopian tube was 7 cm long and 0.3 to 0.7 cm in diameter. Tumor biopsy and contralateral ovarian biopsy were performed first.
The pathological results of the intraoperative frozen section showed right adnexal dysgerminoma; the left ovarian biopsy showed no tumor, only localized old bleeding and interstitial fibrosis. Fertility-preserving surgery, including resection of the giant tumor and right adnexa, omentectomy, appendectomy, pelvic lymphadenectomy, and para-aortic lymph node biopsy, was then performed.
The patient's postoperative vital signs were stable, and the incision healed well. However, the patient refused chemotherapy postoperatively. The follow-up results of the patient 42 days, 3 months, and 6 months after the surgery showed no tumor recurrence.
Discussion
OD is the most common subtype of MGCT and originates from ovarian primordial germ cells. It often occurs in adolescence and early adulthood, but has been found in only about 0.0002% to 0.001% of pregnant women. [11] It is hard to achieve a large sample of OD due to its extraordinarily low incidence. Therefore, this paper aims to report our rare case, as well as to review the relevant literature summarizing the features, differential diagnosis, management strategies, and prognosis of pregnant patients with OD.
Features of ovarian dysgerminoma
OD can occur in females aged from 7 months to 70 years, [31] but predominantly in young pregnant women. [1,7,8,30,32] The majority of pregnant women with OD have non-specific symptoms, [33] the most common being abdominal pain (35.3%), followed by abdominal distention (19.6%), a growing mass (19.6%), multiple symptoms (18.6%), and no symptoms (21.6%). [2] In our study, abdominal pain was the main complaint of the patient and led to a cesarean section.
Considering the gross pathologic features of OD, it usually presents as a well-encapsulated and characteristically solid mass, with a diameter ranging from 8 to 15 cm. [25,31] At sectioning, the tissue is lobulated, soft, fleshy, and gray-white or light tan. Occasionally, areas of hemorrhage and coagulative necrosis, typically related to cystic changes, can be observed. OD is most commonly unilateral in pregnancy, accounting for approximately 95% of cases, [1] while only 5% to 20% are bilateral. [1,[34][35][36] In our study, the tumor was unilateral, substantially lobulated, of soft texture, and with an intact surface. This finding was consistent with the previous literature. [1,25,[31][32][33]

Regarding the microscopic pathologic features of OD, they are similar to those of testicular seminomas. OD is composed of a uniform population of round cells, usually infiltrated by T lymphocytes and separated by fibrous strands. A large round or flattened nucleus containing one or a few prominent nucleoli and clear eosinophilic cytoplasm can be observed in the center of the cells. In addition, mitoses are always present in large quantities. [31] (Table 1 summarizes the clinical features in 34 cases of ovarian dysgerminoma in pregnancy.)

Regarding the imaging features of OD, it characteristically presents as a purely solid mass. On ultrasonography, these tumors show well-defined borders, smooth lobulated contours, and component lobules with heterogeneous echogenicity; on power and color Doppler ultrasonography, they are abundantly vascularized. [37][38][39] In our study, the ultrasound results showed unclear boundaries and component lobules with heterogeneous echogenicity, features suggesting that the mass may be malignant. On CT, the lobular pattern may also be observed, with a predominantly solid tumor accompanied by enhancing septa and areas of cystic change. [38,40] Kim and Kang [38] claimed that calcification might appear as speckles. On magnetic resonance imaging (MRI), the most characteristic appearance is a solid mass divided into lobules by fibrovascular septa. On T2-weighted images, the signal intensity is isointense or slightly hyperintense; on T1-weighted images, the signal intensity of OD is lower than that of muscle. Kitajima et al [41] described that the MRI features of epithelial ovarian neoplasms were similar to those of multilocular cystic masses with irregular septations. Unfortunately, this patient did not undergo CT or MRI examination during hospitalization. The International Federation of Gynecology and Obstetrics (FIGO) staging system is used for staging.

Considering the tumor markers of OD, CA125 and NSE may provide reliable evidence. [42] The literature reports that high levels of serum CA125 fell rapidly after chemotherapy. [25] Previous studies described that some OD patients exhibited increased NSE content and positive NSE on immunohistochemistry (IHC). [43,44] The serum levels and IHC expression of NSE in pediatric patients with OD may be of value in patient monitoring. [42] In this study, CA125 and NSE were significantly increased preoperatively. Some other indicators were also abnormal, including HCG, AFP, CA-199, and CA-50. We hope that these positive indicators can help other scholars to diagnose OD accurately. Besides, LDH is another reliable indicator for predicting the effect of chemotherapeutic intervention. [25,45]
Differential diagnosis of ovarian dysgerminoma
OD has nonspecific features, which makes an accurate diagnosis difficult. However, the age of the patient, the imaging features of the neoplasm, and abnormal tumor markers may help establish a correct differential diagnosis. In general, OD should be distinguished from other purely solid ovarian masses, including fibrosarcomas, granulosa cell tumors, Brenner tumors, epithelial ovarian carcinomas, and metastatic carcinomas. [46]
Treatment strategies of ovarian dysgerminoma
With regard to surgical treatment of OD, accurate surgical staging is critical for determining reasonable risk-based management. Currently, the FIGO classification is the most widely accepted method. [47] OD staged IA-C can be managed with acceptable surveillance by fertility-sparing unilateral salpingo-oophorectomy. [31] Bilateral salpingo-oophorectomy and hysterectomy are recommended for stage II and III disease; in addition, if the tumor does not invade the contralateral reproductive organs, unilateral salpingo-oophorectomy can be considered. The management strategies for stage IV patients mainly include fertility-sparing surgery, cytoreduction, and adjuvant chemotherapy. [48,49] Regarding second-look surgery for OD, if the tumor contains teratomatous elements or there is residual disease, patients may benefit from second-look surgery after initial cytoreductive surgery and chemotherapy. [48,50] However, if the tumor has no teratomatous element, <5 cm of residual disease, or normal tumor marker levels after chemotherapy, second-look surgery is not recommended. [50] In this study, the patient's mental retardation and lack of awareness of contraception could lead to a repeat pregnancy and increase the family's burden. Thus, the patient's guardian strongly requested hysterectomy and bilateral adnexectomy. However, the patient did not meet the indications for hysterectomy and bilateral adnexectomy according to the FIGO stage, her age, and the grade of malignancy. Also, in China, especially in rural areas, women who have lost fertility are not competitive in the remarriage population; what is worse, she has mental retardation, which means it would be difficult for her to form a new family if she divorced after hysterectomy and adnexectomy. Therefore, after careful consideration, we performed fertility-preserving surgery. The patient showed a satisfactory treatment outcome during follow-up.
With regard to chemotherapy, OD with FIGO stage II, III, or IV is an indication for chemotherapy. [47] Chemotherapy is recommended based on pathological evidence, [1] especially in cases with advanced-stage tumors, mixed epithelial and germ cell tumors, large tumor size, or rapidly increasing ascites. To date, platinum-based chemotherapy is the main strategy, including paclitaxel-carboplatin (TC) and bleomycin-etoposide-cisplatin (BEP). [25,[51][52][53][54][55][56] In 2004, Hubalek et al [25] claimed that TC could elicit an excellent response and posed no adverse impact on the fetus. BEP is usually applied in the treatment of nonepithelial ovarian tumors in nonpregnant patients; however, the incidence of adverse events of etoposide (plagiocephaly, fetal ventriculomegaly with cerebral atrophy, hearing loss, and syndactyly) is high. [57][58][59][60] Therefore, in pregnancy, paclitaxel-carboplatin chemotherapy rather than BEP is the optimized scheme for the treatment of nonepithelial ovarian cancer. [61] The influence of chemotherapy during pregnancy on maternal and fetal outcomes must also be considered. The literature [62,63] reports that chemotherapy during the first trimester can increase the incidence of fetal death, abortion, and malformations. Furthermore, studies have also shown that the central nervous system, the hematopoietic system, the eyes, and the genitals remain vulnerable to sustained exposure to antineoplastic agents after organogenesis. [64] However, increasing evidence suggests that chemotherapy in the second and third trimesters is relatively safe. [65] In our study, the patient was diagnosed with OD stage IIB, and chemotherapy was recommended by gynecologists postoperatively. However, she refused. Encouragingly, there was no recurrence during the 6-month follow-up period. We attribute this positive outcome partly to the low malignancy of the tumor and the standard, thorough operation carried out by a gynecologist with >30 years of experience, and partly to the short follow-up period.
Prognosis of ovarian dysgerminoma
Residual disease, tumor markers, the FIGO stage, and the volume of residual tumor are all critical prognostic factors. [48] Besides, age over 45 years is also a significant predictor of recurrence. [49] In most cases, tumors are detected early, which contributes to a favorable prognosis. [66] The prognosis of early-stage OD patients is excellent, [48,49,67] and the overall 5-year survival rate is approximately 100%.
Conclusion
In conclusion, the features of OD, including age, symptoms, imaging data, and tumor markers, are non-specific. However, these abnormal indicators may provide some evidence for accurate antenatal diagnosis. Management strategies should be considered comprehensively on an individual basis, and fertility-preserving surgery should be carried out in the second trimester if further pregnancy is desired. Adjuvant chemotherapy needs to be applied in the treatment of OD with FIGO stages II, III, and IV. If there are several weeks before the expected date of delivery, timely chemotherapy is indicated. The overall prognosis of OD patients is excellent.
Expressions of Human Sexuality in the Genital Phase: Reports of a Field Observation in Adolescents and Young People from Porto Velho, Brazil
— This work aims to discuss the psychosocial development of adolescence proposed by Erick Erickson, focusing on human sexuality, and to analyze the influences experienced by each person in this period. The study was part of the Human Sexuality discipline of the Psychology Course at the Federal University of Rondônia in the second semester of 2017 and reports an observation made with four young people and adolescents of both sexes, aged 16 to 20 years, emphasizing the genital phase but, as noted, from the perspective offered by Erick Erickson in the well-known phase of identity versus identity confusion. Study procedures included semi-structured interviews. Adolescence is a phase of many personal and social conflicts, and the literature shows that adolescents are always in constant development; because of this, it is considered that they do not yet have sufficient autonomy to be fully capable of awareness of their actions. Thus, it was found that the lack of direction generates a negative outcome in the self-knowledge of young people, who currently reflect realities in which they have barely overcome the identity crisis.
I. ADOLESCENCE: SOME POINTS
Adolescence is the target of research in many sciences, especially in psychology, which has sought explanations for this stage of life since the beginning of the 20th century. To better understand this concept, it is necessary to analyze the term and its meaning. The term adolescence, according to the etymological dictionary, comes from the present participle of the Latin verb adolescere, to grow. The past participle, adultus, gave rise to the word "adult". In Portuguese, the words would be equivalent to "growing" and "grown", respectively. Although we consider the adolescence phase to be a relatively recent "sociological invention", the word adolescent is about a hundred years older than the word adult (HENDRICKSON 2008).
Therefore, it can be seen that the meanings carried by this word indicate growth and transformation, which is why we know this phase as a transition: literally, its meaning expresses this idea.

Because of this, many authors have addressed the topic. Stanley Hall, a pioneering psychologist, took adolescence as his object of study and conceived the phenomenon as a stage of disturbance, anguish and sexual flourishing.
Urribarri (2002, p.1) enumerated the aspects that cause the mismatches produced by the pubescent clash, such as: "The conflict around the dependence on external objects. The reactivation and the establishment of identity conflicts, which also call into question your identity. The reactivation of the body's representations, and questions about its scheme based on physical change and, in particular, on the news that genital eroticism produces".
According to Bock (2007, p.64), the main party responsible for the institutionalization of adolescence was Erick Erickson. He presented it based on the concept of a moratorium and characterized it as a special phase in the development process, in which the confusion of roles and the difficulties in establishing one's own identity marked it as "[...] a way of life between childhood and adult life" (Erickson, 1976, p.128).
In Brazil, authors such as Içami Tiba, popularly known for his books, have written about adolescence; like the others, he also understood adolescence as a maturational phenomenon, a period of personal and social conflicts. Oliveira (2007) presents in his work the definition used by Içami Tiba for adolescence as a second birth. For him, the experience of pregnancy lived by the mother and accompanied at a distance by the father is similar to what happens in this age group: the son or daughter seeks their own identity, trying to understand what is happening to their body. Kimbanda (2006) observed that in primitive tribes sexual initiation, as well as the division of tasks within a clan, began in adolescence, at approximately 14 years of age.
In Angola, for example, initiation is practiced by several groups: Ganguela, Tshokwe, Nhaneka-Humbe, Ambó. The girl must be initiated when her first period appears; some groups initiate girls earlier and others two years or more later, and they also associate initiation with the marriage contract (KIMBANDA, 2006, p.118). Fiori and Davis (1982) wrote about the development of adolescence and cultural intervention in emotions.
In tribal groups, or groups historically differentiated from Western culture, there is no long period separating children's activities from the full integration of the subject into the productive and reproductive group. The child is considered as such until maturation and biological changes start puberty and mark the transition to the adult group. Usually in these groups there is a rite of passage, sometimes preceded by a period of seclusion, which characterizes the entry into adult relationships. (FIORI and DAVIS, 1982, p.11).
For years, it has been observed that different cultures define adolescence according to historical requirements. Leontiev's contributions make such facts evident: "We can say that each individual learns to be a man. What nature gives him when he is born is not enough for him to live in society. It is still necessary for him to acquire what was achieved in the course of the historical development of human society" (Leontiev, 1978, p.267).
Today we have access to many forms of information, and this has contributed to the transformation of adolescents and their role in society. However, even with all the advancement that technology allows, many young people still experience this moment in life as an obstacle posing great challenges. All of this makes it difficult not only for them to understand themselves when facing the demands of life, but also for their parents, who cannot find practical measures to respond to certain attitudes.
We understand, therefore, that adolescence, like any other phase of human development, has its dynamics permeated by the environment. It is up to us to keep up to date in order to deal with these changes; otherwise we will fall into misconceptions and misjudgments (MAFFEI, 2008, p.167). In view of this variety of understandings of adolescence, in this article we adopt the psychosocial perspective of development, which offers significant contributions to understanding this stage from the viewpoint of Erick Erickson. Throughout the article, we seek to consider the psychosocial context that permeates the relationships of the young people who were interviewed, also taking the maturational and biological view into account in constructing the interview script, since these elements overlap in Erickson's psychosocial theory.
II. ADOLESCENCE AS A DEVELOPMENT PHASE: MAIN CONTRIBUTIONS TO PSYCHOSOCIAL THEORY
The psychosocial development defended by Erick Erickson stems from an understanding of human behavior that goes beyond biological and instinctive functions: it seeks to include socio-historical variables in an attempt to move past the importance Freud gave to childhood as the defining period in the constitution of the individual's ego.
According to Rabello (2011), Erickson's theory describes the psychosocial development of the human being through eight stages in which the individual grows from the internal demands of the ego but is also influenced by the people and environment in which he lives; it is therefore essential to identify the culture and society in which the subject is inserted in order to understand the characteristics of each phase.
At each stage there is a crisis that will result in an outcome, whether positive (ritualization) or negative (ritualism). Both experiences are essential for the construction of the ego, since crises can strengthen or weaken the ego depending on how adequately they are overcome. The better the previous crises are lived through, that is, when Basic Trust, Autonomy, Initiative and Diligence have positive outcomes, the easier it becomes to overcome the Identity Crisis. Fidelity and loyalty to oneself are characteristics of the positive outcome of this stage.
From this perspective, Erickson (1976) addresses the phase of identity versus identity confusion, in which he describes psychological situations common among adolescents. Here the term crisis carries no unpleasant connotation; rather, it designates a moment of decision and direction that adolescents go through in order to grow, differentiate themselves from others, and make commitments through meaningful choices. During this phase there are constant psychological conflicts regarding the formation of an identity that is not yet well integrated. This phase received the most attention in his work, occupying an entire chapter of his book on the identity crisis, in addition to the works Youth and Crisis and The Complete Life Cycle. This period marks the moment when the personality takes shape as we recognize ourselves through the other and through self-knowledge: "Who am I?", "What are my plans?". It reveals the emergence of identity, which according to Myers (1999, p.86) is the gradual reformulation of a self-definition that unifies the various "selves" into a coherent and concrete feeling of who you are.
The acquisition of new skills is linked to the crises experienced during adolescence, which support the accumulation of knowledge needed for adult life. Erickson (1976) called this phenomenon a "psychosocial moratorium", defining fixed periods through which everyone passes until reaching adulthood. In the phase in question, each society stipulates the experiences appropriate for considering the young person fit to exercise adult life. In Western society, these include professional choice and ideological definition, as described by Erickson: "Social institutions support with vigor and the distinction of the nascent functional identity, offering to those who are still learning and experiencing a certain status of learning, a moratorium characterized by definitive obligations and sanctioned competitions, as well as by a special tolerance" (1976, p.157).
Erickson (1976) then examines how identity is configured at this stage, dividing it into areas he considered basic to understanding the whole phenomenon. For Parrot (2003), sexual identity is responsible for the inclusion of young people in certain patterns and groups whose members share the same characteristics they have. They also learn to place differences and conflicts outside themselves: at this stage the behavior and physical differences of others no longer interfere in the individual's self-perception, as he already understands that each human being has an exclusive set of attributes.
Professional identity is responsible for the feeling of belonging and appreciation attributed by the young person.
When he chooses a profession, he feels independent and confident to advance into the next phases proposed by the author. "Producing and building outside is for adolescents a compensating element for their failures" (Parrot, 2003, p. 31).
On the other hand, ideological identity implies continuous internal restructuring, because as the adolescent positions himself before the world, he is able to assimilate knowledge and incorporate the political, religious and spiritual ideologies of the social group of which he is part.
What the literature demonstrates about the theory is that the adolescent is always in constant development and, because of that, does not yet have enough autonomy to constitute a being fully aware of his actions. There is, however, much criticism of this point of view. According to Pereira (2007, p.4), when we understand the young person as someone who "is not yet", we deny his historical condition: every young person has a life story that began to be built in childhood and that results in his unique personality.
Based on this assumption, other leading psychoanalytic authors, such as Anna Freud and G. Stanley Hall, shared the theoretical position defended by Erickson regarding adolescence. For Pereira (2007), this model has been criticized for its maturational understanding of the human being: even though it considers social aspects as influences, it still holds deterministic conceptions about human development.
In general, Erickson brought a new perspective within psychoanalysis on human development, focusing on adolescence rather than childhood. He noted that every phase has barriers to be faced in order to build a healthy ego, and so he proposed an understanding of the daily conflicts of each age group and of how they are absorbed and worked through by the human capacity to plan the future, a specific trait of the phase this article portrays.
III. METHODOLOGICAL PROCEDURES FOR STUDY
This study was descriptive in nature, collecting both quantitative and qualitative data in order to capture the knowledge, experience and opinions of each subject and thus explore the theme of the development of human sexuality in the genital phase, in adolescence and youth.
The Subjects
The sample universe consisted of four adolescents and young people of both sexes and aged 16 to 20 years.
Instruments and data analysis
A non-standardized interview was used, in which the interviewees reported their experiences and conceptions about different contexts of sexuality. The data collected were analyzed through content analysis, which aims to understand each person's speech in order to categorize it and give meaning to what is said.
IV. RESULTS
An article published on the website "The mind is wonderful", under the title "Do you know what sexuality is?" (2015), portrays the human being as a biopsychosocial unit with three fundamental aspects of sexuality that must be analyzed together. The first is the biological aspect: sexuality is not restricted to the sexual organ or to reproduction, but is a far more comprehensive concept. The body is integrated as a whole, and in this way we are sexual beings from childhood through adolescence, adulthood and old age. The second is the social view of sexuality: according to the customs acquired and the behavior learned by the individual in the historical context in which he is inserted, beliefs are modified, and in relation to sexuality it is no different. We are influenced from all sides, so the socialization process through which sexuality is lived differs for each individual, because internalized knowledge helps us adapt through our experiences and the maturation of each person's personality. Finally, the third is the psychological aspect of sexuality, characterized by thoughts, fantasies, attitudes and tendencies. In other words, it is related to beliefs, pleasure, the result of experiences, the acquisition of knowledge and the feelings we have about ourselves and others. Thus, each human being has his or her uniqueness, and our feelings and emotions are felt in particular ways even when the situation is the same; what is pleasurable for some may cause disgust in others.
Adolescence is a period of change: it is the transition from childhood to adulthood, a time of many biological, social and psychological transformations, and it is considered a very difficult phase to deal with because of the various conflicts experienced in it.

Adolescent pregnancy is considered a serious public health problem and therefore requires guidance, preparation and monitoring programs during pregnancy and childbirth. It poses risks to the child's development as well as to the pregnant woman herself, and most of the time it is unplanned.
When pregnancy occurs during adolescence, its biopsychosocial transformations are compounded. Teenage pregnancy can have a severe impact on the education of pregnant women: many stop studying because of the pregnancy, and many need to care for the child because they usually have no one else to do so, often ending up leaving school.
Teenager M., who stopped studying, said that she became very ill during pregnancy, had to miss class several times and then thought it better to stop studying. She intends, however, to continue her studies, because in the future she wants to be a doctor.
According to some studies, a teenage pregnancy is characterized as a high-risk pregnancy due to high rates of maternal-fetal morbidity, bringing several biological implications such as anemia, malnutrition or overweight, hypertension, pre-eclampsia and postpartum depression. The psychological sphere is also affected, since pregnancy at this time of life reduces opportunities and makes it impossible to take advantage of the experiences that adolescence and youth could provide (TEIXEIRA, 2010).
In recent years, the teenage pregnancy rate has grown considerably, not only in Brazil but worldwide (DADOORIAN, 2003). The question is how this continues to occur when access to information is becoming ever more comprehensive. Sexual maturity, which arrives with puberty during adolescence, results in pregnancy simply because having sex without any contraceptive method during a given period results in fertilization. Freud (1905) mentions that these transformations and organic changes generate very great hormonal pressure, impelling the adolescent to use the reproductive system to relieve it; in this way interest in sex begins and, as a consequence of an active sexual life, pregnancy.
A study by Doering (1989) with pregnant teenagers showed that those from the middle class, seen in private clinics, rejected pregnancy, saying that it would hinder their plans and projects, while low-class adolescents, seen in public hospitals, claimed that they liked children, showing a much greater acceptance of motherhood. This indicates that most middle-class adolescents do not see motherhood as a priority, which often happens among low-class adolescents. This is the case of interviewee AL, who reports that she has no prospects for the future and just wants to be a good mother who watches over the health of her children, showing that she sees motherhood as a life perspective. According to Blos (1998) apud Oliveira et al. (2003), in the final phase of adolescence professionalization is the most striking process for the consolidation of ego interests. Moreira (2001) also states that curiosity, creativity and spontaneity are all necessary to channel a creative professional option. Each young person's particular vision of professional choice expresses the way he or she evaluates the past and present, thereby creating means for projections of the future.
Interviewee AS is 20 years old, female, single, low-class and currently unemployed. When asked about her life projects for the future, her answer shows that she is in the process of searching for an occupational identity. Finally, according to Oliveira et al. (2003), in a study with 48 adolescents from different social backgrounds in Brasília, the results showed that young people want to enter higher education and enter the job market through occupations that bring personal satisfaction. Although the socioeconomic situation is a great difficulty for low-income people, young people of any social class have feelings of anguish and indecision about the future; it is only through the growing maturity of their identity that they come to know more about the issues that will help them decide on an occupational identity.
Here we meet the ideas of Stanley Hall, a great psychologist, who identified adolescence as being marked by torments and disturbances linked to the emergence of sexuality.

In conversation with V., aged 19, he brought up the context of sexuality and its definition within the parameters of society and from his own perspective. He believes that sexuality is the way we express ourselves sexually, with or without the interference of social, biological, political, religious and other factors.
On this point, Erikson (1976) highlights that it is in this period that human beings are "concerned with what they may appear in the eyes of others, in comparison with what they themselves believe to be, and with the question of how to associate roles and skills cultivated before the ideal prototypes of the day". In the case of the adolescent interviewed, it is clear that he is deciding on the direction of his life with great concern for what society will think of him, so much so that the formation of identity neither begins nor ends with adolescence; it is simply where its construction takes place.
He believes that our society suffers great interference from religions, through a historical construction that established as "right" the heterosexual relationship between "male" and "female" with the sole objective of reproduction.

He also believes that our society has advanced on all fronts, sexuality included, and has become aware of its complexity. For Hall (1904), adolescence is an experience comparable to a second birth, in which human beings have the opportunity to go through all the previous stages again and thus reach the apex of their development; in addition, it is a chaotic and difficult phase because of the speed with which the transformations take place. This statement is reinforced in psychoanalytic theory, which treats adolescence as a stage of confusion, stress and grief, also caused by the sexual impulses manifested in this phase of development.
So much so that V. presents himself as being in constant search.

We therefore understand that we live in a generation that is quite resistant to taboos regarding sexuality, and that each person has a uniqueness far removed from the normative heterosexuality that has been imposed in many societies throughout human history. Currently, we are able to approach homosexuality, bisexuality, asexuality or pansexuality with adequate naturalness. Gradually, the idea that diversity is freedom and enrichment is beginning to spread, encouraging everyone to define their particular form of affective-sexual orientation.

This was reinforced again during the conversation about sexuality: the young person is constantly in search of his true identity and, even though he identifies as homosexual, he still finds himself discovering more aspects of his desires.
V. CONCLUSION
The interviews with four adolescents and young people brought to light their experiences and life expectations for the future from their own perspectives. We selected people between 16 and 20 years of age, since the objective was to observe the development of human sexuality in the genital phase in adolescence and youth, allowing us to assess each person's view in different social contexts.
In view of the data analysis, we can see that teenage pregnancy is a very delicate moment in a woman's life, as it causes several problems in the family, in the teenager's studies, sometimes in physical health and especially in the psychological sphere if she is not prepared to assume the commitment of motherhood and all the responsibilities that pregnancy carries. After the birth of the child, the woman generally lives, or wants to live, for the child, makes no plans for her own future, and sets her own wishes and desires aside. In contrast, one young woman's account showed life goals that are well established: she is in search of an occupational identity and of her autonomy, in a decisive and desired phase of self-realization. Finally, analyzing the last interviewee, we can highlight the search for personal identity: he is in the process of discovering his own sexuality and considers asexuality to be his sexual orientation. In a highly sexualized society this is a difficult and even painful passage, especially when looking for partners who share the same sexual orientation.
Thus, we conclude that there is a great need for projects in the field of sexuality in general, covering the biological as well as the personal and psychological spheres: projects in schools for teenagers and young people that truly clarify students' doubts and provide an open, welcoming place for their positions, without judgment. The family should also be a support for these individuals, creating a wide space for dialogue and thus guiding them in the best possible way through this turbulent and decisive period of their lives.
Ganoderma lucidum - are the beneficial medical properties substantiated?
Ganoderma lucidum, commonly known as the Lingzhi mushroom, is a traditional Chinese medicine that has been widely used for over two millennia in Asian countries to maintain vivacity and longevity. Numerous publications report that G. lucidum may possess various beneficial medical properties and contributes to a variety of biological actions through primary metabolites such as polysaccharides, proteins and triterpenes. Although G. lucidum remains a popular agent in commercial products, there is a lack of scientific study on its safety and effectiveness in humans. There have been some reports of human trials using G. lucidum as a direct control agent for various diseases including arthritis, asthma, diabetes, gastritis, hepatitis, hypertension and neurasthenia, but the scientific evidence is still inconclusive. In this paper, we discuss various aspects pertaining to the beneficial medical properties of G. lucidum (excluding anti-cancer activities). In particular, we address some of the loopholes in previous studies that support G. lucidum and its secondary metabolites as effective agents to treat various human diseases. Most of the clinical trials with G. lucidum preparations were reported as successful; however, factors such as small sample size, the lack of a placebo control group, missing information on long-term treatment, age, patient gender and side effects, the absence of a standard extraction method and standard dosage, and the small number of patients treated undermine the validity of the results. Hence, G. lucidum may be used as a therapeutic drug only when more direct and supportive scientific evidence becomes available.
Introduction
Ganoderma lucidum is a popular medicinal mushroom that has been widely used to promote health and longevity for over two millennia (Zhao et al. 2015). It is known as "Lingzhi" and was first indexed in Shen Nong's Materia Medica (206 BC-8 AD) as a longevity-promoting, tonic herb of the non-toxic superior class (Zhu et al. 2007). Ganoderma lucidum (Curt.: Fr.) Karst. belongs to the phylum Basidiomycota, order Polyporales and family Ganodermataceae (Index Fungorum 2016, http://www.indexfungorum.org/). Liu (1974) compiled a monograph of Traditional Chinese medicinal fungi.
Anti-oxidant activity

Mohan et al. (2015) noted that oxidation is a fundamental biological process for energy production in many living organisms, while the uncontrolled production of oxygen-derived free radicals is hostile and damaging to cells. Such radicals are believed to contribute to aging, arthritis, atherosclerosis, Alzheimer's disease, cancer, carcinogenesis, genetic damage, heart diseases, inflammation, Parkinson's disease and tissue loosening, and further to promote tumor invasion and metastasis. Many synthetic antioxidants are now used to reduce oxidative damage to cells; however, researchers have found that synthetic antioxidants can pose health hazards such as liver damage and carcinogenesis (Singh & Rajini 2004, Yuan et al. 2008). Hence, it is necessary to develop efficient natural antioxidants that protect body cells from free radicals while reducing the risk of side effects and other diseases.

Triterpenes, polysaccharides, polysaccharide-peptide complexes and phenolic components of G. lucidum have been proposed to be responsible for its antioxidant effect (Kana et al. 2015, Mehta 2014). Ganoderma lucidum antioxidants were found to be absorbed quickly after ingestion, resulting in an increase in the plasma total antioxidant activity of human subjects (Wachtel-Galor et al. 2004). The antioxidant activities of the polysaccharides extracted from G. lucidum nevertheless remain poorly understood (Kana et al. 2015). These polysaccharides exhibited a relatively high level of radical scavenging activity, with low IC50 (half-maximal inhibitory concentration) values and high antioxidant activity, since G. lucidum polysaccharides (GLP) are rich in antioxidant components such as proteins, amino acids, peptides, phytosterols, ascorbic acid and microelements (Mohan et al. 2015). Polysaccharides of G. lucidum decrease the production of oxygen free radicals and antagonize the respiratory burst, assisting the anti-aging process; they also exhibit reducing power and chelating effects on ferrous (Fe2+) ions (Liu et al. 2010, Kozarski et al. 2012). A homopolysaccharide composed of mannose has antioxidant activity under in vitro and in vivo conditions, shows promising scavenging ability against free radicals (O2, HO and DPPH), and further increases the activity of antioxidant enzymes (Ferreira et al. 2014). Ganoderma lucidum peptidoglycan prevents t-butyl hydroperoxide (t-BOOH)-induced necrosis of macrophages, protecting the mitochondria, endoplasmic reticulum and macrophage microvilli from oxidative damage and malfunction (Giavasis 2014). Ganoderma lucidum glucans have been reported to act as free radical scavengers in food and to inhibit lipid peroxidation while simultaneously stimulating interferon synthesis in human blood cells (Giavasis 2014).

The radical scavenging activity increases the activity of the antioxidant enzymes: superoxide dismutase (SOD), which catalyzes dismutation of the superoxide anion to hydrogen peroxide; catalase (CAT), which detoxifies hydrogen peroxide and converts lipid hydroperoxides to non-toxic substances; and glutathione peroxidase (GSH-Px), which maintains the levels of reduced glutathione (GSH) (Ferreira et al. 2014). The superoxide anion and hydroxyl radical scavenging effects of the G. lucidum polysaccharides were high at increased GLP concentrations (Mohan et al. 2015). A glycopeptide isolated from G. lucidum showed antioxidant activity against the injury of macrophages induced by ROS (You & Lin 2002).
Jia et al. (2009) showed the antioxidant activity of GLP in streptozotocin (STZ)-induced diabetic rats: it increased non-enzymatic and enzymatic antioxidants and serum insulin levels, and reduced lipid peroxidation. GLP80 also exhibited promising antioxidant activities (Kana et al. 2015). A glycopeptide isolated from G. lucidum, composed of 17 amino acids and the sugars rhamnose, xylose, fructose, galactose, mannose and glucose, had antioxidant activity in rat cerebral cortical neuronal cultures exposed to hypoxia, reducing ROS formation and MDA (malondialdehyde) levels and increasing the activity of manganese superoxide dismutase (Zhao et al. 2004). This glycopeptide also showed free radical scavenging activity that protected against alloxan-induced pancreatic islet damage under in vitro and in vivo conditions (Zhang et al. 2003). Methanol extracts of G. lucidum were reported to prevent kidney damage induced by the anti-cancer drug cisplatin through restoration of the renal antioxidant defense system (Sheena et al. 2003). Mohan et al. (2015) concluded that no direct link has been established between the antioxidant properties of G. lucidum and its immunomodulatory and anticancer effects, nor whether it acts as an antioxidant or a pro-oxidant. However, Nithya et al. (2015) reported that G. lucidum has potential activity against mammary carcinoma, probably through its antioxidant and enzymatic activity, with strong evidence from decreased enzymatic and non-enzymatic markers such as superoxide dismutase, catalase, glutathione peroxidase, reduced glutathione, lipid peroxidation, vitamin C and vitamin E, together with decreased mitochondrial and glycolytic enzymes. Flavonoids and tannins in the G. lucidum extract (GWater-Alc) indicate antioxidant activity, which protects against the cell damage caused by the oxygen reactive species involved in inflammation (Fidelis et al. 2014). The hydroethanolic solution of G. lucidum (GWater-Alc) showed highly significant anti-inflammatory activity (Wadt et al. 2015).
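The scavenging percentages and IC50 values quoted throughout this section come from a single piece of dose-response arithmetic: percent inhibition at each extract concentration, then the concentration producing 50% inhibition. The following minimal sketch, using purely hypothetical absorbance readings rather than data from any cited study, shows how a DPPH-type assay is typically reduced to an IC50.

```python
import numpy as np

def scavenging_percent(a_control: float, a_sample: np.ndarray) -> np.ndarray:
    """Percent radical scavenging: (A_control - A_sample) / A_control * 100."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical DPPH absorbances for a dilution series of a polysaccharide extract.
conc_mg_ml = np.array([0.05, 0.1, 0.2, 0.4, 0.8])    # extract concentrations
a_control = 0.92                                      # DPPH blank absorbance
a_sample = np.array([0.81, 0.70, 0.52, 0.33, 0.18])   # absorbance with extract

inhibition = scavenging_percent(a_control, a_sample)

# IC50 by linear interpolation on a log-concentration scale, the usual quick
# estimate when no full dose-response curve is fitted.
log_ic50 = np.interp(50.0, inhibition, np.log10(conc_mg_ml))
print(f"IC50 ~ {10 ** log_ic50:.2f} mg/ml")
```

A lower IC50 means less extract is needed for the same scavenging effect, which is why the text treats low IC50 values as evidence of stronger antioxidant activity.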
Anti-viral activity

Taylor & Reide (1998) noted that only two classes of drugs are currently used in Western medicine to treat HIV (human immunodeficiency virus) infection, one of which is the protease inhibitors. These interfere with HIV replication by inhibiting the post-translational processing of viral precursor polypeptides. Ganoderma lucidum has anti-HIV-1 protease activity and hence could be used to treat HIV infection via the same mechanism. Several triterpenoid compounds of G. lucidum possess anti-HIV-1 activity, including Ganoderic acid A, which exhibited inhibitory activity against HIV-1 protease (El-Mekkawy et al. 1998). Wang & Ng (2006) demonstrated that G. lucidum also contains laccases which might inhibit HIV-1 reverse transcriptase. Compounds such as Ganodermanondiol, Lucidumol B, Ganodermanontriol, Ganoderic acid B and Ganolucidic acid A showed significant anti-HIV-1 protease activity. Eo et al. (2000) found that ganoderic acids had antiviral activity against HIV and Epstein-Barr virus. According to Giavasis (2014), lentinan, an acidic proteoglucan from G. lucidum, has been used as an anti-HIV drug, increasing host resistance to the virus and limiting the toxicity of conventional anti-HIV drugs. Polysaccharide fractions extracted from G. lucidum have been shown to exhibit activity against herpes simplex virus-1 (HSV-1) and herpes simplex virus-2 (HSV-2) (Oh et al. 2000, Liu et al. 2004, Pillai et al. 2010), and ganodermadiol exhibited activity against herpes simplex virus type 1 (Bisko & Mitropolskaya 1999). A marked synergistic effect was reported for a protein-bound polysaccharide (PBP) from G. lucidum when used in tissue culture in conjunction with the anti-herpetic agents acyclovir or vidarabine, and with interferon alpha (IFN-α) (Kim et al. 2000, Oh et al. 2000).
Malaria, an infectious disease caused by parasites of the genus Plasmodium, causes about 2.5 million human deaths each year (Mendis et al. 2000). Very few drugs are active against malaria to date, and direct therapeutic agents are still not available (Wells et al. 2009, Gamo et al. 2010, Anthony et al. 2012, Kulangara et al. 2012). The new lanostanes Ganoderic acids TR and S, Ganoderic aldehyde TR and Ganodermanondiol, extracted from G. lucidum by Adams et al. (2010), exhibited moderate in vitro antiplasmodial activity. The water-soluble substances GLhw (high molecular weight components isolated from the water-soluble fraction of G. lucidum) and GLlw (the corresponding low molecular weight components), as well as the methanol-soluble fractions GLMe-1-8 isolated from fruit bodies, inhibited replication of influenza A virus. Polysaccharides showed direct action against hepatitis B virus (HBV) by inhibiting its DNA polymerase, and a water extract of G. lucidum inhibited the proliferation of HPV-transformed cells (Hernandez-Marquez et al. 2014). Zhang et al. (2014) evaluated the antiviral activities of two G. lucidum triterpenoids (GLTs), GLTA and Ganoderic acid Y (GLTB), against EV71 (Enterovirus 71) infection. These two natural compounds displayed significant anti-EV71 activity without cytotoxicity in human rhabdomyosarcoma (RD) cells, as evaluated by the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) cell proliferation assay. The results suggested that GLTA and GLTB prevent EV71 infection through interaction with the viral particle to block the adsorption of virus to the cells. In addition, the interactions between the EV71 virion and the compounds were predicted by computational molecular docking, which indicated that GLTA and GLTB may bind to the viral capsid protein at a hydrophobic pocket (F site) and thus may block uncoating of EV71. Moreover, GLTA and GLTB were shown to significantly inhibit the replication of the viral RNA (vRNA) of EV71 by blocking viral uncoating.
Anti-diabetic effects
Polysaccharides, proteoglycans, proteins and triterpenoids from Ganoderma lucidum are responsible for its hypoglycemic effects (Ma et al. 2015). Polysaccharides of G. lucidum showed hypoglycemic effects by increasing plasma insulin levels and decreasing plasma sugar levels in mice (Hikino et al. 1985, 1989). These polysaccharides enhanced the activities of hepatic glucokinase, phosphofructokinase and glucose-6-phosphate dehydrogenase, inhibited glycogen synthetase activity, decreased hepatic glucose production and prevented hyperglycemia (Agius 2007, McCormack et al. 2001). Zhang et al. (2003) found that G. lucidum polysaccharides protect pancreatic cells against alloxan-induced damage by inhibiting NF-κB activity. He et al. (2005) reported that the main cause of mortality and morbidity in patients with diabetes is endothelial cell apoptosis, which is associated with cardiovascular problems. Laboratory tests revealed that G. lucidum consumption can provide beneficial effects in treating type 2 diabetes mellitus (T2DM) by lowering serum glucose levels through suppression of hepatic PEPCK (phosphoenolpyruvate carboxykinase) gene expression (Seto et al. 2009). Oliver-Krasinski et al. (2009) showed that G. lucidum polysaccharides with low molecular weights can cause hypoglycemic effects, protect pancreatic cells from cell death, and promote cell regeneration by up-regulating Bcl-2 (B-cell lymphoma 2), an anti-apoptosis protein, and PDX-1 (pancreatic and duodenal homeobox 1). Ganoderma lucidum polysaccharides can increase body weight and serum insulin levels and decrease blood glucose and cholesterol levels (Li et al. 2011). Studies have demonstrated that these polysaccharides can decrease the mRNA levels of key enzymes in glycogenolysis and gluconeogenesis, such as hepatic glycogen phosphorylase, fructose-1,6-bisphosphatase, phosphoenolpyruvate carboxykinase and glucose-6-phosphatase; further, they corrected abnormal serum glucose and insulin levels in STZ/high-fat-diet-induced type II diabetic mice. Zheng et al. (2012) reported that inducible nitric oxide synthase and caspase-3, which induce apoptosis, were down-regulated in STZ-induced diabetic rats. Ganoderma lucidum polysaccharides can also stimulate wound healing and increase wound healing capacity in STZ-induced diabetic mice (Tie et al. 2012, Cheng et al. 2013). These polysaccharides decrease mitochondrial oxidative stress, inhibit the activity and nitration of manganese superoxide dismutase (MnSOD), suppress glutathione peroxidase (GPx) activity, and decrease redox enzyme p66Shc expression and phosphorylation (Ma et al. 2015).
Ganoderma lucidum triterpenoids inhibit the enzymes aldose reductase and α-glucosidase (Fatmawati et al. 2010a, b, 2011a, b, 2013). Aldose reductase converts glucose into sorbitol, a key step in the polyol pathway; the accumulation of sorbitol can cause diabetic complications such as neuropathy, cataracts and retinopathy (Bhatnagar & Srivastava 1992, Schemmel et al. 2010). Extracts of G. lucidum contain ganoderic acid C2, ganoderenic acid A and ganoderic acid Df, which have aldose reductase inhibitory activity (Fatmawati et al. 2009). α-Glucosidase converts disaccharides and oligosaccharides to glucose in the small intestinal epithelium, and hence its inhibition by ganoderic acids helps relieve hyperglycemia (Fatmawati et al. 2011a). Ling Zhi-8 (LZ-8) is a protein found in G. lucidum which shows immunomodulatory and anti-type I diabetes activities (Kino et al. 1989, 1990). LZ-8 has mitogenic activity and can lower the plasma glucose concentration; it decreased lymphocyte infiltration and increased antibody detection of insulin in beta cells of NOD (non-obese diabetic) mice (Ma et al. 2015). Ma et al. (2015) concluded that LZ-8 exerts its immunomodulatory activity against diabetes by adjusting subsets of immune cells. Teng et al. (2011) reported that an acidic proteoglycan, FYGL (Fudan-Yueyang-G. lucidum), extracted from G. lucidum can inhibit PTP1B (protein tyrosine phosphatase 1B) in vitro. PTP1B negatively regulates insulin receptor signaling and decreases expression of the insulin receptor β subunit (Combs 2010, Feldhammer et al. 2013). FYGL has dose-dependent hypoglycemic and hypolipidemic effects; it increases blood insulin levels, inhibits PTP1B activity and decreases PTP1B protein expression in skeletal muscle cells (Teng et al. 2012). Pan et al. (2013) revealed that FYGL induces glucose transporter 4 (GLUT4) protein expression in skeletal muscle cells and adipocytes of diabetic (db/db) mice. FYGL increases the use of glucose in muscle cells and adipocytes and lowers hepatic glucose output into the blood, thereby decreasing blood glucose levels; it also promotes pancreatic islet regeneration and shows antioxidant activity in db/db mice. Pan et al. (2014) found that FYGL-n, a highly water-soluble proteoglycan bearing a hyperbranched heteropolysaccharide, can be isolated from G. lucidum; hence FYGL-n may play special roles in PTP1B inhibition and antihyperglycemic potency. Ma et al. (2015) reported that FYGL was sensitive to glycosidase, and hypothesized that the glycan is released in the stomach or small intestine, dissociates from the protein motifs and interacts with PTP1B. Wang et al. (2015) revealed that consumption of GLSP (G. lucidum spore powder) reduced blood glucose levels by promoting glycogen synthesis and inhibiting gluconeogenesis; GLSP treatment was also associated with improved blood lipid composition through the regulation of cholesterol homeostasis in type 2 diabetic rats. Sudheesh et al. (2013) reported that administration of G. lucidum and α-tocopherol significantly protected mitochondria, preventing the decline of antioxidant status and of the mitochondrial membrane potential (ΔΨmt) or directly scavenging free radicals, thereby reducing cardiac toxicity and mitochondrial dysfunction.
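The aldose reductase and α-glucosidase inhibition discussed above is usually reported as an IC50; for a competitive inhibitor, that IC50 can be translated into an inhibition constant Ki with the Cheng-Prusoff relation. A minimal sketch follows; the assay numbers are hypothetical and are not taken from the Fatmawati et al. studies.

```python
def cheng_prusoff_ki(ic50: float, substrate_conc: float, km: float) -> float:
    """Ki of a competitive inhibitor: Ki = IC50 / (1 + [S] / Km)."""
    return ic50 / (1.0 + substrate_conc / km)

# Hypothetical aldose reductase assay: IC50 of a ganoderic acid measured at a
# substrate concentration of 100 uM with an enzyme Km of 50 uM.
ic50_uM, s_uM, km_uM = 25.0, 100.0, 50.0
print(f"Ki ~ {cheng_prusoff_ki(ic50_uM, s_uM, km_uM):.1f} uM")  # 25 / 3 ~ 8.3 uM
```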
Gao et al. (2004) demonstrated that Ganopoly treatment (G. lucidum polysaccharides) was well tolerated and active in patients with coronary heart disease, significantly decreasing the percentage of abnormal ECG readings, blood pressure and serum cholesterol levels of the patients. Lai et al. (2006) suggested that G. lucidum significantly reduced oxidative damage and apoptosis in PTEC (proximal tubular epithelial cells) induced by HSA (human serum albumin). The differential reduction of tubular IL-8 (interleukin-8) secretion or of sICAM-1 (soluble intercellular adhesion molecule-1) released from HSA-activated PTEC by different components of the LZ (G. lucidum) extract shows that components of G. lucidum with different molecular weights could play different roles and operate through different mechanisms in preventing HSA-induced PTEC damage.
Other beneficial properties of Ganoderma lucidum
The compounds cyclooctasulfur and oleic acid isolated from Ganoderma lucidum culture broth inhibit histamine release, an important activity for the treatment of inflammation, allergies and anaphylactic shock (Tasaka et al. 1988a, 1988b). The alkaloids choline and betaine have been isolated from the spores of G. lucidum, and vitamins (including β-carotene) and essential elements have also been isolated (Paterson 2006). Elemental analysis of G. lucidum fruit bodies revealed phosphorus, silica, sulphur, potassium, calcium and magnesium as the main mineral components, with iron, sodium, zinc, copper, manganese, strontium, lead, cadmium and mercury detected in traces (Chen et al. 1998). It also contains organic germanium (Chiu et al. 2000), protein (Chang & Buswell 1996, Mau et al. 2001), lectins (Kawagishi et al. 1997, Thakur et al. 2007), enzymes such as metalloproteases, and nucleosides and nucleotides such as adenosine and guanosine (Wasser et al. 2005, Paterson 2006). Ganoderma lucidum could also be considered a source of preservatives for the food industry (Kana et al. 2015). Lower urinary tract symptoms in men can be treated effectively with G. lucidum ethanol extract (Noguchi et al. 2008a, b). Further, G. lucidum extracts suppress prostatic growth partly through their ability to inhibit 5α-reductase, which is over-expressed in benign prostatic hyperplasia (BPH) tissues (Liu et al. 2007). The enzyme 5α-reductase converts testosterone to the more potent dihydrotestosterone, which promotes growth of prostate cells by stimulating the androgen receptor (Liu et al. 2007). Ganoderma lucidum has a neuroprotective effect attributable to the compounds methyl ganoderic acid A, methyl ganoderic acid B, ganoderic acid S1 and ganoderic acid TQ (for their chemical structures, see Zhang et al. 2011), including promotion of neuronal survival and reduction of fatigue (Zhang et al. 2011, Zhao et al. 2011, Zhao et al. 2012). The potential use of this fungus for the treatment of neurological diseases has also been studied, and long-term consumption of G. lucidum was found to slow the progression of Alzheimer's disease (Lai et al. 2008, Zhou et al. 2012). This neuroprotective effect is caused by the promotion of neuritogenesis and the reduction of neuronal senescence (Seow et al. 2013). Liu et al. (2015b) discovered that ganoderic acid C1 (GAC1) significantly reduced TNF-α production by murine macrophages (RAW 264.7 cells) and by peripheral blood mononuclear cells (PBMCs) from asthma patients; the inhibition was associated with down-regulation of NF-κB expression and partial suppression of the MAPK and AP-1 signaling pathways. Ganoderma lucidum is also part of several Chinese cosmetic products, many of which exploit its skin-lightening function. In an enzyme-based assay, G. lucidum extract was found to be a potent inhibitor of tyrosinase, a key enzyme in melanin formation; importantly, the IC50 levels were much lower than those of other basidiomycete mushrooms, justifying its use as a skin-lightening active in cosmeceutical products (Chien et al. 2008). Ganoderma lucidum triterpenoids have been shown to improve the learning and memory dysfunction of Alzheimer's disease by increasing acetylcholine content in the brain in a rat model (Zhang et al. 2011).
Water extracts of G. lucidum inhibit acetylcholinesterase activity in brain tissues and prevent the reduction of acetylcholine levels, protecting brain tissue from cerebral ischemia, vascular dementia and Alzheimer's dementia (Zhang et al. 2014). Memory impairment is caused by a lack of acetylcholine due to malfunction of the cholinergic nervous system (Choi et al. 2015). Dementia patients with neuronal damage generate only a small amount of acetylcholine even when acetylcholinesterase is active, resulting in abnormal neurotransmission and pathological phenomena such as learning disorders, memory deficits and cognitive impairment (Talesa 2001). Lee et al. (2011) reported that lanostane triterpenes separated from fruit bodies of G. lucidum were exceptional inhibitors of acetylcholinesterase. When AChE activity in brain tissues was examined to determine the efficacy of fermented G. lucidum water extracts in improving memory despite scopolamine-induced memory and cognitive impairment, the scopolamine group showed significantly increased AChE activity (Choi et al. 2015).
Toxicity
Most papers on Ganoderma lucidum focus on its supposedly miraculous healing qualities, but a few have shown that it can have toxic effects on humans. Studies on the toxicity and adverse effects of G. lucidum are much less common; however, in vitro studies have revealed that G. lucidum extracts have the potential to cause toxicity. When G. lucidum extracts were exposed to cells at concentrations higher than those required for stimulatory effects, a significant reduction in cell viability was observed in some cell lines (Gill & Rieder 2008). Human sensitization to G. lucidum antigen was first reported in 1979 in Ontario, Canada, in patients attending chest and allergy clinics who tested positive to the Ganoderma antigen (Tarlo et al. 1979); a similar study in 1985 in Auckland, New Zealand, produced further positive data for G. lucidum allergy (Cutten et al. 1988). In 1995, sensitization was reported in India in patients who showed marked skin reactivity to spore and whole-body extracts of G. lucidum (Singh et al. 1995). Wanmuang et al. (2007) reported hepatotoxic effects in a patient in Hong Kong who was under treatment with G. lucidum spore powder. Patients with hypoglycemia should be treated very carefully with G. lucidum, since it lowers the blood sugar level (Hikino et al. 1989). According to Tao et al. (1990), patients with blood disorders such as thrombocytopenia and patients taking anticoagulants or antiplatelet agents should be cautious, since G. lucidum has anticoagulant effects; patients with gastric ulcers and active gastrointestinal bleeding should likewise be vigilant. Patients with a tendency to bleed should also be cautious, since G. lucidum has an additive effect on clotting factors and prolongs prothrombin time (Ulbricht et al. 2010). Patients under treatment for hypertension should be very careful, as G. lucidum has hypotensive properties (Lee et al. 2001). Ganoderma lucidum is not recommended for lactating or pregnant women, since no scientific data have been found on its effects during lactation (Ulbricht et al. 2010). Ganoderma lucidum extract significantly increased total sleep time and non-rapid eye movement sleep in rats, possibly through a mechanism related to TNF-α (Cui et al. 2012). Gao et al. (2002a) reported that GLPS produced a mucosal healing effect in a rat model, partially through the suppression of TNF-α and the induction of the c-myc and ODC genes. Ganoderma lucidum pharmacopuncture (GLP) for chronic gastric ulcers in rats was studied by Park et al. (2014), who found that treatment at the two local acupoints CV12 and ST36 provided significant protection to the gastric mucosa. Aqueous extracts of G. lucidum showed no embryotoxic or neurotoxic effects when incubated with mouse embryonic fibroblast (BALB/3T3) and mouse neuroblastoma (N2a) cells (Smiderle et al. 2015).
Are the beneficial medical properties truly substantiated?
Ganoderma lucidum has a very ancient history as a medicinal mushroom and has thus gained an almost divine status in its use to promote health. The fungus is now becoming accepted as a natural adjuvant supplement, taken in combination with other therapies to enhance their healing effects by supporting the immune system. Recent in vitro and in vivo studies demonstrate beneficial effects of G. lucidum on various diseases, and Western medical researchers are increasingly studying the topic, since this mushroom was introduced to the Western world only within the past 30 years. However, to confirm whether or not G. lucidum has healing power, a deeper scientific understanding of its medical properties, mechanisms of action and their interrelationships with other molecules is needed. Published medical investigations performed with G. lucidum, excluding anticancer studies, are compiled in Table 1. Very few studies have been conducted with G. lucidum in human patients, and most were performed with a small sample size and without a placebo control group (Fu & Wang 1982, Kanamatsuse et al. 1985, Jun & Ke-yan 1990, Soo 1994, Wanachiwanawin et al. 2006, Wanmuang et al. 2007, Nayak et al. 2015). Further, there is a lack of information regarding long-term treatment with the drug, age and patient gender (Kanamatsuse et al. 1985, Jin et al. 1996, Hijikata & Yamada 1998, Futrakul et al. 2002), side effects, a standard method of extraction of G. lucidum, its standard dosage, and the number of patients treated; the available information on the number of trials and on patient enrollment is also very limited. However, well-designed in vivo tests and randomized controlled clinical studies with G. lucidum could provide statistically significant results confirming the efficacy and safety of G. lucidum preparations. Work on the identification, isolation and purification of individual active compounds should be carried out; this will enable the active ingredients within nutriceutical products to be measured, clarify whether the beneficial compounds in G. lucidum act synergistically or independently, explain potential synergistic effects, and establish safe and beneficial dose ranges of active ingredients for each disease type. Further, standardization and quality control of G. lucidum strains, cultivation processes, extracts and commercial formulations are needed before G. lucidum can be accepted as a natural product for potential use in the prevention and treatment of various diseases. In the near future, studies on this medicinal mushroom should be conducted on a broad scale with standard scientific methods. At present these products are recommended for adjuvant therapy or as an alternative mode of medicine, not as a direct cure for any disease; they can improve the comfort of patients' lives, help prevent certain diseases, or support drug treatment in chronic diseases to reduce side effects. However, clearly defined protocols and medical standards covering the exact bioactive compounds and improved culture conditions should be incorporated.
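The small sample sizes criticized above matter because two-arm trials need surprisingly many patients to detect realistic effects. A back-of-the-envelope check uses the standard normal-approximation formula for comparing two proportions; the response rates below are hypothetical, chosen only to illustrate the order of magnitude.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Per-arm sample size for a two-sided test of two independent proportions
    (normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Detecting an improvement in response rate from 40% (placebo) to 60% (extract)
# at the usual 5% significance level and 80% power:
print(n_per_arm(0.40, 0.60))  # about 97 patients per arm
```

Trials enrolling only a handful of patients, like several of those compiled in Table 1, are therefore far below the size needed for statistically reliable conclusions.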
Conclusion
There has been a significant increase in the development of natural drugs all over the world, and Ganoderma lucidum has been used as a functional food to prevent and treat many immunological diseases over the last few decades. Some in vitro and in vivo studies of the medicinal properties of G. lucidum appear promising, but more in-depth investigation and accurate scientific evidence are still required to confirm the efficacy and safety of the drug before G. lucidum can be incorporated as an integrative therapy.
Table 1. Clinical trials performed with Ganoderma lucidum preparations.
Cross-over intervention study. Fasting blood and urine samples from 18 healthy, consenting adults (aged 22-52 years) were collected before and after 4 weeks of supplementation with G. lucidum or placebo. There was no significant change in any of the variables, but a slight trend toward lower lipids and increased antioxidant capacity in urine. There was no evidence of liver, renal or DNA toxicity with G. lucidum intake.

… (Trametes versicolor + G. lucidum) for 2 months. In a randomized, double-blind, placebo-controlled study, 42 patients were randomized at a ratio of 1:1 to receive a herbal formula (containing Crataegus pinnatifida, Alisma orientalis, Stigma maydis, G. lucidum, Polygonum multiflorum and Morus alba) or placebo. The difference in the change in low-density lipoprotein cholesterol (LDL-C) levels between placebo and active treatment significantly favoured active treatment. HbA1c (glycated haemoglobin) decreased significantly by 3.9% in the active-treatment group, but the change was not significantly different from that with placebo. There were no apparent adverse effects or changes in laboratory safety parameters with either treatment; overall, mild beneficial effects on plasma LDL-C were observed after 12 weeks of treatment in subjects with dyslipidemia, without any noticeable adverse effects.

HIV-positive patients (CD4 counts between 100 and 200) were grouped into 3 sets: ARV (anti-retroviral) only, ARV in combination with G. lucidum, and G. lucidum only. In the ARV-plus-G. lucidum group, oral thrush infections associated with low immunity healed within 3 to 7 days, average body weight and haemoglobin level increased, CD4 (T-helper cell) counts increased significantly, the smoothness of the patients' skin was enhanced, and general body fitness improved.
Digital financial inclusion and economic growth in Sub-Saharan Africa: the role of institutions and governance
Purpose – This study examines the role of institutions and governance in the digital financial inclusion and economic growth nexus in Sub-Saharan Africa (SSA) from 2014 to 2020. Design/methodology/approach – This study adopts the generalised method of moments technique, which controls for endogeneity. The authors employed four main variables, namely an index of digital financial inclusion, gross domestic product per capita growth, institutions and governance. Findings – The results suggest a significant positive effect of institutional quality and governance on the digital financial inclusion-economic growth nexus in SSA. Furthermore, the authors find that the effect of trade and population growth on economic growth was significantly positive, while inflation reduces economic growth in the region. Research limitations/implications – This study also ignored the effect of digital financial inclusion on environmental quality. Future research should focus on addressing these drawbacks and replicating the study in Africa as a whole and in other developing countries across the world that are experiencing digital financial inclusion and economic growth challenges. The results from the study imply a positive relationship between digital financial inclusion and economic growth. It is important to note that the study was carried out on the premise that institutions play a pivotal role in enhancing economic growth in SSA. Practical implications – The results confirm the significance of policies that enhance institutional quality and governance, which are other avenues the authorities can pursue to enhance economic growth in SSA. Social implications – The paper documents the importance of institutions in boosting economic growth, which impacts social life, rather than digital financial inclusion only. Originality/value – The paper makes a contribution by analysing the role of institutions and governance in the digital financial inclusion-economic growth nexus, rather than the traditional financial inclusion-economic growth nexus that is common to the majority of available empirical studies.
Introduction
Digital financial inclusion has in recent decades received much attention from researchers and policymakers (Ozili, 2018). It is viewed as a change agent that can result in revolutionary development in the global financial sector. Conceptually, digital financial inclusion is the proportion of individuals and firms that access and use formal financial services through digital platforms. Digitalisation has transformed financial systems in developing and developed countries (Wysokińska, 2021). Barriers in traditional financial systems continue to fall (Kooli et al., 2022), leading to an increase in financial inclusion, which is also recognised as a key enabler for achieving the 2030 Sustainable Development Goals (Allen et al., 2016). It has been argued that countries with high digital financial inclusion levels are better able to withstand economic growth challenges (Khera et al., 2021; Shen et al., 2021; Thaddeus et al., 2020). Therefore, enhancing digital financial inclusion can have significant positive effects on the many individuals and organisations in those countries that can be affected by economic downturns.
Sub-Saharan African (SSA) countries are making tremendous progress in improving their governance and institutional environment. Several economists have defended the notion that "institutions matter", citing institutional reform as a crucial accelerator of economic growth and social advancement (Acemoglu et al., 2005; Rodrik et al., 2004; Asadullah and Savoia, 2018). These economists propose the concept of extractive institutions, examining countries with weak political institutions in the form of distortionary policies and insecure property rights, and document that countries with better institutions achieve higher economic growth. Thus, institutional failures constitute sources of market inefficiency, market exclusion and misallocation of resources, leading to a reduction in economic growth (Webb et al., 2020). Institutions are groups or organisations that operate in the public sector and conduct governmental duties, such as ministries and courts. Governance is the set of procedures that determines how those bodies or entities are managed, how successfully they execute their mandate and how well they consequently assist households and businesses.
Many economies in SSA lack the level of financial inclusion necessary to reap its benefits, in spite of the many advantages that come with it. The region is the most financially excluded in the world (World Bank, 2018), as portrayed in Table 1, which summarises the indicators of regional financial inclusion. The results demonstrate that, with the exception of mobile money penetration, SSA underperformed the world average on all measures of financial inclusion among the six regions. Lower levels of digital financial inclusion are a deterrent to economic growth (Ahmad et al., 2021; Banna et al., 2020; Banna and Alam, 2021; Khera et al., 2021; Shen et al., 2021; Thaddeus et al., 2020). Therefore, it is critical to take practical action to increase the reach of formal financial services. We argue that deliberate government intervention could increase access to finance by the poor in SSA and subsequently spark economic growth. The Indian government is a case in point: due to a strong central government decision to promote account ownership through the implementation of biometric identification cards, the gap between men's and women's account ownership in India fell from 20% in 2014 to 14% in 2018 (World Bank, 2018). The literature is replete with studies examining the relationship between economic growth and digital financial inclusion (Ahmad et al., 2021; Banna et al., 2020; Banna and Alam, 2021; Khera et al., 2021; Shen et al., 2021; Thaddeus et al., 2020). However, there is a dearth of studies linking institutions, governance, digital financial inclusion and economic growth. We contend that institutions and governance can influence the benefits of digital finance for boosting economic growth. Because economic recession encompasses multiple dimensions that originate in various institutional failures, we contend that it should be addressed from a multi-institutional perspective. Access to and use of digital technology, such as search engines, mobile phones or robotics for banking purposes, depends on the quality of institutions and governance. We therefore explore the role of institutions and governance in the interplay between digital financial inclusion and economic growth using the system generalised method of moments (S-GMM) model. By examining the effects of institutions and governance on digital financial inclusion and economic growth in SSA (a topic that, to the best of our knowledge, has not yet been addressed), our study contributes to the literature on economic growth and neo-institutionalism. Second, our study investigates a yet-to-be-examined potential causal relationship between the variables. Finally, by creating a variable for digital financial inclusion, we contribute theoretically to the research. Section 2 of our study reviews pertinent theories and empirical literature, Section 3 deals with methodological concerns, Section 4 provides and discusses the findings, and Section 5 summarises the study and offers recommendations and conclusions.
Literature review
Digital financial inclusion and economic growth
The link between financial inclusion and economic growth has been acknowledged in previous studies, but in this paper we focus on digital financial inclusion and economic growth. Using the generalised method of moments, which accounts for heterogeneity issues, Van et al. (2021) investigated the link between financial inclusion and economic growth using international evidence. The findings of the study reveal a positive relationship between financial inclusion and economic growth, and the relationship was stronger for countries with low income and a relatively lower degree of financial inclusion. These findings were affirmed by Khan et al. (2022), who suggested a positive effect of financial inclusion on economic growth, poverty reduction, sustainability and financial efficiency for G20 countries using the generalised method of moments and the autoregressive distributed lag approach. However, these studies did not consider institutional controls in their growth models. Countries with different institutional frameworks may target different financial inclusion levels, which may affect the magnitude of the financial inclusion-economic growth nexus. It is therefore essential to include institutional factors in the analysis. Our study seeks to close this gap by analysing the role of institutions and governance in the digital financial inclusion-economic growth nexus.
The literature is replete with studies examining the nexus between digital financial inclusion and economic growth for developing and developed countries (Ahmad et al., 2021; Banna and Alam, 2021; Khera et al., 2021; Shen et al., 2021; Thaddeus et al., 2020), with each study providing insights into the subject matter. Using the fixed-effect regression approach, Ahmad et al. (2021) probed the digital financial inclusion-economic growth nexus and concluded that digital financial inclusion has a positive impact on economic growth in China. This supports the studies conducted by Shen et al. (2021) and Khera et al. (2021), who used the spatial dependence model and the cross-sectional instrument variable procedure, respectively, and concluded a significant positive effect of digital financial inclusion on economic growth. These studies have their own limitations: Ahmad et al. (2021) used time-series data whose results cannot be generalised to all countries, whilst Shen et al. (2021) and Khera et al. (2021) failed to consider heterogeneity of spatial dependency and to cater for the speed of adjustment, respectively. These studies also failed to consider the effect of institutions and governance on the relationship, the gap which this study seeks to close. Our study used the two-step system generalised method of moments on balanced panel data and also considered institutions and governance.
Using the vector error correction model and the Granger causality test, Thaddeus et al. (2020) find a unidirectional causality running from economic growth to digital financial inclusion in the long run for 22 SSA countries, using quarterly data for the period 2011-2017. On the other hand, Banna and Alam (2021) conclude that digital financial inclusion brings banking stability and economic development, using 574 banks from seven emerging Asian countries for the period 2011 to 2018. These studies, however, did not compute an index of digital financial inclusion to comprehensively define the concept, but rather used single indicators to proxy digital financial inclusion. This study seeks to address these gaps: we constructed an enhanced digital financial inclusion index, different from the traditional financial inclusion index, using a three-stage principal component analysis (PCA).
Using the pooled ordinary least squares, two-stage least squares and GMM approaches, Ozturk and Ullah (2022) examined the impact of DFI on economic growth and environmental sustainability in 42 One Belt and Road Initiative (OBRI) countries for the period 2007 to 2019. The findings of the study reveal that DFI has a positive effect on economic growth and a negative effect on the quality of the environment through carbon dioxide emissions. Myovella et al. (2020) reached a similar conclusion, suggesting a positive effect of digitalisation on economic growth for SSA and OECD countries using the GMM approach. Ozturk and Ullah (2022), however, used only two proxies, ATMs and debit cards, to measure DFI. Our study used several aspects of DFI to define the concept comprehensively.
Institutions, governance, digital financial inclusion and economic growth
The words "institutions" and "governance" are at times used interchangeably, but they actually refer to different ideas most of the time. For instance, institutions and governance are both sometimes thought to be aspects of one another. In this study, we considered institutions as the framework or structure (the skeleton) and governance as the means by which the framework or structure is operated (the muscles), a description that may be evocative of Williamson's (1998). Even though they are still distinct elements of an economy's structure, the two ideas are crucially intertwined and intrinsically related; one cannot exist without the other. It has been debated for decades how finance and economic growth are related. Since the seminal work of North (1991), which noted that institutions play a significant role in forming advanced economies and are important, potentially positive or negative, drivers of real economic change, the significance of institutions has been acknowledged in the literature. As a result, institutions should be taken into account when modelling the subject. Another significant contribution was done by Acemoglu et al. (2005) who researched on the role of institutions from various perspectives and consideration of historical evidence, particularly on the causes of the significant institutional variation across nations.
According to Bosma et al. (2018), political and economic institutions are the most significant variables in determining how different economies grow. Furthermore, in order to properly enforce property rights and other institutions based on the free market and promote economic progress, a strong and trustworthy legal system is essential (Baklanova et al., 2020). The rule of law is a crucial institutional proxy with a solid reputation in the literature. Legal frameworks and history may have a favourable, if indirect, impact on economic growth and digital finance (Platteau, 2015). The legal system plays a crucial role in fostering digital financial inclusion, since its enforcement protects investors, which encourages further capital allocation and investment (Beck and Levine, 2002). The comprehension of how economic institutions affect digital financial inclusion, and the implications for growth, however, is still far from perfect. Yiadom et al. (2021) examine the role of institutions in the long-run effect of financial inclusion on poverty and economic growth in Africa over 2011-2018. Using dynamic panel regressions for 42 economies, they report a positive effect of strong institutions on financial inclusion through poverty reduction and improved per capita GDP. The study, however, did not include a comprehensive index of digital financial inclusion. Our study therefore examines the effects of institutions and governance independently and determines whether they influence the digital financial inclusion-economic growth nexus.
Conceptual framework
Since no empirical studies have examined the role of institutional quality and governance in the relationship between digital financial inclusion and economic growth, the conceptual framework in Figure 1 indicates the relationships between the variables under study. Institutions and governance are the moderating factors, whilst digital financial inclusion and economic growth serve as the independent and dependent variables, respectively.
Methodology
Studies that examined the nexus between digital financial inclusion and economic growth have employed the ordinary least squares-fixed effect (Ahmad et al., 2021), the spatial dependence model and the cross-sectional instrument variable (Shen et al., 2021), and fractional logit and random effects estimation (Khera et al., 2021). However, these techniques are not able to address the challenges of heteroskedasticity and endogeneity, and in most cases they do not provide reliable and robust results for panel data (Kim et al., 2018). We therefore addressed these concerns, without sacrificing the robustness of our findings, by employing the S-GMM system estimator of Arellano and Bover (1995), a robust panel data technique, to examine the role of institutions and governance in the digital financial inclusion-economic growth nexus in SSA. We formulated our research models as follows:

GDPPCG_{i,t} = β_0 + β_1 GDPPCG_{i,t-1} + β_2 DFI_{i,t} + β_3 INSTIT_{i,t} + β_4 (DFI_{i,t} × INSTIT_{i,t}) + β_5 CONTROL_{i,t} + ε_{i,t}  (1)

DFI_{i,t} = β_0 + β_1 DFI_{i,t-1} + β_2 GDPPCG_{i,t} + β_3 INSTIT_{i,t} + β_4 (GDPPCG_{i,t} × INSTIT_{i,t}) + β_5 CONTROL_{i,t} + ε_{i,t}  (2)

where DFI is the digital financial inclusion index; GDPPCG is gross domestic product per capita growth (economic growth); GDPPCG_{i,t-1} is the lag of GDPPCG; DFI_{i,t-1} is the lag of DFI_{i,t}; INSTIT_{i,t} denotes institutional quality; the interaction term (INTERACTION) is between economic growth and institutional quality (GDPPCG_{i,t} × INSTIT_{i,t}) or between digital financial inclusion and institutional quality (DFI_{i,t} × INSTIT_{i,t}), and the same applies to governance; β signifies the long-run coefficients of the independent variables; and ε_{i,t} is the error term, where i and t represent the country and time, respectively. CONTROL_{i,t} denotes the control variables, which include TRADE (the log of net exports), EDU (the log of primary school enrolment), POPG (the growth rate of the population) and INFL (the inflation rate). We employed gross domestic product per capita growth, institutions, governance and digital financial inclusion as the main variables. Trade, population growth, inflation and education were included as control variables. Following Khera et al. (2021), we constructed an enhanced digital financial inclusion index, which differs from the traditional financial inclusion index used by most scholars. The index consists of access, penetration and usage indicators provided by digital financial services, including fintech companies, mobile money operators and other new entrants in the financial sector, sourced from the World Bank WDI. We used four indicators of digital financial inclusion (percentage of the population with access to the Internet, mobile subscriptions per 100 people, number of registered mobile money agents per 100,000 adults and number of active mobile money accounts per 1,000 adults), as suggested by the upgraded G20 Financial Inclusion Indicator System, to compile a comprehensive digital financial inclusion index. In constructing the index, we employed PCA, a modern multivariate data analysis tool. The PCA technique retains all the variation available in the data, reduces data dimensionality and resolves the possible multicollinearity that may arise among the variables (Nizam et al., 2020). Using the PCA, we normalised all the indicators for each dimension to values between zero (0) and one (1), making the scale on which they were measured immaterial. Thereafter, the PCA extracts the common principal component of the dimensions that capture various aspects of the inclusive financial sector. Following Tandelilin and Hanafi (2021), we compiled data on institutions and governance from the World Bank's WGI.
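As a concrete illustration of the index construction described in this section, the following is a minimal sketch in Python: a single-stage simplification of the paper's three-stage PCA, assuming a country-year panel in a pandas DataFrame. The column names are hypothetical placeholders for the four G20-recommended indicators, not names used by the authors.

```python
# Minimal sketch of a PCA-based digital financial inclusion (DFI) index.
# Indicator column names are hypothetical placeholders.
import pandas as pd
from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

INDICATORS = [
    "internet_access_pct",   # % of population with access to the Internet
    "mobile_subs_per_100",   # mobile subscriptions per 100 people
    "mm_agents_per_100k",    # registered mobile-money agents per 100,000 adults
    "mm_accounts_per_1k",    # active mobile-money accounts per 1,000 adults
]

def build_dfi_index(df: pd.DataFrame) -> pd.Series:
    """Normalise each indicator to [0, 1], then take the first principal component."""
    scaled = MinMaxScaler().fit_transform(df[INDICATORS])
    pc1 = PCA(n_components=1).fit_transform(scaled)[:, 0]
    # Rescale the component itself to [0, 1] so the index stays bounded like the inputs.
    dfi = MinMaxScaler().fit_transform(pc1.reshape(-1, 1))[:, 0]
    return pd.Series(dfi, index=df.index, name="DFI")
```

Rescaling the first component back to [0, 1] mirrors the normalisation step described above and keeps the index bounded, consistent with the 2-68% range reported in the summary statistics.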
Six different indicators make up the WGI, each of which focusses on a different component of institutions and governance.
Variables description and data
We employed the indicators voice and accountability, political stability and absence of violence, and government effectiveness to represent the aspect of institutions (the framework or structure), and regulatory quality, rule of law and control of corruption to represent the aspect of governance (how the institutions are run). Figure 2 reflects how the various WGI relate to the summary measures in this study and how they can be compared to alternative measures. We considered government effectiveness in terms of laws, the quality of regulations and the rule of law concerning the transfer or repatriation of funds abroad. Usually, when the funds obtained from these digital services remain in the economy, liquidity improves; if the funds are repatriated, there is a "sieve" draining funds, which is likely to have a negative effect on economic growth.
In addition, we included control of corruption, since the issue of governance linked to corruption is connected to the type of government in place, whether democratic, military or authoritarian, and these types of government systems have an effect on the relationship under study. Bad institutional quality in the form of widespread corruption, weak enforcement of property rights, political instability, poor bureaucratic quality, unaccountable leadership and poor corporate governance cripples the performance of financial institutions and hence increases financial exclusion. It may deter people from depositing their funds in formal banks for fear of financial losses, thus increasing financial exclusion. In order to have summary measures of institutions and governance, we created indicators by averaging across the three indicators in each group. This assisted us in assessing the overall impact and the relative importance of governance and institutions. It should be highlighted, however, that institutions and governance are highly intertwined, and the methodology used here offers data on how governance and institutions function across a nation while saying very little about the impact of specific institutions operating in an economy.
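A minimal sketch of this averaging step, continuing the Python example above; the WGI column names are again hypothetical placeholders:

```python
# Summary measures: each is the average of three WGI indicators.
import pandas as pd

INSTITUTION_COLS = ["voice_accountability", "political_stability", "govt_effectiveness"]
GOVERNANCE_COLS = ["regulatory_quality", "rule_of_law", "control_of_corruption"]

def add_summary_measures(df: pd.DataFrame) -> pd.DataFrame:
    """Append the institutions (structure) and governance (operation) averages."""
    out = df.copy()
    out["INSTIT"] = out[INSTITUTION_COLS].mean(axis=1)  # the 'skeleton'
    out["GOV"] = out[GOVERNANCE_COLS].mean(axis=1)      # the 'muscles'
    return out
```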
The summary statistics for the main and control variables are reported in Table 2. On average, digital financial inclusion in SSA is very low at 39%, with minimum and maximum values of 2% and 68%, respectively. This implies that SSA has serious digital financial inclusion discrepancies, consistent with Mehrotra and Yetman (2015). 95% of the adult population in SSA had subscribed to mobile phones and 29% used the Internet, thereby increasing the chances of bringing the unbanked on board. Moreover, institutional quality and governance in SSA are fragile, as portrayed by mean values of −0.46 and −0.50, respectively. Inflation averaged 11.78%, with minimum and maximum values of −17.59% and 558%, the maximum being witnessed in Zimbabwe. Table 3 displays the correlations between the variables under study, giving an insight into the nature and strength of the relationships and the probability of multicollinearity. The study reveals a significant positive association between digital financial inclusion and variables such as institutions, governance, trade and education. In addition, economic growth had a significant positive association with education and governance and an inverse relationship with institutions, trade and inflation. The association between population growth and variables such as digital financial inclusion, institutions, governance and trade is negative and significant at the 5% level. The correlation coefficients are less than 0.8, suggesting no serious multicollinearity issues among the estimation variables, with the exception of institutions and governance. We also conducted the Sargan-Hansen test to check the validity of the instrumental variables.
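The multicollinearity screen described above amounts to checking pairwise correlations against the 0.8 rule of thumb. A short sketch, again with hypothetical column names for the panel:

```python
# Flag variable pairs whose absolute correlation reaches the 0.8 rule of thumb.
import itertools
import pandas as pd

def collinearity_flags(df: pd.DataFrame, cols: list[str], cutoff: float = 0.8):
    corr = df[cols].corr()
    return [(a, b, round(corr.loc[a, b], 2))
            for a, b in itertools.combinations(cols, 2)
            if abs(corr.loc[a, b]) >= cutoff]

# Per the paper's Table 3, only the (INSTIT, GOV) pair would be flagged:
# flags = collinearity_flags(panel, ["DFI", "GDPPCG", "INSTIT", "GOV",
#                                    "TRADE", "EDU", "POPG", "INFL"])
```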
Regression results
In this section, two major results are presented in two separate tables. Table 4 shows the results from the S-GMM on the role of institutions and governance in the impact of digital financial inclusion on economic growth. Table 5, on the other hand, reveals the results on the role of institutions and governance in the impact of economic growth on digital financial inclusion.

Digital financial inclusion and economic growth: do institutional quality and governance matter?

From the results in Table 4, we first test the raw effect of digital financial inclusion on economic growth in the absence of institutions and governance. The effect of digital financial inclusion on economic growth is positive and significant at the 10% level, showing that, all other things being equal, a unit increase in digital financial inclusion increases economic growth by 17.27 units. This finding supports the popular opinion that digital financial inclusion by itself enhances economic growth, in line with various scholars (Ahmad et al., 2021; Banna and Alam, 2021; Khera et al., 2021; Shen et al., 2021).

[Table 5. System-GMM results, DFII. Note(s): *p < 0.05; **p < 0.10; values in parentheses.]

We then introduced institutions and governance and retested whether they matter in explaining the effect of digital financial inclusion on economic growth. The results shown in columns (2) and (3) of Table 4 suggest that poor institutional quality significantly reduces economic growth in SSA, implying that institutional quality is good for economic growth. Institutions recorded a coefficient of −2.84, indicating that if institutional quality is strengthened by at least one percentage point, an economy is likely to experience a significant recession of about 2.84%. This could be caused by the prevailing poor bureaucratic quality, quality of law and order, and governance in these nations, which also affect the selection, monitoring and replacement of governments and the capacity of governments to implement policies effectively after formulation. This result, however, contradicts Heras Recuero and Pascual González (2019), who concluded a positive effect of institutional quality on economic growth in middle-income countries. Moreover, we introduced an interaction term between digital financial inclusion (DFII) and institutional quality (INSTITUTIONS) to test whether the impact of digital financial inclusion operates through institutional quality.
The results shown in columns (4) and (5) of Table 4 reveal that the interaction term (INTERACTION) recorded positive and significant coefficients of 19.36 and 43.30, respectively. The results suggest that economic growth in SSA is greater when institutions and governance are of higher quality, even when other factors increasing economic growth are accounted for. This shows that the combined effect of institutional quality and governance with digital financial inclusion significantly increases economic growth. Although SSA is characterised by widely weak institutions and a fragile digital financial inclusion condition, as confirmed by the descriptive statistics in Table 2, an interaction between institutions and digital financial inclusion, and between governance and digital financial inclusion, has the potential to increase economic growth in SSA.
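To make the interaction concrete: in a model of the form given in Section 3, the marginal effect of DFI on growth is the base DFI coefficient plus the interaction coefficient times the institutional score. A hedged numerical sketch follows; the interaction coefficient 19.36 and the mean institutional score of −0.46 come from the text, while the base coefficient is a placeholder (the text quotes only the raw effect of 17.27 from the no-interaction model):

```python
# Marginal effect of DFI on growth in an interaction model:
#   d(GDPPCG)/d(DFI) = beta_dfi + beta_interaction * INSTIT
def marginal_effect_of_dfi(instit: float,
                           beta_dfi: float = 17.27,           # placeholder: raw effect, column (1)
                           beta_interaction: float = 19.36) -> float:  # Table 4, column (4)
    return beta_dfi + beta_interaction * instit

# At the SSA mean institutional score of -0.46 (Table 2), the payoff shrinks:
print(marginal_effect_of_dfi(-0.46))  # ~8.36, roughly half the raw effect of 17.27
```

Under these illustrative numbers, weak institutions erode about half the growth payoff of digital financial inclusion, which is the substance of the interaction result.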
For the control variables, inflation (INFLATION), net trade (TRADE) and population growth (POPGROWTH) were all significant in the five models shown in Table 4 and maintained their expected signs, consistent with theory. Trade and population growth promoted economic growth, while inflation significantly reduced it, implying that inflation hurts the growth process by causing uncertainty in SSA, in line with Ifediora et al. (2022). That is, a high inflation rate creates price instability in these economies, which negatively influences economic growth in the studied region. This result is consistent with the study by Nawaz et al. (2014).
Economic growth on digital financial inclusion: do institutional quality and governance matter?

The results in column (1) of Table 5 indicate that economic growth (GDPPCGR) is relevant in explaining digital financial inclusion dynamics in SSA: a 1 percentage point increase in economic growth significantly increases digital financial inclusion by 0.2 percentage points. This is in line with documented empirical literature (Thaddeus et al., 2020) and implies that growth in the economy would lead people to buy digital gadgets and use the Internet, which increases financial inclusion. We introduced institutional quality and governance into the economic growth-digital financial inclusion nexus and present the results in columns (2) and (3) of Table 5. The results show that both institutions and governance are weak in the interrelationship between the two variables, despite maintaining the expected negative sign. This indicates that, as stand-alone factors, institutions and governance do not matter in the economic growth-digital financial inclusion nexus. We therefore interacted economic growth (GDPPCGR) with institutions and governance to determine whether the impact of economic growth on digital financial inclusion is contingent on the host nation's institutional quality and governance.
The results shown in columns (4) and (5) of Table 5 reveal that the interaction term (INTERACTION) recorded a significant positive coefficient of 0.003 for both indicators. The results suggest that digital financial inclusion conditions in SSA are better when institutions and governance are of higher quality. This shows that the combined effect of institutional quality and economic growth, and of governance and economic growth, significantly improves digital financial inclusion conditions in SSA. Although SSA is characterised by widely weak institutions and a fragile digital financial inclusion condition, as confirmed by the descriptive statistics in Table 2, an interaction between institutions and economic growth, and between governance and economic growth, has the potential to improve digital financial inclusion conditions in SSA.
The level of education and population growth had significant positive effects on digital financial inclusion at the 10% level. For example, an educated resident is more likely to benefit from the financial system by having a registered account, and population growth works in the same direction.
Post-estimation test and robustness of results
The Hansen test for over-identification restrictions shown in Table 4 attests that the S-GMM instrumental variables are valid and contemporaneously exogenous. We also diagnosed the presence of autocorrelation in the S-GMM model. Theoretically, if the calculated p-value is greater than the 0.05 significance level, we fail to reject the null hypothesis of no autocorrelation among the residuals. The findings of the study are robust to the digital financial inclusion index.
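The decision rule just described can be written down directly. A tiny sketch; the helper name and example p-values are assumptions, since the text states only the general p > 0.05 rule:

```python
# S-GMM diagnostics: instruments valid (Hansen) and no residual autocorrelation,
# each judged by failing to reject the respective null at the 5% level.
def sgmm_diagnostics_pass(hansen_p: float, autocorr_p: float, alpha: float = 0.05) -> bool:
    return hansen_p > alpha and autocorr_p > alpha

assert sgmm_diagnostics_pass(hansen_p=0.32, autocorr_p=0.47)  # illustrative values only
```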
Discussion and conclusion
An increasing trend of pursuing the strategy of digital financial inclusion in several parts of the world has in recent years attracted numerous scholars. Although the influence of digital financial inclusion on economic growth has been acknowledged in the literature, empirical studies on the role of institutions and governance in this relationship have been scanty. This paper gives better insights into the link between digital financial inclusion and economic growth and the role of institutions and governance. It follows several seminal papers documenting the importance of framework conditions for economic growth. The work in those studies and this one suggests that SSA countries can readily reap economic benefits by ratcheting up their efforts directed at framework conditions, and our study suggests that digital financial inclusion is a good candidate for this, especially if flanked by proper institutions and governance. First, we find that there exists a bi-directional causality between economic growth and digital financial inclusion. This outcome, however, contradicts Thaddeus et al. (2020), who supported the supply-leading hypothesis by concluding a unidirectional causality from economic growth to digital financial inclusion. Second, the results suggest that economic growth and digital financial inclusion in SSA are greater when institutions and governance are of higher quality. The results provide strong evidence that a country's benefit from digital financial inclusion depends on the quality of its available institutions and governance. This suggests that efforts to improve institutional quality combined with increased digital financial inclusion can boost economic growth to a larger extent than the improvement of institutional quality alone. We find that governments should not depend solely on financial reforms; rather, they should simultaneously target both institutional areas. Unfortunately, SSA does not have adequate institutional quality to reap the dividend associated with digital financial inclusion. We also report that economic growth boosts digital financial inclusion if a country has sufficient institutional quality. We advise SSA countries to take economic growth and digital financial inclusion as a direct central-government responsibility and to strengthen institutional quality. Deliberate policies should also be made to ensure that digital financial access is extended to the poor. One way of doing so is to ensure government effectiveness in terms of laws, quality of regulations and the rule of law concerning the transfer or repatriation of funds abroad. Usually, when the funds obtained from digital services remain in the economy, liquidity improves; if the funds are repatriated, there is a "sieve" draining funds, which is likely to have a negative effect on economic growth.
Our study recommends that institutional quality and governance should be the focus of policy makers in SSA so as to enhance economic growth in these economies. To enhance digital financial inclusion, policy makers should combine institutional quality and governance with economic growth policies. Governments should take the control of corruption very seriously, since the issue of governance linked to corruption is connected to the type of government in place, whether democratic, military or authoritarian, and these types of government systems affect the relationship under study. Bad institutional quality in the form of widespread corruption, weak enforcement of property rights, political instability, poor bureaucratic quality, unaccountable leadership and poor corporate governance cripples the performance of financial institutions and hence increases financial exclusion. It may deter people from depositing their funds in formal banks for fear of financial losses, thus increasing financial exclusion. Policy options to improve regulatory and bureaucratic quality, law and order, and political stability must be prioritised by policymakers. Besides, digital financial inclusion should be another target of policy makers to accelerate the pace of economic growth in SSA.
We also find that rising levels of inflation constitute a drag that diminishes the growth benefits of digital financial inclusion in SSA. In terms of implications for research, practice and society, our research is helpful for policy makers in making recommendations on ease of access to funds from digital inclusion. Based on these results, we conclude that policies that promote institutions and governance are imperative for promoting digital financial inclusion and the growth of economies in SSA. Future studies can include further digital financial inclusion indicators, such as microfinance institutions and financial literacy variables, and also compare performance in SSA against other developed nations. Our study also ignored the effect of DFI on environmental quality. Future research should focus on addressing these drawbacks and replicating the study in the African region as a whole and in other developing countries across the world that are experiencing digital financial inclusion and economic growth challenges.