Emerging Functional Imaging Biomarkers of Tumour Responses to Radiotherapy
Tumour responses to radiotherapy are currently primarily assessed by changes in size. Imaging permits non-invasive, whole-body assessment of tumour burden and guides treatment options for most tumours. However, in most tumours, changes in size are slow to manifest and can sometimes be difficult to interpret or misleading, potentially leading to prolonged durations of ineffective treatment and delays in changing therapy. Functional imaging techniques that monitor biological processes have the potential to detect tumour responses to treatment earlier and refine treatment options based on tumour biology rather than solely on size and staging. By considering the biological effects of radiotherapy, this review focusses on emerging functional imaging techniques with the potential to augment morphological imaging and serve as biomarkers of early response to radiotherapy.
Introduction
The introduction of cross-sectional anatomical imaging using computed tomography (CT) and magnetic resonance imaging (MRI) in the 1970s revolutionised clinical oncology by permitting non-invasive determination of tumour burden, which is essential for diagnosis, staging and assessing treatment responses. To standardise the characterisation of tumour responses, which were frequently incomplete and heterogeneous, and to permit comparison between clinical trials, the World Health Organisation defined the first response criteria in 1979 [1]. Although subsequently refined, the criteria used today, most commonly in the form of the Response Evaluation Criteria in Solid Tumours (RECIST) [2,3], are recognisably similar (Table 1). Measurements of tumour burden are often an excellent determinant of disease progression or response, but changes usually manifest slowly and can sometimes be misleading [4]. In the forty years since the introduction of size-based response criteria, functional imaging techniques have emerged that are capable of reporting many aspects of tumour biology. In response to therapy, biochemical changes precede anatomical changes, sometimes by many months [5]. Earlier determination of response to treatment would facilitate modification of treatment before significant disease progression and reduce the physical, psychological and financial costs of ineffective or unnecessary therapy. Of the fourteen million people diagnosed with cancer worldwide every year, more than half receive radiation therapy [6]. Identification of radio-resistant tumour regions at a pre- or early-therapy stage could be used for localised or global dose escalation or initiation of concomitant chemotherapy. Following treatment, functional imaging can also help distinguish residual or recurrent tumour from treatment-related change. In this review we will focus on emerging functional imaging techniques that exploit the biological changes in tumours following radiation therapy and have the potential to improve the early detection of treatment response. Many of the functional imaging techniques discussed can also be applied to delivering intensity-modulated or stereotactic body radiation therapy using increasingly sophisticated methods, a subject that is beyond the scope of this review but has been extensively reviewed elsewhere [7,8].
Biological Effects of Radiation
Ionising radiation refers to particles that have sufficient energy to release electrons from an atom. The most significant biological target of ionising radiation is DNA, which can be ionised directly or indirectly by reactive oxygen species (e.g., hydroxyl ( • OH), superoxide (O 2 − ) and hydrogen peroxide (H 2 O 2 )) produced by ionisation of adjacent molecules [9,10]. DNA ionisation can result in damage to any part of the molecule. Base damage and single-stranded breaks occur frequently, but efficient repair mechanisms limit the biological effect. In contrast, double-stranded breaks and DNA crosslinking are less frequent events but are also less likely to be repaired effectively, resulting in either genomic mutation or repair failure and subsequent cell death. Radiation-induced cell death can result from activation of cellular senescence or apoptosis, the latter predominantly via the intrinsic pathway [11]. However, particularly in tumour cells, where cell cycle checkpoint controls, DNA repair and apoptotic pathways are frequently perturbed, cell death predominantly occurs from mitotic catastrophe, a result of premature induction of mitosis before completion of the S and G 2 phases that ultimately results in cell necrosis [9,12]. Radiobiological effects are dependent on external factors such as the dose and type of radiation used. For example, protons and alpha particles have a high linear energy transfer and are more likely to induce complex DNA damage with a higher probability of lethality [13]. Additionally, biological variables result in heterogeneous radiation sensitivity between and within tissues. Hypoxia and low rates of proliferation tend to promote radio-resistance, and cancer stem cells may be more resistant than the bulk of tumour cells [14].
The effects of radiation on tumours are not limited to cell death with virtually every aspect of the tumour microenvironment responding to the insult. In the acute phase necrosis and vascular disruption leads to hypoperfusion, oedema and an inflammatory response that begins in the first few hours following acute radiation injury. Chronic activation of the inflammatory response results in dysregulated tissue remodelling characterised by decreased vascularity and fibrosis [14].
Imaging Apoptosis and Necrosis
Accurate determination of cell death would find application in a wide range of conditions including stroke, myocardial infarction and cancer. Several probes have been designed to assess biochemical events that occur during cell death. Phosphatidylserine, an anionic phospholipid and a major component of the inner leaflet of the cell membrane, is externalised by stressed or dying cells and is a target for phagocytosis [15,16]. Annexin-V binds to externalised phosphatidylserine with low nanomolar affinity and has been radiolabelled with 18 F and 99m Tc for positron emission tomography (PET) and single photon emission computed tomography (SPECT) imaging, respectively [17]. 99m Tc-annexin-V has progressed to clinical trials, where increases in labelling of 20-30% in the first 72 h following chemotherapy or radiotherapy were associated with treatment response in lung, breast, lymphoma and head and neck cancers [18]. Unfortunately, annexin-V is limited by slow pharmacokinetics and high levels of non-specific binding, particularly to the abdominal organs [17]. An alternative is using the C2A domain of synaptotagmin-I which binds to anionic phospholipids and has been labelled with 99m Tc and 111 In for SPECT [19,20], 18 F for PET [21] and gadolinium chelates for magnetic resonance imaging (MRI) in preclinical in vivo studies [22]. To improve the biodistribution, simplify labelling and improve pharmacokinetics (which are often slow when using peptide-based tracers), a smaller modified C2A protein (C2Am) has been labelled with 99m Tc for SPECT ( Figure 1) [20] and a near-infrared fluorophore for multispectral optoacoustic tomography (MSOT) [23], demonstrating high sensitivity and specificity for cell death. Tumour uptake of 99m Tc-duramycin which binds to phosphatidylethanolamine, another phospholipid externalised during cell death, has demonstrated improved sensitivity in detecting early treatment response compared to 18 F-FDG in preclinical studies [24].
Several small-molecule imaging probes that detect cell or mitochondrial membrane depolarisation and/or acidification of apoptotic cells have also been developed. In patients with intracranial tumours, the change in 18 F-ML-10 uptake from before to 48 h after CyberKnife stereotactic radiotherapy correlated with the decrease in tumour volume measured at 2-4 months after treatment [25]. Similar correlations have been made in patients with brain metastases imaged before and nine days after whole-brain radiotherapy [26].
Cell membrane changes are not specific to apoptosis; increased binding and uptake are also seen in autophagy, necroptosis and necrosis. Several PET radiotracers have therefore been designed to detect cleaved caspases 3 and 7, components of the final common pathway of apoptosis, which offer greater specificity for apoptosis. Of these, 18 F-ICMT-11 has recently been used in breast and lung cancer patients, although low tumour uptake, explained by low cleaved caspase 3 expression before and after treatment, limited the conclusions [27].
Imaging Changes in Vasculature
Radiation therapy results in acute endothelial cell dysfunction, apoptosis and disruption of blood vessels. Above doses of 8-10 Gy, endothelial cell apoptosis is induced by activation of the acid sphingomyelinase (ASMase)/ceramide signalling pathway [28][29][30]. Activation of this pathway therefore does not occur with the lower doses delivered per fraction in fractionated radiotherapy, only with the higher single doses delivered by stereotactic radiotherapy [31]. Capillaries increase in permeability and become thrombosed due to platelet aggregation and microthrombus formation, with subsequent hypoperfusion causing further tumour necrosis [12,32]. This suggests that imaging changes in perfusion has potential for early detection of tumour responses to radiotherapy.
Dynamic Contrast-Enhanced (DCE) CT
DCE-CT following an intravenous bolus of iodinated contrast agent is a highly reproducible imaging technique that permits relatively simple absolute quantification of blood flow, blood volume, permeability-surface area product, mean transit time and extravascular volume [33]. Correlation of DCE-CT metrics with histological determination of microvessel density and vascular endothelial growth factor (VEGF) expression has been possible in some studies [34,35]. Reductions in blood flow, blood volume, mean transit time and permeability-surface area product have been demonstrated in patients with rectal cancer, head and neck cancer and brain metastases following radiotherapy ± chemotherapy [35][36][37][38][39][40]. However, in patients with cervical cancer increases in tumour blood volume were observed three weeks into chemoradiotherapy which were predictive of complete metabolic response at three months [41]. The conflicting findings may reflect heterogeneity between tumour types and responses to treatment but also differences in timing of the post-treatment study.
Perfusion MRI
Following injection, paramagnetic contrast agents (typically low-molecular-weight gadolinium (Gd 3+ ) chelates) are distributed via the blood and diffuse freely into the interstitial space but do not cross the cell membrane. Paramagnetic contrast agents cause magnetic field inhomogeneities that reduce the T 1 , T 2 and T 2 * relaxation times of nearby protons, resulting in temporal changes in MR signal intensity that can provide information on the concentration of the injected contrast agent, microvessel density, perfusion and vessel permeability [42][43][44]. The most commonly used techniques are DCE-MRI and dynamic susceptibility contrast (DSC) MRI, which exploit the T 1 and T 2 * effects of paramagnetic contrast agents, respectively. In addition to subjective visual analysis of the rate, total amount and decrease (washout) of contrast enhancement in lesions, semi-quantitative and quantitative parameters can be derived similar to those of DCE-CT, although the post-processing is complicated by a nonlinear relationship between contrast agent concentration and change in signal intensity [33]. The use of an exogenous contrast agent can be avoided by using arterial spin labelling (ASL), in which blood water protons are magnetically labelled. ASL suffers from low temporal and spatial resolution and low signal-to-noise ratio but has greatly reduced post-processing requirements when compared to imaging exogenous contrast agents [45].
In patients with cervical cancer, high contrast enhancement before and in the first few weeks after the initiation of chemoradiotherapy is a better predictor of response than tumour volume measurements [46][47][48]. Other semi-quantitative and quantitative measures, particularly higher values of K trans (the volume transfer constant between plasma and the extravascular extracellular space) and plasma flow, have also been shown to be predictive of response [47,49,50]. Similar results have been obtained for rectal cancer and head and neck cancer, where a high K trans before chemoradiotherapy and a large decrease or low K trans after therapy are generally associated with good response [51,52]. In high-grade gliomas and cerebral metastases, reductions in K trans and changes in tumour blood volume and flow (from DSC-MRI and ASL) have detected response to stereotactic radiosurgery or whole-brain irradiation as early as one week after treatment [53][54][55]. Correlations between high perfusion on DCE-MRI and radio-sensitivity have frequently been attributed to decreased hypoxia in well-perfused tumours [46]. However, it has also been reported that higher microvessel density and increased angiogenesis correlate with greater metastatic potential and poorer outcome [56].
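As an illustration of how K trans is typically estimated, the sketch below fits the standard Tofts model, C t (t) = K trans ∫ 0 t C p (τ) e −k ep (t−τ) dτ, to a tissue concentration curve. This is a minimal example with synthetic data; the input function, timing and starting values are illustrative assumptions rather than a recommended protocol.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic acquisition: 5 min of DCE-MRI sampled every 2.5 s (assumed values)
t = np.arange(0, 300, 2.5)                       # time (s)
cp = 5.0 * (np.exp(-t / 120) - np.exp(-t / 20))  # toy arterial input function (mM)

def tofts(t, ktrans, kep):
    """Standard Tofts model: Ct(t) = Ktrans * conv(Cp, exp(-kep*t))."""
    dt = t[1] - t[0]
    return ktrans * np.convolve(cp, np.exp(-kep * t))[: t.size] * dt

# Simulate a "measured" tissue curve with known parameters plus noise
rng = np.random.default_rng(0)
ct = tofts(t, 0.25 / 60, 0.8 / 60) + rng.normal(0, 0.002, t.size)

(ktrans, kep), _ = curve_fit(tofts, t, ct, p0=[0.1 / 60, 0.5 / 60])
ve = ktrans / kep  # extravascular extracellular volume fraction
print(f"Ktrans = {ktrans * 60:.3f} /min, ve = {ve:.2f}")
```

In practice the arterial input function is measured or taken from a population model, and the concentration curves must first be derived from signal intensities via the nonlinear relationship mentioned above.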
In addition to prognostication, another potential application of perfusion MRI is the differentiation between radiation necrosis and recurrence in high-grade glioma, which often appear similar using conventional contrast-enhanced MRI [57]. A meta-analysis concluded that sensitivity and specificity for tumour recurrence were 90% and 88%, respectively, using DSC-MRI and 89% and 85% with DCE-MRI [58]. Initial studies using ASL have also demonstrated its ability to differentiate disease recurrence from radiation necrosis with a high degree of accuracy (Figure 2) [59][60][61][62].
Ultrasound and Optical Imaging
Radiation-induced changes in vasculature can also be imaged using dynamic contrast-enhanced ultrasound (CEUS). Low-solubility, gas-containing microbubbles have different acoustic properties to the surrounding tissue and can be used to image microvascular density and perfusion. This has been used to predict response of a number of different cancer types to chemotherapy [63][64][65], while decreased vascular density following radiation therapy has been used as an early marker of response in preclinical tumour models [66,67]. Furthermore, conjugating antibodies to the surface of microbubbles could permit tumour targeting. In a rat prostate tumour model, uptake of microbubbles targeting the angiogenesis regulators α v β 3 integrin and ICAM-1, which are upregulated in response to radiotherapy, was increased following irradiation [68].
Optical coherence tomography (OCT) is a non-invasive imaging technique that can produce 3D in vivo images at a resolution of a few micrometres by measuring the interference pattern of back-scattered light [69]. Although OCT has unrivalled spatial resolution, the scattering of light within biological tissues limits the imaging depth to a few millimetres. OCT is well established for high-resolution 3D retinal imaging and, more recently, functional imaging of the microvasculature of tumours has been demonstrated using speckle variance OCT [70]. In pancreatic human tumour xenografts irradiated with ≥10 Gy, the vascular volume density decreased by 26% just 30 min post-radiotherapy. Early changes were predominantly seen in small vessels <30 µm in diameter and were transient, potentially indicating rapid microthrombus formation following radiotherapy [71]. Maximal reductions in vascular volume density were seen after 2-4 weeks, depending on delivered dose, and preceded reductions in tumour volume by several weeks [70].
MSOT uses an ultrasound transducer to measure acoustic waves generated in response to localised thermoelastic expansion of tissue induced by pulses of laser light [72]. In addition to detection of exogenous contrast agents (see Section 3), endogenous biomarkers of perfusion and hypoxia can be derived due to the different light absorption spectra of oxy- and deoxyhaemoglobin [73]. In patient-derived head and neck squamous cell carcinoma (HNSCC) xenografts, early changes in haemoglobin oxygen saturation following radiotherapy correlated with subsequent changes in tumour volume [74]. Although the MSOT spatial resolution of around 500 µm is inferior to that of OCT, a tissue depth of up to 7 cm is possible [75]. Despite their depth limitations, both techniques have great potential for non-invasive and endoscopic imaging of a wide range of tumours.
PET Imaging of Perfusion
Several PET tracers have been developed to measure perfusion, of which 15 O-H 2 O has been the most extensively used. 15 O-H 2 O is an inert PET tracer that freely diffuses across cell membranes and allows absolute quantification of tumour blood flow with a reproducibility comparable to other imaging modalities [76,77]. High tumour blood flow on 15 O-H 2 O PET before treatment was predictive of poor response to radiotherapy in head and neck cancer [78]. Unfortunately, the short half-life of 15 O (2 min), which necessitates an onsite cyclotron, has limited the widespread use of the technique.
Hypoxia Imaging
Hypoxia is an important biological determinant of radio-sensitivity and is well characterised, having first been recognised in the early part of the 20th century [79]. Dysregulated tumour proliferation and angiogenesis, the latter resulting in the formation of structurally and functionally abnormal neovasculature, combine to increase the distance between cells and a sufficient blood supply, resulting in chronic hypoxia and nutrient depletion. The abnormal vasculature is also prone to transient occlusion and hypoperfusion, causing acute, fluctuating hypoxia. Both sources of hypoxia contribute to radio-resistance and to the transcriptional regulation of many genes associated with tumour growth and survival, notably via hypoxia inducible factor 1 (HIF-1) [80]. This has led to great efforts to minimise tumour hypoxia, particularly prior to radiotherapy, with variable levels of clinical success [79]. Imaging modalities that are sensitive to hypoxia could be prognostic and potentially improve outcomes by permitting dose and treatment modification or dose painting.
PET Imaging of Hypoxia
18 F-labelled 2-nitroimidazole-based markers have been widely used for PET imaging of hypoxia.
In an anoxic environment, reduction of the NO 2 moiety of 2-nitroimidazole by nitroreductases produces highly reactive intermediates which bind to many macromolecules and also undergo glutathione conjugation [81]. 18 F-fluoromisonidazole ( 18 F-FMISO) was the first PET tracer for hypoxia imaging to be developed and has subsequently been the most extensively used, with accumulation having been demonstrated in human glioma, HNSCC, breast, lung and renal tumours. In HNSCC patients, high baseline and ongoing 18 F-FMISO uptake in the first two weeks of treatment was significantly associated with loco-regional recurrence and was used as a rationale for radiation dose escalation [82,83]. The feasibility of dose painting based on hypoxic and nonhypoxic tumour subvolumes to improve local tumour control has also been demonstrated [84]. Similarly, in locally advanced non-small cell lung cancer, 18 F-FMISO uptake on baseline scans is strongly associated with poor prognosis. Although dose escalation was possible in this study without excessive toxicity, it was not shown to improve outcome [85].
The inherent sensitivity of PET imaging makes it an attractive modality for hypoxia imaging where detection of relatively small changes in oxygen concentration is required and the initial studies as a prognostic marker have been mostly positive. However, the resolution of clinical PET images is typically around 5 mm, which may lead to a lack of sensitivity when subvoxel hypoxia variation exists and is a potential barrier to dose painting based on hypoxia imaging [93]. It should also be recognised that no imaging modality is sensitive to hypoxia alone and even specific PET tracer uptake is dependent to a certain degree on perfusion, cellularity and other biological variables.
MRI Imaging of Hypoxia
Hypoxia imaging is also possible with MRI because dissolved oxygen and deoxyhaemoglobin are paramagnetic and decrease T 1 and T 2 * relaxation. Methods that exploit the effects of these molecules on T 1 relaxation are termed oxygen-enhanced (OE) or tumour oxygenation level-dependent (TOLD) MRI, while imaging of T 2 * effects is termed blood oxygenation level-dependent (BOLD) MRI [98]. In a typical study, baseline imaging is performed with the patient breathing room air followed by imaging while the patient inhales oxygen to create arterial hyperoxia with the difference in signal between the two images corresponding to the effect of oxygen inhalation. These assays have been shown to correlate with tumour pO 2 [98][99][100] and several studies have used the techniques to detect or predict response to radiotherapy. In animal prolactinoma and fibrosarcoma tumour models, BOLD was able to predict growth response after a single radiation dose [101] and the technique has now entered early clinical trials in head and neck cancer patients [102]. OE-MRI is still in preclinical development. Nevertheless, in rats with subcutaneous prostate tumours improved oxygenation of tumours after radiotherapy correlated with response and OE-MRI measurements offered better prognostication than BOLD [103]. Additionally, OE-MRI has also been used in mouse models to differentiate radiation necrosis from glioma [104].
Diffusion-Weighted MRI
Diffusion-weighted imaging (DWI) is an MRI technique that quantifies the random, or Brownian, movement of water molecules in tissues. The simplest and most commonly used metric in DWI is the apparent diffusion coefficient (ADC), but more complex models such as VERDICT (vascular, extracellular and restricted diffusion for cytometry in tumours) can extract additional data related to cell size, vascular, intra- and extracellular volume fractions and perfusion effects, which may lead to improved detection of early treatment response [105]. Additionally, numerous imaging techniques have evolved from DWI. For example, diffusion tensor imaging (DTI) and diffusion kurtosis imaging (DKI) can provide information on diffusion directionality and tissue microstructure, respectively [106,107]. Recently, filter-exchange imaging (FEXI) has been used to determine the exchange rate of water across the cell membrane [108,109].
In tissues, Brownian motion of water is limited by membranes and macromolecules, giving a lower ADC value (indicating restricted diffusion) for intracellular water than extracellular water [110]. ADC has been shown to have a strong inverse correlation with tumour cellularity in glioma, lung and ovarian tumours, but the relationship is less significant for other tumour types [111]. Following radiotherapy, ADC can transiently decrease due to cellular swelling before increasing due to cell death, the latter being associated with decreasing cellularity and a response to treatment in most studies. At later stages, reductions in ADC can occur due to inflammation and fibrosis [112,113]. Unfortunately, many of these processes coexist, resulting in conflicting effects on diffusion imaging and potentially limiting the early predictive value of the technique. Few studies have looked at the timing of assessment with DWI but, in a longitudinal assessment of brain metastases from a range of primary sites treated with whole-brain external beam radiotherapy, the optimal timepoint for prediction of response was after seven fractions, on days seven to nine [114]. Similarly, in HNSCC patients, an increase in ADC one week after radiotherapy was predictive of response with a sensitivity of 86% and specificity of 83% [115]. However, in cervical cancer, although diffusion imaging could detect treatment response upon completion of chemoradiotherapy, reimaging performed in the first two weeks of treatment was unable to differentiate complete, partial and non-responders [116]. In rectal cancer, although DWI alone is not sufficiently accurate for prediction of early response following chemoradiotherapy [117], a combination of DWI, 18 F-FDG-PET/CT and T 2 -weighted volumetry permitted early response prediction with a sensitivity and specificity of 75% and 94%, respectively [118]. In primary glioblastoma, DWI can help to differentiate progression from pseudoprogression (Figure 3) [119]. Recently, a combination of twelve multi-parametric imaging features (including ADC) differentiated pseudoprogression from progression with a sensitivity and specificity of 71% and 90%, respectively, versus 100% and 20% for ADC alone [120]. Furthermore, a positive correlation has been reported between necrosis and ADC following treatment and, in most studies, tumour recurrence has generally been found to have a lower ADC than radiation necrosis [121][122][123][124].
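As a concrete illustration, ADC is obtained voxel-wise from the mono-exponential decay S(b) = S 0 exp(−b·ADC) by a log-linear fit over the acquired b-values. The b-values and signals below are hypothetical:

```python
import numpy as np

b = np.array([0.0, 500.0, 1000.0])     # b-values (s/mm^2), assumed protocol
signal = np.array([1.00, 0.62, 0.40])  # hypothetical voxel signals, normalised

# S(b) = S0 * exp(-b * ADC)  =>  ln S(b) = ln S0 - b * ADC
slope, _ = np.polyfit(b, np.log(signal), 1)
adc = -slope
print(f"ADC = {adc * 1e3:.2f} x 10^-3 mm^2/s")  # ~0.94, in the range of cellular tumour
```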
Chemical Exchange Saturation Transfer MRI
The MRI techniques discussed so far all image the protons ( 1 H) of water molecules, the abundance of which in biological tissues (60-80 M) facilitates imaging at high temporal and spatial resolution.
In the presence of a magnetic field, nuclei with spin (e.g., 1 H and 13 C) resonate at a frequency that is partly dependent on the electronic environment of the molecule they are part of; for example, amide protons resonate at a different frequency to water protons. This phenomenon is known as chemical shift and means that MR spectroscopy (MRS) can non-invasively detect the presence and relative concentration of multiple metabolites in vivo [125]. However, the low concentration of these metabolites makes MRS a technique with low temporal and spatial resolution. Chemical exchange saturation transfer (CEST) MRI, described in detail elsewhere [126], is a technique that allows the indirect detection of molecules containing exchangeable protons via attenuation of the water signal. This indirect detection offers greatly enhanced sensitivity, facilitating high-resolution imaging. Amide proton transfer (APT), a CEST technique that detects exchangeable amide protons present in mobile peptides and proteins, has been used to detect the higher concentration of proteins in tumours, which produces a higher APT signal than in surrounding normal tissue (Figure 4) [127]. In neuro-oncology the technique shows particular promise for the differentiation of progression from radiation necrosis and has already been demonstrated in a study of patients with brain metastases [128].
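In practice, the APT effect is usually quantified from a Z-spectrum (the normalised water signal as a function of saturation offset) as the magnetisation transfer ratio asymmetry at the amide frequency, MTR asym (3.5 ppm) = [S(−3.5 ppm) − S(+3.5 ppm)]/S 0 . A minimal sketch with a synthetic Z-spectrum (the sampling and lineshapes are illustrative assumptions):

```python
import numpy as np

offsets = np.linspace(-6.0, 6.0, 61)  # saturation offsets (ppm), assumed sampling

# Synthetic Z-spectrum: direct water saturation plus a small amide dip at +3.5 ppm
z = (1.0 - 0.85 / (1.0 + (offsets / 1.5) ** 2)
         - 0.05 * np.exp(-((offsets - 3.5) / 0.5) ** 2))

def mtr_asym(offsets, z, delta=3.5):
    """MTRasym(delta) = [S(-delta) - S(+delta)] / S0, with S0 normalised to 1."""
    return np.interp(-delta, offsets, z) - np.interp(delta, offsets, z)

print(f"MTRasym(3.5 ppm) = {mtr_asym(offsets, z):.3f}")  # positive where amide protons are abundant
```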
Imaging Changes in Metabolism
Aberrant nutrient uptake and subsequent metabolism are features of malignant tumours that result from the increased demand for the synthesis of proteins, nucleic acids, fatty acids and other macromolecules required for increased growth and proliferation. Aerobic glycolysis, whereby glucose is metabolised to lactate even when oxygen is abundant, was first described by Otto Warburg nearly a century ago and has subsequently been observed in many malignant tumours [129,130]. Following treatment, a decrease in tumour metabolic activity precedes changes in structure and volume, making metabolic imaging attractive for detecting early treatment response [125].
Imaging Changes in Glycolysis and TCA Cycle Metabolism
2-( 18 F-fluoro)-2-deoxy-D-glucose ( 18 F-FDG) is a glucose analogue that is transported into cells and phosphorylated, trapping the tracer intracellularly and allowing identification of glucose-avid tissues upon subsequent PET imaging. 18 F-FDG is the most commonly used PET tracer and serves as an adjunct to morphological imaging in the follow-up of many tumours following treatment (Figure 5). Its widespread use and standardisation of acquisition have meant that 18 F-FDG is currently the only functional imaging technique to be (semi)quantified for use in response evaluation criteria, most notably in PERCIST 1.0 and the EORTC guidelines [131,132]. In HNSCC, the negative predictive value for primary and nodal disease with 18 F-FDG-PET/CT was 99-100% four months after chemoradiotherapy [133,134]. 18 F-FDG is also useful in several cancers, for example non-small cell lung cancer, for differentiating recurrence from radiation necrosis [135]. However, attempts to shorten the interval between treatment and scanning have produced mixed results [136], with a lack of early response often attributed to inflammation and macrophage infiltration, although the mechanistic evidence for this is limited [137]. As discussed earlier, the low concentration of biological metabolites and low sensitivity of NMR limit the temporal and spatial resolution of MRS in vivo. The method of dynamic nuclear polarisation (DNP) of 13 C-labelled substrates is a technique that can increase the signal-to-noise ratio of 13 C MR spectroscopy and imaging by >10 4 in vivo [138]. Hyperpolarised (1-13 C)pyruvate has been the most widely used substrate due to its high polarisation levels, long polarisation lifetime and its position in the glycolytic pathway. Following injection, hyperpolarised (1-13 C)pyruvate enters cells via monocarboxylate transporters and, in tumours, is predominantly reduced to lactate by lactate dehydrogenase [139]. Compared to 18 F-FDG, hyperpolarised (1-13 C)pyruvate has improved specificity for indicating the Warburg effect and may better differentiate inflammation from tumour progression/recurrence [140]. A reduction in (1-13 C)lactate production following hyperpolarised (1-13 C)pyruvate injection after antiandrogen therapy has been observed in a prostate cancer patient [141]. In an orthotopic rat glioma model, a reduction in label flux from (1-13 C)pyruvate to (1-13 C)lactate was seen in all animals in the first 96 h after radiotherapy despite increases in tumour size, suggesting that (1-13 C)pyruvate may be useful to differentiate progression and pseudoprogression (Figure 6) [142].
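Returning to 18 F-FDG quantification: as a simple worked example of the (semi)quantification used in criteria such as PERCIST, the standardised uptake value (SUV) normalises tissue activity concentration to injected activity per unit body weight; PERCIST classifies, for instance, a decrease of at least 30% in the peak lesion value as a partial metabolic response. The numbers below are hypothetical:

```python
def suv(tissue_kbq_per_ml, injected_mbq, weight_kg):
    """SUV (g/mL), assuming tissue density of ~1 g/mL."""
    return tissue_kbq_per_ml / (injected_mbq * 1000.0 / (weight_kg * 1000.0))

baseline = suv(tissue_kbq_per_ml=12.0, injected_mbq=350.0, weight_kg=70.0)  # = 2.4
follow_up = suv(tissue_kbq_per_ml=7.0, injected_mbq=350.0, weight_kg=70.0)  # = 1.4
change = 100.0 * (follow_up - baseline) / baseline
print(f"SUV {baseline:.1f} -> {follow_up:.1f} ({change:+.0f}%)")  # -42%: >=30% fall
```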
The other readily translatable hyperpolarised substrate is (1,4-13 C 2 )fumarate, which is hydrated to malate by fumarase. During cell death an increase in membrane permeability results in leakage of fumarase into the extracellular space and an increased rate of malate production following hyperpolarised (1,4-13 C 2 )fumarate injection which, in preclinical studies, has been shown to be a sensitive indicator of cell death [143,144].
Imaging Proliferation
Several PET tracers have been designed as biomarkers of proliferation. 3′-deoxy-3′- 18 F-fluorothymidine ( 18 F-FLT) is taken up by cells and phosphorylated by thymidine kinase, the first step of the thymidine salvage pathway essential for DNA synthesis. Thus, 18 F-FLT preferentially accumulates in cells undergoing proliferation, with potentially greater tumour specificity than 18 F-FDG. Several systematic reviews have concluded that 18 F-FLT has potential as a marker of early response and shown that a change in uptake correlated well with progression-free and disease-free survival [145,146]. In HNSCC, a comparison of 18 F-FLT and 18 F-FDG during radiotherapy showed the overall accuracy of 18 F-FLT to be significantly higher (74 vs. 30%) [147]. However, following chemoradiotherapy in rectal cancer, despite correlations with disease-free survival, decreases in 18 F-FLT uptake did not correlate with pathological response, a discrepancy attributed to changes in perfusion following radiotherapy [148].
PET Imaging of Brain Tumours
Lack of tumour specificity of 18 F-FDG is a particular problem in neuroradiology, where there is high background uptake from normal brain tissue and radiation necrosis is often hypermetabolic [149]. Numerous tracers have been designed that are superior to 18 F-FDG for detection of recurrence and differentiation from pseudoprogression. Brain tumours often have increased uptake of amino acids relative to normal brain, and several amino acids have been labelled with 11 C and 18 F for PET imaging [150]. 11 C-methionine has been the most widely used amino acid PET tracer. It can differentiate tumours from normal brain with an accuracy of 94% versus 80% for 18 F-FDG [151] and is also more sensitive for differentiating recurrence from radiation necrosis following radiotherapy [152]. However, the application of 11 C-labelled substrates will always be limited by the short half-life (20 min) of 11 C, which requires onsite production of the tracer. Therefore, several alternatives have been developed, including 18 F-fluoro-ethyl-tyrosine, an artificial amino acid that is not incorporated into proteins but has increased uptake into tumours [153,154]. Decreased 18 F-fluoro-ethyl-tyrosine uptake in the first 10 days following chemoradiotherapy was predictive of progression-free survival with an accuracy of 75% [155]. Other tracers that have demonstrated improved performance over 18 F-FDG for distinguishing recurrence from radiation necrosis include 11 C- and 18 F-labelled choline (Figure 4), surrogate measures of the rate of phospholipid membrane synthesis, and 18 F-dihydroxyphenylalanine ( 18 F-DOPA), an analogue of the dopamine precursor L-DOPA (Figure 7) [156][157][158].
Conclusions and Future Perspectives
There has been an explosion in the number of functional imaging techniques that can non-invasively report on multiple biological characteristics of the tumour microenvironment with great potential to guide therapy and improve outcomes as personalised therapy in oncology becomes realistic. Several functional imaging techniques have already been clinically translated, including 18 F-FDG-PET and DWI-MRI. Detection of early treatment response remains challenging but, as highlighted in this review, there are numerous functional imaging biomarkers that are sensitive to the early biological effects of radiation therapy and can provide prognostic information and guide future treatment.
There are several common limitations that affect many imaging studies. Most studies are technically challenging and expensive and therefore recruit small numbers of patients (typically <50). Quantification is seen as a major strength of functional imaging, but a lack of consensus over which of the vast number of imaging biomarkers to use significantly limits the comparison of findings and meta-analysis. Unfortunately, in clinical practice, quantitative metrics do not necessarily perform better than simple qualitative analysis [159]. Furthermore, few studies prospectively define cutoff points or perform multicentre or external validation, and variation between scanners is a significant barrier to quantitative analysis. Reference to the imaging biomarker roadmap should help to address these limitations and facilitate the translation of functional imaging biomarkers into clinical practice [160].
Funding: This research received no external funding.
Acknowledgments:
We would like to thank Harpreet Hyare and Andrew Plumb of University College London Hospital for kindly providing the clinical images used in this review.
Conflicts of Interest:
The authors have no conflicts of interest to declare.
Abbreviations
CT computed tomography
DNP dynamic nuclear polarisation
MRI magnetic resonance imaging
MSOT multispectral optoacoustic tomography
OCT optical coherence tomography
PET positron emission tomography
SPECT single photon emission computed tomography
On the critical pair theory in abelian groups : Beyond Chowla's Theorem
We obtain critical pair theorems for subsets S and T of an abelian group such that |S + T| < |S| + |T| + 1. We generalize some results of Chowla, Vosper and Kemperman, and a more recent result due to Rødseth and one of the authors.
The Cauchy-Davenport Theorem was generalized to abelian groups by several authors including Mann [20] and Kneser [19]. The first generalization to cyclic groups is due to Chowla [3]: it states Theorem 1 Let S, T be nonempty subsets of Z/nZ such that 0 ∈ S. Assume that every element of S \ {0} has order exactly n. Then |T + S| ≥ min(n, |S| + |T| − 1).
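A quick computational check of Theorem 1, and of how it fails when the order condition is dropped (the sets here are illustrative):

```python
def sumset(S, T, n):
    """S + T in Z/nZ."""
    return {(s + t) % n for s in S for t in T}

n = 8
S, T = {0, 1, 3}, {0, 2}          # every element of S \ {0} has order n = 8
assert len(sumset(S, T, n)) >= min(n, len(S) + len(T) - 1)   # 5 >= 4

S_bad = {0, 4}                     # 4 has order 2, so Chowla's hypothesis fails
print(sumset(S_bad, {0, 4}, n))    # {0, 4}: size 2 < min(8, |S|+|T|-1 = 3)
```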
Subsets achieving equality in an additive theorem are known as critical pairs of the theorem. One may easily check that the only interesting critical pairs for the Cauchy-Davenport Theorem arise when |S|, |T | ≥ 2 and |S + T | ≤ p − 2. Under these assumptions Vosper's Theorem [24] states that |S + T | = |S| + |T | unless both S and T are arithmetic progressions with a common difference. This statement determines the critical pairs of the Cauchy-Davenport Theorem.
Generalizing Vosper's Theorem to arbitrary abelian groups requires a lot of care. The importance of this question was mentioned by Kneser in [19]. Motivated by Kneser's work, Kemperman proposed in [18] a recursive procedure which generalizes Vosper's Theorem to abelian groups. The main tools used by Kemperman are basic transformations introduced by Cauchy, Davenport and Dyson [21]. One of the results obtained by Kemperman is the following: Theorem 2 (Kemperman, [18]) Let G be a finite abelian group and let S, T be subsets of G such that |S| ≥ 2, |T| ≥ 2 and |S + T| = |S| + |T| − 1 ≤ p − 2, where p is the smallest prime divisor of |G|. Then S and T are arithmetic progressions with the same difference.
Note that the existence of a short direct proof for this result is unlikely since the statement contains Vosper's Theorem. This result has been recently extended to non abelian groups by Károlyi [17] and independently by one of the authors [8,Theorem 3.2].
By using the additive transformations mentioned above, Rødseth and one of the authors recently characterized the critical pairs of Vosper's Theorem [10]: Theorem 3 Let S, T be subsets of a group of prime order Z/pZ, with |T| ≥ 3 and |S| ≥ 4 such that |S + T| = |S| + |T| ≤ p − 4.
Then S and T are included in arithmetic progressions with the same difference and of respective lengths |S| + 1 and |T | + 1.
There are several methods currently available in additive theory. One of them is based on Fourier analysis. Examples of applications of this method can be found in the monographs of Freiman [15] and Tao and Vu [23], or in the papers by Deshouillers and Freiman [5], and by Green and Ruzsa [6]. Another powerful tool is the polynomial method introduced by Alon, Nathanson and Ruzsa [1]. Károlyi recently [16] used this method to obtain a remarkable critical pair theorem for restricted sums.
In this paper we obtain improvements of some of the above results using the isoperimetric method. This method has been used to generalize addition theorems to non abelian groups in some papers including [25,12,8,11]. It also derives additive inequalities, mainly from the structure of the k-atoms of a set. If S is a generating subset containing 0 of an abelian group G, a set A is called a k-atom of S if it has minimum cardinality among the subsets X such that |X| ≥ k and |X + S| ≤ |G| − k for which |X + S| − |X| takes its minimum possible value (see Section 2 for detailed definitions). It is proved in [7] that any 1-atom containing 0 is a subgroup. This result implies easily Mann's generalization of the Cauchy-Davenport Theorem. The structure of 2-atoms has proved more difficult to describe but potentially gives stronger results: 2-atoms have been used in [9,13] to derive critical pair results. In groups of prime order, the description of 2-atoms was completed by two of the present authors in [22]. Atoms of higher order were used in [14] to classify sets S, T ⊂ Z/pZ with |S + T| ≤ |S| + |T| + 1.
In the present paper we first study the structure of 2-atoms in general abelian groups. Our main result in the first part of this paper is Theorem 21: broadly speaking it states that, under some technical conditions that will be shown to be quite tight, 2-atoms have cardinality 2 or are subgroups. In the rest of the paper we apply this fact to obtain critical pair results.
We shall first obtain a critical pair result for Chowla's Theorem 1 which reduces to Vosper's Theorem if n is a prime. To be precise, we will actually be dealing with a strengthened version of Theorem 1 (Corollary 8) that only requires the order of every element of S \ {0} to exceed |S| − 1 rather than to equal n. We call this requirement a weak Chowla condition. The description of the corresponding sets S and T is obtained in Theorem 14 and Corollary 16.
We then move on to give a description of subsets S, T , with |S + T | ≤ |S| + |T |, in arbitrary abelian groups provided S contains no element of order less than |S| + 1 (another weak Chowla condition). We show that, if the abelian group has no subgroups of order 2 or 3, then S and T are made up of arithmetic progressions with at most one missing element and periodic subsets with at most one missing element, see Theorems 28 and 29. This last result is a generalization to abelian groups of Theorem 3 of Rødseth and one of the authors, since it reduces to it when the group is of prime order.
The paper is organized as follows: Section 2 gives some preliminary results and Section 3 uses them to derive a solution to the critical pair problem for Chowla's Theorem and its strengthened version. Section 4 works out some tools necessary to Section 5, which is devoted to the description of 2-atoms. Sections 6 and 7 make up more preliminary material for Section 8, which derives the generalization to abelian groups of Theorem 3.
Isoperimetric tools
In this section we recall known results on isoperimetric numbers of subsets in finite abelian groups and derive some consequences relevant to us later on. Our prime objects of concern are the 2-atoms of a subset: we shall see that they are either subgroups or Sidon sets and that, in the latter case, they have the largest possible isoperimetric numbers.
Let S be a subset of a finite abelian group such that 0 ∈ S. Denote by ⟨S⟩ the subgroup generated by S. For a positive integer k, we shall say that S is k-separable if there exists X ⊂ ⟨S⟩ such that |X| ≥ k and |X + S| ≤ |⟨S⟩| − k.
Suppose that S is k-separable. The k-th isoperimetric number of S is then defined by

κ k (S) = min{|X + S| − |X| : X ⊂ ⟨S⟩, |X| ≥ k and |X + S| ≤ |⟨S⟩| − k}. (1)

For a k-separable set S, a subset X achieving the above minimum is called a k-fragment of S. A k-fragment with minimal cardinality is called a k-atom.
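The definitions can be made concrete by brute force in a small cyclic group. The following sketch (a hypothetical helper, exponential-time and suitable only for toy examples) enumerates the k-fragments and k-atoms of a generating set S in Z/nZ:

```python
from itertools import combinations

def kappa(S, n, k):
    """Brute-force kappa_k(S), with its k-atoms, for S generating Z/nZ."""
    best, frags = None, []
    for size in range(k, n + 1):
        for X in combinations(range(n), size):
            XS = {(x + s) % n for x in X for s in S}
            if len(XS) <= n - k:                    # X satisfies the separability constraint
                boundary = len(XS) - len(X)
                if best is None or boundary < best:
                    best, frags = boundary, [set(X)]
                elif boundary == best:
                    frags.append(set(X))            # another k-fragment
    atoms = [F for F in frags if len(F) == min(map(len, frags))] if frags else []
    return best, atoms

# kappa_1({0,1}) = 1 = |S| - 1 in Z/7Z, and the 1-atoms are the singletons
print(kappa({0, 1}, 7, 1))
```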
The following easy facts will be used regularly throughout the paper: • The translate A + g of a k-atom A is also a k-atom.
If S is not k-separable, we shall put by convention κ k (S) = k|S| − 2k + 1, so as to have, for all |S| ≥ k,

κ k (S) ≤ k|S| − 2k + 1. (2)

The definition of a k-atom implies the following lemma:

Lemma 4 Let 0 ∈ S be a k-separable subset of a finite abelian group. Let A be a k-atom and suppose that |A| > k. Then, for each a ∈ A and s ∈ S we have

S + A = S + (A \ {a}) = (S \ {s}) + A.

Proof.
In other words, no element x in S + A can be uniquely written as x = s + a, s ∈ S and a ∈ A. This means that S + A = S + (A \ {a}) = (S \ {s}) + A, as claimed. Next we recall:

Lemma 5 ([8]) Let 0 ∈ S be a k-separable subset of a finite abelian group G. Let F be a k-fragment of S and g ∈ ⟨S⟩. Then g − F and ⟨S⟩ \ (F + S) are k-fragments of −S. Moreover κ k (−S) = κ k (S).
The following is a particularly useful property of k-atoms.
Lemma 6 (The intersection property [8]) Let 0 ∈ S be a k-separable subset of a finite abelian group G. Let A be a k-atom of S. Let F be a k-fragment of S such that A ⊄ F. Then |A ∩ F| ≤ k − 1.

The intersection property easily implies the following description of 1-atoms.
Corollary 7 ([7])
Let 0 ∈ S be a generating subset of a finite abelian group G. Let A be a 1-atom of S such that 0 ∈ A. Then A is the subgroup generated by S ∩ A. In particular κ 1 (S) is a multiple of |A|.
From these early results we can derive the following generalization of Chowla's Theorem: Corollary 8 Let 0 ∈ S be a generating subset of a finite abelian group G such that the order of every element of S \ {0} is at least |S| − 1. Then κ 1 (S) = |S| − 1.
In particular, for every nonempty subset X ⊂ G, we have |X + S| ≥ min(|G|, |X| + |S| − 1).
Proof.
If S is not 1-separable, then by definition we have S = G, and by the convention preceding (2) we have κ 1 (S) = |S| − 1. Suppose therefore that S is 1-separable. Let A be a 1-atom of S containing 0. By Corollary 7, A is the subgroup of G generated by S ∩ A and κ 1 (S) is a multiple of |A|. If A = {0} then κ 1 (S) = |S| − 1. Otherwise A contains a nonzero element of S, whose order is at least |S| − 1, so that κ 1 (S) ≥ |A| ≥ |S| − 1; combined with (2) this gives κ 1 (S) = |S| − 1. The last inequality in the statement is a direct consequence of the definition of κ 1 .
Recall that a subset X of an abelian group is a Sidon set if no two pairs of (not necessarily distinct) elements in X have the same sum. In particular, |X ∩ (X + x)| ≤ 1 for each x ≠ 0.
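A direct check of the Sidon property in Z/nZ (a hypothetical helper; pairs are unordered and may repeat an element):

```python
def is_sidon(X, n):
    """True if all sums a + b (a, b in X, unordered pairs) are distinct modulo n."""
    xs = sorted(X)
    sums = [(a + b) % n for i, a in enumerate(xs) for b in xs[i:]]
    return len(sums) == len(set(sums))

print(is_sidon({0, 1, 3}, 7))   # True:  sums 0,1,3,2,4,6 are distinct
print(is_sidon({0, 1, 2}, 7))   # False: 0+2 = 1+1
```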
Corollary 9 Let 0 ∈ S be a k-separable subset of a finite abelian group G. Let A be a k-atom of S such that 0 ∈ A, and suppose that p ≥ k where p is the smallest prime divisor of |G|. Then either A is a subgroup of G or |A ∩ (x + A)| ≤ k − 1 for every x ∈ G, x ≠ 0. In particular a 2-atom of a 2-separable set is either a subgroup or a Sidon set.
Proof.
Without loss of generality we may suppose ⟨S⟩ = G. The double inequality k ≤ |A ∩ (x + A)| < |A| is forbidden by Lemma 6, because x + A is also a k-atom of S. Suppose that there is x ∈ G, x ≠ 0, such that A = A + x. Then we have A = A + ⟨x⟩: hence A ∩ (a + A) ⊃ a + ⟨x⟩ for every a ∈ A. Since |⟨x⟩| ≥ p ≥ k, Lemma 6 implies that we have A = a + A for every a ∈ A and A is a subgroup.
Lemma 10 Let 0 ∈ S be a Sidon subset of a finite abelian group. Then κ 1 (S) = |S| − 1.

Proof. Suppose on the contrary that κ 1 (S) ≤ |S| − 2. Then S is 1-separable. Let 0 ∈ A be a 1-atom of S. Then A is a nonnull subgroup of G and κ 1 (S) is a multiple of |A|.
The next result determines the second isoperimetric number of Sidon sets. In what follows we use the following notation. Given a subgroup H of G, by the decomposition of a subset S ⊂ G modulo H we mean the minimal partition of S into nonempty subsets, each one contained in a single coset of H.
Lemma 11
Let 0 ∈ S be a subset of a finite abelian group with |S| ≥ 3. If S is a Sidon set then κ 2 (S) = 2|S| − 3.
Proof. Let G = ⟨S⟩. Suppose S is 2-separable, otherwise the result follows by the convention preceding (2).
If A is a Sidon set, then Suppose that A is a subgroup. Then κ 2 (S) is a multiple of |A|. In particular, |A| ≤ 2|S|−4. But then, since |A| ≥ 4, again a contradiction.
The following corollary is a result obtained in a more general context in [9]. The simple proof given here is similar to a proof given in [22].
Corollary 12 Let S be a generating set of the finite abelian group G with 0 ∈ S, |S| ≥ 3 and κ 2 (S) = |S| + m. Let 0 ∈ A be a 2-atom of S which is not a subgroup. Then |A| ≤ m + 3.

Proof. By Corollary 9, A is a Sidon set. Suppose on the contrary that |A| ≥ m + 4.
If A generates G then 2|A|−3 = κ 2 (A) ≤ |S+A|−|S| = |A|+m, a contradiction. Therefore we may assume that A generates a proper subgroup Q of G. Let S = S 1 ∪ · · · ∪ S j , where j ≥ 2, be the decomposition of S modulo Q. We may assume that |S 1 + A| ≤ · · · ≤ |S j + A| and, by translating S, that 0 ∈ S 1 .
against our assumption. Therefore we may assume that It follows that |A| ≥ |Q| − 1, which is impossible since A is a Sidon set.
Finally, the following lemma will be useful to us in ruling out the possibility that a 2-atom is a subgroup.
Lemma 13
Let 0 ∈ S be a 2-separable subset of a finite abelian group G. Suppose A is a 2-atom of S which is a subgroup of cardinality at least 3. Then there exists s ∈ S, s ≠ 0, such that the order of s is not more than κ 2 (S).
Proof.
Note that if A is a subgroup then κ 2 (S) is a multiple of |A|. By Lemma 4, the element 0 of S + A cannot be uniquely written as a sum s + a with s ∈ S and a ∈ A. Therefore there is a non-zero element s of S in A, and its order is not more than |A| ≤ κ 2 (S).
Critical pairs under the weak Chowla condition.
With the previous results we can already prove a critical pair theorem improving on the theorems of Chowla and Vosper. We first state its isoperimetric version. Recall that a subset S of an abelian group G is periodic if there is a nonnull subgroup H of G such that S +H = S. In other words, S is a union of cosets of H.
Theorem 14
Let 0 ∈ S be a generating 2-separable subset of a finite abelian group G such that κ 2 (S) ≤ |S| − 1. Also assume that every element of S \ {0} has order at least |S|. Then either S is an arithmetic progression or S \ {0} is periodic.
Proof.
By Corollary 8 we have κ 1 (S) = |S| − 1. Let 0 ∈ A be a 2-atom of S. Assume |S| ≥ 3, otherwise there is nothing to prove. By Lemma 13, the condition on the order of elements of S implies that A is not a subgroup. But then Corollary 12 implies that we have |A| = 2, say A = {0, r}. Assume first that r generates G. This forces S to be an arithmetic progression with difference r. Assume now that r generates a proper subgroup H of G. Since |S + A| = κ 2 (S) + |A| ≤ |S| + 1, the members of the decomposition of S modulo H are full cosets of H for all but one subscript. In particular S ∩ H = {0}, since otherwise S would contain a nonzero element with order at most |H| ≤ |S| − 1. It follows that S \ {0} is periodic.
The above theorem will translate into a Chowla-type characterization of sets S and T with small sumset, this will be Corollary 16. The next result is a generalization of Theorem 2.
By the stabilizer of a subset X of an abelian group G, we mean the set of group elements x ∈ G such that X + x = X.
Proposition 15 Let 0 ∈ S be a generating subset of a finite abelian group G and let 0 ∈ T be a subset of G. Let Q denote the stabilizer of S \ {0}. Suppose that |S + T| ≤ |S| + |T| − 1. Also assume that every element of S * = S \ {0} has order ≥ |S|. Let σ : G → G/Q denote the canonical projection. One of the following holds: (i) T ⊂ Q; (ii) σ(S) and σ(T) are arithmetic progressions with the same difference. Moreover, at most one member of the decomposition of T modulo Q is not a complete coset modulo Q.
Proof. Either T = {0}, and thus T ⊂ Q, or the conditions on S imply that S is 2-separable and κ 2 (S) ≤ |S| − 1. Assume first Q = {0}. By Theorem 14, S is an arithmetic progression. It follows easily that T is an arithmetic progression with the same difference. Assume now Q ≠ {0}.
We have |σ(T) + σ(S)| ≤ |σ(T)| + |σ(S)| − 1: otherwise there are |σ(S)| cosets in σ(T) + σ(S) not present in σ(T), and all these cosets are saturated in T + S, contradicting our assumption. Moreover, the order of every element x ∈ σ(S) \ {0} is at least ⌈|S|/|Q|⌉ = |σ(S)|. Since the stabilizer of σ(S) * = σ(S * ) must be {0}, either σ(T) = {0} and T ⊂ Q, or Theorem 14 in G/Q implies that σ(S) is an arithmetic progression. It follows now that σ(T) is an arithmetic progression with the same difference. Since σ(T) contains at most a single element that is not expressible in G/Q in two different ways as a sum of one element of σ(S) and one element of σ(T), we deduce that at most one coset modulo Q that intersects T is not included in T.
Corollary 16
Let 0 ∈ S and T be non-empty subsets of a finite abelian group G. Suppose that |S + T| ≤ |S| + |T| − 1, where Q denotes the stabilizer of S \ {0} and H is the subgroup of G generated by S. Also assume that every element of S \ {0} has order at least |S|. Let T 1 ∪ T 2 ∪ · · · ∪ T j be a decomposition of T modulo H, ordered so that |T 1 + S| ≤ · · · ≤ |T j + S|. Then |T i | = |H| for all i ≥ 2. Moreover one of the following conditions holds: (i) T 1 ⊂ Q; (ii) σ(S) and σ(T 1 ) are arithmetic progressions with the same difference, where σ : G → G/Q denotes the canonical projection. Proof.
By Corollary 8 we have κ 1 (S) = |S| − 1. If j ≥ 2 we have |T 2 + S| = |H|, since otherwise |T + S| ≥ |T| + 2(|S| − 1) > |T| + |S| − 1. By Proposition 15, either T 1 ⊂ Q or σ(S) and σ(T 1 ) are arithmetic progressions with the same difference. Now, if 0 ∈ T 1 then the same argument gives the result.

At the heart of the proof of Theorem 14 was the claim that, under the right conditions, a 2-atom containing the zero element is of cardinality 2 or is a subgroup. In Section 5 we shall find more general conditions under which we can make the same claim. Before that we need some more tools.
The fainting technique
In this section we use a method developed in [22]. The idea is to consider the sequence of subsets (S + A) \ S, (S + 2A) \ (S + A), · · · , (S + iA) \ (S + (i − 1)A), · · · and to claim that if A is a 2-atom of S of cardinality |A| > 2, then this sequence must decrease and faint, implying that S is a "large" subset of G.
Let X and Y be subsets of an abelian group G. For each integer i ≥ 1 we denote by N i (X, Y ) the set (X + iY ) \ (X + (i − 1)Y ). In what follows we use the notation Y * = Y \ {0}. We start with the two following lemmas.
Lemma 17 Let G be an abelian group and let X,
Proof.
Suppose that the statement holds for all i, r ≤ i ≤ j, for some j ≥ r, and let The result follows by induction.
Lemma 18 Let 0 ∈ S be a 2-separable subset of a finite abelian group G and let 0 ∈ A be a 2-atom of S with cardinality |A| ≥ 3 which is not a subgroup of G. Then, denoting
Proof.
Without loss of generality S generates G. The first part of the result is just Lemma 4. Now, since A is not a subgroup, we have S + A ≠ S + ⟨A⟩, otherwise we would have |S + ⟨A⟩| − |⟨A⟩| < |S + A| − |A|, in contradiction with A being a 2-atom. Therefore there exists x ∈ N 2 (S, A) = (S + 2A) \ (S + A). Recall that, by Lemma 5, the subset x − A is a 2-atom of −S and G \ (S + A) is a 2-fragment. Observe that x ∈ (x − A) ∩ (G \ (S + A)) and that x ∈ N 2 (S, A) means x − A is not contained in G \ (S + A): the intersection property of 2-atoms (Lemma 6) therefore implies that (x − A) ∩ (G \ (S + A)) = {x}, which gives the second part of the result.

The following lemma is a key tool for the proof of the main result of the next section. It says that, under some conditions, a set X verifying the statement of Lemma 18 with some other set must be a large subset of the ground group.
Lemma 19 (The Fainting Lemma) Let G be a finite abelian group and let X, Y ⊂ G with 0 ∈ X ∩ Y, and set m = |X + Y| − |X| − |Y|. Assume that Y generates G and that (i) κ 1 (Y ) = |Y | − 1; (ii) N 2 (X, Y ) − Y * ⊂ N 1 (X, Y ). Then |X| ≥ |G| − (m + 4)/2.
Proof.
Since X + Y = X + (Y \ {y}) for any y ∈ Y , we have X + Y = X + Y * and X + (Y − y) = X + (Y * − y). By induction on i it is seen that Let H be the subgroup of G generated by Y * − y. One can verify easily that H = n(Y * − y), and hence By Lemma 17, Since is a union of cosets of the subgroup H generated by Y * − y. In particular H = G and, by (4), y)) + y contains a full coset of this subgroup. However, we have N i (X, Y ) ∩ X = ∅ and, by (3), X + H = G, a contradiction. Let ℓ be the largest integer for which N ℓ (X, Y ) = ∅. We have just shown that, for each i, 1 ≤ i < ℓ, Therefore, Since |N 1 (X, Y )| = |Y | + m we have |N 2 (X, Y )| ≤ m + 2. Hence, since 3 ≤ |Y | ≤ m + 3, the largest possible value in the right hand side of inequality (6) is taken if |Y | = 3 and ℓ = m + 3 giving as claimed.
We finish this set of preliminary results with the following Lemma.
Lemma 20 Let A and S be subsets of a finite abelian group Q. Assume that |A| = 3 and that for each a ∈ A we have S + A = S + (A \ {a}). Then 3|S| ≥ 2|S + A|.
Description of 2-atoms
The next theorem gives the structure of the 2-atoms for not too large subsets of an abelian group.
Theorem 21 Let G be a finite abelian group and let 0 ∈ S be a generating 2-separable subset of G such that |S| ≥ 3 and κ 2 (S) = |S| + m, where m ≤ 4. Let A be a 2-atom of S containing 0. If |S| < |G| − (m + 4)/2 then either |A| = 2 or A is a subgroup of G.
Proof. Suppose that the conclusion of the theorem does not hold, so that 0 ∈ A is a 2-atom of S with |A| ≥ 3 which is not a subgroup. Then it follows from Corollary 9 that S is a Sidon set and then, by Lemma 11, κ 2 (S) = 2|S| − 3 ≥ |S|. In particular m ≥ 0.
By Corollary 12 we have |A| ≤ m + 3. Moreover, A * − a is also a Sidon set, and Lemma 10 implies that A satisfies condition (i) of the Fainting Lemma. By Lemma 18, S and A satisfy condition (ii) of the Fainting Lemma: therefore if A generates G its conclusion must hold. In that case we have |S| ≥ |G| − (m + 4)/2, against the hypothesis of the Theorem. Therefore A must generate a proper subgroup Q of G. Let S = S 1 ∪ S 2 ∪ · · · ∪ S t be the decomposition of S modulo Q. Since |S + A| = |S| + |A| + m, the decomposition of S + A modulo Q gives inequality (7). Now, as mentioned above, A is a Sidon set and, by Lemma 10, we have κ 1 (A) = |A| − 1. Therefore |S i + A| − |S i | ≥ |A| − 1 for i ∈ V . Notice furthermore that Lemma 4 implies S i + A = S i + A * , so that |S i | ≥ 2 for each i ∈ I. Therefore, since by Lemma 11 we have κ 2 (A) = 2|A| − 3, we have |S i + A| − |S i | ≥ 2|A| − 3 for i ∈ W . Inequality (7) gives us, in particular, w ≤ 2 and v ≤ 3. Now for any i ∈ I let us write δ(i) = |S i + Q| − |S i + A| and, for J ⊂ I, put δ(J) = Σ i∈J δ(i). Notice that δ(I) = |S + Q| − |S + A|, δ(U ) = 0, δ(V ) = v and that we have shown that m i ≥ |A| − 3 ≥ 0 for i ∈ W . We consider two cases.
If w = 1, say W = {1}, then (11) translates to inequality (12). If m = m 1 then we must have δ(V ) = 0 and the right hand side of (12) equals 1 + (m 1 + 4)/2. If m 1 < m then δ(V ) = v ≤ 3 and m 1 ≥ 0 imply that the right hand side of (12) is again ≥ 1 + (m 1 + 4)/2. In both cases this contradicts the Fainting Lemma applied to S 1 and A.
The following example shows that the result of Theorem 21 does not hold anymore if m = 5.
Example.
Take G = Z/7Z × Z/qZ, where q > 7 is a prime. Consider the set S = {0, 1, 2, 4} × X, where |X| = 4 and X is a Sidon set in Z/qZ, together with a set A satisfying |S + A| = |S| + |A| + 5. The group G has only two proper subgroups, H_1 = Z/7Z × {0} and H_2 = {0} × Z/qZ, and one checks that neither is a 2-atom of S. On the other hand, if B = {0, x} we have |S + B| − |B| > |S| + 5; the last inequality holds because, for any y ≠ 0, |X ∪ (X + y)| ≥ 7 in Z/qZ, since X is a Sidon set. Therefore subgroups and subsets of size 2 are not 2-atoms of S. Furthermore we have κ_2(S) ≥ |S| + 5, since otherwise Theorem 21 would apply: therefore A is a 2-atom of S and κ_2(S) = |S| + 5.
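The Sidon-set inequality used in this example can be checked directly for a concrete instance. The sketch below, in R, uses the illustrative choices q = 13 and X = {0, 1, 3, 9} (a Sidon set modulo 13); the paper's X is only required to be a Sidon set of size 4, so these concrete values are our assumption.

```r
# Verify that |X u (X+y)| >= 7 for every nonzero y in Z/qZ, for a Sidon set X.
q <- 13
X <- c(0, 1, 3, 9)           # a Sidon set mod 13: all nonzero differences distinct
d <- outer(X, X, "-") %% q   # table of pairwise differences
stopifnot(max(table(d[d != 0])) == 1)   # Sidon property: each difference once
sizes <- sapply(1:(q - 1), function(y) length(union(X, (X + y) %% q)))
stopifnot(all(sizes >= 7))   # |X| + |X + y| - |intersection| >= 4 + 4 - 1 = 7
min(sizes)                   # equals 7 here
```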
Finally, note that Theorem 21 together with Lemma 13 give a sufficient condition to rule out the possibility of a 2-atom being a subgroup.
Corollary 22
Let G be a finite abelian group and let 0 ∈ S be a generating 2-separable subset of G such that |S| ≥ 3, κ_2(S) = |S| + m with m ≤ 4, and |S| < |G| − (m+3)(m+4)/2. Assume also that every nonzero element of S has order at least |S| + m + 1. Then the 2-atoms of S containing 0 have cardinality 2.
Atoms of small sets
We next show some results about k-atoms of small sets.
Lemma 23
Let S be a 4-separable generating subset of a finite abelian group such that 0 ∈ S and κ_4(S) = |S| = 3. Let 0 ∈ A be a 4-atom of S. Then |A| = 4.
Proof.
Let G = ⟨S⟩. Suppose that |A| > 4. We shall apply the Fainting Lemma to A and S.
Take z ∈ S*. By Lemma 4 we have A + S = A + (S \ {z}), so that A and S satisfy condition (i) of the Fainting Lemma. By Lemma 17 we have N_2(A, S) − S* ⊂ N_1(A, S). Now we may apply the Fainting Lemma and obtain |A| ≥ |G| − 6. But then |A + S| ≥ |G| − 3, contradicting that A is a 4-fragment of S.
Claim. S* is an arithmetic progression.
Let us write N_i = N_i(A, S), i ≥ 0. Note that |N_1| = |S + A| − |A| = κ_3(S) = |S| = 4. For each subset X ⊂ S and for each i ≥ 1, let us denote by N_i^X the set of elements u ∈ N_i such that u − X ⊂ N_{i−1} and X is a maximal subset of S with this property. By definition, N_i is the union of the sets N_i^X. By Lemma 4, for each x ∈ S*, we obtain the relation recorded in (13); on the other hand, for each x ∈ S*, inequality (13) implies inequality (14). Let us now estimate |N_2^X| and |N_3^X| for X ⊂ S*. Note that, by Corollary 8, κ_1(Z) = |Z| − 1 for each subset 0 ∈ Z ⊂ G with |Z| ≤ 3, since the order of any nonzero element in G is at least 5. Therefore, using (14), we obtain bounds on these cardinalities. Since there are at most two 2-subsets X of S* for which |N_1^X| = 2, and since |N_1| = 4, the set N_2^{S*} − S* cannot be a coset; it follows that |N_2^{S*}| ≤ 2. Suppose that |N_2^{S*}| ≤ 1. Then |N_2| = Σ_{X⊂S*} |N_2^X| ≤ 3 and, by applying (14) with i = 3 and 4, we get |N_3| = |N_3^{S*}| ≤ 1 and |N_4| = 0. Therefore |N_2| + |N_3| ≤ 4 < |G| − |S + A|. This means that Y = A ∪ N_1 ∪ N_2 ∪ N_3 ≠ G and Y + S = Y, which contradicts that S generates G.
Suppose now that |N_2^{S*}| = 2. Then |N_2^{S*} − S*| = |S*| + 1, which implies that S* is an arithmetic progression. This proves the claim. Now we have S = {0, a, a + d, a + 2d} for some d ∈ G. By repeating the argument of the claim with S − a − d we get that {−a − d, −d, d} is an arithmetic progression as well.
Quasi-progressions
A subset S of an abelian group G will be called a quasi-progression of difference r if S is not a progression with difference r and if S can be obtained by deleting an element of an arithmetic progression of difference r.
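Over the integers this definition can be tested mechanically: after sorting, a quasi-progression of difference r > 0 has all consecutive gaps equal to r except exactly one gap of 2r (deleting an endpoint of a progression leaves a progression, which the definition excludes). A minimal sketch in R, valid for subsets of Z only — the cyclic case of the lemma below needs extra care with wrap-around:

```r
# Test whether a finite set of distinct integers S is a quasi-progression
# of difference r > 0: all consecutive gaps equal r except exactly one 2r.
is_quasi_progression <- function(S, r) {
  g <- diff(sort(S))                       # consecutive gaps
  sum(g == 2 * r) == 1 && all(g %in% c(r, 2 * r))
}
is_quasi_progression(c(0, 1, 3, 4), 1)     # TRUE: {0,1,2,3,4} minus 2
is_quasi_progression(c(0, 1, 2, 3), 1)     # FALSE: a genuine progression
```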
Lemma 25
Let 0 ∈ S be a quasi-progression with difference r in the cyclic group Z/nZ. Suppose that S generates Z/nZ and |S| ≥ 3. Let T ⊂ Z/nZ be such that |T| ≥ 3 and |S + T| ≤ |S| + |T|. Then one of the following conditions holds:

(i) T is either a quasi-progression with difference r or a progression with difference r.
(ii) n = 12 and T is a coset of the subgroup of order 4.
Proof. For a subset X ⊂ Z/nZ let us call the connected components of X the maximal arithmetic progressions with difference 1 contained in X.

Case 1. There is a connected component C_1 of the complement (Z/nZ) \ T such that |C_1| ≥ |S|.
If U = {u, u + 1, · · · , v} is a connected component of T, then v + 1 ∈ (S + T) \ T. Since |S + T| ≤ |S| + 4, it follows that T has exactly 4 components.

Lemma 26 Let S and T be subsets of Z such that |S| = 3, |T| = 4 and |S + T| = 7. Then S is either a progression or a quasi-progression.
The proof is an easy exercise.
Lemma 27 Let S be a 4-separable generating subset of an abelian group G of order n such that 0 ∈ S and κ_4(S) = |S| = 3. Assume moreover that gcd(n, 6) = 1. Then G is a cyclic group and S is a quasi-progression.
Proof. We show first that every element of S \ {0} generates G. Suppose on the contrary that some x ∈ S \ {0} generates a proper subgroup K of G. Since gcd(|G|, 6) = 1 we have min{|K|, |G/K|} ≥ 5.
Let φ denote the canonical morphism from G onto G/K. Decompose A = A_1 ∪ · · · ∪ A_j, j ≥ 2, modulo the subgroup K, and assume that 0 ∈ A_1 and |A_1| ≤ |A_i| for i ≥ 2. Since |A| = 4 and gcd(|G|, 6) = 1, we have |A + {x, y}| ≥ |A| + 1. Assume first that |A + {x, y}| = |A| + 1. Then A is an arithmetic progression with difference y − x. But 0 ∈ A, and hence y − x is invertible since A generates G. Without loss of generality we may assume A = {0, 1, 2, 3}. It now follows easily that S is a quasi-progression, and the result holds.
Now, since x is invertible in G = Z/nZ, we may write, without loss of generality, S = {0, 1, t} with |A ∩ (A + 1)| ≥ 2. By translating and multiplying by −1, we can also assume that t ≤ (n + 1)/2 (notice that n/2 is not a unit if n is even). Therefore A can be represented by two pairs of consecutive integers, and hence by a subset of 4 integers included in an interval of length ≤ (n + 1)/2. On the other hand, one of the following two possibilities holds for S:

• S can be represented by a subset of an integral interval of length ≤ (n − 3)/2. In that case the sum A + S in Z/nZ has the same cardinality as the sum A + S in Z, and we are done by Lemma 26.
• We have t = (n − 1)/2, in which case S is included in an arithmetic progression of length 4 and difference 2^{−1} (2 is invertible since n is odd), and we are done.
Improving both the Theorems of Chowla and of Vosper
Next we shall generalize Theorem 14 to the case when |S + T | ≤ |S| + |T |. Our result is also a generalization to abelian groups of Theorem 3, i.e. the main result of [10]. Let us state it first under an isoperimetric formulation. Let us call a set quasi-periodic if it can be obtained by deleting one element from a periodic set.
Theorem 28 Let 0 ∈ S be a generating 3-separable subset of a finite abelian group G such that κ_3(S) ≤ |S|. If every element of S \ {0} has order at least |S| + 1, then either S is a quasi-progression or S \ {0} is quasi-periodic.
Proof. Let A be a 3-atom of S containing 0.

Claim. The result holds if A generates a proper subgroup K of G.
Assume first that A = K. In this case κ_3(S) is a multiple of |A| and hence |S| ≥ |A|. It follows that S ∩ A = {0}, since otherwise A would contain an element of order at least |S| + 1. Now S + A is the disjoint union A ∪ (S* + A). Hence |S* + A| = |S*| + 1, so that S* is quasi-periodic and the result holds.
We may therefore assume that A generates G. We now consider three cases.
In that case A is 4-separable and κ_4(A) ≤ |A|. If κ_4(A) < 3, then Theorem 14 implies that A is a progression and thus S is a quasi-progression. If κ_4(A) = 3, then, by Lemma 27, A is a quasi-progression and, by Lemma 25, S is a quasi-progression.
Theorem 28 translates into a characterization of subsets S and T such that |S + T | ≤ |S| + |T | under some Chowla-type conditions. This was our final goal in this paper.
Theorem 29 Let 0 ∈ S be a generating subset of G such that |S| ≥ 4 and every element in S* has order at least |S| + 1. Let Q be a maximal subgroup such that |S* + Q| − |S*| ≤ 1 and let σ : G → G/Q denote the canonical projection.
Let T be a subset of G such that |T | ≥ 3 and suppose that |S + T | = |S| + |T | ≤ |G| − 4. Then the following holds: • If Q = {0} then S and T are progressions or quasi-progressions with the same difference.
Proof. The conditions |S| + |T| = |S + T| ≤ |G| − 4 and |T| ≥ 3 imply that S is 3-separable and that κ_3(S) ≤ |S|. By Theorems 14 and 28, S is an arithmetic progression or a quasi-progression. By Lemma 25 it follows that T is an arithmetic progression or a quasi-progression with the same difference.
This holds clearly if S* is Q-periodic. So we may assume |S* + Q| − |S*| = 1. Let us then denote by S_1 the unique subset of S of size |Q| − 1 in the decomposition of S modulo Q. If Σ is not Q-periodic then some Q-coset must have a trace U of size |Q| − 1 on the set Σ, and we have U = S_1 + T′ where T′ = (a + Q) ∩ T for some a. Since |S_1| = |Q| − 1 we must have |T′| = 1. Note also that σ(S_1) + σ(T′) cannot be obtained in any other way as a sum of an element of σ(S) and an element of σ(T); therefore (S_1 + T′) ∩ (S + (T \ T′)) = ∅, hence |S + (T \ T′)| < |S| + |T \ T′| − 1, but this contradicts κ_1(S) = |S| − 1 (Corollary 8) and proves (19).
By our assumptions, Q is a maximal subgroup such that |S* + Q| − |S*| ≤ 1. This is easily seen to imply that σ(S*) is not periodic. Moreover, each element in σ(S)* has order at least (|S| + 1)/q ≥ |σ(S)| − 1 + 1/q, where q = |Q|. Then, by Proposition 15, σ(S) and σ(T) are arithmetic progressions with the same common difference d. Since −d is also a difference of σ(S) and σ(T), we may assume without loss of generality that the terminal element u of σ(S) is not 0. Therefore, if we set S′ = σ^{−1}(u) ∩ S, we have |S′| ≥ |Q| − 1. Let us suppose, without loss of generality, that the initial element of σ(T) is 0.
Remark.
One may wonder what happens if we remove from Theorem 29 the hypothesis |σ(S + T)| < |G|/|Q| − 1. Then the sets σ(S) and σ(T) are no longer necessarily arithmetic progressions. However, one may show that there again exists T_1 ⊂ T such that |σ(T_1)| ≤ 1 and T \ T_1 is Q-periodic or Q-quasi-periodic. We leave out the details.
TMED2/9/10 Serve as Biomarkers for Poor Prognosis in Head and Neck Squamous Carcinoma
Background: Head and neck squamous carcinoma (HNSC) is one of the most common malignant tumors, with high incidence and poor prognosis. Transmembrane emp24 structural domain (TMED) proteins are involved in protein transport and vesicle budding and have been implicated in the progression of various malignancies. However, the roles of TMEDs in HNSC, especially in terms of development and prognosis, have not been fully elucidated. Methods: We applied TIMER 2.0, UALCAN, GEPIA 2, Kaplan-Meier Plotter, GEO, The Human Protein Atlas (HPA), cBioPortal, LinkedOmics, Metascape, GRNdb, STRING, and Cytoscape to investigate the roles of TMED family members in HNSC. Results: Compared with normal tissues, the mRNA expression levels of TMED1/2/4/5/7/8/9/10 were significantly increased in the TCGA HNSC dataset. We then combined GEPIA 2 and Kaplan-Meier Plotter to select TMED2/9/10, which have prognostic value, and examined their mRNA levels in GEO HNSC datasets and their protein expression in HPA. Both the mRNA and protein expression levels of TMED2/9/10 were increased in HNSC. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) analyses showed that TMED2/9/10 and their co-expressed genes promote the malignant behavior of tumors by participating in biological processes such as intracellular transferase complexes, protein transport, focal adhesion, and intracellular protein processing. Single-cell analysis and immune infiltration analysis suggested that immune responses of cancer-associated fibroblasts and endothelial cells might be associated with prognosis. Finally, the transcription factor-gene network and the protein-protein functional interaction network pointed to genes such as X-box binding protein 1 (XBP1) and TMED7, which might cooperate with TMED2/9/10 to change the progression of HNSC. Conclusions: Our study implies that TMED2/9/10 and related genes might jointly affect the prognosis of HNSC, providing specific clues for further experimental research, personalized diagnosis strategies, and targeted clinical therapy for HNSC.
INTRODUCTION
Head and neck squamous carcinoma (HNSC) is the most common malignancy of the head and neck region, arising mainly from the mucosal epithelium of the oral cavity, pharynx, and larynx (Bhat et al., 2021). Unfortunately, HNSC patients are usually diagnosed at an advanced stage because of the small size of early HNSC lesions and the lack of effective indicators for early detection of tumor development. As a result, this carcinoma currently has a 5-year survival rate of less than 65% (Miller et al., 2016). At the same time, the propensity of HNSC for recurrence and metastasis, together with the dramatic decrease in patients' quality of life, seriously threatens overall survival (Osazuwa-Peters et al., 2018; Saada-Bouzid et al., 2019). Therefore, we urgently need to develop new biomarkers for early screening and diagnosis to improve patient prognosis.
Transmembrane emp24 structural domain (TMED) proteins, also known as p24 proteins, are associated with bidirectional transport between the endoplasmic reticulum and the Golgi apparatus. According to previous studies, abnormal expression of TMED proteins and their related pathways is closely associated with poor prognosis in many diseases, such as non-alcoholic fatty liver disease, multiple myeloma, diabetes, Alzheimer's disease, chordoma, and osteoarthritis (Wang et al., 2012; Hou et al., 2017; Shin et al., 2019; Ge et al., 2020; Yang J. et al., 2021; Huang et al., 2021). For instance, TMED2 was expressed at higher levels in sphere-shaped clones (SCs) and might play a role in cancer cell proliferation; the increased expression of TMED2 was significantly related to unfavorable outcomes in patients with breast cancer (Sial et al., 2021). TMED3 plays a role in promoting the progression and development of lung squamous cell carcinoma, liver cancer, and breast cancer (Zheng et al., 2016; Pei et al., 2019; Xie et al., 2021), and TMED8 methylation is a novel predictive and prognostic feature for patients with high-risk neuroblastoma (Liu and Li, 2021). Besides, the high expression of TMED9 might promote the proliferation of cancer cells by inhibiting autophagy, and it predicts poor prognosis in hepatocellular carcinoma (HCC) and colon cancer (Schwarz and Allikmets, 2019; Ju et al., 2021). In addition, down-regulation of the Golgi-endoplasmic reticulum (ER) traffic mediators TMED2 and TMED10 was related to a positive prognosis in prostate cancer (PCa) (Chen and Hu, 2019). Therefore, TMED proteins might serve as prognostic markers to predict tumor prognosis. Current studies have found that the expression level of TMED2 in HNSC is upregulated and related to different cancer stages, races, genders, and ages (Sial et al., 2021). Nevertheless, the potential prognostic value of the TMED family has not been fully elucidated in HNSC.
In this study, we first examined the expression levels of the TMED family in HNSC tissues and their prognostic value. Through these analyses, we identified TMED2/9/10 as diagnostic and prognostic biomarkers for HNSC. Further, we performed expression-related gene analysis, GO and KEGG enrichment analysis, single-cell analysis, and immune infiltration analysis of TMED2/9/10 to elaborate on their physiological and immune functions. Based on the functional interactions of TMED proteins, we discovered other potential prognostic molecular biomarkers and validated the role of these genes in HNSC progression. Our results may provide research directions for future studies of molecular biomarkers of HNSC development and prognosis, leading to new diagnosis and treatment modalities based on risk stratification.
MATERIALS AND METHODS
TIMER 2.0

TIMER 2.0 (http://timer.cistrome.org/) is a visual portal for The Cancer Genome Atlas (TCGA) database that supports analysis of gene expression differences between tumor and normal tissues and of the association between gene expression and immune infiltration. We used the "Gene_DE" module in TIMER 2.0 to analyze differential TMED expression between HNSC and normal tissues. Moreover, the "Gene" and "Correlation" modules were used to obtain correlation analyses between TMED2/9/10 and immune cell infiltration levels in HNSC (immune infiltrates: cancer-associated fibroblasts, endothelial cells, B cells). These analyses were performed using the TCGA HNSC dataset (n = 520) with Spearman correlation, and differences with a p-value < 0.05 were considered statistically significant. Gene expression levels are displayed as log2 RSEM.
UALCAN
UALCAN (http://ualcan.path.uab.edu/index.html) is a comprehensive web tool based on the TCGA database (Chandrashekar et al., 2017). The "TCGA Gene analysis" module was used to analyze mRNA levels of TMED2/9/10 in HNSC patients and healthy individuals and their correlation with clinicopathological parameters, including age, gender, tumor grade, lymph node metastasis, TP53 mutation status, and cancer stage. These analyses were performed using the TCGA HNSC dataset (n = 520), with p-values < 0.05 considered statistically significant. In addition, the "Similar Genes Detection" module of GEPIA 2 was used to explore the top 1000 genes with expression patterns related to TMED2/9/10.
The Human Protein Atlas
The Human Protein Atlas (HPA, https://www.proteinatlas.org/) is an online database that represents protein expression by immunohistochemical staining (Uhlen et al., 2017). We compared TMED2/9/10 protein expression levels in normal and tumor tissues using the "TISSUE" and "PATHOLOGY" modules. The protein expression scores were based on manually scored immunohistochemical data, including staining intensity (not detected, low, medium, or high). The following tissue information was used in this study: patient ID 2615, male, 17 years old, tonsil (T-61100), normal tissue, NOS (M-00100); patient ID 2513, male, 27 years old, tonsil (T-61100), normal tissue, NOS (M-00100); patient ID 2608, male, 51 years old, skeletal muscle (T-13000), head and neck (T-Y0000), squamous cell carcinoma, NOS (M-80703).

cBioPortal

cBioPortal (https://www.cbioportal.org/) is a repository of cancer genomics datasets from the TCGA database for genomics analysis (Gao et al., 2013). Based on the TCGA HNSC dataset (Nature 2015, 279 total samples), the "Query" module was used to analyze mRNA levels of TMED2/9/10, with Genomic Profiles set to Mutations, Structural Variant, Putative copy-number alterations from GISTIC, and mRNA expression Z-scores relative to all samples (log RNA Seq V2 RSEM). The case set comprised the complete samples (279). Mutation data were obtained from whole-exome sequencing. The mutation rates of TMED2/9/10 in HNSC relative to normal tissues and an expression heatmap of TMED2/9/10 were obtained.
LinkedOmics
LinkedOmics (http://www.linkedomics.org/) is a visual web portal for genomics analysis of the TCGA database (Vasaikar et al., 2018). The LinkedOmics database was used to identify genes co-expressed with TMED2/9/10, and the numbers of positively and negatively correlated genes were counted separately. We used the Pearson correlation coefficient to analyze the TMED2/9/10 data (n = 517) from the RNA-seq of TCGA (HNSC), resulting in 20,163 related genes.
Metascape
Metascape (http://metascape.org/) is an open database for studying the functions of genes of interest, using the GO and KEGG databases for pathway enrichment analysis (Zhou et al., 2019). We used Metascape to perform pathway enrichment analysis of TMED2/9/10 and their co-expressed genes. Analyses were carried out with the default parameters of minimum overlap = 3, minimum enrichment = 3, and p-value cutoff = 0.01.
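Metascape is a web service; for readers who prefer a scripted alternative, a roughly equivalent enrichment can be sketched with the Bioconductor package clusterProfiler. This is our substitution, not the tool the authors used, and the gene list here is an illustrative placeholder.

```r
# GO/KEGG enrichment with clusterProfiler as an offline alternative to
# Metascape; the p-value cutoff mirrors the paper (0.01).
library(clusterProfiler)
library(org.Hs.eg.db)

symbols <- c("TMED2", "TMED9", "TMED10")   # plus co-expressed genes in practice
ids <- bitr(symbols, fromType = "SYMBOL", toType = "ENTREZID",
            OrgDb = org.Hs.eg.db)

ego <- enrichGO(gene = ids$ENTREZID, OrgDb = org.Hs.eg.db,
                ont = "ALL", pAdjustMethod = "BH", pvalueCutoff = 0.01)
ekegg <- enrichKEGG(gene = ids$ENTREZID, organism = "hsa",
                    pvalueCutoff = 0.01)
head(as.data.frame(ego))                   # inspect enriched GO terms
```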
GRNdb
GRNdb (http://www.grndb.com/) is a gene regulatory network database that provides a reliable way to predict transcription factors associated with genes. In this study, the "Exact Search" module was used to reveal the upstream regulatory transcription factors of TMED2/9/10 and the hub genes in HNSC, as well as to explore the expression levels of TMED2/9/10 and the hub genes in different cells. The NES (Normalized Enrichment Score for a TF-target pair) value was set to ALL.
Search Tool for the Retrieval of Interacting Genes and Cytoscape
The STRING database (http://string-db.org/) is an accessible online database for predicting PPI information; parameters were set to Network Type = physical subnetwork, Required score = 0.900, and Size cutoff = no more than ten interactions (Szklarczyk et al., 2021). STRING was used to draw a protein network revealing the interactions between TMED2/9/10 and other proteins, and the results were visualized in Cytoscape software. The obtained PPI network was analyzed with the cytoHubba plugin, with parameters set to Hubba nodes = top 10 nodes ranked by degree (Version: Cytoscape_v3.9.0) (Shannon et al., 2003; Chin et al., 2014).
Microarray Data
The Gene Expression Omnibus (GEO) database (http://www.ncbi.nlm.nih.gov/geo/) is an online gene expression database containing high-throughput microarray and next-generation sequencing functional genomic datasets (Barrett et al., 2013). Two HNSC datasets (GSE13601 and GSE89923) were retrieved and downloaded from the GEO database. GSE13601 contains gene expression profiles of patients with oral tongue squamous cell carcinoma (n = 37) and patients with normal mucosa (n = 20); platform: Affymetrix Human Genome U95 Version 2 Array (Estilo et al., 2009). GSE89923 contains gene expression profiles of patients with oral squamous cell carcinoma (n = 57) and normal human gingival epithelial cells (n = 33); platform: Affymetrix Human Genome U95 Version 2 Array (Woo et al., 2017).
Statistical Analysis
The GEO datasets were downloaded using the R GEOquery package as external validation (Davis and Meltzer, 2007), and the data were normalized with the limma package's "normalizeBetweenArrays" function to obtain the expression of TMED2/9/10 in normal head and neck tissues and HNSC (Bolstad et al., 2003; Ritchie et al., 2015). The rank-sum test was used for this analysis. The statistical analysis of the survival data was completed with the survival R package, and visualization was carried out with the survminer R package. Correlation analysis was done using the Spearman method. The "ggplot2" package of R software (version 3.3.3) was used for data visualization (Maag, 2018).
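A minimal sketch of this pipeline is given below; the GSE accession is from the paper, while the probe name, phenotype column, and clinical data frame are illustrative placeholders that depend on the actual annotation.

```r
library(GEOquery)    # download GEO series (Davis and Meltzer, 2007)
library(limma)       # normalizeBetweenArrays (Ritchie et al., 2015)
library(survival)    # Kaplan-Meier estimation
library(survminer)   # survival curve plotting

gse  <- getGEO("GSE13601", GSEMatrix = TRUE)[[1]]
expr <- normalizeBetweenArrays(exprs(gse))       # cross-array normalization

# Rank-sum (Wilcoxon) test of one probe between tumor and normal samples;
# 'group' is assumed to be derivable from the GEO phenotype data.
group <- factor(pData(gse)$source_name_ch1)
wilcox.test(expr["TMED2_probe", ] ~ group)       # probe ID is a placeholder

# Kaplan-Meier curves on an assumed clinical data frame 'clin' with columns
# time, status, and a high/low TMED2 expression group.
fit <- survfit(Surv(time, status) ~ tmed2_group, data = clin)
ggsurvplot(fit, data = clin, pval = TRUE)
```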
RESULTS

Defining the TMED Family in HNSC
The TIMER 2.0 database was used to analyze the 10 genes of the TMED family and to assess the expression level of each gene in HNSC tissues and normal tissues (*p-value < 0.05, **p-value < 0.01, ***p-value < 0.001). The results showed that TMED3 expression was down-regulated in HNSC tissues and that the expression level of TMED6 was extremely low in both HNSC and normal tissues. Nevertheless, the expression of the other eight genes was significantly higher in HNSC tissues than in normal tissues (Figure 1). In addition, we obtained the same results from UALCAN (Figure 2). The p-values for expression of the TMED family in HNSC versus normal tissues were statistically significant in TIMER 2.0 and UALCAN (p-value < 0.05) (Table 1).
Prognostic Value of TMED2/9/10 in HNSC
To better understand the prognostic value of the TMED family in HNSC, we investigated the relationship between TMED family expression and OS in HNSC patients through GEPIA 2 (Supplementary Figure S1). The results showed that HNSC patients with high TMED2/9/10 expression had a worse prognosis than those with low expression (p-value < 0.05) (Figures 3A-C), while the other members of the TMED family were not statistically significant in the survival analysis (Supplementary Figure S1). Therefore, we considered TMED2/9/10 as prognostic markers for HNSC. Moreover, we analyzed their survival value by performing survival curves in the Kaplan-Meier Plotter database. We also found that higher expression levels of TMED2/9/10 were closely connected with worse prognosis, indicating significant prognostic value in HNSC (p-value < 0.05) (Figures 3D-F). Additionally, we affirmed the diagnostic value of TMED2/9/10 in HNSC patients with the receiver operating characteristic curve (AUC > 0.5) (Figures 3G-I). Surprisingly, when we combined the three genes as a new biomarker, its diagnostic value became more significant (AUC = 0.847) (Figure 3J). The above results suggested that the expression levels of TMED2/9/10 could serve as potential diagnostic biomarkers in HNSC.

Further Validation of TMED2/9/10 Expression Levels

To further validate the role of TMED2/9/10 in HNSC, we explored their mRNA expression levels using the GEO datasets. We found that TMED2/9/10 also showed high expression in HNSC in the GEO datasets (GSE13601 and GSE89923) (Figures 4A,B). Moreover, we analyzed the protein expression levels of TMED2/9/10 using immunohistochemistry (IHC) data from the HPA database. The results showed that the protein expression levels of TMED9 and TMED10 differed significantly between normal head and neck tissues and HNSC, which was consistent with the above results (Figures 4D,E). However, the difference for TMED2 between normal head and neck tissues and HNSC was not significant, which may be due to data heterogeneity (Figure 4C).
Correlations Between the Expression Levels of TMED2/9/10 and Clinicopathological Features in HNSC

The above data indicated that TMED2/9/10 were up-regulated in HNSC tissues and had excellent prognostic value in HNSC. Therefore, we further examined the associations between TMED2/9/10 and clinicopathological features in HNSC. It was found that TMED2/9/10 were significantly associated with features such as age and gender (Table 2).

Co-Expression and Genetic Alteration of TMED2/9/10 in HNSC
Enrichment Analysis of TMED2/9/10 in HNSC
To further explore the function of TMED2/9/10 in HNSC, we performed GO and KEGG analyses of TMED2/9/10 and their co-expressed genes with Metascape. GO annotation showed that TMED2 was mainly involved in intracellular transferase complexes, protein transport, the Golgi membrane, and protein modification by small protein conjugation (Figure 6A); TMED9 was mainly involved in the endoplasmic reticulum lumen, cell-substrate junctions, and the extracellular matrix (Figure 6B); TMED10 was mainly involved in intracellular protein transport, focal adhesion, and the Golgi membrane (Figure 6C); and the co-expressed genes were mainly involved in the endoplasmic reticulum lumen, coated vesicles, and bone morphogenesis (Figure 6D). KEGG pathway analysis indicated that TMED2 was enriched in regulation of endocytosis, protein processing in the endoplasmic reticulum, and the Yersinia infection pathway (Figure 6E); TMED9 was enriched in focal adhesion and protein processing in the endoplasmic reticulum (Figure 6F); TMED10 was enriched in protein processing in the endoplasmic reticulum and focal adhesion (Figure 6G); and the co-expressed genes were enriched in protein processing in the endoplasmic reticulum, phagosome, pathogenic Escherichia coli infection, and focal adhesion (Figure 6H).
Gene TMED2/9/10 Expression Profiling in HNSC
To distinguish the enrichment and expression levels of TMED2/9/10 in the different cell types of HNSC, a single-cell analysis was conducted with the GRNdb database. The t-SNE plots showed eight cell types based on the HNSC single-cell dataset (Figure 7A). The expression levels of TMED2/9/10 were significantly increased in cancer-associated fibroblasts (CAFs), endothelial cells, and B cells (Figures 7B-D).
To further explore the roles played by CAFs, endothelial cells, and B cells in HNSC, we used TIMER 2.0 to investigate the association of TMED2/9/10 with various immune infiltrates in human cancers. The analysis showed that TMED2/9/10 were positively correlated with the immune infiltration levels of CAFs and endothelial cells in HNSC (Figures 8A-H). However, the multiple immune infiltration analysis showed that TMED2/9/10 were not associated with the immune infiltration level of B cells in HNSC (Supplementary Figure S2). We therefore speculated that TMED2/9/10 might be involved in the immune infiltration process through CAFs and endothelial cells, which play crucial roles in immune-oncology interactions.
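The Methods specify Spearman correlation with p < 0.05 for this TIMER 2.0 step; run locally, the same test is a one-liner in R. The vectors here are assumed stand-ins for sample-matched TCGA-HNSC expression values and TIMER infiltration estimates.

```r
# Spearman correlation between TMED2 expression (log2) and CAF infiltration;
# 'expr_tmed2' and 'caf_score' are assumed sample-matched numeric vectors.
cor.test(expr_tmed2, caf_score, method = "spearman")
```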
Analysis of TMED2/9/10 Through Correlation Heatmap and PPI Network
By constructing a correlation heatmap of the TMED family in HNSC tissues, we found positive correlations among TMED2/9/10. These results contributed to our insight into the prognostic impact of TMED2/9/10 on HNSC (Figure 10A). The PPI network constructed by STRING showed genes with tight interactions with TMED2/9/10. By analyzing the association scores ranked by the MCC method (Supplementary Table S1), we selected the ten highest-scoring hub genes: TMED7, COPI coat complex subunit beta 1 (COPB1), COPI coat complex subunit beta 2 (COPB2), COPI coat complex subunit gamma 2 (COPG2), COPI coat complex subunit gamma 1 (COPG1), coatomer protein subunit alpha (COPA), ARCN1, COPE, TMED3, and COPI coat complex subunit zeta 2 (COPZ2) (Figure 10B). By further exploring these 10 hub genes in immune infiltration using the TCGA-HNSC cohort in TIMER 2.0, we found that TMED7 expression levels showed a statistically significant positive correlation with CAF and endothelial cell infiltration levels, suggesting that the hub gene TMED7 might play a role in the immune regulation of HNSC (Figures 10C-E).
DISCUSSION
Several studies have shown that TMED proteins are involved in the development of malignant tumors. TMED2, as a critical factor in cell proliferation and differentiation, was found to exhibit cell-type-specific roles in cancer (Xiong et al., 2010; Shi-Peng et al., 2017). TMED3 was identified as a new prognostic biomarker because its expression was increased in the high-stage and high-grade cohorts compared to the low-stage and low-grade cohorts in renal cell carcinoma (Ha et al., 2019). Recent studies proposed TMED8 as a methylated gene regulating energy metabolism in neuroblastoma, which means TMED8 could be used as a new target for therapy, drug development, and prediction of survival (Liu and Li, 2021). Also, highly expressed TMED9 significantly affected vascular invasion and poor prognosis in patients with hepatocellular carcinoma (Yang Y.-C. et al., 2021). Besides, it has been confirmed that isolated small peptides derived from the extracellular domain of TMED10 could treat cancers with abnormal TGF-β signaling activity by antagonizing TGF-β signaling (Nakano et al., 2017). However, the role of the TMEDs in HNSC has not been fully elucidated. To better explore the effect of the TMED family in HNSC, we selected TMED2/9/10 for in-depth study. We addressed the importance of TMED2/9/10 in HNSC from the perspectives of their expression in tumor tissues, prognostic value, expression-related genes, GO and KEGG enrichment analysis, single-cell analysis, and immune infiltration analysis, respectively. In this study, we found that the expression levels of TMED1/2/4/5/7/8/9/10 were significantly higher in HNSC tissues than in normal tissues (Figure 1). In addition, we validated the expression levels of the TMED family in primary tumor and normal tissue in UALCAN (Figure 2). These results in UALCAN also showed that the expression levels of TMED1/2/4/5/7/8/9/10 were higher in patients. Not only did the above results in TIMER and UALCAN prove the differential expression of the TMED family members, but many studies have also explained the abnormalities of the TMED family in tumors. It was reported earlier that increased proliferation and invasion of ovarian cancer cells were positively correlated with ectopic expression of TMED2 (Shi-Peng et al., 2017). Because TMED3 was abnormally elevated in tumor samples from prostate cancer patients, it has also been identified as a potential drug target (Vainio et al., 2012). Evidence showed that the up-regulation of TMED5 in cervical cancer cells promoted malignant behavior and nuclear autophagy, affecting the progression of malignant tumors. Interestingly, elevated TMED2 and TMED9 expression levels in breast cancer patients were identified as poor prognostic factors (Lin et al., 2019; Ju et al., 2021). Therefore, the elevation of TMED proteins may significantly contribute to the proliferation and migration of cancer cells, thereby aggravating cancer progression. Furthermore, we performed survival curve analyses with GEPIA 2 and Kaplan-Meier Plotter successively to assess the clinical value of the TMED family. We first performed survival curve analysis of the TMED proteins with the GEPIA 2 database and found that TMED2/9/10 could be used as prognostic markers for HNSC (Figures 3A-C). To confirm this inference, we performed a survival curve analysis in Kaplan-Meier Plotter for TMED2/9/10 (Figures 3D-F). The double-checked results indicated that highly expressed TMED2, TMED9, and TMED10 were associated with a worse prognosis for patients with HNSC.
In addition, we verified the diagnostic value of TMED2/9/10 in HNSC with the receiver operating characteristic curve. The result showed that the AUC values of TMED2/9/10 were greater than 0.5 (Figures 3G-I). Moreover, the combination of TMED2/9/10 yielded a higher AUC value (AUC = 0.847) (Figure 3J). Therefore, significantly elevated expression of TMED2, TMED9, and TMED10 in HNSC patients was considered a reliable diagnostic criterion, and the combination of TMED genes is a potential diagnostic biomarker for the future. To validate the reliability of these results, we compared TMED2/9/10 expression levels between normal tissues and HNSC tissues using the GEO datasets as external validation. TMED2/9/10 were up-regulated in HNSC tissues compared with normal tissues (p < 0.001) (Figures 4A,B). Besides, we used the IHC data from the HPA database to further validate our conclusions. The results indicated that the expression levels of TMED9 and TMED10 were significantly up-regulated in HNSC (Figures 4D,E), while there was no significant difference for TMED2 (Figure 4C). The above results suggested that the TMEDs might contribute to the development of HNSC.
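The paper does not name the software behind its ROC analysis; one common way to reproduce both the single-gene AUCs and the combined three-gene marker (reported as AUC = 0.847) is a logistic score plus the pROC package, sketched below with an assumed data frame `dat`.

```r
# ROC/AUC sketch with pROC; 'dat' is an assumed data frame with a 0/1 tumor
# label and TMED2/9/10 expression columns.
library(pROC)

auc(roc(dat$tumor, dat$TMED2))               # single-gene AUC

# Combined marker: logistic score over the three genes, then ROC on the score.
fit   <- glm(tumor ~ TMED2 + TMED9 + TMED10, data = dat, family = binomial)
score <- predict(fit, type = "response")
auc(roc(dat$tumor, score))                   # cf. AUC = 0.847 in the paper
```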
TMED2/9/10 were significantly associated with critical clinicopathological features such as age, cancer grade, lymphatic metastasis, and cancer stage (Table 2). Thus, these results provide a new perspective on the relationship between clinicopathological features and prognosis. To better understand the function of TMED2/9/10 in HNSC, we first detected the mutation rates of TMED2/9/10 and found that they were 4%, 7%, and 10%, respectively (Figure 5A). Hou et al. found an increased probability of non-alcoholic fatty liver disease in mice with heterozygous mutations in the TMED2 gene (Hou et al., 2017). Therefore, we conjectured that TMED2/9/10 mutations might contribute to tumor development. Although TMED2/9/10 have high mutation rates in HNSC, the relationship between the mutations and the disease remains unclear and deserves further exploration. To better explore the function of TMED2/9/10, we examined genes associated with TMED2/9/10 expression and studied their roles. We identified the 5 genes most closely positively and negatively associated with TMED2/9/10, respectively, and found 52 genes co-expressed with TMED2/9/10 (Figures 5B-E). Afterward, we performed GO and KEGG analyses of the top thousand genes and the co-expressed genes associated with TMED2/9/10 expression. GO enrichment analysis showed that the functions of TMED2/9/10 as well as the co-expressed genes were mainly concentrated in the transferase complex, the endoplasmic reticulum lumen, intracellular protein transport, cell-substrate junctions, focal adhesion, and coated vesicles (Figures 6A-D). KEGG enrichment analysis indicated that TMED2/9/10 and the co-expressed genes were mainly involved in endocytosis, protein processing in the ER, focal adhesion, and phagosome pathways (Figures 6E-H). These expression-related analyses corroborate the functions of TMED2/9/10. It has been demonstrated that during chorioallantoic attachment, TMED2 functions as a critical factor regulating the localization of fibronectin and vascular cell adhesion molecule 1 (VCAM1) (Hou and Jerome-Majewska, 2018). A study found that the cell biological mechanism of misfolded protein cargo entrapment was related to the targeting of TMED9 by the small molecule BRD4780 (Dvela-Levitt et al., 2019). In addition, membrane contact between the ER-Golgi intermediate compartment (ERGIC) and the ER-exit site (ERES) mediated by TMED9 contributes to the formation of autophagosomes. The transmembrane protein TMED10 was recently identified as a protein channel mediating vesicle translocation and secretion of so-called cytosolic leaderless proteins (cytosolic proteins lacking a signal peptide) (Nguyen and Debnath, 2020; Zhang et al., 2020). TMED3, as an intracellular transporter, was knocked down to induce abnormalities in apoptosis-related proteins in lung squamous cell carcinoma (LUSC) cells. At the same time, TMED3 knockdown was involved in the regulation of LUSC cell function, for example, inhibition of proliferation, reduction of colony formation, induction of apoptosis, and reduction of migration (Xie et al., 2021). These results suggest that TMED2/9/10 may cause the development or deterioration of HNSC by regulating vesicle trafficking or strengthening endocytosis.
In the single-cell analysis, we first distinguished the different cell types of the head and neck cancer ecosystem (Figure 7A). Interestingly, we found significantly higher expression levels of TMED2/9/10 in CAFs, endothelial cells, and B cells (Figures 7B-D). The results of the single-cell analysis of TMED2/9/10 implied a relationship with specific immune responses. Recently, it has been shown that TMED2 overexpression was negatively correlated with CD8+ T cell levels in HNSC, suggesting that TMED2 might initiate tumor development by altering the levels of immune infiltration in the tumor microenvironment (Sial et al., 2021). Also, Sun et al. found that TMED2 is required for cellular interferon (IFN) responses to viral DNA. MITA (mediator of IRF3 activation, also known as STING) has a vital role in the innate immune response to cytoplasmic viral dsDNA. Interestingly, TMED2 could bind to MITA, stabilize dimerization of MITA, and promote MITA translocation from the ribosome to the ER and the Golgi after viral infection. Moreover, the knockdown of TMED10 did not disrupt TMED2-mediated immune responses (Sun et al., 2018). Therefore, we investigated whether TMED2/9/10 expression correlated with immune infiltration levels in HNSC. Our findings suggested a strong positive relationship between TMED2/9/10 expression levels and the infiltration levels of CAFs and endothelial cells (Figure 8), while TMED2/9/10 were not associated with the infiltration level of B cells (Supplementary Figure S2). According to previous studies, HNSC stroma is rich in infiltrating CAFs, with the highest concentrations accumulating near the invasive front of the tumor (Markwell and Weed, 2015). The adaptability of HNSC-CAFs with myofibroblast characteristics leads to the spread of extracapsular tumor cells, increased invasion, and lymph node metastasis (Marsh et al., 2011). At the same time, endothelial cells can vascularize the growing tumor mass and promote tumor cell invasion (Markwell and Weed, 2015). It has been found that, after direct contact between endothelial cells and HNSC cells, the Notch ligand Jagged1 induced by mitogen-activated protein kinase (MAPK) in cancer cells activates the Notch signaling pathway in adjacent endothelial cells, ultimately promoting capillary formation (Zeng et al., 2005). In short, microenvironmental rearrangements mediated by CAFs and endothelial cells have both direct and indirect effects on HNSC invasion. The high expression of TMED2/9/10 in these cells supports the vital role of the TMED family in immunity.
The transcription factor-gene network showed the components closely related to TMED2/9/10 and HNSC (Figure 9). Among them, CREB3 was associated with the overall survival of HNSC patients and could be used as a prognostic biomarker for HNSC (Bornstein et al., 2016). Interestingly, the contribution of X-box binding protein 1 (XBP1) to cancer provides new insights for this study. Abnormal accumulation of misfolded proteins in the endoplasmic reticulum (ER) leads to ER stress. A compensatory mechanism called the unfolded protein response (UPR) is activated by cells responding to ER stress (Shajahan et al., 2009). XBP1 is an essential component of the UPR signaling pathway. XBP1 maintains proteostasis by stimulating the expression of chaperones and protein degradation machinery in the ER (Zhong et al., 2021). However, abnormal activity of XBP1 affects normal cell proliferation, apoptosis, and metastasis, and ultimately tumorigenesis and tumor progression (Shi et al., 2019). Therefore, precise treatment targeting XBP1 may become a therapeutic direction for HNSC in the future.
We identified a positive correlation between TMED2, TMED9, and TMED10 (Figure 10A). Interestingly, genetic and biochemical experiments have shown that the stability of TMED proteins can be regulated by other family proteins: knockout or deletion of one TMED protein leads to reduced or absent expression of TMED proteins from different subfamilies. For example, when interrogating changes in the liver of mice heterozygous for a null mutation of TMED10, Denzel et al. (2000) found that the deletion of TMED10 not only resulted in developmental arrest before blastocyst formation but also decreased the expression of the TMED9 and TMED3 proteins that interact with it. It is therefore evident that a complex network regulates the function of TMED2/9/10, and to better combat HNSC we should take advantage of this potential network. Using PPI network analysis, we identified the hub gene TMED7, which was significantly associated with TMED2/9/10 (Figure 10B). Similarly, we found strong positive correlations between the infiltration levels of CAFs and endothelial cells and TMED7 expression in HNSC (Figures 10C-E). In particular, TMED7 can inhibit the Toll-like receptor 4 (TLR4) signaling pathway (Doyle et al., 2012). TLRs are important factors in the immune response, which can recognize invading pathogens and activate inflammatory responses. A previous study showed that TLR4 is aberrantly expressed in cancer cells, affecting the tumor microenvironment. To our surprise, there was evidence indicating that high expression of TLR4 was associated with poor prognosis in HNSC (Hu et al., 2021). Therefore, we hypothesized that activation of TMED7 could improve the prognosis of HNSC patients. In Supplementary Figure S1, we assessed the effect of the expression level of TMED7 on the prognosis of HNSC using the GEPIA 2 database. Although this result is not statistically significant, the trend of the survival curve is compatible with our inference. These pieces of evidence demonstrate that the hub gene TMED7, identified on the basis of TMED2/9/10, could alter HNSC prognosis through immune infiltration. This reminds us that TMED2/9/10, as well as related genes, can be used as biological targets in HNSC.
Taken together, our results suggested that TMED2, TMED9, and TMED10 were significantly up-regulated in HNSC patients and that their upregulation was inversely correlated with HNSC prognosis. At the same time, we validated these conclusions using the GEO datasets and the HPA database. Then, we used GO and KEGG enrichment analyses to elaborate in depth on the functions of TMED2/9/10 and their co-expressed genes. In addition, the results of the single-cell analysis and the immune infiltration analysis also revealed that TMED2/9/10 affect the development of HNSC through immune cells. The hub gene TMED7 and the transcription factor XBP1 are also expected to be potential prognostic markers and therapeutic targets for HNSC. We can therefore infer that the transcription factor XBP1 might regulate the expression of TMED2/9/10, disturb their functions, and boost immune cell infiltration, thereby promoting abnormal invasion of cancer cells and leading to poor prognosis of HNSC.
Regrettably, this study had some limitations. First, our sample data were confined to the TCGA and GEO databases, and further HNSC cohorts should be recruited in the future to confirm the results. Second, further experimental studies are required to validate the functions of TMED2/9/10 at the cellular level. Finally, we still need to explore the mechanisms by which TMED2/9/10 affect the prognosis of HNSC patients to provide more possibilities for clinical treatment.
CONCLUSION
In conclusion, in this study, TMED2/9/10 and related genes emerged as potential prognostic biomarkers, and the intersection of their functions helps researchers understand the pathogenesis of HNSC and provides a new approach for the treatment and prognosis of HNSC. At the same time, we analyzed the potential clinical value of the TMED family in the pathogenesis and development of HNSC and its associated oncogenic signaling pathways, providing clues for multi-target, TMED2/9/10-mediated targeted therapy. Finally, our in-depth exploration of TMED2/9/10 functions and immune infiltration allowed us to better understand the specifically expressed genes in HNSC patients, facilitating prediction of the survival of HNSC patients from the related genes. The above results support targeting TMED2/9/10 as a new strategy for diagnosing and treating HNSC. However, the value of this conclusion for the prognosis of HNSC patients still needs further validation.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author.
AUTHOR CONTRIBUTIONS
WG, Z-WZ, and H-YW contributed to the analysis design, performed the literature search and wrote the manuscript. X-DL was responsible for the manuscript structure and English grammar. W-TP, H-YG, and Y-XL contributed to picture integration and completed the data analysis. AL gave final approval of the article to be published. All authors contributed to manuscript revision, read, and approved the submitted version.
Artificial Momentum, Native Contrarian, and Transparency in China
The Chinese stock market has a large proportion of retail investors, which makes it significantly different from the stock markets in the US and Europe. It is known that momentum profits exist in the latter markets from applying Jegadeesh and Titman's (J Financ 48:65–91, 1993) model with 6-month formation and holding periods. However, there are only a few studies of momentum profits in China. Therefore, this study examines whether the Shanghai and Shenzhen stock markets produce momentum profits. We find that these two markets have significant contrarian, but not momentum, profits. We also create an "artificial momentum" portfolio and follow Bhattacharya et al. (Account Rev 78:641–678, 2003) to compute transparency indices. Our results show that the corporate transparencies of the winners (losers) in the artificial momentum portfolios are close to those in the commonly defined momentum portfolios. The averages of the decile transparencies are between 4.5 and 6.5, not only for the top 10% of winners but also for the bottom 10% of losers. According to these results, we suggest that financial transparency is irrelevant to the inertia and reversal of stock prices in the Shanghai and Shenzhen stock markets.
Introduction
Chinese stock markets differ from the US and European markets in several ways. For example, China's stock markets allow backdoor listings, and the main participants in these markets are retail investors instead of investment banks. Intuitively, this unique market structure may be characterized by different phenomena and investment strategies. Institutional investors have more inside information and the ability to understand corporations' operating statements; therefore, their investment strategies are stable, as observed in US stock markets. On the contrary, retail investors usually do not have enough knowledge to gain a good understanding of corporate information. Besides, according to Kang et al. (2002), the information reported by small companies in China's stock markets is not reliable. Rumors and investor sentiment can easily be manipulated by syndicate speculators. Su (2011) even pointed out that market manipulation is pivotal in explaining the industry momentum profits in Chinese stock markets. Rumors significantly affect the volatility of China's stock prices, and transparency is a serious problem in China's financial markets. Therefore, an inertia strategy is not suitable for investors in China's stock markets.
Previous research has found that the US stock markets respond to information gradually. Chan et al. (1996) pointed out that the prices of stocks with the worst past performance respond sluggishly to past news. Hong et al. (2000) identified that (a) there is a concave relationship between momentum profit and firm size; (b) a momentum strategy is more profitable when firm-specific information moves more slowly; and (c) the effect of analyst coverage is larger on past losers than winners.
Recently, many listed companies in China have abandoned traditional business models and adopted modern accounting systems, using earnings management to manipulate financial statements and corporate transparency, which draws our attention to the impact of corporate transparency on stock prices in China. We use three indices of earnings management in the transparency model developed by Bhattacharya et al. (2003) to measure the transparency of a listed company. We suspect that the uncertainty caused by low transparency leads to a reversal of stock prices, since true news ferments and takes effect slowly. Before investors have access to accurate information, they can only refer to rumors; their portfolios may also be affected by sentiment (Kang et al. 2002).
Manipulations and speculations exist in the Shenzhen and Shanghai stock markets, as is well known by most local investors. Spreading rumors and using insider information generate excess returns and directly trigger fluctuations in stock prices (Kang et al. 2002). The exposure of true information may give rise to a reversion in stock prices and make momentum profits disappear. Speculation games are staged every trading day. Although it is important to know whether contrarian profits dominate in China's stock markets because of the globalization trend of international capital, only a few studies on momentum profits include samples from the Asia-Pacific (e.g., Fama and French 2012; Griffin et al. 2003; Chui et al. 2010).
In 1992, the Chinese government grouped stocks into two categories, A-shares and B-shares, to reduce shocks from foreign capital markets to its developing finance systems. The HKD must be used to trade B shares on the Shenzhen Stock Exchange, and the USD on the Shanghai Stock Exchange. Because China's forex market is still controlled by the government, there is a liquidity problem with B shares. Since more and more global capital flows into China's stock markets and liquidity is always an important research topic, we must survey the difference in liquidity between A-shares and B-shares and find the special patterns, if any, in the Shenzhen and Shanghai stock markets. Fama and French (2012) and Asness et al. (2013) analyzed the relationships between value (book-to-market) and momentum strategy across different markets. Asness et al. (2013) used data on country equity index futures, government bonds, currencies, commodity futures, and individual stocks in the USA, UK, continental Europe, and Japan, but, unfortunately, their research did not provide a detailed discussion on China's stock market. Consequently, we have an incentive to investigate the strategies used in this large market. Chordia and Shivakumar (2002) showed that momentum profits, explained by a set of lagged macroeconomic factors, disappear once stock returns are adjusted for their predictability based on these macroeconomic factors. Returns to momentum portfolios are positive only during expansion periods and become negative during recession periods. Cooper et al. (2004) examined overreaction theories in the cross section of stock returns and found a correlation between momentum profits and market status. One of their conclusions was that the 6-month momentum strategy generates an average monthly profit of 0.93% after 3-year UP markets and an average monthly profit of −0.37% after 3-year DOWN markets. They also found that macroeconomic elements cannot explain this profitability even with methodological adjustments to accommodate microstructure concerns. Instead, they believe that the business cycle is one factor that affects momentum profitability.
The literature discussed in this paper focuses on determining which macroeconomic factors influence momentum profits. However, the revenue continuity (predictability) could have many connections with the individual corporation's features. Moskowitz and Grinblatt (1999) showed that industry momentum investment strategies that buy stocks from past winning industries and sell stocks from past losing industries appear to be highly profitable, even after controlling for size, book-to-market equity, individual stock momentum, the cross-sectional dispersion in mean returns, and potential microstructure influences. If an individually observable variable that significantly affects momentum strategy exists, then we can use this proxy to assess and forecast the continuity of profit of a single corporation. The transparency index designed by Bhattacharya et al. (2003) is an idiosyncratic and computable variable after financial announcements. We believe that an ex-ante proxy will make it easy for us to evaluate the predictability (continuity) of profit.
Asian economies have rapidly developed in the past ten years. Although the sizes of China's stock markets cannot compare with those of the US stock markets, we cannot ignore China's stock markets in global finance. While there are abundant studies on momentum strategies in various developed countries, little attention has been paid to the momentum strategies in Chinese stock markets. One of the reasons is that the data on corporations in China contain biases and errors. Another reason is that information about corporations is incomplete. However, the data have been progressively corrected, and the published stock prices are accurate. Therefore, we investigate the Shanghai and Shenzhen Stock Exchanges by analyzing the published stock prices. There are four distinctive features of our research. First, we use a technology that differs from that used in the literature to detect the coexistence of contrarian and momentum profits in stock markets, given the same holding and formation periods. Second, we combine Shanghai B-shares with Shenzhen B-shares to discuss the momentum profits of non-RMB investors, which is a novel design as yet not seen in past studies (e.g., Jiang 2012; Wu 2011; Kang et al. 2002; Chui et al. 2010; Su 2011). This design also renders the analysis of momentum profits in China's stock markets more complete and robust. Third, the maximum length of our formation and holding periods is 36 months for nine different subsample markets. Finally, we examine corporate transparency to expose the existence of false financial reports, which corresponds to the study of false news and rumors in Kang et al. (2002).
Under the traditional method in Jegadeesh and Titman (1993), the momentum return could be either "positive" or "negative". In practice, a retail investor can only have a subset of the portfolio identified by using the traditional momentum strategy. This portfolio could bring a return to the investor that differs from the outcome calculated under the traditional definition. Our two-step method allows us to detect these two types of returns simultaneously. In the first step, we will find a "positive" or "negative" momentum return under the traditional definition. In the second step, we will find the other type of return not found in the first step. These two steps enable us to study momentum and contrarian returns in any stock market simultaneously.
The remainder of this paper is organized as follows: Section 2 describes our research logic and methodology for studying momentum profits; there, we also construct an "artificial momentum profit" and compute transparency indices. Section 3 presents the momentum profits in different types of Chinese stock markets and provides possible explanations for investor behavior in the Shanghai and Shenzhen stock markets. Section 4 discusses the role of transparency in China's stock markets and provides empirical evidence. Section 5 concludes.
Models and Data Descriptions
The momentum effect is often calculated in the literature by assuming a 6-month formation period and a 6-month holding period. Many studies find that momentum profits turn from positive to negative when the holding period is extended (e.g., Hillert et al. 2014;Chui et al. 2010;Cooper et al. 2004;Griffin et al. 2003;Chan et al. 1996;Jegadeesh and Titman 1993). To test whether China's stock markets also exhibit the phenomenon of profit reversal, we conduct our analysis by assuming that the holding period ranges from 1 to 36 months. Barucci et al. (2004) argued that the past performance of a stock (short memory) in the forecasting rule (learning process) induces long-range dependencies in the time series. If agents change their prediction rules according to the past performance of the stock, then there is a fluctuation in the financial market (overreaction or delayed overreaction).
Momentum Computation
We take this overreaction into consideration to construct our standardized model as follows.
Suppose that a sample period has Q months, where $Q \ge f + l + H$, f is the number of months in the formation period, l is the number of months in the lagged period, and H is the number of months in the longest holding period. We focus on the stocks whose data are complete during the (f + l + h)-month period, where h is the number of months in the holding period. During the f-month formation period of portfolio i, we use the yield

$$P_{i,t} = \frac{P_f - P_1}{P_1}$$

(where $P_f$ and $P_1$ are the average stock prices in the f-th month and the first month of the formation period, respectively) to measure the stock return performance. Then, we rank $P_{i,t}$ and choose the top $n_{i,t} \cdot k$ and bottom $n_{i,t} \cdot k$ stocks as the winners and losers, respectively, where $n_{i,t}$ is the number of ranked stocks and k is the momentum ratio.
During the l-month time lag, we assume that investors buy the $n_{i,t} \cdot k$ winners and sell the $n_{i,t} \cdot k$ losers to form Portfolio i. We assume that investors hold Portfolio i for h months ($h \le H$, where H is the longest holding period). After the investors close out Portfolio i in the h-th month, the momentum profit of Portfolio i is

$$MP_i^h = W_i^h - L_i^h,$$

where $W_i^h$ denotes the average return of the winners in Portfolio i during the h-month holding period and $L_i^h$ denotes the average return of the losers in Portfolio i during the same period. As discussed above, we have $Q - f - l - h + 1$ portfolios, whose combined span amounts to Q months, the total length of our sample period. Hence we can compute the average momentum profit $MP_h$ for each holding period h from the portfolio profit set. The process is illustrated in Fig. 1.
Fig. 1 Computational process of momentum profit
Therefore, we obtain the momentum profit set $MP = \{MP_1, MP_2, \ldots, MP_H\}$ and graph it as in Fig. 1, in which the horizontal axis is the holding period h.
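The computation above is straightforward to prototype. The following Python sketch, written against a hypothetical (months × stocks) price matrix, illustrates the construction of the Q − f − l − h + 1 rolling portfolios and the averaging step; the decile cut (k = 0.1) and equal weighting follow the data description in Section 2.3. It is an illustration of the procedure, not the code used in this study.

```python
# A minimal sketch of the momentum-profit computation described above, using
# a hypothetical (months x stocks) price matrix. Illustration only.
import numpy as np

def average_momentum_profit(prices, f, l, h, k=0.1):
    """Mean momentum profit over the Q - f - l - h + 1 rolling windows."""
    Q, n_stocks = prices.shape
    n_cut = max(int(n_stocks * k), 1)
    profits = []
    for t in range(Q - f - l - h + 1):
        # formation-period yield P_{i,t} = (P_f - P_1) / P_1
        p1, pf = prices[t], prices[t + f - 1]
        rank = np.argsort((pf - p1) / p1)
        losers, winners = rank[:n_cut], rank[-n_cut:]
        # buy winners / sell losers after the l-month lag, hold for h months
        p_in = prices[t + f + l - 1]
        p_out = prices[t + f + l - 1 + h]
        ret = (p_out - p_in) / p_in
        profits.append(ret[winners].mean() - ret[losers].mean())
    return float(np.mean(profits))
```

Sweeping f and h from 1 to 36 and testing each window mean against zero with a t-statistic reproduces the grid of tests reported below.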
Artificial Winners and Losers
We propose a general analysis of momentum profit. If researchers follow the original definition of momentum profit in previous studies (e.g., Jegadeesh and Titman 1993; Moskowitz and Grinblatt 1999; Lee and Swaminathan 2000; Chordia and Shivakumar 2002; Cooper et al. 2004; Avramov et al. 2007; Asness et al. 2013; Garlappi and Yan 2011; Hillert et al. 2014), they will find that not all winner prices rise continuously during the holding period in China's stock markets. In addition, the prices of many losers increase during the holding period. Therefore, we select the winner shares whose prices do rise and call them "artificial winners", and the loser shares whose prices do decline in the holding periods and call them "artificial losers". These winner and loser shares produce an "artificial momentum profit".
By adopting the definitions of Jegadeesh and Titman (1993), we obtain only a contrarian profit when analyzing the Shenzhen and Shanghai data. We refer to such a contrarian profit as the "native contrarian profit". That is, with our dataset alone we could study only the relationship between the contrarian profit and transparency, and not the correlation between the momentum profit and transparency. With both the native contrarian profit and the artificial momentum profit, we can investigate not only price reversal but also price inertia.
Our research compares the transparency indices between the momentum and contrarian strategies. Our conjecture is that a highly transparent share should have relatively complete information, which increases the continuity of its price adjustment (either continually rising or falling). We also suspect that the diffusion of true information lowers the probability of price reversal and thus promotes the occurrence of momentum profit.
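The artificial-portfolio selection can be sketched as a filter over the native portfolios. The function below assumes the holding-period returns and index sets produced by a routine like the earlier sketch; it illustrates the definition, not the authors' implementation.

```python
# A sketch of the artificial-portfolio selection described above.
import numpy as np

def artificial_portfolios(holding_returns, winners, losers):
    """Keep winners whose prices kept rising and losers that kept falling."""
    art_winners = [i for i in winners if holding_returns[i] > 0]
    art_losers = [i for i in losers if holding_returns[i] < 0]
    if not art_winners or not art_losers:
        return art_winners, art_losers, None  # no price inertia this window
    art_profit = (np.mean(holding_returns[art_winners])
                  - np.mean(holding_returns[art_losers]))
    return art_winners, art_losers, art_profit
```

Windows in which one of the artificial sets is empty are exactly the periods without price inertia discussed in the robustness check below.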
Subsample Combinations
We follow Lin et al. (2015) to analyze nine subsamples: (a) Shanghai A-shares, (b) Shenzhen A-shares, (c) Shanghai B-shares, (d) Shenzhen B-shares, (e) a combination of both the Shanghai and Shenzhen stock markets, (f) a combination of A-shares from the Shanghai and Shenzhen stock markets, (g) a combination of B-shares from the Shanghai and Shenzhen stock markets, (h) the Shanghai stock market, and (i) the Shenzhen stock market. We calculate the profits of momentum strategies in each of these nine subsamples and examine whether there is evidence of negative momentum profits.
In general, a stock market uses only one local currency. However, the traders have to use foreign currency in the Shenzhen and Shanghai B-share markets. To study "non-RMB-B-share" and RMB-A-share markets in China and compare the findings with the findings in the literature on momentum strategy, we test whether there are momentum profits in Shanghai B-shares, Shenzhen B-shares and the combination of B-shares in the Shanghai and Shenzhen stock markets.
Data Description
Our Shanghai and Shenzhen samples are collected from COMPUSTAT and the China Stock Markets and Accounting Research database (CSMAR). The data period extends from January 2004 to April 2015. We rank the stocks in deciles according to their return rates during the formation period. The winner shares are the top 10% of shares, and the loser shares the bottom 10%. We calculate the momentum profit at the end of the holding period, weighting each stock equally. Following Naughton et al. (2008), we adopt the raw return in this calculation.
The data used to calculate transparency are all from CSMAR and the China Infobank database. The data are on a monthly basis, and the data period runs from December 2006 to March 2012. It covers both A-share and B-share stocks in China's stock markets. We drop the stocks whose transparency data are incomplete. In the calculation of the transparency index, the lengths of the holding period and the formation period are both 6 months. Cooper et al. (2004) excluded stocks below $1 at the end of the formation period, but we do not: since only the Shanghai B-share market trades in USD, there is no standard RMB price threshold comparable to that of the A-share market. Our tests differ from most other research because they cover formation periods of 1 to 36 months, which increases the accuracy of our results. We use graphs to show the momentum profits over 36 formation periods and 36 holding periods. In line with the literature on momentum profits, we adopt t tests to analyze momentum profits (Figs. 2, 3). Given a holding period and a formation period, we have Q − f − l − h + 1 portfolios (windows), each with a mean momentum profit; the sample size is the number of portfolios. We use a t-statistic to test whether the mean profit differs from zero. There is a negative momentum profit in the Shanghai and Shenzhen stock markets, and the t statistics are almost all below −2. Therefore, we have significant evidence that these two markets exhibit price reversals. From Figs. 4 and 5, we infer that the A-share market exhibits contrarian profits. Given the formation period, the profit decreases as the holding period is extended. In addition, the closer the length of the holding period is to 36 months, the smaller the profits.
c. Shanghai B-shares
Except for shorter holding periods, most momentum strategies yield a negative profit. These outcomes are similar to those of the above tests. The maximum B-share contrarian profit occurs near f = 10 and is smaller than that of A-shares in the Shanghai stock market.
Jiang (2012) collected weekly stock returns from January 1993 to December 2013 and found an insignificant momentum effect for Shanghai A-shares in the very short run. Using daily data for the Chinese stock market from 1990 to 2001, Wu (2011) also found that momentum profits are weak. Referring to Jiang and Bao (2015), we find that monthly momentum profits without risk management are not positive in the Shanghai stock market (see Figs. 2 and 6). On the basis of the above literature and our own findings, we conclude that positive momentum profit in this market is not evident (Fig. 7).
d. Shenzhen B-shares
In Fig. 8, a few momentum profits are positive just at f = 1 (a 1-month formation period). Their t-values are greater than 2 in Fig. 9, and there exists a significant momentum profit if h < 30. Most contrarian strategies in this setting yield a substantial profit. Therefore, we can also roughly infer that price reversal occurs in the Shenzhen B-share market.
We know that the winner and loser shares under f = 1 are different from those in cases under f ≥ 2. A common phenomenon in both B-share markets is that there exist momentum profits with a one-month formation period. The market structure may cause these profits. In fact, there are more institutional investors in the B-share market than in the A-share market. These investors are usually investment banks or market makers. They help many companies issue new shares, have access to insider information, and have a better ability to assess rumors.
e. A fully combined market
We have already discussed the contrarian profits in the Shanghai and Shenzhen stock markets. Our tests have also found that the combined Shanghai-Shenzhen stock market reveals the existence of a significant and stable contrarian profit. 5 Naughton et al. (2008) found evidence of a strong momentum profit around the time of earnings announcements in the Shanghai stock market, which is different from our contrarian outcome. One of the reasons may be that their sample period is different from ours. The compositions of stocks will thus differ in different sample periods. Additionally, we do not specifically consider the timing of earnings announcements when calculating the momentum profits. Figures 10 and 11 suggest that under the 6-month formation period, most winner shares saw their prices decline and most loser shares saw their prices rise in the holding period in the overall China stock market. The overall China stock market exhibits a contrarian return, while the US stock market shows a momentum return. In order to verify our conclusion, we have adopted different formation periods to determine the winner and loser shares. The tests all confirm the contrarian returns. To sum up, the contrarian profits are robust under 36 × 36 formation and holding periods (see all of the figures in this section).
Transparency of Artificial Momentum and Native Contrarian
According to the above analysis, China's stock markets clearly favor the contrarian strategy, which is significantly different from the momentum strategies that work in the US and European stock markets. Knowing the reason for this difference is important, and we suspect that it lies in the differing degrees of transparency across these markets. In general, corporate financial reports in developed regions have higher transparency, and transparency and momentum profits in the US and European stock markets have been found to be positively correlated. This section answers the question of whether the lower transparency of Chinese corporations affects the contrarian profits.
Distribution of Winners and Losers
According to the previous analysis, a price reversal occurs at h = 6, which means that price inertia is broken by investors' updated expectations after the formation period. In period $t_i + f + l$, the winners and losers are determined. A price reversal means that the same winners incur losses and the same losers earn profits in period $t_i + f + l + h$. That is, most of the shares chosen for the original portfolios change their price trend during the holding period h. We now focus on the shares in the original portfolios to see whether they maintain their price trends; those that do are selected to form an artificial momentum portfolio. Table 1 shows the numbers of winners and losers in the native contrarian portfolios and the artificial momentum portfolios.
To compare the degrees of transparency between the contrarian portfolios and momentum portfolios, we require that transparency indices be computed for every share in this table. In other words, the shares that have missing data are not included in this table. There are 84 portfolios in our sample.
Methodology of Transparency
From the above outcomes, we are confident that the Shenzhen-Shanghai stock market exhibits contrarian profits, which differs from the situation in the US and European stock markets. Next, we try to discover the causes of the overreaction in the Shanghai stock market.
We follow Bhattacharya et al. (2003) to calculate the transparency of every listed corporation and use just two indices, Earnings Aggressiveness (EA) and Earnings Smoothing (ES), to measure their transparency. We compute the means of these indices and distribute them in deciles.
The earnings aggressiveness index is based on discretionary accruals,

$$DA_{it} = \frac{TA_{it}}{A_{it}} - NDA_{it},$$

where $DA_{it}$ is the discretionary accruals; $TA_{it}$ is the total accrued profit before below-the-line items, $TA_{it} = EBXI_{it} - CFO_{it}$; $EBXI_{it}$ is the earnings before interest and tax; $CFO_{it}$ is the cash flow from operations; $A_{it}$ is the average of beginning and ending total assets; $NDA_{it}$ is the non-discretionary accruals stemming from the modified Jones model; $AC_{it}$ is the operating increase in accrued items; and $CF_{it}$ is the operating net increase in cash flow (the latter two enter the earnings smoothing index as the correlation between changes in accruals and changes in cash flow). The modified Jones model is

$$NDA_{it} = \alpha_1 \frac{1}{A_{it-1}} + \alpha_2 \frac{\Delta REV_{it} - \Delta REC_{it}}{A_{it-1}} + \alpha_3 \frac{PPE_{it}}{A_{it-1}},$$

where the $\alpha$ coefficients are obtained from a regression of $TA_{it}/A_{it-1}$ on the same regressors; $A_{it-1}$ is the total assets of companies in period $t-1$; $\Delta REV_{it}$ is the change in revenues between periods t and t−1; $\Delta REC_{it}$ is the change in net receivables between periods t and t−1; and $PPE_{it}$ is the gross property, plant and equipment in period t (see Dechow et al. 1995).

Zero transparency occurs in two periods, as shown in Fig. 12, because all stock prices of the native contrarian losers exhibit a reversal at h = 6. The paths in Fig. 12 are similar. The logic underlying the analysis of the losers' transparency is the same as for the winners' transparency. The transparency paths of the artificial momentum losers and the native contrarian losers are close to each other, as shown in Fig. 13.
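The discretionary-accruals step can be sketched as follows. The function assumes firm-year panels already assembled as NumPy arrays (the variable names are hypothetical) and estimates the α coefficients by OLS as in Dechow et al. (1995); it is an illustration of the formulas above, not the authors' code.

```python
# Sketch of the modified Jones model underlying the EA transparency index.
import numpy as np

def discretionary_accruals(TA, A_lag, dREV, dREC, PPE):
    """DA_it = TA_it / A_it-1 - NDA_it, with alphas fitted by OLS."""
    # regressors of the Jones model, all scaled by lagged total assets
    X = np.column_stack([1.0 / A_lag, dREV / A_lag, PPE / A_lag])
    y = TA / A_lag
    alpha, *_ = np.linalg.lstsq(X, y, rcond=None)  # fitted alpha1..alpha3
    # modified Jones: subtract the change in receivables from dREV
    NDA = (alpha[0] / A_lag
           + alpha[1] * (dREV - dREC) / A_lag
           + alpha[2] * PPE / A_lag)
    return y - NDA
```

Ranking the resulting values (and the smoothing correlations) in deciles then yields the transparency scores compared across portfolios below.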
We also tested the Shanghai market, the Shenzhen market, the combination of Ashares for Shanghai-Shenzhen, the combination of B-shares for Shanghai-Shenzhen, the Shanghai A-share market, the Shanghai B-share market, the Shenzhen A-share market and the Shenzhen B-share market. Their results are not different from those for the overall market. To simplify this paper, we show them in "Appendix B".
The degrees of transparency of the winners are higher than those of the losers. In addition, the transparency levels of the contrarian and momentum portfolios are similar to each other and close to the average, so differences in transparency cannot explain the significant contrarian profits in China.
Extremely high or extremely low transparency does not occur in any of the 84 momentum or contrarian portfolios, and most of the transparency indices are close to the mean value of 5.5. These outputs imply that a share with extreme transparency has a small probability of price reversal or inertia.
Robustness Check
Other tests not reported here (Shanghai A-shares, Shanghai B-shares, Shenzhen A-shares and Shenzhen B-shares) have many periods without artificial momentum winners or losers. If the market has no artificial momentum winner or momentum loser, then we infer that there is no continuity in the price trends, which means that the contrarian pattern in China is persistent (see "Appendix B").
Conclusions
Our research shows that negative momentum profits exist in the Chinese stock market. To confirm this result, we ran tests on nine combinations of datasets: (a) Shanghai A-shares, (b) Shenzhen A-shares, (c) Shanghai B-shares, (d) Shenzhen B-shares, (e) a combination of both the Shanghai and Shenzhen stock markets, (f) a combination of A-shares in the Shanghai and Shenzhen stock markets, (g) a combination of B-shares in the Shanghai and Shenzhen stock markets, (h) the Shanghai stock market, and (i) the Shenzhen stock market. These tests have consistent outcomes and provide evidence that the world's second-largest stock market exhibits significant price overreaction. Momentum profits in the Shanghai and Shenzhen markets are almost all negative, which is significantly different from the results of past studies on the US and European markets.
In the last twenty years, China's economy has grown significantly. However, its institutions lack mechanisms to expose the true financial status of listed companies, and companies have little incentive to disclose their real statements because financial regulations are incomplete. This study reveals an insignificant correlation between the reversal of stock prices and financial transparency. In cases of both artificial inertia and native reversal, the listed corporations in China do not have complete transparency.
Our outcome regarding transparency echoes the results of Kang et al. (2002). Stock price overreactions are due to the dominance of individual investors. These retail investors lack credible information about listed companies and rely on market rumors. The "cooked" financial statements have little effect on the trends in stock prices in the Shanghai and Shenzhen markets.
"Backdoor listings" in China's stock markets are frequent. Mergers and acquisitions can cause the prices of losers to rise. Retail investors "vote with their feet" to settle the position of winner shares and cause their prices to go down. These could be the reasons for the price reversal and are left to future research.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Appendix A

f. A combination of A-shares in the Shanghai and Shenzhen stock markets
We merge A-shares in the Shanghai and Shenzhen stock markets into a subsample. The momentum profits of the overall A-shares are computed and compared with those of the overall B-shares in the next section. Increasing the length of the formation period makes the contrarian profit more evident in a given holding period. With a longer period of observation, more information can be extracted, and the contrarian profits are enlarged. However, the A-shares of the Shanghai and Shenzhen markets still have price reversals even under the longer formation period (Figs. 14, 15).
g. A combination of B-shares in the Shanghai and Shenzhen stock markets
We have already observed that the B-shares in the Shenzhen and Shanghai stock markets individually exhibit contrarian profits. When the B-shares in these two markets are combined, they still exhibit the contrarian property (Figs. 16, 17).
According to Fama and French (2012) and Griffin et al. (2003), the main non-RMB markets in North America and Europe have momentum profits. However, our research finds that Shanghai B-shares and Shenzhen B-shares individually yield contrarian profits. To test the robustness of the contrarian result and to compare these markets with the other main markets, we combined these non-RMB B-share markets and still obtained a contrarian outcome.

h. Shanghai stock market

Figure 18 shows that a negative momentum profit exists significantly between the 2nd and 36th formation periods. The momentum profit in the first period (f = 1) in the Shanghai stock market does not violate the trend over the extended formation period. Figure 18 tells us that the Shanghai stock market exhibits momentum at the beginning of the holding period and contrarian profits once the holding period exceeds 2 months (Fig. 19).
i. Shenzhen stock market

According to Fig. 20, the Shenzhen stock market has a significant contrarian profit with a shorter formation period. The phenomenon of price reversal still shows that strategies that short the quintile winners and long the quintile losers give rise to profits (Fig. 21).
Short stature, platyspondyly, hip dysplasia, and retinal detachment: an atypical type II collagenopathy caused by a novel mutation in the C-propeptide region of COL2A1: a case report
Background Heterozygous mutations in COL2A1 create a spectrum of clinical entities called type II collagenopathies that range from lethal in utero to relatively mild conditions that become apparent only in adulthood. We aimed to characterize the clinical, radiological, and molecular features of a family with an atypical type II collagenopathy. Case presentation A family with three affected males in three generations is described. Prominent clinical findings included short stature with platyspondyly, flat midface and Pierre Robin sequence, severe dysplasia of the proximal femora, and severe retinopathy that could lead to blindness. By whole-exome sequencing, a novel heterozygous deletion, c.4161_4165del, in COL2A1 was identified. The phenotype is atypical for those described for mutations in the C-propeptide region of COL2A1. Conclusions We have described an atypical type II collagenopathy caused by a novel out-of-frame deletion in the C-propeptide region of COL2A1. Of all the reported truncating mutations in the C-propeptide region that result in short-stature type II collagenopathies, this mutation is the farthest from the C-terminus of COL2A1.
Background
Disorders caused by COL2A1 mutations are collectively called type II collagenopathies. Missense mutations or in-frame rearrangements in the triple-helical region cause phenotypes on the spondyloepiphyseal dysplasia (SED) spectrum, from lethal SED, including achondrogenesis type II (ACG2; OMIM# 200610) and hypochondrogenesis, through spondyloepiphyseal dysplasia congenita (SEDC; OMIM# 183900), while mutations in the triple-helical or N-propeptide regions cause Stickler syndrome type I (STL1; OMIM# 108300) or Kniest dysplasia (OMIM# 156550) [1]. Unlike mutations in the triple-helical or N-propeptide regions, those in the C-propeptide region generally produce atypical phenotypes such as platyspondylic lethal skeletal dysplasia, Torrance type (PLSDT; OMIM# 151210) [2], spondyloperipheral dysplasia (SPPD; OMIM# 271700) [3], vitreoretinopathy with phalangeal epiphyseal dysplasia (VPED) [4], avascular necrosis of the femoral head (ANFH; OMIM# 608805) [5], or early-onset osteoarthritis (OA) [6]. Here, we describe a family with atypical features of type II collagenopathy caused by a novel mutation in COL2A1. To our knowledge, this truncating mutation in the C-propeptide region is the farthest from the 3' end of the gene to cause a disease with short stature, suggesting that the mutant protein is expressed.
Subjects
We studied a Thai family with skeletal dysplasia who attended the Genetics Clinic at the King Chulalongkorn Memorial Hospital, Bangkok, Thailand. The medical data, pedigree, physical examinations, and laboratory results were recorded. Written informed consent, including parental consent for the proband, was obtained after explanation of the possible consequences of this study.
Genomic DNA preparation and whole-exome sequencing
For genetic analysis, genomic DNA was isolated from peripheral blood leukocytes using a Puregene Blood kit (Qiagen, Hilden, Germany). The genomic DNA was sent to Macrogen, Inc. (Seoul, South Korea) for whole-exome sequencing (WES). DNA was captured using a SureSelect Human All Exon version 4 kit (Agilent Technologies, Santa Clara, CA) and sequenced on a HiSeq2000 instrument. Base calling was performed and quality scores were analyzed using Real Time Analysis software version 1.7. Sequence reads were aligned against the University of California Santa Cruz human genome assembly hg19 using Burrows-Wheeler Aligner software (bio-bwa.sourceforge.net). Single-nucleotide variants (SNVs) and insertions/deletions (indels) were detected by SAMtools (samtools.sourceforge.net) and annotated against dbSNP and the 1000 Genomes Project. After quality filtering, we looked for variants located in the coding regions of known skeletal dysplasia genes for all potentially pathogenic SNVs and indels. Variant exclusion criteria were (a) coverage <10×; (b) quality score <20; (c) minor allele frequency ≥1% in the 1000 Genomes Project; and (d) non-coding variants and synonymous exonic variants. The remaining variants were subsequently filtered out if they were present in our in-house database of 165 unrelated Thai exomes. The variants were confirmed by PCR and Sanger sequencing.
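The exclusion criteria above amount to a simple predicate over annotated variant records. A minimal Python sketch, with hypothetical field names for a parsed record, could look like this; it mirrors the filtering logic rather than any specific pipeline tool.

```python
# Sketch of the variant-filtering criteria (a)-(d) plus the in-house check.
def passes_filters(v, inhouse_db):
    """v: dict for one annotated variant; inhouse_db: set of known IDs."""
    if v["depth"] < 10:             # (a) coverage < 10x
        return False
    if v["qual"] < 20:              # (b) quality score < 20
        return False
    if v["maf_1000g"] >= 0.01:      # (c) MAF >= 1% in 1000 Genomes
        return False
    if v["effect"] in ("synonymous", "intronic", "intergenic"):
        return False                # (d) non-coding / synonymous variants
    if v["id"] in inhouse_db:       # present in 165 in-house Thai exomes
        return False
    return True
```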
Results
A 20-month-old male (IV:3; Fig. 1) is the first child of a non-consanguineous couple. His mother had first-trimester miscarriages in the two previous pregnancies; the causes of both were unknown. He was born at term by normal delivery with a birth weight of 2,850 g (10th centile) and a length of 45 cm (<3rd centile, −4 SD). Physical examination revealed short stature, a flattened face, cleft palate, micrognathia, short neck, and umbilical hernia (Fig. 2a). A radiograph obtained at age 20 months showed oval-shaped vertebral bodies (Fig. 2b). Ossification of the femoral heads was absent. Long bones were short and broad with metaphyseal flaring (Fig. 2c). Other ossification centers appeared age-appropriate (Fig. 2c and d).
Hand and foot radiographs were normal (Fig. 2e and f). Eye examination and audiometry showed no abnormality. He was given a clinical diagnosis of SEDC.
The proband's father (III:1; Fig. 1) is a 26-year-old man. He had short-trunk dwarfism with a height of 125 cm (−9 SD). He had a barrel-shaped chest, hyperlordosis of the lumbar spine, and flexion contractures of both hips (Fig. 3a). His hands and feet, including radiographs, were apparently normal (Fig. 3b and c). A radiograph of the thoracolumbar spine showed flattened vertebral bodies with kyphotic deformity (Fig. 3d). Severe dysplasia of the bilateral proximal femoral epiphyses and hip dislocation were observed (Fig. 3e). Generalized osteopenia was noted. He had been blind in the left eye since the age of 8, while his right eye had severe myopia and retinal detachment. He also had an umbilical hernia when he was young, but it resolved spontaneously.
The proband's grandfather (II:3; Fig. 1) had short stature (−8 SD). He also had a barrel-shaped chest, hyperlordosis of the lumbar spine and flexion contractures of both hips. A radiograph of the thoracolumbar spine and pelvis showed flattened vertebral bodies, severe dysplasia of the bilateral proximal femoral epiphyses and hip dislocation (Fig. 3f). His hands and feet were normal. His eyes were reported normal but had never been formally evaluated. He died at the age of 54 of septicemia after a nephrostomy performed to treat obstructive uropathy from ureteric stones. A blood sample was not available for mutation analysis.
Whole-exome sequencing (WES) of the proband revealed a five-nucleotide out-of-frame deletion (NM_001844.4: c.4161_4165del:p.Gln1387Hisfs*30) in COL2A1. PCR and Sanger sequencing using leukocyte-derived DNA from the proband (IV:3) and his father (III:1) confirmed that both were heterozygous for the mutation (Fig. 4a). This mutation has not been reported previously in the Human Genome Mutation Database or the Exome Aggregation Consortium database. In addition, it was not present in our in-house exome database of 165 Thai individuals.
Conclusions
Major clinical and radiographic features in our patients include short stature with platyspondyly, flat midface, Pierre Robin sequence, hip dysplasia, and retinal detachment. The major extracellular structural protein of the affected organs is collagen type II, which led us to hypothesize that the etiologic mutation was in COL2A1. WES identified a five-base-pair deletion in this gene. PCR and Sanger sequencing confirmed that the proband (IV:3) and his father (III:1) harbored a heterozygous mutation, c.4161_4165del, in COL2A1. This out-of-frame deletion lies in the C-propeptide region and is expected to lead to a protein truncation, p.Gln1387Hisfs*30.
Although our patients have a mutation in the C-propeptide region of COL2A1, the phenotype is atypical for those described for mutations in this region. Previously reported mutations in the C-propeptide region of collagen type II lead to one of six entities: PLSDT, SPPD, VPED, ANFH, STL1, or early-onset OA. However, the clinical features of this family differ from all six diseases (Table 1).
PLSDT was excluded by the absence of wafer-thin platyspondyly, small round scapulae, and brachydactyly in our patients. Moreover, PLSDT patients generally die at birth [7], whereas our patients live into adulthood. The absence of brachydactyly, an important diagnostic feature of SPPD [3], excluded the diagnosis of SPPD. Short stature with platyspondyly but without phalangeal epiphyseal dysplasia excluded VPED [4]. ANFH patients have flattened femoral heads with signs of premature osteoarthritis; however, the femoral heads of the proband's father (III:1) and grandfather (II:3) were totally absent, causing bilateral hip dysplasia and dislocation. Another important difference is that ANFH patients do not have short stature, platyspondyly or retinal abnormalities [5], while our patients do. Patients with STL1 usually have normal stature and premature OA of many joints [8,9], while our adult patients (III:1 and II:3) had short stature (−8 to −9 SD at adulthood). Early-onset OA patients present with premature OA of joints without other skeletal abnormalities, while our patients had short stature and severe dysplasia of the hips. STL1 is caused by haploinsufficiency of COL2A1; all point mutations in COL2A1 leading to STL1 are therefore expected to be subject to nonsense-mediated mRNA decay (NMD). Heterozygous mutations leading to type II collagenopathies other than STL1 are predicted to have a dominant-negative effect. Previous reports showed that truncating mutations in the C-propeptide region led either to STL1 or to SPPD, depending on whether the mutant RNA did, or did not, undergo NMD (Table 2).
Interestingly, our frameshift mutation, expected to create a stop codon 69 nucleotides upstream of the exon 53-54 junction, does not lead to STL1. We therefore hypothesize that the mutant RNA transcribed from the mutant COL2A1 allele found in our patients may not undergo NMD; instead, it may produce a mutant COL2A1 protein with a dominant-negative effect, interfering with wild-type COL2A1 and leading to the unique phenotype. In general, mRNAs harboring a premature termination codon (PTC) 50-55 nucleotides or more upstream of the last exon-exon junction are efficiently degraded [10]. NMD is signaled by the presence of the exon junction complex (EJC). However, EJCs are not equally assembled at every exon junction. Therefore, it is possible that variation in EJC assembly affects NMD efficiency, leading to efficient degradation of COL2A1 mRNAs harboring a PTC at least 70 nucleotides upstream of the final exon-exon junction. By contrast, mutations creating premature stop codons from 69 nucleotides upstream of the exon 53-54 junction down to the 3' end of COL2A1 would probably not undergo NMD, would have a dominant-negative effect, and would therefore cause more severe phenotypes of the spectrum such as SPPD and PLSDT [2,3]. Unfortunately, tissues from our patients that express COL2A1 are not available to test this hypothesis. Of all the reported mutations in the C-propeptide region that result in short-stature type II collagenopathies, our mutation is the farthest truncating mutation from the C-terminus of COL2A1 (Fig. 4b, Table 2). Our proband and his father showed some phenotypic differences, such as facial profile, eye involvement, and cleft palate. Intra- and extra-familial variability of type II collagenopathies has previously been observed in patients with the same mutation. For instance, in a family with VPED, a 22-year-old patient had severe premature osteoarthritis requiring hip replacement while two older siblings had only mild osteoarthritis [4]. In another family, while the mosaic mother had SPPD, her fetus suffered from the lethal PLSDT [11]. These data suggest that genetic, epigenetic, and environmental modifiers affect the clinical presentation of type II collagenopathies and may be another explanation for the atypical phenotype of our patients.
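The 50-55 nucleotide rule discussed above can be expressed as a simple positional check. The sketch below uses hypothetical transcript (cDNA) coordinates and treats the threshold as a tunable heuristic, since, as our case shows, the exact boundary may vary between transcripts.

```python
# Sketch of the 50-55 nt NMD rule: a PTC well upstream of the last
# exon-exon junction is predicted to trigger NMD; a PTC in the last exon
# or close to the last junction is predicted to escape. Heuristic only.
def predicted_to_escape_nmd(ptc_pos, exon_ends, threshold=55):
    """ptc_pos: PTC cDNA position; exon_ends: cumulative exon 3' boundaries."""
    last_junction = exon_ends[-2]   # junction between the last two exons
    # negative distances mean the PTC lies in the last exon (always escapes)
    return last_junction - ptc_pos <= threshold
```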
In conclusion, we report an atypical phenotype resulting from a novel truncating mutation in the C-propeptide region of COL2A1, with prominent features including short stature, platyspondyly, hip dysplasia, and retinal detachment.
On computing viscoelastic Love numbers for general planetary models: the ALMA^3 code
The computation of the Love numbers for a spherically symmetric self-gravitating viscoelastic Earth is a classical problem in global geodynamics. Here we revisit the problem of the numerical evaluation of loading and tidal Love numbers in the static limit for an incompressible planetary body, adopting a Laplace inversion scheme based upon the Post-Widder formula as an alternative to the traditional viscoelastic normal modes method. We also consider, within the same framework, complex-valued, frequency-dependent Love numbers that describe the response to a periodic forcing, which are paramount in the study of the tidal deformation of planets. Furthermore, we numerically obtain the time derivatives of Love numbers, suitable for modeling geodetic signals in response to surface load variations. A number of examples are shown, in which time- and frequency-dependent Love numbers are evaluated for the Earth and planets adopting realistic rheological profiles. The numerical solution scheme is implemented in ALMA 3 (the plAnetary Love nuMbers cAlculator, version 3), an upgraded open-source Fortran 90 program that computes the Love numbers for radially layered planetary bodies with a wide range of rheologies, including transient laws like Andrade or Burgers.
Introduction
Love numbers, first introduced by A.E.H. Love in 1911, provide a complete description of the response of a planetary body to external, surface or internal perturbations. In his seminal work, Love (1911) defined the Love numbers (LN) in the context of computing the radial deformation and the perturbation of the gravity potential for an elastic, self-gravitating, homogeneous sphere subject to the gravitational pull of a tide-raising body. This definition was subsequently extended by Shida (1912) to include horizontal displacements. An additional set of LNs, dubbed loading Love numbers, has been introduced to describe the Earth's response to surface loads (see e.g., Munk & MacDonald, 1960; Farrell, 1972), and today they are routinely used in the context of the Post Glacial Rebound problem (Spada et al., 2011). In a similar way, shear Love numbers represent the response to a shear stress acting on the surface (Saito, 1978), while dislocation Love numbers describe deformations induced by internal point dislocations (see e.g., Sun & Okubo, 1993).
The LN formalism was originally defined in the realm of purely elastic deformations, for spherically symmetric Earth models consistent with global seismological observations. However, invoking the Correspondence Principle of linear viscoelasticity (see e.g., Christensen, 1982), the LNs can be generalized to anelastic models in a straightforward way. Currently, viscoelastic LNs are a key ingredient of several geophysical applications involving the time-dependent response of a spherically symmetric Earth model to surface loads or endogenous perturbations. For example, they are essential to the solution of the Sea Level Equation (Farrell & Clark, 1976) and are exploited in current numerical implementations of the Glacial Isostatic Adjustment (GIA) problem, on both millennial (see e.g., Spada & Melini, 2019) and decadal time scales (see e.g., Melini et al., 2015).
Since LNs depend on the internal structure of a planet and on its constitution, they can provide a means of establishing constraints on some physical parameters of the planet interior on the basis of geodetic measurements or astronomic observations (see e.g., Zhang, 1992;Kellermann et al., 2018). For tidal periodic perturbations, complex LNs can be defined in the frequency domain, accounting for both the amplitude and phase lag of the response to a given tidal frequency (Williams & Boggs, 2015). Frequency-domain LNs are widely used to constrain the interior structure of planetary bodies on the basis of observations of tidal amplitude and phase lag (see e.g., Sohl et al., 2003;Dumoulin et al., 2017;Tobie et al., 2019), to study the state of stress of satellites induced by tidal forcings (see e.g., Wahr et al., 2009) or to investigate the tidal response of the giant planets (see e.g., Gavrilov & Zharkov, 1977).
Viscoelastic LNs for a spherically symmetric, radially layered, self-gravitating planet are traditionally computed within the framework of the "viscoelastic normal modes" method introduced by Peltier (1974), which relies upon the solution of Laplace-transformed equilibrium equations using the formalism of elastic propagators. As discussed e.g. by Spada & Boschi (2006) and Melini et al. (2008), this approach becomes progressively less feasible as the detail of the rheological model is increased or if complex constitutive laws are considered. Several workarounds have been proposed in the literature to avoid these shortcomings (see, e.g., Rundle, 1982; Friederich & Dalkolmo, 1995; Riva & Vermeersen, 2002; Tanaka et al., 2006). Among these, the Post-Widder Laplace inversion formula (Post, 1930; Widder, 1934), first applied by Spada & Boschi (2006) to the evaluation of viscoelastic LNs for the Earth, has the advantage of maintaining unaltered the formal structure of the viscoelastic normal modes and of allowing for a straightforward implementation of complex rheological laws. For periodic loads, alternative numerical integration schemes similar to those developed by Takeuchi & Saito (1972) for the elastic problem (Na & Baek, 2011; Wang et al., 2012) have been applied to the viscoelastic case by integrating Fourier-transformed solutions (Tobie et al., 2005). In this work, we revisit the Post-Widder approach to the evaluation of LNs with the aim of extending it to more general planetary models, relaxing some of the assumptions originally made by Spada & Boschi (2006). In particular, we introduce a layered core in the Post-Widder formalism and obtain analytical expressions for the time derivatives of LNs, needed to model geodetic velocities in response to the variation of surface loads. In this respect, our approach is complementary to that of Padovan et al. (2018), who derived a semi-analytical solution for the fluid LNs using the propagator formalism. We implement our results in ALMA 3 (the plAnetary Love nuMbers cAlculator, version 3), an open-source code which extends and generalizes the program originally released by Spada (2008). ALMA 3 introduces a range of new capabilities, including the evaluation of frequency-domain LNs describing the response to periodic forcings, suitable for studying tidal dissipation in the Earth and planets.
This paper is organized as follows. In Section 2 we give a brief outline of the theory underlying the computation of viscoelastic LNs and of the application of the Post-Widder Laplace inversion formula. In Section 3 we discuss some general aspects of ALMA 3, leaving the technical details to a User Manual. In Section 4 we validate ALMA 3 through benchmarks between our numerical results and available reference solutions. In Section 5 we discuss some numerical examples before drawing our conclusions in Section 6.
Mathematical background
The details of the Post-Widder approach to numerical Laplace inversion have been extensively discussed in previous works (see Spada & Boschi, 2006;Spada, 2008;Melini et al., 2008). In what follows, we only give a brief account of the Post-Widder Laplace inversion method for the sake of illustrating how the new features of ALMA 3 have been implemented within its context.
Viscoelastic normal modes
Closed-form analytical expressions for the LNs exist only for a few extremely simplified planetary models. The first is the homogeneous, self-gravitating sphere, often referred to as the "Kelvin sphere" (Thomson, 1863). The second is the two-layer, incompressible, non-self-gravitating model that has been solved analytically by Wu & Ni (1996). For more complex models, LNs must be computed either through fully numerical integration of the equilibrium equations, or by invoking semi-analytical schemes. Among the latter, the viscoelastic normal modes method, introduced by Peltier (1974), relies upon the solution of the equilibrium equations in the Laplace-transformed domain. Invoking the Correspondence Principle (e.g., Christensen, 1982), the equilibrium equations can be cast in a formally elastic form by defining a complex rigidity µ(s) that depends on the rheology adopted and is a function of the Laplace variable s.
Following Spada & Boschi (2006), at a given harmonic degree n, the Laplace-transformed equations can be solved with standard propagator methods, and their solution at the planet surface (r = a) can be written in vector form as

$$\tilde{x}(s) = \tilde{f}(s)\, P_1 \Lambda(s) J \left( P_2 \Lambda(s) J \right)^{-1} b, \qquad (1)$$

where the tilde denotes Laplace-transformed quantities, the vector $\tilde{x}(s) = (\tilde{u}, \tilde{v}, \tilde{\phi})^T$ contains the n-th degree harmonic coefficients of the vertical ($\tilde{u}$) and horizontal ($\tilde{v}$) components of the displacement field and the incremental potential ($\tilde{\phi}$), $\tilde{f}(s)$ is the Laplace-transformed time history of the forcing term, $P_1$ and $P_2$ are appropriate $3 \times 6$ projection operators, J is a $6 \times 3$ array that accounts for the boundary conditions at the core interface, and b is a three-component vector expressing the surface boundary conditions (either of loading or of tidal type). In Eq. (1), $\Lambda(s)$ is a $6 \times 6$ array that propagates the solution from the core radius (r = c) to the planet surface (r = a), which has the form

$$\Lambda(s) = \prod_{k=N}^{1} Y_k(r_{k+1}, s)\, Y_k^{-1}(r_k, s), \qquad (2)$$

where N is the number of homogeneous layers outside the planet core and $r_k$ is the radius of the interface between the (k − 1)-th and k-th layers, with $r_1 \le \ldots \le r_N$, $r_1 = c$ and $r_{N+1} = a$. In Eq. (2), $Y_k(r, s)$ is the fundamental matrix that contains the six linearly independent solutions of the equilibrium equations valid in the k-th layer, whose expressions are given analytically in Sabadini et al. (1982). When incompressibility is assumed, the matrix $Y_k(r, s)$ depends upon the rheological constitutive law through the functional form of the complex rigidity µ(s), which replaces the elastic rigidity µ of the elastic propagator (Wu & Peltier, 1982). Table 1 lists expressions of µ(s) for some rheological laws. For a fluid inviscid (i.e., zero-viscosity) core, the array J in Eq. (1) is a $6 \times 3$ interface matrix whose components are explicitly given by Sabadini et al. (1982); conversely, for a solid core, J corresponds to the $6 \times 3$ portion of the fundamental matrix for the core, $Y_c(c, s)$, that contains the three solutions behaving regularly for $r \to 0$. From the solution $\tilde{x}(s)$ obtained in (1), the Laplace-transformed Love numbers are defined as

$$\tilde{h}_n(s) = \frac{m}{a}\, \tilde{u}_n(s), \qquad (3)$$
$$\tilde{l}_n(s) = \frac{m}{a}\, \tilde{v}_n(s), \qquad (4)$$
$$\tilde{k}_n(s) = -1 - \frac{m}{a g}\, \tilde{\phi}_n(s), \qquad (5)$$

where we have made the n-dependence explicit, m is the mass of the planet and g is the unperturbed surface gravitational acceleration (Farrell, 1972; Wu & Peltier, 1982). Using Cauchy's residue theorem, for Maxwell or generalized Maxwell rheologies Eqs. (3-5) can be cast in the standard normal modes form, which for an impulsive load ($\tilde{f}(s) = 1$) reads

$$\tilde{L}_n(s) = L_n^e + \sum_{k=1}^{N_M} \frac{L_n^k}{s - s_n^k}, \qquad (6)$$

where $\tilde{L}_n(s)$ denotes any of the three LNs, $L_n^e$ is the elastic component of the LN (i.e., the limit for $s \to \infty$), $L_n^k$ are the viscoelastic components (residues), $s_n^k$ are the (real and negative) roots of the secular equation $\mathrm{Det}(P_2 \Lambda(s) J) = 0$, and $N_M$ is the number of viscoelastic normal modes, each corresponding to one root of the secular equation (Spada & Boschi, 2006). However, such a standard form is not always available, since for some rheologies the complex rigidity µ(s) cannot be cast as a rational fraction (this occurs, for example, for the Andrade rheology, see Table 1). This is one of the motivations for adopting non-conventional Laplace inversion formulas like the one discussed in the next section.
Love numbers in the time domain
To obtain the time-domain LNs $h_n(t)$, $l_n(t)$ and $k_n(t)$, it is necessary to perform the inverse Laplace transform of Eqs. (3)-(5). Within the viscoelastic normal-mode approach, this is usually accomplished through an integration over a (modified) Bromwich path in the complex plane, by invoking the residue theorem. In this case, the inversion of Eq. (6) yields the time-domain Love numbers in the form

$$L_n(t) = L_n^e\, \delta(t) + H(t) \sum_{k=1}^{N_M} L_n^k\, e^{s_n^k t}, \qquad (7)$$

where δ(t) is the Dirac delta and H(t) is the Heaviside step function defined by Eq. (14) below, and an impulsive time history is assumed ($\tilde{f}(s) = 1$). As discussed by Spada & Boschi (2006), the traditional scheme of the viscoelastic normal modes suffers from a few but significant shortcomings that, with models of increasing complexity, effectively hinder a reliable numerical inverse transformation. Indeed, the application of the residue theorem demands the identification of the poles of the Laplace-transformed solutions (see Eqs. 3-5), which are the roots of the secular polynomial equation, whose algebraic degree increases with the number of rheologically distinct layers. In addition, its algebraic complexity may be impractical to handle, particularly for constitutive laws characterized by many material parameters. As shown by Spada & Boschi (2006) and Spada (2008), a possible way to circumvent these difficulties is to compute the inverse Laplace transform through the Post-Widder (PW) formula (Post, 1930; Widder, 1934). We note, however, that other viable possibilities exist, such as the one recently discussed by Michel & Boy (2021), who employed Fourier techniques to avoid some of the problems inherent in the Laplace transform method. While Fourier techniques may be more appropriate for taking complex rheologies into account, and are clearly more relevant to Love numbers at tidal frequencies, the motivation of our approach is to address in a unified framework the computation of LNs describing both tidal and surface loads. If $\tilde{F}(s) = \mathcal{L}(F(t))$ is the Laplace transform of F(t), the PW formula gives an asymptotic approximation of the inverse Laplace transform $\mathcal{L}^{-1}(\tilde{F}(s))$ as a function of the n-th derivatives of $\tilde{F}(s)$ evaluated along the real positive axis:

$$F(t) = \lim_{n \to \infty} \frac{(-1)^n}{n!} \left(\frac{n}{t}\right)^{n+1} \tilde{F}^{(n)}\!\left(\frac{n}{t}\right). \qquad (8)$$

In general, an analytical expression for the n-th derivative of $\tilde{F}(s)$ required in Eq. (8) is not available. By employing a recursive discrete approximation of the derivative and rearranging the corresponding terms, Gaver (1966) has shown that an equivalent expression is

$$F(t) = \lim_{M \to \infty} \frac{\ln 2}{t}\, \frac{(2M)!}{M!\,(M-1)!} \sum_{j=0}^{M} (-1)^j \binom{M}{j}\, \tilde{F}\!\left((M+j)\,\frac{\ln 2}{t}\right), \qquad (9)$$

where the inverse transform F(t) is expressed in terms of samples of the Laplace transform $\tilde{F}(s)$ on the real positive axis of the complex plane. Since for a stably stratified incompressible planet all the singularities of $\tilde{x}(s)$ (Eq. 1) are expected to be located along the real negative axis, which ensures long-term gravitational stability (Vermeersen & Mitrovica, 2000), Eq. (9) provides a strategy for evaluating the time-dependent LNs without the numerical complexities associated with the traditional contour integration. However, as discussed by Valkó & Abate (2004), the numerical convergence of (9) is logarithmically slow, and the oscillating terms can lead to catastrophic loss of numerical precision. Stehfest (1970) has shown that, for practical applications, the convergence of Eq. (9) can be accelerated by rewriting it in the form

$$F(t) \simeq \frac{\ln 2}{t} \sum_{k=1}^{2M} \zeta_k\, \tilde{F}\!\left(k\,\frac{\ln 2}{t}\right), \qquad (10)$$

where M is the order of the Gaver sequence and the ζ constants are

$$\zeta_k = (-1)^{M+k} \sum_{j=\lfloor (k+1)/2 \rfloor}^{\min(k,M)} \frac{j^{M+1}}{M!} \binom{M}{j} \binom{2j}{j} \binom{j}{k-j}, \qquad (11)$$

with $\lfloor x \rfloor$ being the greatest integer less than or equal to x. Eq. (10) can be applied to (1) to obtain an M-th order approximation of the time-domain solution vector:

$$x(t) \simeq \frac{\ln 2}{t} \sum_{k=1}^{2M} \zeta_k\, \tilde{x}\!\left(k\,\frac{\ln 2}{t}\right), \qquad (12)$$

from which the time-domain LNs can be readily obtained according to Eqs. (3)-(5).
Recalling that the Laplace transforms of F(t) and of its time derivative $\dot{F}(t)$ are related by $\mathcal{L}(\dot{F}(t)) = s\,\mathcal{L}(F(t)) - F(0^-)$, and since x(t) = 0 for t < 0, it is also possible to write an asymptotic approximation for the time derivative of the solution:

$$\dot{x}(t) \simeq \frac{\ln 2}{t} \sum_{k=1}^{2M} \zeta_k \left(k\,\frac{\ln 2}{t}\right) \tilde{x}\!\left(k\,\frac{\ln 2}{t}\right), \qquad (13)$$

from which the time derivatives of the LNs, $\dot{h}_n(t)$, $\dot{l}_n(t)$ and $\dot{k}_n(t)$, can be obtained according to Eqs. (3)-(5). The numerical computation of the time derivatives of the LNs according to Eq. (13) is one of the new features introduced in ALMA 3. The time dependence of the solution vector obtained through Eqs. (12-13) is also determined by the time history of the forcing term (either of loading or tidal type), whose Laplace transform $\tilde{f}(s)$ appears in Eq. (1). If the loading is instantaneously switched on at t = 0, its time history is represented by the Heaviside (left-continuous) step function

$$H(t) = \begin{cases} 0, & t \le 0 \\ 1, & t > 0, \end{cases} \qquad (14)$$

whose Laplace transform is $\tilde{H}(s) = \mathcal{L}(H(t)) = 1/s$.
Since any piece-wise constant function can be expressed as a linear combination of shifted Heaviside step functions (see, e.g., Spada & Melini, 2019), LNs obtained assuming the loading time history in Eq. (14) can be used to compute the response to arbitrary piece-wise constant loads. However, for some applications it may be more convenient to represent the load time history as a piece-wise linear function. It is easy to show that any such function can be written as a linear combination of shifted elementary ramp functions of length $t_r$, of the type

$$R(t) = \begin{cases} 0, & t \le 0 \\ t/t_r, & 0 < t \le t_r \\ 1, & t > t_r, \end{cases} \qquad (15)$$

whose Laplace transform is

$$\tilde{R}(s) = \frac{1 - e^{-s t_r}}{t_r\, s^2}. \qquad (16)$$

Laplace-transformed LNs corresponding to a step-wise or ramp-wise forcing time history can be obtained by setting $\tilde{f}(s) = \tilde{H}(s)$ or $\tilde{f}(s) = \tilde{R}(s)$ in Eq. (1). The ramp-wise forcing function defined by Eq. (16) is one of the new features introduced in ALMA 3.
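The Gaver sequence with Stehfest acceleration (Eqs. 10-11) is compact enough to prototype directly. The following Python sketch evaluates the ζ constants and inverts a Laplace transform sampled on the real positive axis; it uses double precision for brevity, whereas ALMA 3 performs the same sums in arbitrary-precision arithmetic (see Section 3) to tame the catastrophic cancellation noted above. It illustrates the formulas and is not a substitute for the Fortran implementation.

```python
# Sketch of the Gaver-Stehfest inversion of Eqs. (10)-(11).
import math

def stehfest_weights(M):
    """Zeta constants of Eq. (11) for a Gaver sequence of order M."""
    zeta = []
    for k in range(1, 2 * M + 1):
        s = sum(j ** (M + 1) / math.factorial(M)
                * math.comb(M, j) * math.comb(2 * j, j) * math.comb(j, k - j)
                for j in range((k + 1) // 2, min(k, M) + 1))
        zeta.append((-1) ** (M + k) * s)
    return zeta

def invert(F_laplace, t, M=8):
    """Approximate f(t) from samples of F(s) on the real positive axis."""
    ln2_t = math.log(2.0) / t
    zeta = stehfest_weights(M)
    return ln2_t * sum(z * F_laplace((k + 1) * ln2_t)
                       for k, z in enumerate(zeta))

# quick check against the Heaviside history of Eq. (14): L(H) = 1/s, H(t>0) = 1
print(invert(lambda s: 1.0 / s, t=1.0))   # ~1.0 up to rounding
```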
Frequency dependent Love numbers
In the context of planetary tidal deformation, it is important to determine the response to an external periodic tidal potential. The previous version of ALMA was limited to the case of an instantaneously applied forcing. For periodic potentials, the time dependence of the forcing term has the oscillating form

$$f(t) = e^{i\omega t}, \qquad (17)$$

where

$$\omega = \frac{2\pi}{T} \qquad (18)$$

is the angular frequency of the forcing term, T is the period of the oscillation and $i = \sqrt{-1}$ is the imaginary unit. In the time domain, the solution vector can be cast in the form

$$x(t) = x_\delta(t) * f(t), \qquad (19)$$

where $x_\delta(t)$ is the time-domain response to an impulsive (δ-like) load and the asterisk indicates time convolution. Since the impulsive response is a causal function, $x_\delta(t) = 0$ for t < 0, and Eq. (19) can be expressed as

$$x(t) = x_0(\omega)\, e^{i\omega t}, \qquad (20)$$

where $x_0(\omega)$ is the Laplace transform of $x_\delta(t)$ evaluated at $s = i\omega$. By setting $\tilde{f}(s) = \mathcal{L}(\delta(t)) = 1$ and $s = i\omega$ in Eq. (1), we obtain

$$x_0(\omega) = P_1 \Lambda(i\omega) J \left( P_2 \Lambda(i\omega) J \right)^{-1} b. \qquad (21)$$

Hence, in analogy with Eqs. (3-5), the frequency-domain LNs $h_n(\omega)$, $l_n(\omega)$ and $k_n(\omega)$ are defined as

$$h_n(\omega) = \frac{m}{a}\, u_n(\omega), \qquad (22)$$
$$l_n(\omega) = \frac{m}{a}\, v_n(\omega), \qquad (23)$$
$$k_n(\omega) = -1 - \frac{m}{a g}\, \varphi_n(\omega), \qquad (24)$$

where $u_n(\omega)$, $v_n(\omega)$ and $\varphi_n(\omega)$ are the three components of the vector $x_0(\omega) = (u_n, v_n, \varphi_n)^T$. Since the frequency-domain LNs are complex numbers, in general a phase difference exists between the variation of the external periodic potential and the planet response, due to energy dissipation within the planetary mantle. If $L_n(\omega)$ is any of the three frequency-dependent LNs, the corresponding time-domain LN is

$$L_n(t) = |L_n(\omega)|\, e^{i(\omega t - \varphi)}, \qquad (25)$$

where the phase lag φ is

$$\varphi = -\arctan\!\left( \frac{\mathrm{Im}(L_n(\omega))}{\mathrm{Re}(L_n(\omega))} \right), \qquad (26)$$

and Re(z) and Im(z) denote the real and imaginary parts of z, respectively. A vanishing phase lag (φ = 0) is only expected for elastic planetary models (i.e., for Im($L_n(\omega)$) = 0), for which no dissipation occurs. We remark that the evaluation of the frequency-dependent Love numbers (22-24) does not require the application of the Post-Widder method outlined in Section 2.2, since in this case no inverse transform needs to be evaluated.
Tidal dissipation is phenomenologically expressed in terms of the quality factor Q (Kaula, 1964; Goldreich & Soter, 1966), which according to e.g. Efroimsky & Lainey (2007) and Clausen & Tilgner (2015) is related to the phase lag φ through

$$Q = \frac{1}{\tan|\varphi|}, \qquad (27)$$

thus implying Q = ∞ in the case of no dissipation. Tidal dissipation is often measured in terms of the ratio

$$\frac{k_2}{Q}, \quad \text{with } k_2 = |k_2(\omega)|. \qquad (28)$$

For terrestrial bodies, the quality factor Q usually lies in a range between 10 and 500 (Goldreich & Soter, 1966; Murray & Dermott, 2000). We remark that the quality factor Q is a phenomenological parameter used when the internal rheology is unknown; if LNs are computed by means of a viscoelastic model, it may be more convenient to consider the imaginary part of $k_2$, which is directly proportional to dissipation (Segatz et al., 1988).
An overview of ALMA 3

Here we briefly outline how the solution scheme described in the previous section is implemented in ALMA 3, leaving the technical details and practical considerations to the accompanying User Manual. ALMA 3 evaluates, for any given harmonic degree n, the time-domain LNs ($h_n(t)$, $l_n(t)$, $k_n(t)$), their time derivatives ($\dot{h}_n(t)$, $\dot{l}_n(t)$, $\dot{k}_n(t)$) and the frequency-domain LNs ($h_n(\omega)$, $l_n(\omega)$, $k_n(\omega)$), corresponding either to surface loading or to tidal boundary conditions. While the original version of the code was limited to time-domain LNs, the other two outputs represent new capabilities introduced by ALMA 3. The planetary model can include, in principle, any number of layers in addition to a central core. Each of the layers can be characterized by any of the rheological laws listed in Table 1, while the core can also have a fluid inviscid rheology. As we show in Section 5 below, numerical solutions obtained with ALMA 3 are stable even for models including a large number of layers, providing a way to approximate rheologies whose parameters vary continuously with radius. Time-domain LNs are computed by numerically evaluating Eqs. (12) and (13), assuming a time history of the forcing that can be either a step function (Eq. 14) or an elementary ramp function (Eq. 16). In the latter case, the duration $t_r$ of the loading phase can be configured by the user. Since Eqs. (12) and (13) are singular at t = 0, ALMA 3 can compute time-domain LNs only for t > 0. In the "elastic limit", the LNs can be obtained either by sampling them at a time t much smaller than the characteristic relaxation times of the model, or by configuring the Hookean elastic rheology for all the layers in the model; in the second case, the LNs follow the same time history as the forcing. As discussed in Section 2, the sums in Eqs. (12) and (13) contain oscillating terms that can lead to loss of precision due to catastrophic cancellation (Spada & Boschi, 2006). To avoid the consequent numerical degeneration of the LNs, ALMA 3 performs all computations in arbitrary-precision floating-point arithmetic, using the Fortran FMLIB library (Smith, 1991, 2003). When running ALMA 3, the user shall configure both the number D of significant digits used by the FMLIB library and the order M of the Gaver sequence in Eqs. (12) and (13). As discussed by Spada & Boschi (2006) and Spada (2008), higher values of D and M ensure better numerical stability and accuracy, but come at the cost of rapidly increasing computation time. All the examples discussed in the next section have been obtained with parameters D = 128 and M = 8. While these values ensure good stability for relatively simple models, special care shall be devoted to numerical convergence for models with a large number of layers and/or when computing LNs at high harmonic degrees; in those cases, higher values of D and M may be needed to attain stable results.
Complex-valued LNs are obtained by ALMA 3 by directly sampling Eq. (21) at the requested frequencies ω, and therefore no numerical Laplace anti-transform is performed. While for frequency-domain LNs the numerical instabilities associated with the Post-Widder formula are avoided, the use of high-precision arithmetic may still be appropriate, especially in the case of models including a large number of layers. ALMA 3 does not directly compute the tidal phase lag φ, the quality factor Q or the k_2/Q ratio, which can be readily obtained from tabulated output values of the real and imaginary parts of LNs through Eqs. (26-28).
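As an illustration of this post-processing step, the short Python sketch below derives φ, Q and k_2/Q from a pair of tabulated real and imaginary parts of k_2, following Eqs. (26)-(28); the numerical values are purely hypothetical.

```python
import numpy as np

# Hypothetical tabulated output: real and imaginary parts of the degree-2
# tidal Love number at a given forcing frequency.
re_k2, im_k2 = 0.25, -0.01

phi = np.arctan2(-im_k2, re_k2)         # phase lag (Eq. 26); positive when Im(k2) < 0
Q = 1.0 / np.sin(phi)                   # quality factor (Eq. 27)
k2_over_Q = abs(complex(re_k2, im_k2)) * np.sin(phi)   # Eq. (28); equals |Im(k2)|

print(f"phi = {np.degrees(phi):.2f} deg, Q = {Q:.1f}, k2/Q = {k2_over_Q:.4f}")
```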
Although ALMA 3 is still limited to spherically symmetric and elastically incompressible models, with respect to the version originally released by Spada (2008) the program now includes some significant new features aimed at increasing its versatility. These are: i) the evaluation of frequency-dependent loading and tidal Love numbers in response to periodic forcings; ii) the possibility of dealing with a layered core that includes fluid and solid portions; iii) the introduction of a ramp-shaped forcing function to facilitate the implementation of loading histories varying in a piecewise-linear manner; iv) the implementation of the Andrade transient viscoelastic rheology often employed in the study of planetary deformations; v) the explicit evaluation of the derivatives of the Love numbers in the time domain to facilitate the computation of geodetic variations in deglaciated areas; vi) a short but exhaustive User Guide; and vii) a facilitated computation of frequency-dependent loading and tidal planetary Love numbers, with pre-defined and easily customizable rheological profiles for some terrestrial planets and moons.
Benchmarking ALMA 3
In the following we discuss a suite of numerical benchmarks for Love numbers computed by ALMA 3. First, we consider a uniform, incompressible, self-gravitating sphere with Maxwell rheology (the so-called "Kelvin sphere") and compare tidal LNs computed numerically by ALMA 3 with well-known analytical results. Then, we test numerical results from ALMA 3 by reproducing the viscoelastic LNs for an incompressible Earth model computed within the benchmark exercise by Spada et al. (2011). Finally, we discuss the impact of the incompressibility approximation assumed in ALMA 3 by comparing elastic and viscoelastic LNs for a realistic Earth model with recent numerical results by Michel & Boy (2021), obtained with a compressible model.
The viscoelastic Kelvin sphere
Simplified planetary models for which closed-form expressions for the LNs are available are of particular relevance here, since they allow an analytical benchmarking of the numerical solutions discussed in Section 2 and provided by ALMA 3 .
In what follows, we consider a spherical, homogeneous, self-gravitating model, often referred to as the "Kelvin sphere" (Thomson, 1863), which can be extended to a viscoelastic rheology in a straightforward manner. For example, adopting the complex modulus $\hat\mu(s)$ appropriate for the Maxwell rheology (see Table 1), for a Kelvin sphere of radius a, density ρ and surface gravity g, in the Laplace domain the harmonic degree n = 2 LNs take the form

$$ \tilde{L}_2(s) = \frac{L_f}{1 + R\,\hat\mu(s)/\mu}, \qquad (29) $$

where L_2 stands for any of (h_2, l_2, k_2), L_f is the "fluid limit" of $\tilde{L}_2(s)$ (i.e., the value attained for s → 0), the Maxwell relaxation time is

$$ \tau = \frac{\eta}{\mu}, \qquad (30) $$

and

$$ R = \frac{19\,\mu}{2\,\rho g a} \qquad (31) $$

is a positive non-dimensional constant. Note that g is a function of a and ρ, since for the homogeneous sphere $g = \frac{4}{3}\pi G \rho a$, where G is the universal gravitational constant. After some algebra, (29) can be cast in the form

$$ \tilde{L}_2(s) = L_e \, \frac{s + 1/\tau}{s + 1/\tau'}, \qquad (32) $$

where for a tidal forcing, the fluid limits for degree n = 2 are $h_f = \frac{5}{2}$, $l_f = \frac{3}{4}$ and $k_f = \frac{3}{2}$ (see e.g., Lambeck, 1988) and where we have defined the elastic limit

$$ L_e = \frac{L_f}{1+R} \qquad (33) $$

and the relaxation time

$$ \tau' = (1+R)\,\tau. \qquad (34) $$

From Eq. (32), the LNs in the time domain can be immediately computed analytically through an inverse Laplace transformation:

$$ L_2(t) = L_e\,\delta(t) + \frac{L_f - L_e}{\tau'}\, e^{-t/\tau'}, \qquad (35) $$

while for an external forcing characterized by a step-wise time history, the LNs are obtained by inverse-transforming $\tilde{L}_2(s)/s$, which yields

$$ L_2^{(H)}(t) = L_f - (L_f - L_e)\, e^{-t/\tau'}, \qquad (36) $$

from which the time derivative of $L_2^{(H)}(t)$ is readily obtained:

$$ \dot{L}_2^{(H)}(t) = \frac{L_f - L_e}{\tau'}\, e^{-t/\tau'}. \qquad (37) $$

In Figure 1a, the dotted curves show the h_2 (blue) and the k_2 (red) tidal LN of harmonic degree n = 2 obtained by a configuration of ALMA 3 that reproduces the Kelvin sphere (the parameters are given in the Figure caption). The LNs, shown as a function of time, are characterized by two asymptotes corresponding to the elastic and fluid limits, respectively, and by a smooth transition in between. The solid curves, obtained by the analytical expression given by Eq. (36), show an excellent agreement with the ALMA 3 numerical solutions. The same holds for the time derivatives of these LNs, considered in Figure 1b, where the analytical LNs (solid lines) are computed according to Eq. (37).
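As a cross-check of Eqs. (36) and (37), the following Python sketch evaluates the analytical step response of the Kelvin sphere; the Earth-like parameter values are illustrative assumptions of ours, not necessarily those used in Figure 1.

```python
import numpy as np

# Assumed Earth-like parameters (illustrative only), SI units.
rho, a, mu, eta = 5500.0, 6.371e6, 1.45e11, 1.0e21
G = 6.674e-11
g = 4.0 / 3.0 * np.pi * G * rho * a      # surface gravity of the homogeneous sphere
R = 19.0 * mu / (2.0 * rho * g * a)      # non-dimensional constant, Eq. (31)
tau = eta / mu                           # Maxwell time, Eq. (30)
tau_p = (1.0 + R) * tau                  # relaxation time tau', Eq. (34)

h_f = 2.5                                # fluid limit of the tidal h2
h_e = h_f / (1.0 + R)                    # elastic limit, Eq. (33)

def h2_step(t):
    """Step-response tidal LN h2(t), Eq. (36)."""
    return h_f - (h_f - h_e) * np.exp(-t / tau_p)

def h2_step_dot(t):
    """Time derivative of the step response, Eq. (37)."""
    return (h_f - h_e) / tau_p * np.exp(-t / tau_p)

t = np.logspace(8, 13, 6)                # seconds, spanning the elastic-fluid transition
print(h2_step(t), h2_step_dot(t))
```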
The frequency response of the Kelvin sphere for a periodic tidal potential can be obtained by setting s = iω in Eq. (29), which after rearranging gives:

$$ \tilde{L}_2(\omega) = L_f\,\frac{1 + i\omega\tau}{1 + i\omega\tau'}, \qquad (38) $$

which remarkably depends upon ω and τ only through the ωτ product. Therefore, a change in the relaxation time τ shall result in a shift of the frequency response of the Kelvin sphere, leaving its shape unaltered. Using Eq. (38) in (26), the phase lag turns out to be:

$$ \varphi(\omega) = \arctan(\omega\tau') - \arctan(\omega\tau), \qquad (39) $$

where it is easy to show that for frequency

$$ \omega_0 = \frac{1}{\sqrt{\tau\tau'}} \qquad (40) $$

the maximum phase lag φ = φ_max is attained, with

$$ \varphi_{max} = \arcsin\left(\frac{\tau' - \tau}{\tau' + \tau}\right). \qquad (41) $$

By using Eq. (38) into (27), for the Kelvin sphere the quality factor is

$$ Q_K(\omega) = \frac{\sqrt{(1+\omega^2\tau^2)(1+\omega^2\tau'^2)}}{\omega\,(\tau' - \tau)}, \qquad (42) $$

which at ω = ω_0 attains its minimum value

$$ Q_{min} = \frac{\tau' + \tau}{\tau' - \tau}. \qquad (43) $$

In Figure 2a, the dotted curve shows the phase lag φ as a function of the tidal period T = 2π/ω, obtained by the same configuration of ALMA 3 described in the caption of Figure 1. The solid line corresponds to the analytical expression of φ(T) which can be obtained from Eq. (39), showing once again an excellent agreement with the numerical results (dotted). Figure 2b compares numerical results obtained from ALMA 3 for Q with the analytical expression for Q_K(T) obtained from (42). By using in Eq. (40) the numerical values of ρ, a and μ assumed in Figures 1 and 2, the period T_0 = 2π/ω_0 is found to scale with viscosity η as

$$ T_0 = 2\pi\sqrt{\tau\tau'} = 2\pi\,\frac{\eta}{\mu}\,\sqrt{1+R}, \qquad (44) $$

so that for η = 10^21 Pa·s, representative of the Earth's mantle bulk viscosity (see e.g., Mitrovica, 1996; Turcotte & Schubert, 2014), the maximum phase lag φ_max ≈ 41.9° and the minimum quality factor Q_min ≈ 1.5 are attained for T_0 ≈ 3 kyr, consistent with the results shown in Figure 2.
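Under the same assumed Earth-like parameters as in the previous sketch, a few lines suffice to verify the quoted values φ_max ≈ 41.9°, Q_min ≈ 1.5 and T_0 ≈ 3 kyr from Eqs. (40)-(44):

```python
import numpy as np

rho, a, mu, eta = 5500.0, 6.371e6, 1.45e11, 1.0e21   # assumed Earth-like values
G = 6.674e-11
g = 4.0 / 3.0 * np.pi * G * rho * a
R = 19.0 * mu / (2.0 * rho * g * a)
tau, tau_p = eta / mu, (1.0 + R) * eta / mu

phi_max = np.arcsin((tau_p - tau) / (tau_p + tau))   # Eq. (41)
Q_min = (tau_p + tau) / (tau_p - tau)                # Eq. (43)
T0 = 2.0 * np.pi * np.sqrt(tau * tau_p)              # T0 = 2*pi/omega_0, Eqs. (40) and (44)

kyr = 1e3 * 365.25 * 24 * 3600
print(f"phi_max = {np.degrees(phi_max):.1f} deg")    # ~41.9 deg
print(f"Q_min   = {Q_min:.2f}")                      # ~1.5
print(f"T0      = {T0 / kyr:.1f} kyr")               # ~3 kyr
```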
Community-agreed Love numbers for an incompressible Earth model
Due to the relevance of viscoelastic Love numbers in a wide range of applications in Earth science, several numerical approaches for their evaluation have been independently developed and proposed in the literature. This has ignited interest in benchmark exercises, in which a set of agreed numerical results can be obtained and different approaches and methods can be cross-validated. Here we consider a benchmark effort that has taken place in the framework of the Glacio-Isostatic Adjustment community (Spada et al., 2011), in which a set of reference viscoelastic Love numbers for an incompressible, spherically symmetric Earth model has been derived through different numerical approaches, including viscoelastic normal modes, spectral-finite elements and finite elements. This allows us to validate our numerical results by implementing in ALMA 3 the M3-L70-V01 Earth model described in Table 3 of Spada et al. (2011), which includes a fluid inviscid core, three mantle layers with Maxwell viscoelastic rheology and an elastic lithosphere, and comparing the set of LNs from ALMA 3 with reference results from the benchmark exercise.
Figure 3 shows the elastic and fluid limits of the LNs, $(h_n^{(e)}, l_n^{(e)}, k_n^{(e)})$ and $(h_n^{(f)}, l_n^{(f)}, k_n^{(f)})$, both for the loading and tidal cases, computed by ALMA 3 for the M3-L70-V01 Earth model in the range of harmonic degrees 2 ≤ n ≤ 250. The elastic and fluid limits have been simulated in ALMA 3 by sampling the time-dependent LNs at t_e = 10^-5 kyr and t_f = 10^10 kyr, respectively. Reference results from Spada et al. (2011), represented by solid lines in Figure 3, are practically indistinguishable from results obtained with ALMA 3 over the whole range of harmonic degrees, demonstrating the reliability of the numerical approach employed in ALMA 3. Figure 4 shows time-dependent LNs h_n(t), l_n(t) and k_n(t), for both the loading and tidal cases, computed by ALMA 3 for harmonic degrees 2 ≤ n ≤ 5 and for t between 10^-3 and 10^5 kyr, a time range that encompasses the complete transition between the elastic and fluid limits. Also in this case, numerical results obtained by ALMA 3 (shown by symbols) are coincident with the reference LNs from Spada et al. (2011), represented by solid lines.
Viscoelastic Love numbers for a PREM-layered Earth model
In this last benchmark, we compare numerical results from ALMA 3 with reference viscoelastic LNs for a realistic Earth model that accounts for an elastically compressible rheology, in order to assess the importance of compressibility when modeling the tidal and loading response of a large planetary body. In the context of Earth rotation, the role of compressibility has been addressed by Vermeersen et al. (1996); the reader is also referred to Sabadini et al. (2016) for a broader presentation of the problem and to Renaud & Henning (2018) for a discussion of the effects of compressibility in the realm of planetary modelling.
Here we focus on numerical results recently obtained by Michel & Boy (2021), who employed Fourier techniques to compute frequency-dependent viscoelastic LNs for periodic forcings of both loading and tidal types. They adopted an Earth model with the elastic structure of PREM (Preliminary Reference Earth Model, Dziewonski & Anderson, 1981) and a fully liquid core, and replaced the outer oceanic layer with a solid crust layer, adjusting crustal density in such a way as to keep the total Earth mass unchanged. Following Michel & Boy (2021), we have built a discretised realization of PREM suitable for ALMA 3, with a fluid core and 28 homogeneous mantle layers, which has been used to obtain the numerical results discussed below. Figure 5 compares elastic Love numbers obtained by Michel & Boy (2021) in the range of harmonic degrees between n = 2 and n = 10,000 with those computed with ALMA 3. The largest difference between the two sets of LNs can be seen for h_n in the loading case (Figure 5a), where the assumption of incompressibility leads to a significant underestimation of deformation across the whole range of harmonic degrees. Incompressible elasticity also leads to an underestimation of the k_n loading LN (Figure 5b), although the differences are much smaller and limited to the lowest harmonic degrees. Conversely, for the tidal response (Figures 5c and 5d) the two sets of LNs turn out to be almost overlapping, suggesting a minor impact of elastic compressibility on tidal deformations.
In Figure 6 we consider a periodic forcing and compare viscoelastic tidal LNs h_2 and k_2 computed with ALMA 3 with corresponding results from Michel & Boy (2021). Consistent with the elastic case, we see that the incompressibility approximation used in ALMA 3 generally results in smaller modeled deformations across the whole range of forcing periods. The largest differences are found on |h_2| (Figure 6a) and reach the ∼20% level in the range of periods between 10^5 and 10^6 days, while on |k_2| (Figure 6b) the differences are much smaller, reaching the ∼10% level in the same range of periods. Similarly, for the phase lags (Figures 6c and 6d) we find a larger difference for h_2 than for k_2, with the phase lag being remarkably insensitive to compressibility up to forcing periods of the order of 10^4-10^5 days.
Examples of ALMA 3 applications
In this Section we consider four applications showing the potential of ALMA 3 in different contexts. First, we discuss the k_2 tidal Love number of Venus, based upon a realistic layering of the interior of this planet. Second, we evaluate the tidal LNs for a simple model of Saturn's moon Enceladus, in order to show how an internal fluid layer can be simulated with a low-viscosity Newtonian fluid rheology and how a depth-dependent viscosity in a conductive shell may be approximated using a sequence of thin homogeneous layers. Third, we evaluate a set of loading LNs suitable for describing the transient response of the Earth to the melting of large continental ice sheets. As a last example, we demonstrate how ALMA 3 can simulate tidal dissipation on the Moon using two recent interior models based on seismological data. While these numerical experiments are put in the context of state-of-the-art planetary interior modeling, we remark that they are aimed only at illustrating the modeling capabilities of ALMA 3.
Tidal deformation of Venus
The planet Venus is often referred to as "Earth's twin planet", since its size and density differ only by ∼5% from those of the Earth. These similarities lead to the expectation that the chemical composition of the Earth and Venus may be similar, with an iron-rich core, a magnesium silicate mantle and a silicate crust (Kovach & Anderson, 1965; Lewis, 1972; Anderson, 1980). Despite these similarities, there is a lack of constraints on the internal structure of Venus. Therefore, its density and rigidity profiles are often assumed to be a re-scaled version of the Preliminary Reference Earth Model (PREM) of Dziewonski & Anderson (1981), accounting for the difference in the planet's radius and mass, as in Aitta (2012). One of the main observational constraints on the planet's interior, along with its mass and moment of inertia, is its k_2 tidal LN. The current observational estimate of k_2 for Venus is 0.295 ± 0.066 (2 × formal σ), and it has been inferred from Magellan and Pioneer Venus orbiter spacecraft data (Konopliv & Yoder, 1996). However, due to uncertainties on k_2, it is not possible to discriminate between a liquid and a solid core (Dumoulin et al., 2017).
Here we use ALMA 3 to reproduce results obtained by means of the Venus model referred to as T_hot5 by Dumoulin et al. (2017), based on the "hot temperature profile" from Armann & Tackley (2012) and having a composition and hydrostatic pressure from the PREM model of Dziewonski & Anderson (1981). The viscosity η of the mantle of Venus is fixed and homogeneous; the crust is elastic (η → ∞), the core is assumed to be inviscid (η = 0) and the rheology of the mantle follows Andrade's law (see Table 1). The parameters of the T_hot5 model have been volume-averaged into the core, the lower mantle, the upper mantle and the crust. The calculation of k_2 is performed at the tidal period of 58.4 days (Cottereau et al., 2011). In the work of Dumoulin et al. (2017), k_2 is computed by integrating the radial functions associated with the gravitational potential, as defined by Takeuchi & Saito (1972); hence the simplified formulation of Saito (1974), relying on the radial functions, is employed. The method is derived from the classical theory of elastic body deformation and the energy density integrals commonly used in the seismological community. One of the main differences between their computation and the results presented here is the assumption about compressibility, since Dumoulin et al. (2017) use a compressible planetary model, while in ALMA 3 an incompressible rheology is always assumed. In Figure 7, the two curves show the k_2 tidal LN corresponding to Andrade creep parameters α = 0.2 and α = 0.3 as a function of mantle viscosity for the tidal period of 58.4 days. Each of the vertical red segments corresponds to the interval of k_2 values obtained by Dumoulin et al. (2017) for discrete mantle viscosity values η = 10^19, 10^20, 10^21 and 10^22 Pa·s, respectively, and for a range of the Andrade creep parameter α in the interval between 0.2 and 0.3. The grey shaded area illustrates the most recent observed value of k_2 according to Konopliv & Yoder (1996), with an uncertainty of 2 × formal σ. Figure 7 shows that the k_2 values obtained with ALMA 3 for the T_hot5 Venus model fit well the lower boundary of the compared study for each of the discrete mantle viscosity values if an Andrade creep parameter α = 0.3 is assumed, while for α = 0.2 the modeled k_2 slightly exceeds the upper boundary of Dumoulin et al. (2017).
The tidal response of Enceladus
The scientific interest in Enceladus has gained considerable momentum after the 2005 Cassini flybys, which confirmed the icy nature of its surface and evidenced the existence of water-rich plumes emerging from the southern polar regions (Porco et al., 2006; Ivins et al., 2020). These hint at the existence of a subsurface ocean, heated by tidal dissipation in the core, where physical conditions allowing life could be possible, in principle (for a review, see Hemingway et al., 2018). The interior structure of Enceladus has been thoroughly investigated in the literature on the basis of observations of its gravity field (Iess et al., 2014), tidal deformation and physical librations (see, e.g., Čadek et al., 2016), setting constraints on the possible structure of the ice shell and of the underlying liquid ocean (Roberts & Nimmo, 2008), and on the composition of its core (Roberts, 2015). Lateral variations in the crustal thickness of Enceladus have been inferred in studies about the isostatic response of the satellite using gravity and topography data as constraints (see Čadek et al., 2016; Beuthe et al., 2016; Čadek et al., 2019) and in works dealing with the computation of deformation and dissipation (see Souček et al., 2016, 2019; Beuthe, 2018, 2019). Indeed, from all the above studies, it clearly emerges that a full insight into the tidal dynamics of Enceladus could only be gained by adopting 3D models of its internal structure. While a thorough investigation of the signature of the interior structure of Enceladus on its tidal response is far beyond the scope of this work, here we set up a simple spherically symmetric model with the purpose of illustrating how the LNs for a planetary body including a fluid internal layer like Enceladus can be computed with ALMA 3, and how a radially-dependent viscosity structure can be approximated with homogeneous layers. We define a spherically symmetric model including a homogeneous inner solid core of radius c = 192 km (Hemingway et al., 2018), surrounded by a liquid water layer and an outer icy shell, and investigate the sensitivity of the tidal LNs to the thickness of the ice layer, along the lines of Roberts & Nimmo (2008) and Beuthe (2018). In our setup, the core is modeled as a homogeneous elastic body with rigidity μ_c = 4 × 10^10 Pa and whose density is adjusted to ensure that, when varying the thickness of the ice shell, the average bulk density of the model is kept constant at ρ_b = 1610 kg·m^-3. Since in ALMA 3 a fluid inviscid rheology can be prescribed only for the core, we approximate the ocean layer as a low-viscosity Newtonian fluid (η_w = 10^4 Pa·s). The ice shell is modeled as a conductive Maxwell body whose viscosity profile depends on the temperature T according to the Arrhenius law:

$$ \eta(T) = \eta_m \exp\left[\frac{E_a}{R_g T_m}\left(\frac{T_m}{T} - 1\right)\right], \qquad (45) $$

where E_a is the activation energy, R_g is the gas constant, T_m is the temperature at the base of the ice shell and η_m is the ice viscosity at T = T_m. Following Beuthe (2018), we use E_a = 59.4 kJ/mol, η_m = 10^13 Pa·s and T_m = 273 K, and assume that the temperature inside the ice shell varies with radius r according to the conductive profile

$$ T(r) = T_m \left(\frac{T_s}{T_m}\right)^{\frac{r - r_b}{a - r_b}}, \qquad (46) $$

where a is the surface radius, r_b is the bottom radius of the ice shell and T_s = 59 K is the average surface temperature. Since in ALMA 3 the rheological parameters must be constant inside each layer, we discretize the radial viscosity profile given by Eqs. (45) and (46) using an onion-like structure of homogeneous spherical shells.
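As an illustration of this onion-like discretization, the following Python fragment tabulates one constant viscosity per layer from Eqs. (45) and (46); the shell geometry (a 25 km shell on a body of mean radius 252.1 km) and the 0.5 km step are illustrative choices of ours, not outputs of ALMA 3.

```python
import numpy as np

a = 252.1e3                 # assumed mean radius of Enceladus [m]
r_b = a - 25.0e3            # bottom of an illustrative 25 km ice shell [m]
Ea, Rg = 59.4e3, 8.314      # activation energy [J/mol], gas constant [J/(mol K)]
T_m, T_s, eta_m = 273.0, 59.0, 1.0e13

def temperature(r):
    """Conductive temperature profile across the shell, Eq. (46)."""
    chi = (r - r_b) / (a - r_b)
    return T_m * (T_s / T_m) ** chi

def viscosity(T):
    """Arrhenius law for the ice viscosity, Eq. (45)."""
    return eta_m * np.exp(Ea / (Rg * T_m) * (T_m / T - 1.0))

dr = 0.5e3                                   # 0.5 km discretization step
r_mid = np.arange(r_b + dr / 2.0, a, dr)     # mid-radius of each uniform layer
eta_layers = viscosity(temperature(r_mid))   # one constant viscosity per layer
print(eta_layers.min(), eta_layers.max())
```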
To assess the sensitivity of results to the choice of discretization resolution, we perform three numerical experiments in which the thickness of the ice layers is set to 0.25, 0.5 and 1 km. The ice and water densities are set to ρ_i = 930 kg·m^-3 and ρ_w = 1020 kg·m^-3, respectively, while the ice rigidity is set to μ_i = 3.5 × 10^9 Pa, a value consistent with evidence from tidal flexure of marine ice (Vaughan, 1995) and laboratory experiments (Cole & Durell, 1995). Figure 8a shows the elastic tidal LNs h_2, l_2 and k_2 for the Enceladus model discussed above as a function of the thickness of the ice shell. The elastic tidal response is strongly dependent on the ice thickness, with the h_2 LN decreasing from ∼0.090 for a 10 km-thick shell to ∼0.015 for a 50 km-thick shell. It is of interest to compare these results with elastic LNs obtained by Beuthe (2018) in the uniform-shell approximation. It turns out that the h_2 LN shown in Figure 8a is slightly smaller than corresponding results from Beuthe (2018), with relative differences at the 5-10% level, consistent with their estimate of the effect of incompressibility. Figure 8b shows the real and imaginary parts of the h_2 tidal LN as a function of the thickness of the ice layer for a periodic forcing of period T = 1.37 days, which corresponds to the shortest librational oscillation of Enceladus (Rambaux et al., 2010). As discussed above, for this numerical experiment we implemented in ALMA 3 a radially-variable viscosity profile by discretizing Eqs. (45) and (46) into a series of uniform layers. Solid and dashed lines in Figure 8b show results obtained with a discretization step of 0.5 km and 1.0 km, respectively; we verified that with a step of 0.25 km the results are virtually identical to those obtained with a step of 0.5 km. The effect of the discretization is evident only on the imaginary part of h_2, where a coarse layer size of 1 km leads to a significant overestimation of Im(h_2) if the ice shell is thinner than ∼15 km. By a visual comparison of the results of Figure 8b with Figure 4 of Beuthe (2018), we can see that the imaginary part of h_2 is well reproduced, while the real part is underestimated by the same level we found for the elastic LNs; this difference is likely attributable to the incompressibility approximation adopted in ALMA 3.
Loading Love numbers for transient rheologies in the Earth's mantle
Loading Love numbers are key components in models of the response of the Earth to the spatio-temporal variation of surface loads, including the ongoing deformation due to the melting of the late Pleistocene ice complexes (see e.g., Peltier & Drummond, 2008; Purcell et al., 2016), the present-day and future response to climate-driven melting of ice sheets and glaciers (Bamber & Riva, 2010; Slangen, 2012), and deformations induced by the variation of hydrological loads (Bevis et al., 2016; Silverii et al., 2016). Evidence from Global Navigation Satellite System measurements of the time-dependent surface deformation points to a possible transient nature of the mantle in response to the regional-scale melting of ice sheets and to large earthquakes (see, e.g., Pollitz, 2003, 2005; Nield et al., 2014; Qiu et al., 2018). Here, it is therefore of interest to present the outcomes of some numerical experiments in which ALMA 3 is configured to compute the time-dependent h_n loading Love number assuming a transient rheology in the mantle. Numerical estimates of h_n(t) and of its time derivative ḣ_n(t) would be needed, for instance, to model the response to the thickness variation of a disc-shaped surface load, as discussed by Bevis et al. (2016).
In Figure 9 we show the time evolution of the h_n(t) loading LN for n = 2, 10 and 100, comparing the response obtained assuming the VM5a viscosity model of Peltier & Drummond (2008), which is fully based on a Maxwell rheology, with those expected if VM5a is modified by introducing a transient rheology in the upper mantle layers. A Heaviside time history for the load is adopted throughout. In model VM5a-BG we assumed a Burgers bi-viscous rheological law in the upper mantle, with μ_2 = μ_1 and η_2/η_1 = 0.1 (see Table 1), while in model VM5a-AD an Andrade rheology (Cottrell, 1996) with creep parameter α = 0.3 has been assumed for the upper mantle. For n = 2 (Figure 9a) the responses obtained with the three models almost overlap. Indeed, for long wavelengths (by Jeans' rule, the wavelength corresponding to harmonic degree n is λ = 2πa/(n + 1/2), where a is Earth's radius) the response to surface loads is mostly sensitive to the structure of the lower mantle, where the three variants of VM5a considered here have the same rheological properties. Conversely, for n = 10 (Figure 9b) we see a slightly faster response to the loading for both transient models in the time range between 0.01 and 1 kyr. For n = 100, the transient response of VM5a-BG and VM5a-AD becomes even more enhanced between 0.01 and 10 kyr. It is worth noting that, for times less than ∼10 kyr, the two transient versions of VM5a yield almost identical responses, suggesting that an Andrade rheology in the Earth's upper mantle might explain the observed vertical transient deformations in the same way as a Burgers rheology. The differences between the three models are more evident in Figure 10, where we use ALMA 3 to compute the time derivatives ḣ_n(t) (this option was not available in previous versions of the program). Compared with the Maxwell model, the transient ones show a significantly larger initial rate of vertical displacement, with rates that differ significantly between the Burgers and Andrade cases. The three rheologies provide comparable responses only ∼0.1 kyr after loading. We shall remark, however, that the incompressibility approximation employed in ALMA 3 has a significant impact on the h_n Love number, as we discussed in Section 4.3, so the results shown above must be taken with caution, and a more detailed analysis of the impact of compressibility on the time evolution of LNs would be in order.
Tidal dissipation on the Moon
The Moon is the extraterrestrial body for which the most detailed information about the internal structure is available. In addition to physical constraints from observations of tidal deformation (Williams et al., 2014), seismic experiments deployed during the Apollo missions (Nunn et al., 2020) provided instrumental recordings of moonquakes which allowed the formulation of a set of progressively refined interior models (see, e.g. Heffels et al., 2021).
In this last numerical experiment, we configured ALMA 3 to compute tidal LNs for the Moon according to the two interior models proposed by Weber et al. (2011, W11 hereafter) and Garcia et al. (2011, 2012, G12 hereafter). Profiles of density ρ and rigidity μ for models W11 and G12 are shown in Figure 11, with the most notable difference being that the former assumes an inner solid core and a fluid outer core, while the latter contains an undifferentiated fluid core. We emphasize that model G12 includes 70 rheological layers in the mantle and crust, demonstrating the stability of ALMA 3 with densely-layered planetary models. For both models, we assumed a Maxwell rheology in the crust and the mantle, with a viscosity of 10^20 Pa·s. A more realistic approach has been followed by Nimmo et al. (2012), who have modelled the Moon's Love numbers and dissipation adopting an extended Burgers model for the mantle, which also accounts for transient tidal deformations (Faul & Jackson, 2015). Such a rheological model is not incorporated in the current release of ALMA 3, but it can be implemented by the user by modifying the source code in order to compute the corresponding complex rigidity modulus μ(s). The fluid core has been modeled as a Newtonian fluid with viscosity 10^4 Pa·s, while in the inner core, for model W11, we used a Maxwell rheology with a viscosity of 10^16 Pa·s, a value within the estimated ranges for the viscosity of the Earth's inner core (Buffett, 1997; Dumberry & Mound, 2010; Koot & Dumberry, 2011). Following the lines of Harada et al. (2014, 2016) and Organowski & Dumberry (2020), we defined a 150 km thick low-viscosity zone (LVZ) at the base of the mantle and computed the k_2 tidal LN as a function of the LVZ viscosity for a forcing period T = 27.212 days.
For both the W11 and G12 models, Figure 12 shows the dependence on the LVZ viscosity of the k_2 tidal LN (Figure 12a), of its phase lag angle (Figure 12b) and of the quality factor Q (Figure 12c). With the considered setup, for a LVZ viscosity smaller than 10^15 Pa·s the tidal response of the two models is almost coincident, while for higher viscosities model G12 predicts a stronger tidal dissipation. Shaded gray areas in frames (a) and (c) of Figure 12 show 1-σ confidence intervals for experimental estimates of k_2 (Williams et al., 2014) and Q (Williams & Boggs, 2015). With both models we obtain values of k_2 within the 1-σ interval for an LVZ viscosity smaller than about 5 × 10^15 Pa·s; interestingly, for that LVZ viscosity the G12 model predicts a quality factor Q within the measured range, while model W11 would require a slightly higher LVZ viscosity (10^16 Pa·s). Of course, a detailed assessment of the ability of the two models to reproduce the observed tidal LNs would be well beyond the scope of this work, and several additional parameters potentially affecting the tidal response (e.g., the LVZ thickness or the core radius) would need to be considered.
Conclusions
We have revisited the Post-Widder approach in the context of evaluating viscoelastic Love numbers and their time derivatives for arbitrary planetary models. Our results are the basis of a new version of ALMA 3, a user-friendly Fortran program that computes the Love numbers of a multi-layered, self-gravitating, spherically symmetric, incompressible planetary model characterized by a linear viscoelastic rheology. ALMA 3 can be suitably employed to solve a wide range of problems, involving either the surface loading or the tidal response of a rheologically layered planet. By taking advantage of the Post-Widder Laplace inversion method, the evaluation of the time-domain Love numbers is simplified, avoiding some of the limitations of the traditional viscoelastic normal mode approach. Unlike previous implementations (Spada, 2008), ALMA 3 can evaluate both time-domain and frequency-domain Love numbers, for an extended set of linear viscoelastic constitutive equations that also include a transient response, like the Burgers or Andrade rheologies. Generalized linear rheologies that until now have been utilized in flat geometry, like the one characterizing the extended Burgers model (Ivins et al., 2020), could possibly be implemented as well by modifying the source code, if the corresponding analytical expression of the complex rigidity modulus is available. Furthermore, ALMA 3 can compute the time derivatives of the Love numbers, and can deal with step-like and ramp-shaped forcing functions. The resulting Love numbers can be linearly superposed to obtain the planet response to arbitrarily time-evolving loads. Numerical results from ALMA 3 have been benchmarked against analytical expressions for a uniform sphere and against a reference set of viscoelastic LNs for an incompressible Earth model (Spada et al., 2011). The well-known limitations of the incompressibility approximation in modeling deformations of large terrestrial bodies have been quantitatively assessed by a comparison between numerical outputs of ALMA 3 and viscoelastic LNs recently obtained by Michel & Boy (2021) for a realistic, compressible Earth model. The versatility of ALMA 3 has then been demonstrated by a few examples, in which the Love numbers and some associated quantities, like the quality factor Q, have been evaluated for multi-layered models of planetary interiors characterized by complex rheological profiles and by densely-layered internal structures.

Table 1 (caption). Rheological laws available in ALMA 3 and the corresponding complex rigidity μ(s). Here, μ is the elastic rigidity, η is the Newtonian viscosity, μ_2 and η_2 are the rigidity and viscosity of the transient element in the bi-viscous Burgers rheology, respectively. In the Andrade rheological law, α is the creep parameter while Γ(x) is the Gamma function.

Figure 7 (caption, fragment): vertical segments correspond to the k_2 intervals obtained by Dumoulin et al. (2017), while the grey shaded area represents the most recent observed value of k_2 and its 2σ uncertainty according to Konopliv & Yoder (1996).

Figure 8 (caption, fragment): solid and dashed lines correspond to discretization steps for the ice shell of 0.50 and 1.00 km, respectively; the imaginary part has been multiplied by a factor of 10 to improve readability.

Figures 9-10 (caption, fragment): responses obtained with the VM5a viscosity model by Peltier & Drummond (2008) and with two variants that assume Burgers (VM5a-BG) or Andrade (VM5a-AD) rheologies in the upper mantle layers.

Figure 12 (caption, fragment): phase lag and quality factor (c) as a function of the LVZ viscosity, for a forcing period T = 27.212 days. Blue and red curves correspond to the Moon models by Weber et al. (2011) and Garcia et al. (2011, 2012) shown in Figure 11. Shaded areas in frames (a) and (c) correspond to the 1-σ confidence intervals for measured values of k_2 and Q according to Williams & Boggs (2015).
Impact of Land Use Diversity on Daytime Social Segregation Patterns in Santiago de Chile
Latin American cities are known for their high levels of marginality, segregation and inequality. As such, these issues have been the subject of substantial discussion in academia, with the predominant approach being the study of residential segregation, or what we call "nighttime segregation". Another dimension of urban sociability, related to labor, is what we call "daytime segregation", which has been far less studied. This article makes an original methodological contribution to the measurement of non-residential or daytime segregation based on data from mobility surveys. It seeks to explain this segregation measurement according to the diversity and distribution of land uses, as well as other characteristics of the built stock, such as land price and built-up density. We measured daytime social mix in urban spaces, and we show how strongly it relates to land use diversity in a Latin American megacity, Santiago, Chile. We found that land use diversity plays a key role in enhancing the daytime social diversity of urban spaces, contributing to a more heterogeneous city and to social gatherings during working days. This research is not only a contribution to the understanding of sociability patterns in cities but also a contribution to public policy and the work of urban planners, as it informs the development of more diverse and integrated cities, which is a key tool for strengthening democracy, the exchange of ideas, the economy and social welfare.
Introduction
Segregation, marginality and urban inequality are topics that have dominated theoretical discussions on urban issues in Latin America [1], especially after the first few years of the turn of the century. It was at this point that the spatial dimension of urban poverty began to be highlighted, focusing on spatial concentration, the reproduction of inequalities and the isolation effects on the most vulnerable social groups [2][3][4].
Residential segregation (RS) is the degree to which two or more social groups live separately from each other in an urban space [5]. Sabatini and colleagues [3] defined it in opposite terms, as the degree of spatial proximity or territorial clustering of households belonging to the same social group, regardless of whether this is defined in terms of ethnicity, age, religious preferences or socioeconomic status. In the Anglo-Saxon world, the study of segregation has been mainly related to racial differences [6] and, in the case of Latin America, to income differences among inhabitants [7]. The debate has focused not only on the definition of the concept but also on the methodologies used to measure and represent it [8,9]. RS has been studied in its different dimensions and manifestations: (a) physical proximity between the residential spaces of different social groups [22]; (b) social homogeneity of the different territorial subdivisions into which a city is structured [23]; and (c) concentration of social groups in specific residential areas [3]. In this article, we understand residential segregation as "the geographic agglomeration of families of the same social status or category, however the latter is defined, socially, racially or otherwise" [23].
In the case of Latin American cities, there has been a wide discussion about the methods and techniques used to measure segregation [8,9,24], where the debate has focused on the stratification method, the measurement of segregation and the problem of scale and its connection with social problems, among others. The authors of [23] proposed three dimensions in the case of a Latin American city: two objective or measurable ones, that is, (a) the tendency of a group to concentrate in certain areas and (b) the formation of socially homogeneous areas, and a subjective one, that is, (c) people's perception of the inhabitants of segregated neighborhoods [22].
The different approaches agree that the social structure of the city of Santiago is marked by the presence of a "high-income cone", which spatially concentrates social groups of higher socioeconomic status in the northeastern area of the city [36], starting from the center and extending eastward in the form of a cone. Moreover, studies on the recent evolution of spatial segregation show that the scale of segregation has been reduced, while the isolation of low-income groups, subjects of state social housing policies in Chile, has increased [3,7,28,29].
Other researchers have focused on the evolution of social structures in urban spaces, especially in the case of Santiago, as one of the cities most exposed to productive restructuring processes in Chile [3,7,25,26,28,29,37,38]. This research studied the spatial structure resulting from the restructuring processes of the economy, reaffirming the hypothesis of the moyennisation of the city, in accordance with the international literature. However, this research was general and not focused on any specific class or other dimensions and their particularities. In the case of Santiago, other authors [26,39,40] studied residential mobility patterns but focused on centripetal residential movements [41] or on issues linked to gentrification processes [42], leaving out other urban growth trends [43].
These different approaches to the study of population distribution in space, or segregation, show an important bias, which is to consider the place of residence as the essential location of the population in the territory. This is a static vision of a city, which does not recognize the true urban condition [44]. As the scholar Mongin states, a city oscillates between an object city and a subject city. The same author states that the initial meaning of the urban condition is the possibility of diverse relations (corporal, scenic and political), as a place that shapes infinite practices and has a public connotation. For this reason, a static vision of segregation only from the point of view of the place of residence is not capable of embracing the complexity of a city in its most polymorphic condition.
Therefore, patterns of daily mobility in a city are important and should be considered an important part of sociability, since they define what people could do and what they have done with their resources and opportunities at a given time and in a given context [45]. Following the work of Shareck [46], we consider daily mobility patterns as structured by key locations, such as the place of residence or the location of work or school [47]. Mobility patterns have spatial and temporal dimensions and include factors such as the possibility of mobility, the spatial dispersion and form of travel, the degree of restriction, the flexibility and spontaneity of travel, the types of activities performed and the characteristics of the places where activities are carried out [46].
There is also a branch of studies that provides qualitative approaches to the measurement of problems related to daily mobility and socioeconomic origin in Latin America, focusing, for instance, on the experience, duration and modes of travel in Santiago [48,49]. For the city of Montevideo, Aguiar [50] used mixed methods to broadly characterize daily mobility, concluding that higher socioeconomic groups have, in general, greater mobility than lower groups, which, when added to gender and age factors, revealed that, in this Latin American city, inequalities of origin limit the possibilities of daily mobility. Furthermore, Ureta [51] focused mainly on a group of residents of a low-income sector of Santiago to characterize their daily mobility, complementing this qualitative information with the city's Mobility Survey database.
Furthermore, Dannemann and others [15] measured segregation during working hours using a Call Detail Record (CDR) database, where each mobile phone ping or data connection is linked to a cellphone tower. The spatial unit of measurement was the Voronoi tessellation around each tower. In this study, the home location was assumed to be the most frequented tower at night and the work location the tower with more pings during working hours. The main limitation of this study was the size of the geographic unit in which segregation was measured. This research applied a community detection algorithm, which divided the city of Santiago into six macrozones, larger than the municipal subdivision (the scale at which residential segregation in the city has been commonly studied). Measuring segregation in large geographic areas is more likely to show a higher level of overall integration than measuring it in smaller geographic units [2]. Another approach in the same line is that conducted by Li and others [52]. This work develops a methodology to measure urban segregation based on the socio-spatial daily experience of individuals in Hong Kong. Compared to traditional segregation measures, the proposed estimator is not limited to measuring residential segregation but recognizes and evaluates segregation as a dynamic process that unfolds in the daily life routines of individuals in a society and depends on the different ways in which individuals or social groups use urban space.
Therefore, some studies have associated the concept of daily mobility and activity space with the experience of social segregation, isolation or exclusion of individuals in urban space. Activity space, which encompasses the space that individuals visit and use when engaging in everyday activities (Golledge and Stimson [47]), captures the physical environment in which exposure and potential social interactions can take place [52]. Taking the above into consideration, we consider daytime segregation as "the level of geographic agglomeration of people of different social status at their place of visitation or work".
Finally, given the breadth of the discussion and the approaches to residential and nighttime segregation, it is important to go beyond the literature, which has focused on the quantitative and qualitative aspects of the segregation of residential spaces but has not expanded on measuring segregation associated with other forms of daily sociability in workplaces, study and commercial activities. Although there have been approaches that seek to study the issues of daily mobility and socioeconomic stratification, these have mostly used qualitative methodologies, and although the quantitative approach proposed by Dannemann presented an innovative community detection algorithm, methodological limitations arose in terms of the scale of the measurement of daytime segregation. By delving into the study of "daytime" segregation and its linkage with the characteristics of urban morphology, such as the diversity of land uses, built density and property values, this study aimed to better understand the position and relationships that different social groups have within a city. In addition, a first approach is proposed to understand which urban factors are decisive in promoting greater social diversity within the city during its day-to-day functioning.
The Role of Land Use Diversity
During the middle of the 20th century, urban planning based on the ideas of the modern movement dominated the main capitals of the world. Such an urban paradigm was based on the predominance of the automobile as a means of transportation and on land use planning, differentiating activities between home, leisure, commerce and work [53,54]. The separation by land use was intended to organize the urban fabric, thus providing a solution to the chaotic combination of uses, architectural styles and high-density street life of pre-modern cities [53,55]. However, the urbanist and social activist Jane Jacobs strongly criticized the modern urbanism paradigm in her book The Death and Life of Great American Cities, arguing that the problems associated with territorial dispersion, the dominance of the automobile, the destruction of pedestrian neighborhood life and the insecurity derived from the zoning of segregated uses would become a serious problem for daily life in a city. The avid discussion generated around these different views on urban planning developed into very different ways of thinking about and experiencing a city and influenced future generations of planners.
Based on her own experience, contrary to the modern planning approach, Jacobs described how "cities work in real life, because this is the only way to learn what principles of planning and what practices in rebuilding can promote social and economic vitality in cities, and what practices and principles will deaden these attributes" [55] (p. 4). The concept of urban vitality emerged to describe the bustling social and economic exchange that Jacobs observed on the streets of lower Manhattan in New York City during the 1960s. According to Jacobs, daily life in the streets is at the very core of what urbanity is and, to guarantee it, a certain set of requirements should be promoted. She proposed a set of four basic generators of diversity as conditions that would result in vibrant districts and neighborhoods [54]. The first of the conditions proposed by Jacobs for an urban area to be vital is the diversity of land uses, or what she called mixed primary uses, which, according to Jacobs, "must serve more than one primary function; preferably more than two" [55] (p. 152). She proposed that a simultaneous combination of residences, offices and commerce, among other functions, is fundamental for urban vitality. The other conditions posited by Jacobs are an urban grid of small blocks, as opposed to the modern mega-block, which facilitates spontaneous meetings and crossings; a need for aged buildings interspersed with new construction in the urban grid; and a concentration of people, residences and buildings dense enough for spontaneous contact to occur [54][55][56].
Recent studies have approached Jacobs' ideas and her definition of urban vitality using empirical methods, with a special focus on land use diversity as one of the key factors. The research by Delclòs-Alió and Miralles-Guasch focused on empirically obtaining the variables proposed by Jacobs for urban vitality and proposing a synthetic urban vitality index for the city of Barcelona. Furthermore, the research done by Xia and others on five megacities in China studied the relationship between urban vitality (measured by small business transaction data and nighttime lighting) and contemporary compact city characteristics, such as mixed land use and high density, finding a significant positive spatial autocorrelation between urban land use intensity and urban vitality [32]. Moreover, the study by Li and others for the city of Shenzhen in China focused on measuring the relationship between morphology and urban vitality [52]. They found that a dense street grid, small to medium-sized blocks and the diversity and intensity of construction and land use are beneficial to urban vitality. These morphological metrics encourage and extend urban vitality and serve to promote urban sustainability and fight inefficient and disorderly urban sprawl [52].
Another specific precedent for the measurement of a land use diversity indicator was presented by Frank and others, who proposed the use of the entropy indicator to measure land use diversity, with the purpose of introducing it as one of the variables of an urban walkability index [57]. The authors considered five types of land use to measure diversity: residential, commercial, leisure (including restaurants), offices and institutions (including schools and community organizations). Subsequently, the previously mentioned study [54] adapted this indicator and used it as a variable to estimate the urban vitality index following Jane Jacobs' approach, adding a category for other land uses and calculating land use diversity based on the Shannon index, used mainly in ecology to measure species diversity. Finally, other authors studied urban vitality in Santiago and concluded that this type of measurement broke the traditional scheme of analysis of the capital city of Chile and showed other patterns of urban space organization [32].
This study measured land use diversity, built-up density and land values to explain the phenomenon of segregation and social integration during daytime, assuming that a greater diversity of socioeconomic groups during the day is an indicator related to urban vitality, understanding it from the perspective of diversity and social exchange as proposed by Jacobs.
Materials and Methods
The present study employed a quantitative method focused on data analysis of various sources to study daytime segregation patterns, measured using socioeconomic and mobility data from the latest Origin-Destination Survey (EOD, as per its Spanish acronym) in Santiago, and their possible association with urban characteristics. These characteristics include the diversity and proportion of land uses, measured using the database of the Internal Revenue Service (SII, as per its Spanish acronym), as well as other control variables, such as building density and land values, obtained from the SII database and the Real Estate Registrar (CBR, as per its Spanish acronym) database, respectively. We measured associations using linear regression models and spatial clustering models to analyze agglomeration patterns inherent to this type of phenomenon [58,59]. The geographic unit of analysis corresponds to the EOD survey zoning (EOD zone), to which information from the 2017 Census is cross-tabulated, as well as other territorial variables to enrich the available explanatory variables.
We chose this methodological approach with the aim of replicating the classic residential (nighttime) segregation measurements, in this case with the entropy indicator, using the latest available daily mobility data. At the same time, we decided to measure the association with land use diversity and other built environment characteristics, given the prominence of such indicators in the urban vitality literature (see Section 2.2).
Definitions
Based on the literature review and available datasets, we define the key concepts for our methodology as follows:
• Mobility: transportation across the city from an origin point to a destination point, with a purpose and a commuting mode.
• Daily mobility: commuting across the city daily.
• Nighttime segregation: a measurement of different social groups' geographical separation or lack of mixture in a determined geographical unit. This is measured using residential socioeconomic characteristics and is also known as residential segregation.
• Daytime segregation: a measurement of different social groups' geographical separation or lack of mixture in a determined geographical unit, measured during daytime hours, with socioeconomic data linked to mobility information.
• Social diversity: the exact opposite of social segregation, meaning a measure of the degree of social mixture between different groups in a determined geographical unit. The entropy indicator is the measurement used in this paper.
• Land use diversity: a measurement of the mixture of land uses present in a determined geographical unit.
Measuring Daytime Segregation
Using the EOD mobility survey database for the city of Santiago, we estimated the socioeconomic daytime segregation during working hours for those areas declared as travel destinations. The socioeconomic classification of respondents was based on their survey answers concerning household income and housing expenditure.
First, the household base of the EOD mobility survey was classified into three socioeconomic groups (High, Middle and Low groups). For this, we worked with the EOD database at the household level and divided it into terciles according to total household income. Subsequently, the following adjustment was applied to control for housing affordability and actual disposable income: all households in the High group that spent more than 30% of their income on housing were moved to the Middle group, while all households in the Middle group that spent more than 50% on housing were moved to the Low group. Subsequently, this information was transferred to the respondent database and to the EOD survey trip database. Finally, the trips were grouped by destination zone using the normal working day expansion factor, considering a minimum of 5 trips per destination zone and excluding those zones with fewer trips. Using the expansion factor, the number of people considered totaled 91,313.
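A minimal pandas sketch of this classification step is shown below; the column names and the toy values are hypothetical stand-ins for the actual EOD fields.

```python
import pandas as pd

# Toy household table; "income" and "housing_cost" are hypothetical stand-ins
# for the EOD income and housing-expenditure fields.
households = pd.DataFrame({
    "income":       [350_000, 600_000, 900_000, 2_500_000],
    "housing_cost": [120_000, 150_000, 500_000,   900_000],
})

# Terciles of total household income: Low / Middle / High.
households["group"] = pd.qcut(households["income"], 3,
                              labels=["Low", "Middle", "High"])

# Affordability adjustment: High households spending >30% of income on
# housing move to Middle; Middle households spending >50% move to Low.
share = households["housing_cost"] / households["income"]
households.loc[(households["group"] == "High") & (share > 0.30), "group"] = "Middle"
households.loc[(households["group"] == "Middle") & (share > 0.50), "group"] = "Low"
```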
For the calculation of daytime segregation, entropy was estimated using Geo Segregation Analyzer software. This indicator accounts for the possibility of encountering someone from another group for each destination EOD zone that received more than 10 people in a normal working day. In order to evaluate daytime segregation, the original value is inverted and scaled, remaining between 0 and 1. Thus, 1 was perfect segregation (presence of only one group), and 0 was perfect integration (perfect balance between all groups).
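The inverted and scaled entropy can be reproduced outside the Geo Segregation Analyzer with a few lines of Python; the toy trip table below is hypothetical, and the minimum-trip thresholds described above are omitted for brevity.

```python
import numpy as np
import pandas as pd

def daytime_segregation(counts):
    """Inverted, scaled entropy: 1 = only one group present, 0 = perfect mix."""
    p = counts / counts.sum()
    p = p[p > 0]                          # avoid log(0) for absent groups
    H = -(p * np.log(p)).sum()
    return 1.0 - H / np.log(3)            # 3 socioeconomic groups

# One row per (expanded) trip, with destination zone and group of the traveler.
trips = pd.DataFrame({
    "dest_zone": [1, 1, 1, 2, 2, 2],
    "group": ["Low", "Low", "High", "Low", "Middle", "High"],
})
seg = (trips.groupby("dest_zone")["group"]
            .value_counts().unstack(fill_value=0)
            .apply(daytime_segregation, axis=1))
print(seg)   # zone 2 mixes all three groups evenly and scores 0
```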
Measuring Land Use Diversity
From the SII cadastral base, urban land uses were divided into seven types: (1) commerce, (2) facilities (including recreation, accommodation, worship and parking lot facilities), (3) education and culture (these are considered a single use in the SII base), (4) residential, (5) industrial (including warehouses), (6) offices (including public administration) and (7) health. Using ArcMap's Spatial Join tool, the information concerning surface area by land use contained in the SII blocks belonging to the consolidated urban area of Santiago was transferred to the EOD zones to standardize the geographic unit of analysis. Adapting the methodology of Delclòs-Alió and Miralles-Guasch, the land use diversity indicator was calculated as the Shannon entropy of the land use mix, that is, minus the sum over uses of the proportion of each land use multiplied by the natural logarithm of that proportion. The value obtained was then scaled between 0 and 1, so that 1 indicated perfect land use diversity and 0 indicated no land use diversity, that is, only one type of land use.
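For comparison with the segregation indicator above, a sketch of the scaled Shannon index over the seven land-use classes, with hypothetical built-up areas for a single EOD zone:

```python
import numpy as np

def land_use_diversity(areas):
    """Shannon index of land-use proportions, scaled to [0, 1]."""
    areas = np.asarray(areas, dtype=float)
    p = areas / areas.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum() / np.log(7)   # 7 land-use categories

# Hypothetical surfaces (m2): commerce, facilities, education and culture,
# residential, industrial, offices, health.
print(land_use_diversity([5000, 2000, 1000, 40000, 0, 3000, 500]))
```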
Given that the EOD zones are not homogeneous in terms of surface area and tend to be larger in the urban periphery, and that the indicators of segregation (entropy) and land use diversity (Shannon index) measure the possibility of finding another group or use within a given area, the diversity or integration measured by both indicators tends to increase in EOD zones with a larger surface area, simply because a greater surface is considered. This is a limitation of this study due to the geographic scale of the available mobility data (EOD).
Statistical Analysis
Our analysis aimed to establish a statistical explanation for the apparent relationship between daytime segregation and land use, which, together with other variables related to urban morphology, such as built-up density and land value, could explain why citizens of different socioeconomic groups travel to certain areas, influencing daytime segregation.
This study used the daytime segregation index as the dependent variable and evaluated the following linear regression models:
Model 1: A univariate linear regression with the land use diversity index as the only independent variable:
Daytime Segregation = ß0 + ß1 × Land Use Diversity
Model 2: A multivariate model that adds the built-up area (in 1000 m²) of each land use:
Daytime Segregation = ß0 + ß1 × Land Use Diversity + ß2 × Commercial Area + ß3 × Facilities Area + ß4 × Education and Culture Area + ß5 × Residential Area + ß6 × Industrial Area + ß7 × Office Area + ß8 × Health Area
Model 3: The land price, expressed in development units (UF, as per its Spanish acronym) per square meter of land, is added as a control variable:
Daytime Segregation = ß0 + ß1 × Land Use Diversity + ß2 × Commercial Area + ß3 × Facilities Area + ß4 × Education and Culture Area + ß5 × Residential Area + ß6 × Industrial Area + ß7 × Office Area + ß8 × Health Area + ß9 × Land Price
Model 4: In addition to the variables already evaluated in the previous models, built-up density is added as a control variable, measured in 1000 m² of built floor area per hectare of land.
Model 5: The same variables as those in Model 4 are used, but a filter of a minimum of 50 trips per EOD zone is applied.
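The five specifications can be illustrated with the following sketch, which fits them with ordinary least squares on a synthetic zone-level table; the column names and the simulated values are placeholders for the real survey and cadastral variables, so the coefficients printed here carry no substantive meaning.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the zone-level table (one row per EOD destination
# zone). Column names are illustrative, not the actual survey/cadastre fields.
rng = np.random.default_rng(0)
n = 300
zones = pd.DataFrame({
    "diversity": rng.uniform(0, 1, n),            # scaled Shannon index
    "commerce": rng.gamma(2, 50, n),              # built-up area per use, 1,000 m2
    "facilities": rng.gamma(2, 30, n),
    "education": rng.gamma(2, 20, n),
    "residential": rng.gamma(2, 200, n),
    "industrial": rng.gamma(2, 40, n),
    "offices": rng.gamma(2, 25, n),
    "health": rng.gamma(2, 10, n),
    "land_price": rng.gamma(2, 5, n),             # UF per m2
    "density": rng.gamma(2, 3, n),                # 1,000 m2 built per hectare
    "trips": rng.integers(5, 400, n),
})
zones["segregation"] = np.clip(0.6 - 0.3 * zones["diversity"]
                               + 0.02 * zones["land_price"]
                               + rng.normal(0, 0.1, n), 0, 1)

uses = ("commerce + facilities + education + residential"
        " + industrial + offices + health")
m1 = smf.ols("segregation ~ diversity", data=zones).fit()
m2 = smf.ols(f"segregation ~ diversity + {uses}", data=zones).fit()
m3 = smf.ols(f"segregation ~ diversity + {uses} + land_price", data=zones).fit()
f4 = f"segregation ~ diversity + {uses} + land_price + density"
m4 = smf.ols(f4, data=zones).fit()
# Model 5: same specification as Model 4, restricted to zones with >= 50 trips
m5 = smf.ols(f4, data=zones[zones["trips"] >= 50]).fit()
print(m5.params, m5.rsquared)
```

With the real data, the same pattern of nested specifications allows the contribution of land price and built-up density to be isolated from that of land use diversity.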
Spatial Statistical Analysis
The phenomenon of daytime segregation, derived from the floating population, is theoretically related to the spatial agglomeration patterns of daily trip destinations. Despite this, a quantitative review of the data and their spatialization (Figures 1-3 ) revealed two phenomena: a certain territorial dispersion and, at the same time, a concentration of integrated daytime zones in Santiago's central business district (CBD). In order to understand these concentration patterns, test their statistical significance and support the explanation derived from the linear statistical model, a spatial statistical analysis was performed.
The first analysis applied was Moran's I, a global measure of spatial autocorrelation among neighboring values, which indicates whether autocorrelation exists and, if so, its direction (positive clustering of similar values or negative dispersion).
Subsequently, the local counterpart of Moran's I, the local indicator of spatial association (LISA), was applied, specifically Anselin's Local Moran's I, for the analysis of clusters and outliers. This descriptive analysis indicates whether local association (clustering) exists and where it is located, identifying statistically significant groupings of neighbors with high-high, low-low, low-high and high-low values.
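The two analyses can be sketched with the PySAL ecosystem as follows, using a synthetic grid of square zones in place of the real EOD zone polygons; the entropy surface, the contiguity weights and the 0.05 significance cut-off are illustrative assumptions.

```python
import numpy as np
import geopandas as gpd
from shapely.geometry import box
from libpysal.weights import Queen
from esda.moran import Moran, Moran_Local

# Synthetic stand-in: a 10x10 grid of square "zones" with a smooth entropy
# surface; the real analysis uses the EOD zone polygons and their daytime
# entropy values instead.
rng = np.random.default_rng(1)
cells = [box(i, j, i + 1, j + 1) for i in range(10) for j in range(10)]
entropy = np.array([0.5 + 0.04 * i - 0.03 * j + rng.normal(0, 0.05)
                    for i in range(10) for j in range(10)])
zones = gpd.GeoDataFrame({"daytime_entropy": entropy}, geometry=cells)

w = Queen.from_dataframe(zones)   # contiguity-based neighbours
w.transform = "r"                 # row-standardised weights
y = zones["daytime_entropy"].values

mi = Moran(y, w)                  # global spatial autocorrelation
print(f"Moran's I = {mi.I:.3f}, z = {mi.z_sim:.1f}, p = {mi.p_sim:.4f}")

lisa = Moran_Local(y, w)          # Anselin Local Moran's I (LISA)
labels = {1: "high-high", 2: "low-high", 3: "low-low", 4: "high-low"}
zones["cluster"] = [labels[q] if p < 0.05 else "not significant"
                    for q, p in zip(lisa.q, lisa.p_sim)]
print(zones["cluster"].value_counts())
```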
Spatial Distribution of Daytime Segregation
In general terms, the daytime segregation index (Figure 1) showed downtown Santiago and some pericentral sectors with high daytime integration; these integrated sectors also extend to peripheral industrial sectors, such as Quilicura, Renca, Cerrillos, San Bernardo and Calera de Tango. The greatest daytime segregation was concentrated in the northeastern cone, in Vitacura, Las Condes, La Reina, Ñuñoa and parts of Providencia (sectors that, in general, are the ones with the highest segregation of high income in the Santiago Metropolitan Area). In addition, we observed a clustering of segregated areas in the foothills of Peñalolén, La Florida and Puente Alto. However, there were also scattered areas of high segregation, without a clear pattern of concentration, in pericentral areas.
The distribution of the land use diversity indicator ( Figure 2) showed a center-periphery pattern, where the center and immediate pericenter of the municipality of Santiago concentrate a high diversity of land uses, extending toward the northeastern cone and toward major transportation axes, such as Vicuña Mackenna Avenue, Gran Avenida and Providencia Avenue. As expected, in general terms, we observed a low land use diversity in the periphery, except for sectors with industrial centers and university campuses. We finally reviewed the cartographies of all variables' distributions in the city (Figure 3).
What Is the Relevance of the Diversity of Activities?
Based on the statistical analysis (Table 1), we found that land use diversity significantly reduced daytime segregation in all five models evaluated. Likewise, commercial, education and culture, residential and industrial uses each reduced daytime segregation, even when controlling for the surface area of the different uses, land price and built-up density. In contrast, the facilities area increased daytime segregation, although this effect was not statistically significant in Model 3 when controlling for land price (UF/m²); land price itself was significant and increased daytime segregation. Regarding the R², there was an important change when adding UF/m², with the model explaining 15% of the variance (R² = 0.15).
Model 4 showed that when built-up density was added, this variable was significant and decreased segregation, while the rest of the variables did not show major changes, apart from a drop in the land use diversity coefficient. Model 5 used the same variables as Model 4 but applied the minimum of 50 trips per zone; education and culture, residential and industrial uses were no longer significant, but the sign and significance of land use diversity, commerce, land price and built-up density were maintained. In this model, the R² was 0.20, with 552 observations remaining after applying the trip filter (F = 13.409, df = 10; 541, p < 0.01).
Spatial Statistics
The first result, Moran's index, revealed spatial clustering of the daytime segregation phenomenon: beyond what is observed in Figure 1, the phenomenon was indeed spatially concentrated, not to the same degree as other variables such as income, but enough to appear clustered at the 99% significance level. However, the low Moran's index (0.1) and the relatively high Z-score (12.2) reflected a non-normal distribution, with a bias toward integrated values and far fewer segregated values, which were nevertheless concentrated (Figure 4).
To analyze this pattern in greater detail, we applied the local counterpart of Moran's index, the local indicator of spatial association (LISA), specifically the Anselin Local Moran's I index (cluster and outlier analysis). This analysis was applied to the daytime entropy variable, that is, the variable that measures social integration (the inverse of segregation). Therefore, the high-high values refer to clusters of daytime integration with significant spatial autocorrelation. This was the case for downtown Santiago and other areas such as Quilicura, the center of Maipú and the center of Puente Alto. Also worth mentioning is the presence of a large low-low cluster, that is, an area segregated during the day, in the eastern cone of the city of Santiago, corresponding to the municipalities of Providencia, Las Condes and Vitacura (Figure 5). This northeastern part of Santiago is an area with high land values and high-income residents, and it is one of the most nighttime-segregated areas of the city, a phenomenon formally called self-segregation [3].
Discussion
This article makes a methodological contribution to the measurement of non-residential or daytime segregation based on data from mobility surveys. It seeks to explain this segregation measurement according to the diversity and distribution of land uses, as well as other characteristics of the built stock, such as land price and built-up density.
Our findings indicate that, in the case of the city of Santiago, daytime segregation is significantly reduced by the diversity of land uses, when analyzing the destination of daily trips. Likewise, urban characteristics (such as built-up density and built-up areas for commercial use) reduce segregation, while land price increases it. Our research yielded convincing results: the diversity of land use explains between 9% and 13% of the social diversity in the studied area during the day. This indicates that the places in the city where it is possible to find a greater diversity of land uses are the ones that attract the greatest diversity of people from different socio-economic backgrounds during the day.
As for the spatial clustering of the daytime segregation phenomenon, we found a concentration of integrated zones in the historic center of the city (the municipality of Santiago and surrounding areas). Moreover, we found clustering in urban sub-centers, such as the area around bus stop 14 in La Florida, the center of the municipality of Maipú and the industrial zone of Quilicura. In addition, the eastern cone of Santiago showed a concentration of daytime segregation; this area is traditionally segregated at night (residential segregation) and also concentrates the highest land values in the city (Figure 4). This is consistent with the results of the linear models, which indicate that daytime segregation significantly increases with increasing land value. Furthermore, the spatial statistical analysis shows that, during the daytime, this area attracts daily commuters but remains highly segregated (a low-low cluster of entropy, or social mixture, in Figure 5). Considering that its land use diversity is generally low in the periphery but high in its more central parts, and that it is dominated by office and residential land uses (Figure 3), we conclude that this area mainly attracts high-income residents from different parts of the northeastern cone.
Although measured on a different geographic scale, the findings of Dannerman et al. (2018) for Santiago de Chile are similar to ours regarding the high-rent eastern cone of Santiago, a highly segregated area that attracts residents from within the area itself. Furthermore, our results for central Santiago converge with that research in identifying a low daytime segregation area. However, the same study found high segregation (concentrating low-rent commuters) in the southeastern area of Santiago, whereas our spatial analysis found a non-significant concentration in most of that area and a significant low-segregation zone in its center (on the border between the La Florida and Puente Alto municipalities, Figure 5). Our contribution beyond that research is to relate these findings to land use diversity and to find a significant correlation with daytime segregation patterns.
Compared to the Hong Kong study [52], our study finds an external explanation for the daytime segregation phenomenon rather than an individual-based one. Li and others presented statistical modeling and findings based on personal characteristics, such as age, gender, and education, as well as based on car ownership and public/private housing residence. However, our study proposes a place-based explanation of daytime segregation using the above-mentioned urban characteristics to explain why an area could attract travelers from different income levels.
The limitations of this study mainly relate to data availability. It should be noted that the information on socioeconomic groups and daily mobility comes from survey data collected in 2012, which does not necessarily reflect the current reality of the city. In addition, the household income and expenditure data from the survey have important limitations. However, the main limitation is related to the available geographic unit. This corresponds to EOD (Origin-Destination Survey) zones, which tend to be homogeneous in terms of population and, therefore, tend to be larger in the urban periphery. This is a limitation because, in larger areas, the values of land use diversity and social integration tend to increase. Furthermore, this methodology is intended for large cities. It could hardly be applied to small cities with fewer than 500,000 inhabitants because the number of observations equals the number of geographic zones; in smaller cities, the smaller number of zones would result in much less robust statistical models. For instance, the city of Copiapó in Chile, with almost 154,000 inhabitants, has 69 EOD zones, while our case study, Santiago, with over 7 million inhabitants, has 866 zones.
Further research should collect and analyze better quality travel databases with a greater number of observations, incorporating mass-use technologies, such as GPS, for greater accuracy of the results and greater flexibility in the geographic scale of the analysis. Other avenues in this line of research could focus on incorporating other types of data that measure urban vitality, such as daily shopping transaction data, nighttime lighting, and floating population.
Conclusions
Critical reflection on sociability and its relationship with urban characteristics contributes in a more complex and comprehensive way to understanding the challenges of democratic social cohesion. Today's cities involve movement, flows, exchanges and daily mobility, which open up possibilities for individuals to socialize in environments different from their place of residence. It is therefore important to try to capture this complexity rather than retain the static view of residential segregation, as if one's place of residence were the only setting for sociability.
Most studies on this topic focus on the spatial characteristics of workplaces, especially the size or geographic scope, which are often considered important predictors of isolation or social exclusion, but they rarely consider the social characteristics of the activity space. In this paper, we attempted to relate the three aspects, that is, flow, place characteristics (diversity of activities) and the socioeconomic status of individuals, to the case of a Latin American city. Therefore, the patterns found show similarities and differences with nighttime segregation research, which has been extensively studied.
This research is not only a contribution to the understanding of sociability patterns in cities but also a contribution to public policy work, as it informs the development of more diverse and integrated cities, which is a key tool for strengthening democracy, the exchange of ideas, the economy and social welfare. We therefore suggest that local governments, through land use planning and the design of public spaces, adopt public policy measures that foster the diversity of activities in their cities, which will in turn increase daytime social diversity.
Data Availability Statement: SECTRA, the Origin-Destination Survey, can be found at http://www.sectra.gob.cl/encuestas_movilidad/encuestas_movilidad.html (accessed on 30 March 2021). SII, the land use cadaster, can be found at: https://www4.sii.cl/mapasui/internet/#/contenido/index.html (accessed on 30 March 2021).
Precision Medicine in Childhood Asthma: Omic Studies of Treatment Response.
Asthma is a heterogeneous and multifactorial respiratory disease with an important impact on childhood. Difficult-to-treat asthma is not uncommon among children, and it places a high burden on patients, caregivers, and society. This review aims to summarize the recent findings on pediatric asthma treatment response revealed by different omic approaches conducted in 2018–2019. A total of 13 studies were performed during this period to assess the role of genomics, epigenomics, transcriptomics, metabolomics, and the microbiome in the response to short-acting beta agonists, inhaled corticosteroids, and leukotriene receptor antagonists. These studies have identified novel associations of genetic markers, epigenetic modifications, metabolites, bacteria, and molecular mechanisms involved in asthma treatment response. This knowledge will allow us to establish molecular biomarkers that could be integrated with clinical information to improve the management of children with asthma.
Introduction
Asthma is the most prevalent chronic disease in children and youth [1]. Globally, its prevalence reaches 11.7% and 14.1% in children aged 6-7 and 13-14 years old, respectively [2]. Chronic respiratory symptoms of asthma are also remarkably common among children and young adults. Indeed, the pediatric prevalence of recent wheeze exceeds 20% in different regions of Europe, North America, Australia, and Latin America [3]. Additionally, the prevalence of severe asthma, defined by night asthma symptoms as well as the frequency of severe wheezing episodes, surpasses 7.5% in many regions throughout the world [3]. Moreover, the burden of asthma, measured as disability-adjusted life years, reaches one of the highest values in 10-14-year-old children, highlighting the impact of asthma on the quality of life of children [1]. Besides that, asthma also has an outstanding economic burden that increases with disease severity [4].
International guidelines for the management of pediatric asthma based on clinical factors have been established in order to control the symptoms and to reduce the risk of future associated complications, with pharmacological treatment playing a key role in achieving asthma control [5]. Reliever medication aims to relieve the symptoms during worsening episodes of the disease known as exacerbations [5]. Short-acting beta-agonists (SABAs) are one of the most common reliever medications in children due to rapid-onset bronchodilation mediated by the activation of β2-adrenergic receptors [5,6]. When SABA monotherapy is insufficient to prevent symptoms, controller medication is recommended to reduce airway inflammation, improve the control of the symptoms, and prevent exacerbations and impaired lung function [5]. The most commonly used controller medications are inhaled corticosteroids (ICSs), which have an anti-inflammatory and immunosuppressive effect on lung tissue by interacting with the glucocorticoid receptor [5,7]. As asthma severity increases, the ICS dose may be increased or combined with other controller therapies, such as long-acting beta-agonists (LABAs) or leukotriene receptor antagonists (LTRAs) [5]. While LABAs share the mechanism of action of SABAs, their chemical structure favors a longer duration of bronchodilation [6]. LTRAs cause both bronchodilation and an anti-inflammatory response as a result of their antagonism of the leukotriene receptor [8]. Children with severe asthma require add-on therapies such as oral corticosteroids (OCSs), which are similar to ICSs, except for the systemic effects derived from oral administration. As a consequence, the incidence of adverse reactions increases with OCS treatment, which limits its application to the most severe cases [5,9].
Despite following asthma management guidelines, some patients still have sustained symptoms even under treatment with high-dose ICSs or OCSs [10]. Personalized medicine has recently emerged, aiming to select the most appropriate therapy for each patient based on identifying different asthma phenotypes and endotypes [11]. Several clinical procedures are used for this purpose, such as induced sputum and bronchial brushing to characterize atopic asthma [11] or nasal cytology to identify a rhinosinusitis phenotype related to asthma and aspirin sensitivity [12]. However, given the different mechanisms underlying asthma and treatment response, guidelines based only on clinical features are likely to limit the success of treatment [13]. In the last few decades, the development of molecular techniques has led to high-throughput analyses at different biological layers known as omics [13]. The integration of omics data with clinical features and laboratory parameters contributes to a better definition of asthma endotypes and, therefore, to the selection of the most appropriate therapy [13]. Moreover, the integration of multiple omics approaches in childhood asthma has revealed new disease mechanisms and has emerged as a viable way forward for precision medicine [14,15].
In the last year, the role of different omics in asthma treatment response has been recapitulated in different reviews [16][17][18][19][20][21]. This review aims to provide an update on the latest findings in omic studies of pediatric asthma treatment response. A literature search using different combinations of keywords was therefore conducted using PubMed (Table S1). Studies were eligible if they met the following inclusion criteria: (1) omic studies of treatment response focused on children and youth with asthma, (2) publication date between 1 January 2018 and 31 December 2019, and (3) studies written in English. The criteria to exclude studies were: (1) lack of assessment of asthma treatment response, (2) publications focused on individuals without asthma, (3) studies that did not apply omic approaches, (4) studies focused on animal models, and (5) manuscripts reporting literature reviews, editorials, or opinion articles. A three-stage screening was performed to select eligible studies based on the adequacy of (1) the title, (2) the abstract, and (3) the full text of the manuscripts. All the manuscripts were reviewed by at least three independent authors.
Pharmacogenomics
Pharmacogenomics involves the study of genetic variation across the genome and its role in treatment response. Most genetic studies of complex diseases like asthma have focused on single nucleotide polymorphisms (SNPs). An SNP is the variation of a single nucleotide in the DNA sequence with a population frequency higher than 1%. Due to linkage disequilibrium (LD) and coinheritance patterns of the polymorphisms, millions of genetic variants can be inferred from genotyping arrays of hundreds of thousands of SNPs. Thus, genome-wide genetic variation can be studied without any prior hypothesis by means of genome-wide association studies (GWAS) [22]. In the last few years, whole-genome sequencing (WGS) has emerged as a high-resolution method to study both common and rare genetic variation. Although the WGS approach detects genetic variation not captured by genome-wide genotyping arrays, its use is still largely limited due to economic constraints [23].
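As a rough illustration of the per-variant test underlying a GWAS (and not the specific pipelines used in the studies discussed below), the phenotype is typically regressed on the allele dosage of each SNP while adjusting for covariates such as age and genetic ancestry; the sketch below uses simulated data and a single hypothetical variant.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
maf = 0.2                                       # minor allele frequency
dosage = rng.binomial(2, maf, n)                # 0/1/2 copies of the allele
age = rng.uniform(6, 18, n)
ancestry_pc1 = rng.normal(0, 1, n)              # first genetic ancestry PC
bdr = 5 + 0.8 * dosage + 0.1 * age + rng.normal(0, 3, n)   # simulated BDR (%)

X = sm.add_constant(np.column_stack([dosage, age, ancestry_pc1]))
fit = sm.OLS(bdr, X).fit()
beta, se, p = fit.params[1], fit.bse[1], fit.pvalues[1]
print(f"beta = {beta:.2f} +/- {se:.2f}, p = {p:.2e}")
# In a real GWAS this test is repeated for millions of variants, and only
# p-values below roughly 5e-8 are considered genome-wide significant.
```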
In contrast to the past, when most pharmacogenomic studies of childhood asthma focused on European-descent populations [16,17], the studies published within the reviewed period analyzed two underrepresented populations with a high burden of asthma and failure in treatment response [24]: African Americans from the Study of African Americans, Asthma, Genes and Environments (SAGE) and Hispanic/Latinos from the Genes-Environment and Admixture in Latino Americans (GALA II) (Table 1, Table S2). In these two studies (GALA II and SAGE), genomic studies of SABA treatment response were carried out by means of GWAS, based on both genotyping arrays and WGS data [25,26], and also by performing admixture mapping [26]. The latter approach is a type of analysis that can be applied to admixed populations in order to identify genomic regions in which local ancestry is associated with a trait, based on the differences in allele frequency of the SNPs depending on their ancestral background [24]. These studies focused on the change in lung function due to SABA administration, which is known as bronchodilator drug response (BDR) [25,26], and also on ICS response, analyzing as an outcome the presence/absence of severe asthma exacerbations [27]. In the first study, Spear et al. [26] conducted a GWAS of BDR in 949 African Americans with asthma from SAGE using genotyped and imputed data. A population-specific genome-wide significant association was found for the SNP rs73650726 on 9q21 (β ± standard error [SE] for the A allele: −3.8 ± 0.66, p-value = 7.69 × 10 −9 ). Interestingly, according to the 1000 Genomes Project data, this SNP is only present in African-admixed populations at approximately 9% frequency, but not in Europeans or Asians. Moreover, a trans-ethnic meta-GWAS across 2779 African American and Hispanic/Latino children and young adults identified genome-wide significant associations of three SNPs with SABA response: rs7903366 (β ± SE for the T allele: 1.23 ± 0.22, p-value = 3.94 × 10 −8 ); rs7070958 (β ± SE for the A allele: −1.24 ± 0.23, p-value = 4.09 × 10 −8 ); and rs7081864 (β ± SE for the A allele: 1.23 ± 0.22, p-value = 4.94 × 10 −8 ). These SNPs, which are almost in complete linkage disequilibrium (r 2 > 0.95), are located in the protein kinase cGMP-dependent 1 (PRKG1) gene and act as expression quantitative trait loci (eQTL) of the PRKG1 gene in lung tissue. Interestingly, PRKG1 is involved in the nitric oxide/cGMP signaling pathway, participates in the relaxation of smooth muscle, and acts as a key modulator of airway inflammation in response to SABA [28,29]. Moreover, PRKG1 has been previously associated with lung function and asthma susceptibility [30,31]. Nonetheless, neither the specific-population nor the shared-population SNPs replicated in independent African American and Hispanic-Latino studies. Additionally, an admixture mapping analysis failed to reveal any genomic regions where local ancestry was associated with BDR [26].
Another study also analyzed the same study populations, carrying out WGS on a subset of 1441 patients with asthma that represented the extreme values of the distribution of BDR among African Americans, Puerto Ricans, and Mexicans [25]. While no genome-wide significant association for BDR was found in the specific-population analyses, two SNPs near DNAH5 were significantly associated with BDR in a trans-ethnic meta-analysis-rs17834628 (OR for the A allele: 1.67, 95% confidence interval [CI]: 1.29-2.16, p-value = 1.18 × 10 −8 ) and rs35661809 (OR for the G allele: 1.59, 95% CI: 1.20-2.10, p-value = 3.33 × 10 −8 ). DNAH5 encodes a protein with ATPase activity, which is involved in a protein complex associated with the microtubules. Remarkably, this gene is involved in allergic sensitization, lung function, and immunoglobulin E (IgE) serum levels [32][33][34]. Besides, the combined effect of common and rare variants in three specific-population loci (1p13.2 and 11p14.1 in Mexicans and 19p13.2 in African Americans) showed a genome-wide significant association for BDR. Moreover, two shared-population loci (4q13.3 and 8q22.1) were significantly associated with SABA response as well.
The third study performed on the GALA II and SAGE studies focused on ICS response and included a subset of 1347 Hispanic/Latino and African American individuals treated with ICSs, which were combined in a meta-GWAS. In this study, the presence/absence of asthma exacerbations was analyzed as a proxy of ICS response [27] (Table 1, Table S2). Asthma exacerbations were defined by emergency room (ER) visits, hospitalizations, or OCS use due to asthma symptoms in the last 12 months while the patient was treated with ICSs. A suggestive association was found for the SNP rs5995653 from the intergenic region APOBEC3B-APOBEC3C, which showed evidence of replication in 1697 European children with asthma (OR for the A allele: 0.76, 95% CI: 0.62-0.93, p-value = 7.52 × 10 −3 ). Although this SNP did not reach the genome-wide significance threshold in a meta-analysis across all populations (OR for the A allele: 0.70, 95% CI: 0.61-0.81, p-value = 3.31 × 10 −7 ), the A allele was consistently associated with better ICS response measured as the change in the forced expiratory volume in the first second (FEV 1 ) after six weeks of ICS treatment (OR: 2.16, 95% CI: 1.26-3.70, p-value = 4.91 × 10 −3 ). APOBEC3B and APOBEC3C, genes that have not been previously associated with asthma, encode subunits of a cytidine deaminase, a protein with an RNA editing function that has an important role in the immune response to several viruses by restricting their replication. Moreover, Hernandez-Pacheco et al. [27] carried out replication analyses of the genomic regions associated with ICS response in prior GWAS focused on Europeans and Asians. The SNP rs62081416 near L3MBTL4-ARHGAP28 was found to be associated with ICS response in African-admixed children (OR for the A allele: 2.44, 95% CI: 1.63-3.65, p-value = 1.57 × 10 −5 ). Remarkably, both L3MBTL4 and ARHGAP28 have been associated with post-bronchodilator lung function [32].
Epigenomics
Epigenetics involves the study of the mechanisms that regulate gene expression without modifying the DNA sequence, including DNA methylation (DNAm), microRNA (miRNA) regulation, and histone modifications. These mechanisms are heavily affected by environmental exposures and are essential for cell differentiation processes. Today, epigenetic changes can be analyzed across the whole genome (i.e., epigenomics) by means of high-throughput techniques. The most studied epigenetic mark is DNAm, which consists of the methylation of a cytosine base and occurs at higher frequency in regions where the cytosine is followed by a guanine in the 5'-3' direction (CpG sites) [35][36][37].
During the reviewed period, two epigenome-wide association studies (EWAS) conducted by Wang et al. analyzed the association of CpG sites methylation status with treatment response in childhood asthma ( Table 2, Table S3). Both evaluated the effects of ICS response on DNA methylation patterns in peripheral blood cells (PBCs) [38,39]. In the first study [38], treatment response was measured by the following two outcomes: (1) the absence of severe asthma exacerbations defined by ER visits or hospitalizations and (2) the absence of OCS use, both of them related to asthma symptoms in the last year despite ICS therapy. A relative hypomethylation of the CpG site cg00066816 near IL12B showed a protective effect for severe exacerbations after false discovery rate (FDR) adjustment (q-value = 0.028) in a meta-analysis across non-Hispanic whites from the Childhood Asthma Management Program (CAMP, n = 154), Europeans from the Children, Allergy, Milieu, Stockholm, Epidemiology (BAMSE, n = 72), and Hispanic/Latinos from the Genetic Epidemiology of Asthma in Costa Rica Study (GACRS, n = 168). Hypomethylation of cg00066816 and the absence of severe exacerbations was shown to be specific to patients treated with ICSs, since it was not observed in European children treated with placebo (standardized coefficient: −3.051, p-value = 0.002). Additionally, hypomethylation of cg00066816 was associated with lower IL12B expression in blood cells in Europeans (Pearson coefficient [ρ] = 0.34, p-value = 0.01), although this result was not confirmed in non-Hispanic white children from CAMP. IL12B encodes a subunit of two cytokines (IL-12 and IL-23) involved in the immune response and airway hyperresponsiveness, whose expression levels have been related to the response to corticosteroids in bronchial biopsies from asthma patients [40][41][42]. Moreover, in the same study, Wang et al. found 13 CpG sites that were significantly associated with the absence of OCS use (q-value < 0.05) in a meta-analysis across non-Hispanic whites from CAMP and Costa Ricans from GACRS (n = 322). An interaction analysis identified that hypermethylation of cg04256470 near CORT-CENPS was associated with the absence of OCS use specifically in patients treated with ICSs (standardized coefficient: 2.322, p-value = 0.02). Interestingly, relative hypermethylation of cg04256470 was associated with higher CORT expression in CAMP (ρ = 0.2, p-value = 0.045) [40][41][42]. CORT encodes the peptide cortistatin, which has a role in the anti-inflammatory process through the hypothalamic-pituitary-adrenal axis and regulates endogenous corticosteroids [43,44].
The second EWAS of ICS response analyzed the change in FEV 1 after eight weeks of ICS treatment in non-Hispanic white children from CAMP (n = 152) [39]. Relative hypermethylation of cg27254601 from BOLA2 was associated with lung function improvement (standardized coefficient: 3.598, p-value = 0.0005) and with an increased expression of BOLA2 (ρ = 0.25, p-value = 0.02). BOLA2 encodes a protein involved in the maturation process of iron-sulfur containing proteins [45]. Gene expression levels of BOLA2 in airway cells differ between patients with asthma and healthy individuals [43], and some intronic variants have been associated with eosinophil levels and lung function [31,46,47]. Furthermore, hypermethylation in PBCs of the OTX2 gene, previously found to be hypomethylated in nasal cells from good OCS responders [48], was found to be nominally associated with an improvement in FEV 1 after ICS treatment (standardized coefficient for cg15607672: 2.123, p-value = 0.036). OTX2 encodes a transcription factor that mainly acts in nervous tissue.
Of note, only one study has performed miRNA profiling to investigate its association with treatment response without a prior hypothesis. Kho et al. evaluated the role of 754 circulating miRNAs in the use of OCSs in the past 12 months in serum samples from non-Hispanic white children with asthma from CAMP before initiating an ICS treatment (n = 153) [49]. From the 125 miRNAs that remained after quality control, a total of 12 miRNAs were associated with the risk of exacerbations, defined as the need for more than one steroid burst in the past year because of asthma despite being treated with ICSs (Table 3, Table S4). miR-206 showed the strongest association, with its serum expression levels being higher in non-exacerbators compared to exacerbators (OR: 0.60, 95% CI: 0.42-0.83, p-value = 0.004). Moreover, these miRNAs were included in a predictive model for asthma exacerbations. A combined model based on clinical features and three of these circulating miRNA levels (miR-206, miR-146b-5p, and miR-720) better predicted asthma exacerbations in children treated with ICSs (area under the receiver operating characteristic curve [AUC] = 0.81) than a model that only included clinical parameters (AUC = 0.67). Interestingly, these three miRNAs have been related to asthma physiopathology in cell and animal model studies [49], and a previous study associated two of them (miR-146b-5p and miR-206) with baseline FEV 1 /FVC [50]. Moreover, this study revealed four biological pathways regulated by the three miRNAs, which included two that had been previously related to asthma: "inactivation of GSK3 by AKT causes accumulation of b-catenin in alveolar macrophages" (q-value = 0.017) and "NF-κβ signaling pathway" (q-value = 0.08) [49].
Table 3. Summary of the main findings of the only epigenomic study focused on the association of circulating miRNA with inhaled corticosteroid (ICS) response in childhood asthma [49].
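The gain in predictive performance reported by Kho et al. can be illustrated generically: fit one model on clinical features alone and another with circulating miRNA levels added, then compare their AUCs. The sketch below does this with simulated variables that merely stand in for the real cohort data; it is not the modeling approach used in the original study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
clinical = rng.normal(size=(n, 3))   # placeholders, e.g. age, baseline lung function, IgE
mirna = rng.normal(size=(n, 3))      # placeholders for three circulating miRNA levels
logit = -0.5 + clinical @ [0.4, -0.3, 0.2] + mirna @ [-0.8, 0.5, 0.4]
exacerbation = rng.binomial(1, 1 / (1 + np.exp(-logit)))

combined = np.hstack([clinical, mirna])
Xc_train, Xc_test, Xm_train, Xm_test, y_train, y_test = train_test_split(
    clinical, combined, exacerbation, random_state=0)

auc_clinical = roc_auc_score(
    y_test, LogisticRegression().fit(Xc_train, y_train).predict_proba(Xc_test)[:, 1])
auc_combined = roc_auc_score(
    y_test, LogisticRegression().fit(Xm_train, y_train).predict_proba(Xm_test)[:, 1])
print(f"clinical only AUC = {auc_clinical:.2f}, clinical + miRNA AUC = {auc_combined:.2f}")
```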
Transcriptomics
Transcriptomics is the study of the set of all RNA transcripts by high-throughput methods such as RNA sequencing (RNA-seq) or microarrays. Within the reviewed period, the vast majority of the transcriptomic studies of treatment response in pediatric asthma focused on ICSs. A microarray analysis of peripheral blood mononuclear cells (PBMCs) from Taiwanese children with asthma revealed that patients with poor asthma control show specific transcriptomic patterns associated with glucocorticoid signaling and immune response when compared to other children with asthma [51]. Moreover, two studies applied systems biology approaches to investigate transcriptomics of ICS response in non-Hispanic white children from CAMP. Qui et al. [52] analyzed gene expression networks in immortalized B-cell lines from 145 children treated with ICSs. Subjects were classified as good (n = 47) or poor (n = 48) ICS responders based on changes in post-FEV 1 % after being treated with ICSs for two months. Good responders showed enrichment in immune response and proapoptosis corticosteroid-induced pathways, whereas poor responders had an enrichment in antiapoptosis pathways. Two transcription factors (TFs), NFKB1 and JUN, showed remarkable differential regulation between both groups. The effect of these TFs on the expression of nine downstream genes was evaluated by TF silencing. CEBPD (regulated by NFKB1) was overexpressed in good responders compared to poor responders, while TMEM53 (regulated by JUN) showed the opposite effect. Nonetheless, the lack of validation of the other downstream genes might be because this assay was performed in only a reduced number of subjects and other TFs may simultaneously coregulate the expression of these genes. NFKB1 encodes a subunit of a transcription regulator (NFKB) of multiple biological pathways. Dysregulation of NFKB1 has been associated with inflammatory diseases and inadequate immune cell development. JUN encodes a protein that regulates gene expression by directly interacting with the DNA sequence and has also been related to macrophage activation [53].
McGeachie et al. [54] conducted a multiomic analysis in 104 non-Hispanic white children with asthma from CAMP treated with budesonide. Treatment response was evaluated by the steroid responsiveness endophenotype (SRE), a composite phenotype that predicted ICS responsiveness. The SRE index, genome-wide genotyping data, and the response of immortalized lymphoblastoid cells to dexamethasone were integrated to build a steroid response network. A total of seven genes associated with steroid response were identified by this system biology approach, and four of them were selected for in vitro validation analysis. The knockdown of one of these genes, FAM129A, reduced dexamethasone response (p-value < 0.001) in lung epithelial cells. Interestingly, this gene encodes for a protein involved in an apoptosis pathway [55]. Thereby, FAM129A could enhance the anti-inflammatory effect of ICSs.
Katayama et al. [56] performed the only transcriptomic study that evaluated LTRA response in children. A total of 107 children aged 6-48 months were recruited during an acute wheezing episode and were followed up until the age of seven years. A weighted gene co-expression network analysis (WGCNA) [57] was performed to identify subsets of highly correlated genes (termed modules) involved in LTRA response. The WGCNA of gene expression in leucocytes identified a module of 145 co-regulated genes correlated with acute wheezing. This module was enriched in genes involved in interferon signaling pathways, inflammation, and antiviral response. Moreover, this module showed a positive correlation with lung function and LTRA treatment, as well as a negative correlation with vitamin D levels at seven years and with the number of exacerbations during the follow-up period. This module also predicted future LTRA medication with an AUC of 0.81 (95% CI: 0.67-0.96). The gene with the strongest association with LTRA treatment was TRIM22 (p-value = 4.91 × 10 −3 ), which encodes a protein involved in antiviral response that is regulated by an interferon pathway [58].
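For readers unfamiliar with WGCNA, the sketch below gives a drastically simplified, hypothetical analogue of its workflow (soft-thresholded co-expression adjacency, hierarchical clustering of genes into modules, and correlation of module eigengenes with a trait); the actual study used the WGCNA R package, and the simulated data here have no biological meaning.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.decomposition import PCA

# Simulated expression matrix: samples x genes (stands in for leucocyte
# expression); 'wheeze' is a simulated clinical trait.
rng = np.random.default_rng(0)
n_samples, n_genes = 100, 200
expr = rng.normal(size=(n_samples, n_genes))
expr[:, :40] += rng.normal(size=(n_samples, 1))        # one co-regulated block
wheeze = expr[:, :40].mean(axis=1) + rng.normal(0, 0.5, n_samples)

# Soft-thresholded co-expression adjacency (simplified WGCNA-style)
corr = np.corrcoef(expr, rowvar=False)
adjacency = np.abs(corr) ** 6
dissimilarity = 1 - adjacency

# Hierarchical clustering of genes into modules
condensed = dissimilarity[np.triu_indices(n_genes, k=1)]
modules = fcluster(linkage(condensed, method="average"), t=4, criterion="maxclust")

# Module eigengene = first principal component of each module's expression
for m in np.unique(modules):
    eigengene = PCA(n_components=1).fit_transform(expr[:, modules == m]).ravel()
    r = np.corrcoef(eigengene, wheeze)[0, 1]   # sign of the eigengene is arbitrary
    print(f"module {m}: {np.sum(modules == m)} genes, cor with trait = {r:+.2f}")
```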
Metabolomics
Metabolomics aims to profile the whole metabolite composition (metabolome) in biological samples. High-throughput analytical techniques, such as mass spectrophotometry, nuclear magnetic resonance, or spectroscopic methods, allow the characterizing of metabolites in invasive samples like blood and in non-invasive ones such as exhaled air (breathomics). This recent omic has been successfully applied for the profiling of asthma [59][60][61] and may contribute to understanding asthma treatment response.
Kelly et al. aimed to evaluate the interaction of age and 501 serum metabolites on BDR after albuterol administration [62]. Blood samples were obtained from children with asthma from CAMP at three time points, with mean ages of 8.8 years (n = 560), 12.8 years (n = 563), and 16.8 years (n = 295), respectively. A total of 39 metabolites, mainly lipids, showed a nominal interaction with age on BDR, with the strongest interaction observed for 2-hydroxyglutarate (β = −0.004, p-value = 1.77 × 10 −4 ). Results were evaluated for replication in 320 Hispanic children with asthma from GACRS, with a mean age of 9.1 years. In this case, 12 of 615 metabolites showed a significant interaction with age on BDR, also including 2-hydroxyglutarate (β = −0.015, p-value = 0.018). However, the 2-hydroxyglutarate results did not survive multiple comparison adjustment in either CAMP (q-value = 0.089) or GACRS (q-value = 0.997).
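The age-by-metabolite interaction tested in this study can be illustrated with a simple linear model that includes an interaction term; the sketch below uses simulated values in place of the real metabolite, age and BDR measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in for one metabolite level, age and BDR
rng = np.random.default_rng(0)
n = 600
df = pd.DataFrame({"age": rng.uniform(6, 18, n),
                   "metabolite": rng.lognormal(0, 0.5, n)})
df["bdr"] = (8 + 0.2 * df["age"] - 0.004 * df["metabolite"] * df["age"]
             + rng.normal(0, 3, n))

# The term of interest is the metabolite-by-age interaction
fit = smf.ols("bdr ~ metabolite * age", data=df).fit()
print(fit.params["metabolite:age"], fit.pvalues["metabolite:age"])
```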
Moreover, Kelly et al. also conducted a multiomic study of lung function in Hispanic/Latino children with asthma (n = 325) [63]. In an integrative approach to identify modules of coregulated gene transcripts and metabolites, a total of 25,060 transcripts and 8185 metabolites from whole blood were clustered using WGCNA. Four transcript modules and five metabolite clusters were found to be related to lung function after adjustments for confounders (p-value ≤ 0.05) and interactions among seven of them were found. Interestingly, one transcriptomic module, enriched in asthma-related miRNAs, was associated with BDR, and also with a lipid metabolomic module. ORMDL3, a gene extensively studied in pediatric asthma that has a role in sphingolipid biosynthesis [64][65][66], was identified as a hub gene of this transcriptomic module. Based on genotype data, the SNP rs8079416 within ORMDL3 was found to be an eQTL of ORMDL3 in this population (p-value = 6 × 10 −4 ) and was also associated with 165 of the 537 lipids included in the metabolomic module. These findings were followed up for replication in 207 children with asthma from CAMP. Both the association of the miRNAs module with BDR (p-value = 0.027) and the role of rs8079416 as an eQTL of ORMDL3 (p-value = 5.2 × 10 −10 ) were validated. Therefore, the relationship between ORMDL3 expression, microRNA regulatory motif, and sphingolipid metabolism likely have a role in BDR in pediatric asthma.
Microbiome
The composition of the microbial communities, or microbiota, is conditioned by both intrinsic and extrinsic factors [67][68][69]. Microbial exposure is essential to the development of the immune system, and differential changes in the microbiota have been associated with allergic diseases [70,71]. Indeed, dysbiosis in microbial communities and the presence of bacterial pathogens in different body sites (e.g., lung, gut, or tonsils) have been related to the development of allergic diseases, likely due to dysregulation of the host immune response [72,73]. The development of next-generation sequencing (NGS) techniques has allowed characterizing the microbiota from its genetic make-up, known as the microbiome. The diversity and abundance of the microbiome can be assessed by targeted sequencing techniques or metagenomic approaches. To date, the bacterial microbiome has been extensively studied by targeted sequencing of the 16S ribosomal RNA gene (16S rRNA), a prokaryotic marker whose hypervariable regions contribute to the taxonomic classification of bacteria at genus level [74,75].
A longitudinal study conducted by Zhou et al. [76] aimed to identify changes in nasal microbiota related to the risk of asthma exacerbations despite ICS therapy. The nasal bacterial microbiome was characterized by means of 16S rRNA sequencing in 214 European children with mild-moderate persistent asthma treated with low doses of ICSs as part of a clinical trial. Nasal swabs were collected at the time of well-controlled asthma (randomization) and during the first loss-of-asthma-control episode. Children with a nasal microbiome dominated by Corynebacterium and Dolosigranulum at the time of well-controlled asthma had fewer episodes of early loss of asthma control (p-value = 0.005) and longer times to develop at least two episodes (p-value = 0.03) when compared to those children with a nasal microbiota dominated by Staphylococcus, Streptococcus, or Moraxella. Moreover, during the first loss of asthma control episode, Streptococcus became the most prevalent dominant genus in the nasal microbiome (p-value = 0.001). Additionally, bacterial richness and total bacterial load were significantly higher during asthma control loss than in the well-controlled time point (p-value = 4 × 10 −4 and p-value = 4 × 10 −5 , respectively). Furthermore, a higher relative abundance of Corynebacterium was associated with a lower risk of suffering from asthma exacerbations requiring OCS use (OR: 0.92, 95% CI: 0.89-0.94, p-value = 0.04). Finally, the switch of dominant genera from Corynebacterium + Dolosigranulum to Moraxella was associated with the highest risk of OCS use (p-value chisq-test = 0.04). Corynebacterium is the most abundant commensal bacteria in the nasal microbiome from healthy individuals, and its relative abundance is decreased in patients with asthma [77,78]. Moreover, Moraxella, Streptococcus, and Haemophilus are bacterial pathogens more common in the nasal microbiome of patients with asthma compared to healthy individuals [78][79][80], and Moraxella has also been associated with a higher risk of asthma exacerbations [81].
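As a schematic of how a dominance-based profile can be derived from 16S data, the sketch below converts a hypothetical genus-level count table into relative abundances, identifies the dominant genus per sample, and groups samples as in the comparison described above; the counts and sample names are invented for illustration.

```python
import pandas as pd

# Hypothetical genus-level 16S count table: rows are nasal samples, columns
# are bacterial genera (the real study derives this from 16S rRNA sequencing).
counts = pd.DataFrame(
    {"Corynebacterium": [420, 15, 300], "Dolosigranulum": [180, 5, 250],
     "Moraxella": [20, 510, 30], "Streptococcus": [10, 120, 15],
     "Staphylococcus": [5, 40, 10]},
    index=["child_A", "child_B", "child_C"])

rel_abund = counts.div(counts.sum(axis=1), axis=0)   # relative abundance
dominant = rel_abund.idxmax(axis=1)                  # dominant genus per sample

# Grouping mirroring the comparison above: protective vs. pathogen-dominated
protective = {"Corynebacterium", "Dolosigranulum"}
profile = dominant.map(lambda g: "Corynebacterium/Dolosigranulum" if g in protective
                       else "Staphylococcus/Streptococcus/Moraxella")
print(pd.concat([dominant.rename("dominant"), profile.rename("profile")], axis=1))
```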
Discussion
Between 2018 and 2019, a total of 13 studies of different omics have investigated asthma treatment response in children. In detail, three pharmacogenomic, three epigenomic, three transcriptomic, one metabolomic, one microbiome, and two integrative omic studies evaluated the responsiveness to SABAs, ICSs, or LTRAs in children with asthma. These pharmacological therapies are the most commonly used therapies in the daily management of childhood asthma. However, due to the increase of biological medications as add-on therapies in severe asthma, further studies are necessary to understand the mechanisms underlying treatment response of these novel treatments.
Remarkably, in recent years, the number of pharmacogenomic, epigenomic, and metabolomic studies that included ethnically diverse minority populations such as African Americans and Hispanic/Latinos has increased compared to the past [25-27,38,63,82,83]. However, studies in populations of Asian ancestry remain scarce [51]. To move toward precision medicine, further efforts need to be made by the research community in establishing international collaborations with equitable racial/ethnic representation. Indeed, pharmacogenomic studies can benefit from trans-ethnic meta-analyses, since this approach improves signal detection by comparing effects across different LD structures in populations of diverse ancestries.
While single omics have contributed to revealing some insights into the basis of treatment response, in many cases they focus on single-marker approaches, limiting their predictive power and clinical transferability. As an example of success, systems biology has been applied to transcriptomics [52] and metabolomics [63] data to identify multiple co-expressed markers and the underlying biological pathways they are involved in. Further validation should also be sought through ex-vivo studies and directed toward the development of panels of biomarkers with clinical applicability. Moreover, many studies have investigated whole tissues with a heterogeneous composition of multiple cell types [38,39,49,51,63], but more initiatives to refine specific cell-type patterns in disease-relevant tissues are required. In addition, while replication attempts are common practice among pharmacogenomic studies, this strategy should expand to other omics, especially when sample sizes are limited. Furthermore, different definitions of treatment response have been considered across studies, such as BDR, asthma exacerbations, change in FEV 1 after ICS treatment, asthma control, or a composite measurement of several variables. While these definitions may reflect different underlying mechanisms involved in the response to asthma therapies, the lack of more homogeneous definitions limits the comparison of findings across studies.
Notably, proteomic and metagenomic studies of treatment response have been scarcely investigated. Regarding the bacterial microbiome, the resolution at species-level is yet to be achieved by sequencing the whole 16S rRNA gene. Additionally, the contribution of the virome and mycobiome to asthma treatment response remains unexplored. This could be examined by the application of metagenomic studies performing shotgun sequencing. Additionally, since the lung microbiome is difficult to sample, the nasal, the salivary, or even the gut microbiome could be examined as non-invasive alternatives to the lung microbiome due to the relationship between the upper and the lower airways [84,85] and the gut-lung axis [72]. Although proteomics and breathomics have been used for profiling of asthma [60,86,87], during the reviewed period, no study has been focused on treatment response in children with asthma. Breathomics and sputum or salivary proteomics could be straightforwardly applicable in the clinics in terms of non-invasiveness, which provides an outstanding benefit in the management of pediatric asthma.
In conclusion, while omics studies have provided insight into the biological mechanisms involved in treatment response, further efforts are required to refine predictive markers with clinical relevance. The integration of clinical features with multiple omic data will be promising in the management of pediatric asthma in order to provide markers that improve the quality of life of children with asthma.
Conflicts of Interest:
The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
A Cost-Effective Design for Combining Sensing Robots and Fixed Sensors for Fault-Tolerant Linear Wireless Sensor Networks
Wireless sensor networks (WSNs) can be used to monitor long linear structures such as pipelines, rivers, railroads, international borders, and high power transmission cables. In this case a special type of WSN called linear wireless sensor network (LSN) is used. One of the main challenges of using LSN is the reliability of the connections among the nodes. Faults in a few contiguous nodes may cause the creation of holes which will result in dividing the network into multiple disconnected segments. As a result, sensor nodes that are located between holes may not be able to deliver their sensed information which negatively affects the network's sensing coverage. Sensing robots can be used to overcome these faults and enhance the coverage. Sensing robots provide additional sensing coverage and restore the connectivity among disconnected segments. This paper develops a cost-effective design for robot-assisted fault-tolerant LSN. In this design the numbers of fixed sensor nodes and mobile sensing robots are determined such that the required reliability and sensing coverage levels are maintained while the total cost is minimized. The paper covers both uniformly and randomly deployed sensors LSNs.
Introduction
One of the important applications of LSN [1] is monitoring pipeline infrastructures such as gas, water, and oil pipelines [2,3]. Such infrastructures can extend for thousands of meters through varying environments, many of which are remote and unforgiving, such as underwater locations [4]. Using an LSN to monitor, and in some cases control, aspects of these infrastructures has become important. A fault-tolerant LSN that monitors and relays condition changes and environmental properties around the infrastructure allows operators and management to make accurate decisions regarding maintenance, adjustments, replacements, and failure recovery activities.
Other applications of LSN are railroad/subway monitoring, AC power line monitoring, river monitoring, and border monitoring [1]. LSNs used for monitoring these structures consist of a number of sensor nodes. Each sensor node is equipped with sensing elements, a battery, a wireless communication device, a low-capability processor, a small memory, and small storage. The sensors are lined along the structure to provide three main functions: sensing, information relay, and information filtering. The sensing function observes any changes of interest in or around the monitored structure. The information relay function forwards observed data to the main station using multihop communication. The filtering function filters out and aggregates multiple pieces of sensed data into smaller messages. This optimizes communication, processing, and routing as well as the power needs, thus extending the life of the network.
There are several reasons why new frameworks, protocols, and architectures are needed for different categories of LSNs as detailed in [1]. Such reasons include (1) increased routing efficiency, (2) increased network robustness and reliability, (3) improved location management algorithms, and (4) increased network security. As a result, different aspects of LSN have been investigated recently by some researchers. We developed a number of efficient routing protocols [2] and a distributed topology discovery algorithm [5] for LSN. Zimmerling et al. studied energy-efficient and localized power-aware routing in LSN [6,7]. Noori and Ardakani proposed an analysis to characterize the traffic load distribution over a randomly deployed LSN [8]. Liu et al. provided an algorithm for an energy-balanced data gathering for LSN [9]. Li provided energy-efficient node replacement schemes in LSN to balance network load and extend network lifetime [10]. Finally, Martin and Paterson developed a lightweight key redistribution mechanism for LSN [11].
One of the main challenges facing LSN is losing connectivity in the network [12]. This is due to faults in neighboring and consecutive nodes. Nodes can fail due to battery exhaustion, hardware failures, and natural or intentional damage. These consecutive faulty nodes form holes, which may divide the LSN into multiple disconnected segments. Some of these segments become isolated; thus, they cannot transfer their sensed data to the main station. As a result, the isolated segments will not provide any sensing coverage. Connectivity and coverage are generally considered very important issues in wireless sensor networks [13][14][15][16], and these issues are more challenging in LSNs due to their unique characteristic of limited availability of alternative connection paths that can be used to overcome faults.
In this paper, we investigate the use of sensing robots (SRs) to enhance the reliability of LSNs. We develop and validate an analytical model to find the number of SRs needed to maintain highly reliable and fault-tolerant LSNs. In addition, this paper develops a methodology for cost-effective design. In this design the numbers of fixed sensor nodes and mobile sensing robots are determined such that the required reliability and sensing coverage levels are maintained while the total cost is minimized.
Our model is suitable for both uniformly deployed LSNs and randomly deployed LSNs. In uniform LSNs, the distances between any two neighboring nodes are equal, while in random LSNs, these distances are not. A uniform deployment may occur on pipeline infrastructures or power lines where sensors are distributed at equal distance from each other over the pipes or power towers. An example of random deployment occurs when sensors are dropped on site using an airplane or a moving vehicle [17] along a path to monitor a river bank [18] or a border [19]. We previously investigated different types of LSN applications and deployments in [1].
In the rest of this paper we cover the related work in Section 2. Section 3 provides background information about LSNs. The reliability and coverage issues of LSNs are covered in Section 4. In Section 5, we discuss using SRs and in Section 6 we develop a model to estimate the number of SRs needed for high reliability in uniform LSNs. A cost-effective design that combines both fixed and sensing robots in uniform LSNs is discussed in Section 7, while Section 8 provides examples and experiments. We extend our framework for randomly deployed LSNs in Section 9. In Section 10 we conclude the paper.
Related Work
There has been some effort to utilize robots for enhancing deployments, operations, maintenance, and the life span of WSNs. Mobile robots were investigated for WSN deployments in normal situations [20] and in disaster situations [21]. Fletcher et al. [22] and Houaidia et al. [23] developed robot-assisted sensor relocation strategies for coverage enhancement in randomly deployed WSNs. Tong et al. [24] developed a strategy for reclaiming nodes with low or no power supply, replacing them with fully charged ones and bringing the reclaimed nodes back to an energy station for recharging. Mei et al. [25] developed and analyzed algorithms to determine the best robot to replace a faulty node in large-scale static sensor networks. Some research effort was also dedicated to developing mechanisms for WSN localization using mobile robots [26,27]. These studies were done mainly for two-dimensional WSNs that are deployed at random.
Unlike two-dimensional WSNs, in which sinks can be reached through multiple existing paths that can be exploited to enhance reliability, LSNs usually offer very few alternate paths [28]. Even a few node faults can significantly degrade network connectivity and coverage. We previously proposed the use of mobile sensors to enhance coverage in LSNs [29]. In this paper, we develop a framework for the cost-effective design of robot-assisted fault-tolerant LSNs. This framework shows that using mobile sensing robots can not only enhance the coverage of LSNs but also reduce the cost in some configurations.
Linear Wireless Sensor Networks (LSNs)
Wireless sensor nodes can be installed on a linear structure to observe the status of the structure and its surrounding environment. These nodes can be deployed uniformly or randomly to form an LSN. In both types of LSNs, the nodes are distributed to cover the monitored structure. Sensing nodes usually have limited transmission capabilities, so each node can communicate with only a few neighboring nodes [3]. Nodes collaborate among themselves in both sensing and communication functions. Multihop communication is used to transfer the sensed and control data across the linear structure and to/from the main control station. Wireless networks can solve some of the reliability problems of the wired network technologies currently used to monitor linear structures [2]. For example, a WSN can still function even when some nodes are disabled: faults in sensor nodes can be tolerated by using other available nodes to cover the faulty ones. By using dense LSNs with a high number of nodes and/or wide wireless transmission ranges, the network can maintain connectivity, and the sensed and control data can still be transmitted to their destination despite some node faults. For example, in the uniform LSN in Figure 1, each node can communicate with two nodes to the left and two nodes to the right. If nodes 3 and 5 are damaged, node 4 can still send its sensed data through node 2 or node 6. The maximum number of neighboring nodes that each node can communicate with on each side is defined as the maximum jump factor (MJF). For example, the MJF in Figure 1 is 2; thus, node 4 can communicate with at most two nodes on each side. A small sketch of this jump logic is given below.
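As an illustration of the jump logic just described, the following sketch (ours, not part of the original paper; class and method names are our own) checks whether a line of nodes contains a run of faults too long to jump over:

```java
/** Illustrative sketch of the MJF jump logic; identifiers are ours. */
public class JumpCheck {

    /**
     * Returns true if no run of consecutive faulty nodes reaches the MJF,
     * i.e., every hole can still be jumped over by extending the wireless
     * range of the forwarding node.
     */
    static boolean noDisconnectingHole(boolean[] healthy, int mjf) {
        int run = 0; // length of the current run of faulty nodes
        for (boolean h : healthy) {
            run = h ? 0 : run + 1;
            if (run >= mjf) {
                return false; // a disconnecting hole exists
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Figure 1 scenario: MJF = 2, nodes 3 and 5 faulty (indices 2 and 4).
        boolean[] nodes = {true, true, false, true, false, true, true};
        System.out.println(noDisconnectingHole(nodes, 2)); // true
    }
}
```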
Each sensor node on a linear structure is usually equipped with a transceiver, a processor, a battery, memory, and small storage, in addition to one or more sensing elements. Power consumption is critical to the life span of LSNs. Linear structures need to be monitored throughout their life span, which can extend to tens of years, so the associated LSNs should also be long lived. Unlike wired networks, where power is not a constraint because the wires deliver power to the individual sensor nodes, LSN designers must consider power one of the main constraints of the wireless system. Power in a sensing node is consumed by every element on it: the transceiver consumes power while waiting for a signal and when transmitting or receiving data, and the sensing elements and the processor consume power as well. Careful scheduling of these resources is needed to optimize power consumption. Although increasing the transmission range offers better communication reliability, the nodes consume more energy. A dynamic configuration of the wireless transmission range can provide better power management. An example of this configuration of a uniform LSN is shown in Figure 2. Here, nodes 3 and 5 have failed; therefore, the wireless range of node 4 is increased to reach nodes 2 and 6, while the other nodes use a smaller transmission range to reduce power consumption. The sensed data is transferred along the line to the main station in either direction, or in both if both ends are connected to the main station. Connecting both ends to the main station doubles the communication reliability [3,4].
Faults and Coverage in LSN
Sensor nodes are usually initially distributed to cover the whole linear structure in terms of both sensing and communication coverage. A node senses interesting events, and each observation is reported to the main station using multihop communication. Sensors in LSN nodes can have different sensing range capabilities. In a uniform LSN where nodes are distributed at uniform distances, let us define the distance between two neighboring nodes i and i + 1 as one distance unit, d. Then an LSN with healthy nodes provides 100% sensing coverage of the monitored linear structure if the node sensing range (NSR) is one distance unit. This coverage is reduced if there are some faulty nodes. However, if the NSR is larger than d, then some node faults can be tolerated without reducing the sensing coverage percentage. For example, suppose the NSR is 2 distance units. The area between two neighboring nodes i and i + 1 is then monitored twice, by both nodes, and remains under monitoring even if node i or node i + 1 is faulty.
An LSN is usually designed to provide full sensing coverage. However, over time, some nodes will develop faults. Let us consider a one-segment network with homogeneous nodes, a uniform distance between each two nodes, and a specific MJF. Let us define a hole, H, as a run of contiguous faulty nodes in the LSN. Each hole can have a size of one or more nodes. For example, in Figure 3 we have two holes, H_1 and H_2. The first has size 1, as only node 2 is faulty, while the second hole, H_2, has size 3, as nodes 5, 6, and 7 are faulty. We write Z(H_1) = 1 and Z(H_2) = 3, where Z is the size function of a hole. In addition, we can use H = [a : b] to specify the ID of the first faulty node, a, and the ID of the last faulty node, b, in H. Thus, Z(H) = b − a + 1.
A special type of hole is a disconnecting hole (DH), which disconnects the network into two segments that cannot directly communicate with each other even using an extended wireless range. In other words, the size of a DH is larger than or equal to the MJF used by the nodes in the LSN: Z(DH) ≥ MJF. For example, if the MJF of the LSN in Figure 3 is 2, then H_2 is a DH while H_1 is not, because Z(H_2) = 3, which is larger than the MJF, while Z(H_1) = 1, which is less than the MJF. The sketch below illustrates this bookkeeping.
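The following sketch (our illustration; the `Hole` record and method names are ours, not from the paper) extracts holes from a line of node states and classifies them as disconnecting or not, following the definitions above:

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative hole bookkeeping; identifiers are ours. */
public class Holes {

    /** A hole H = [a : b] of contiguous faulty nodes, with Z(H) = b - a + 1. */
    record Hole(int first, int last) {
        int size() { return last - first + 1; }
        /** A disconnecting hole satisfies Z(DH) >= MJF. */
        boolean isDisconnecting(int mjf) { return size() >= mjf; }
    }

    /** Scans the node states and returns every run of contiguous faults. */
    static List<Hole> findHoles(boolean[] healthy) {
        List<Hole> holes = new ArrayList<>();
        int start = -1;
        for (int i = 0; i < healthy.length; i++) {
            if (!healthy[i] && start < 0) start = i;   // a hole begins
            if (healthy[i] && start >= 0) {            // the hole ends
                holes.add(new Hole(start, i - 1));
                start = -1;
            }
        }
        if (start >= 0) holes.add(new Hole(start, healthy.length - 1));
        return holes;
    }
}
```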
There are different types of faults in a one-segment LSN with homogeneous nodes distributed uniformly over the linear structure. Assume that both ends of the LSN are connected to the main station. The faults have different impacts on the communication and sensing coverage. For example, if there are multiple holes and none of them is a disconnecting hole, the LSN functions normally. Although there are some holes with sizes smaller than the MJF, any sensed information can be transferred to the main station through the left or right side of the network: any hole can be jumped over by extending the wireless range of the forwarding node up to the MJF, so as to reach the next functioning node. The sensing coverage within each hole may or may not be affected, depending on the size of the hole and the NSR. Let U(H) be the size of the uncovered sensing area in H. The two healthy nodes bounding a hole of size Z(H) are Z(H) + 1 distance units apart, and each covers NSR/2 units into the hole, so

U(H) = max(0, Z(H) + 1 − NSR). (1)

For example, if Z(H) is 3 and NSR is 4, then U(H) is 0; if Z(H) is 6 and NSR is 3, then U(H) is 4. If there are n nodes (hence a structure of L = n − 1 distance units) and k holes in an LSN, the sensing coverage percentage, C, is

C = (1 − (U(H_1) + ... + U(H_k)) / L) × 100%. (2)

In another fault case, there are multiple holes, exactly one of which is a disconnecting hole. In this case the LSN continues to function. The single DH divides the LSN into two segments, one on the left side of the DH and the other on the right side. As the size of the DH is larger than or equal to the MJF, the DH disables information exchange between the two segments, as shown in Figure 4 (an LSN with MJF = 2 divided into two segments by a DH, each segment forwarding in a different direction). In this case, all healthy nodes on the left side of the DH use a right-to-left forwarding direction, while all healthy nodes on the right side use a left-to-right forwarding direction to communicate with the main station. The sensing coverage of the LSN can be calculated using (1) and (2). In the example shown in Figure 5, the size of the DH is 3 while the NSR is 4, so the healthy nodes 6 and 10 cover the whole area of the DH. In another example (see Figure 6) the size of the DH is 5 while the NSR is 4; in this case, 2 distance units are not covered by either of the healthy nodes 6 and 12.
In another fault case, there are multiple holes, exactly two of which are disconnecting holes; the LSN is then divided into three segments. Let the DHs be DH_1 and DH_2. The first segment lies before DH_1, the second between DH_1 and DH_2, and the third after DH_2, as shown in Figure 7 (the segment between DH_1 and DH_2 is not connected). In this figure, DH_1 spans nodes 5 to 6 while DH_2 spans nodes 11 to 13. The first segment covers nodes 1 through 4, the second nodes 7 through 10, and the third nodes 14 through 17. All nodes in the first segment can communicate with the main station in the right-to-left direction, while all nodes in the
third segment can communicate with the main station in the left-to-right direction. However, healthy nodes in the second segment cannot communicate with the main station at all. In yet another case, there are multiple holes, more than two of which are disconnecting holes. Assume there are m DHs, DH_1 through DH_m; the LSN is then divided into m + 1 segments. With these m DHs, all nodes from the first node of DH_1 through the last node of DH_m are unable to communicate with the main station. As a result, the area from DH_1 to DH_m is uncovered. The effect of this type of fault on network coverage is equivalent to having one large disconnecting hole that extends from the first node of DH_1 to the last node of DH_m.
Equations (1) and (2) can then be used to find the uncovered area and the coverage percentage resulting from this large DH, as in the sketch below.
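A small sketch of equations (1) and (2) as reconstructed above (our illustration; identifiers are ours), checked against the worked examples in the text:

```java
/** Illustrative implementation of equations (1) and (2); identifiers are ours. */
public class SensingCoverage {

    /** Equation (1): U(H) = max(0, Z(H) + 1 - NSR) uncovered units inside a hole. */
    static int uncovered(int holeSize, int nsr) {
        return Math.max(0, holeSize + 1 - nsr);
    }

    /** Equation (2): C = (1 - sum of U(H_i) / L) * 100 for a line of L distance units. */
    static double coveragePercent(int[] holeSizes, int nsr, int lengthUnits) {
        int totalUncovered = 0;
        for (int z : holeSizes) totalUncovered += uncovered(z, nsr);
        return 100.0 * (1.0 - (double) totalUncovered / lengthUnits);
    }

    public static void main(String[] args) {
        System.out.println(uncovered(3, 4)); // 0, matching the Figure 5 example
        System.out.println(uncovered(5, 4)); // 2, matching the Figure 6 example
        System.out.println(uncovered(6, 3)); // 4, matching the example in the text
    }
}
```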
Using Sensing Robots
Mobile sensing robots (SRs) can be used to enhance the communication and sensing coverage of LSNs. SRs are equipped with all the components a sensing node has, including sensors, a processor, memory, storage, and a wireless transmitter/receiver, in addition to specialized robotic devices. Each SR can move linearly along the LSN to cover a DH and provide sensing and communication coverage, as shown in Figure 8. When placed at a DH, the SR restores connectivity between the disconnected segments in addition to providing sensing coverage. If the MJF is 2, then one SR can cover a DH of size up to 3 nodes; in general, each SR can cover a DH of maximum size 2·MJF − 1. If the size of a DH exceeds 2·MJF − 1, then two or more SRs are needed. Requiring that no gap between the healthy endpoints and the placed SRs exceeds the MJF, i.e., Z + 1 ≤ (r + 1)·MJF, gives the number of SRs needed for a DH of size Z (sketched below):

r(Z) = ⌈(Z + 1)/MJF⌉ − 1. (3)

Multiple SRs can be distributed across the linear structure to provide recovery from any DHs. They can also be reallocated as new DHs occur, to maintain the best possible sensing coverage. To understand the importance of SR reallocation, assume that the current status of the LSN is as shown in Figure 8: the MJF is 2, only two SRs are available, and both are already covering existing DHs. Using these two SRs, the whole LSN is covered.
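A one-class sketch of equation (3) as reconstructed above (our illustration; identifiers are ours), using integer ceiling division:

```java
/** Equation (3) as reconstructed above; identifiers are ours. */
public class SrCount {

    /**
     * SRs needed for one DH of size z: placing r SRs requires
     * z + 1 <= (r + 1) * MJF, hence r = ceil((z + 1) / MJF) - 1.
     */
    static int srsForHole(int z, int mjf) {
        return (z + mjf) / mjf - 1; // integer ceiling of (z + 1)/mjf, minus 1
    }

    public static void main(String[] args) {
        System.out.println(srsForHole(3, 2)); // 1: one SR covers up to 2*MJF - 1
        System.out.println(srsForHole(4, 2)); // 2: larger DHs need two or more
    }
}
```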
Now assume that after some time, two more DHs occur in the LSN as shown in Figure 9, making four DHs in total. The new DHs reduce the coverage significantly, as the whole area (14 nodes) from node 2 to node 15 becomes uncovered. The assigned SRs are therefore reallocated to cover DH_1 and DH_4 instead of DH_2 and DH_3. The result is shown in Figure 9, where the reallocation leaves only 5 nodes (7 through 11) without coverage, compared with 14 nodes before.
Let a be the number of available SRs and let r be the number of SRs needed to cover all DHs. When more SRs are needed than are available (i.e., r > a), the best reallocation strategy is to complete the coverage in such a way that the smallest disconnected segment is left out. That is, we find the shortest stretch of the network whose DHs would require the missing (r − a) SRs, place no SRs there, and place the available SRs at the remaining DHs. Leaving out the shortest segment minimizes the uncovered area and thus maximizes the covered area; a greedy sketch is given below.
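The following greedy sketch (our simplified illustration of the rule above; arrays and names are ours) scans consecutive runs of DHs and returns the run that frees the (r − a) missing SRs while spanning the fewest nodes:

```java
/** Simplified greedy sketch of the reallocation rule; identifiers are ours. */
public class Reallocation {

    /**
     * Finds the consecutive run of DHs to leave uncovered so that at least
     * `deficit` = r - a SRs are freed while the abandoned stretch of nodes is
     * as short as possible. dhFirst[i]/dhLast[i] are the first/last node IDs
     * of DH i (ordered left to right); dhCost[i] is its SR count from eq. (3).
     */
    static int[] runToSkip(int[] dhFirst, int[] dhLast, int[] dhCost, int deficit) {
        int bestI = -1, bestJ = -1, bestSpan = Integer.MAX_VALUE;
        for (int i = 0; i < dhCost.length; i++) {
            int freed = 0;
            for (int j = i; j < dhCost.length; j++) {
                freed += dhCost[j];
                if (freed >= deficit) {          // skipping DHs i..j frees enough SRs
                    int span = dhLast[j] - dhFirst[i] + 1;
                    if (span < bestSpan) { bestSpan = span; bestI = i; bestJ = j; }
                    break;
                }
            }
        }
        return new int[] {bestI, bestJ};         // DH indices to leave uncovered
    }
}
```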
Estimating the Number of SRs
For good network coverage, it is important to know the number of SRs needed to cover possible DHs in the LSN. In this section, we develop an analytical model that estimates the number of SRs needed to maintain high LSN coverage. Consider an LSN with n uniformly distributed nodes and a specific MJF, in which each node independently fails with probability f within a certain period of time T. T can be the time until a periodic full maintenance, or the time for which the LSN is required to live. To determine the number of needed SRs, R, we first need the expected number of holes that may occur, and their sizes, within time T. Let us first find the probability that a healthy node in the LSN has a hole of exactly size s directly in front of it, which requires s consecutive faulty nodes followed by a healthy one:

P(s) = f^s · (1 − f). (4)

Here, as f increases, the chance of a healthy node having a hole of size s in front of it also increases. Since there are n nodes in the LSN, and every node other than the last s nodes (which cannot have a hole of size s in front of them) has the same chance, the expected number of holes of size s in the LSN is

N(s) = (n − s) · P(s). (5)

Replacing P(s) with (4), we have

N(s) = (n − s) · f^s · (1 − f). (6)

Here, as the LSN size increases, the chance of having more holes in the LSN also increases. Now, let R_s(n, f, MJF) be the number of SRs needed to cover all expected holes of size s:

R_s(n, f, MJF) = N(s) · r(s), (7)

where r(s) is the number of SRs needed to cover one hole of size s. Replacing r(s) with (3), we have

R_s(n, f, MJF) = (n − s) · f^s · (1 − f) · (⌈(s + 1)/MJF⌉ − 1). (8)

For any realistic value of f, the probability of holes larger than a certain size is very low, so we introduce a variable v that represents the maximum size of possible holes. Let R(n, f, MJF) be the expected total number of SRs needed to cover all disconnecting holes. Summing over the disconnecting sizes, s = MJF through v, we have

R(n, f, MJF) = Σ_{s=MJF}^{v} (n − s) · f^s · (1 − f) · (⌈(s + 1)/MJF⌉ − 1). (9)

A sketch of this computation follows below. To validate the above equations, a number of simulation experiments were conducted covering the different LSN configurations listed in Table 1. These configurations differ in the number of nodes (n), node fault rate (f), maximum jump factor (MJF), and node sensing range (NSR). The table also lists the expected number of needed SRs for each configuration, calculated using (9).
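A direct sketch of equation (9) as reconstructed above (our illustration; identifiers are ours). In practice the result is rounded up when provisioning robots:

```java
/** Equation (9) as reconstructed above; identifiers are ours. */
public class SrEstimate {

    /** Expected SRs for all disconnecting holes of sizes MJF through v. */
    static double expectedSRs(int n, double f, int mjf, int v) {
        double total = 0.0;
        for (int s = mjf; s <= v; s++) {
            double holesOfSizeS = (n - s) * Math.pow(f, s) * (1 - f); // eq. (6)
            int srsPerHole = (s + mjf) / mjf - 1;                     // eq. (3)
            total += holesOfSizeS * srsPerHole;
        }
        return total; // round up with Math.ceil when provisioning robots
    }
}
```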
A set of simulation experiments was conducted for the LSNs with the configurations listed in Table 1. The experiments covered a setup without the reallocation strategy (WRS), using R sensing robots, where R is calculated from (9) and listed in Table 1. In addition, experiments were conducted to find the impact of different numbers of SRs on the coverage, using R − 4, R − 3, R − 2, R − 1, R, and R + 1 SRs. The main purpose of these experiments was to determine the impact of the number of SRs on the coverage and to validate whether the calculated R SRs are enough to maintain high coverage; in other words, whether using fewer than R SRs decreases the expected coverage while using more than R SRs does not increase it significantly. For each simulation experiment, 10,000 faulty LSN cases were generated randomly to cover different types of faults for each situation, and the average over these 10,000 cases was taken as the expected result. The results are shown in Figure 10. As we can see, using R SRs ensures high coverage for all selected network configurations: the expected coverage is between 97% and 98% in all cases, which is generally a good range. On the other hand, the expected coverage drops significantly as fewer than R SRs are used, while using more than R SRs only slightly enhances the expected coverage of the LSN. These results validate the developed equations.
The coverage range of 97% to 98% obtained when using R SRs is due to the presence of holes that are not disconnecting holes in the network. These holes only slightly reduce the coverage, whereas the SRs cover most of the disconnecting holes, so the coverage does not drop significantly.
Cost-Effective Design
In the previous section we developed an analytical model to find the number of SRs needed to maintain high coverage in LSNs. In this section, we develop a technique for designing cost-effective LSNs that combine both fixed sensor nodes and SRs. Assume that a linear area needs to be monitored by an LSN, and that high coverage must be maintained for a time period T, which can be the time until a periodic full maintenance or the time for which the LSN is required to live. The LSN has a combination of fixed sensors and SRs. Assume that each fixed node independently fails with probability f within the period T, while the SRs are designed to be highly reliable and of high quality. Let the cost of each fixed sensor node be one unit and the cost of each SR be c units; in other words, each SR costs c times as much as a fixed sensor, mainly because of its mobility feature and the additional functional units it needs. The objective is to find how many fixed sensor nodes and SRs are needed to maintain high coverage during the period T such that the total cost of the LSN is minimized. To find the needed numbers of fixed sensor nodes (n) and SRs (R), we first find the minimum number of fixed nodes needed for full connectivity in the LSN, n_min. Full connectivity with the minimum number of nodes is obtained when the MJF is 1. Let the length of the monitored area be L and the maximum wireless range of each node be W. Each fixed sensor can then communicate with one node on either side if the fixed nodes are placed at the maximum distance W from each other, so the minimum number of fixed sensor nodes needed to maintain full connectivity is

n_min = ⌈L/W⌉. (10)

For full coverage we can use LSNs with 1·n_min, 2·n_min, ..., or j·n_min nodes, where j is the MJF. Each of these options needs a different number of fixed nodes and SRs to maintain high coverage within the period T. Let the total cost of the fixed sensors be COST_F and the total cost of the sensing robots be COST_R:

COST_F = j · n_min, COST_R = c · R. (11)

The number of SRs, R, needed for each option can be found using (9). The total cost (TC) of each LSN option is then

TC(n_min, c, MJF) = COST_F + COST_R. (12)
We can calculate the total cost of the different options, with MJF values from 1 to g, where g is the minimum MJF value for which full LSN coverage during the period T can be maintained using only fixed sensor nodes, without any SRs. These different designs have n_min, 2·n_min, 3·n_min, ..., and g·n_min fixed sensor nodes with MJF values of 1, 2, 3, ..., and g, respectively, and need different numbers of SRs to maintain high coverage, which can be calculated from (9). As we double the MJF value, the NSR also doubles, owing to the increased density of the fixed sensor nodes. The total cost of each design option, with its numbers of fixed and mobile nodes, follows from (12); the option with the minimum total cost is the most cost-effective choice for the LSN, as sketched below.
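A sketch of this enumeration (our illustration; identifiers are ours), reusing `expectedSRs` from the previous sketch:

```java
/** Sketch of the design search; identifiers are ours. */
public class DesignSearch {

    /**
     * Enumerates MJF = 1..g, sizes the network as j * nMin fixed nodes
     * (one cost unit each), adds ceil(R) SRs from equation (9) at c units
     * each, and returns the MJF with the minimum total cost, eq. (12).
     * v is the maximum hole size considered in equation (9).
     */
    static int cheapestMjf(int nMin, double f, int g, double c, int v) {
        int bestMjf = 1;
        double bestCost = Double.MAX_VALUE;
        for (int j = 1; j <= g; j++) {
            int fixedNodes = j * nMin;                             // COST_F, eq. (11)
            double srs = Math.ceil(SrEstimate.expectedSRs(fixedNodes, f, j, v));
            double totalCost = fixedNodes + c * srs;               // TC, eq. (12)
            if (totalCost < bestCost) { bestCost = totalCost; bestMjf = j; }
        }
        return bestMjf;
    }
}
```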
Example and Experiments
As an example, assume that fixed sensor nodes will be used to form an LSN in which each node has f = 0.25 over a period T; we assume f is high because T is long. Table 2 lists the resulting design options: the needed numbers of SRs are calculated from (9), while the total costs (TC) are calculated from (12).
As we can see, when the cost of each SR is 10 times that of a fixed sensor node (c = 10), the most cost-effective design uses 400 fixed sensor nodes and 20 SRs, with an MJF of 2. When c = 20, the most cost-effective design uses 600 fixed nodes and 8 SRs, and when c = 50, it uses 800 fixed nodes and 3 SRs. For all three cost factors, these designs provide high coverage with savings compared to maintaining the same coverage with additional fixed sensor nodes alone: the cost saving is around 50%, 36.7%, and 20.8% for c = 10, c = 20, and c = 50, respectively. This shows that adding sensing robots to a fixed-sensor network can provide cost savings in some cases.
To validate the different LSN designs listed in Table 2, two sets of simulation experiments were conducted. The experiments simulate coverage: each LSN is initially deployed to provide full coverage, but over time faults occur in some nodes and reduce it, after which the LSN needs maintenance to regain full coverage. The simulations were developed using a Java program, in which 10,000 LSNs were generated randomly per experimental situation to cover different types of faults; the reported results are the averages over all these faulty LSNs. The first set of experiments checked whether all candidate designs provide high coverage in the presence of faults. The results, shown in Figure 11, confirm that all solutions maintain high coverage in the presence of faults; any design can therefore be used, but the designs differ in cost.
The second set of experiments compared the cost-effective design with varying numbers of SRs. The main purpose was to determine the impact of the number of SRs on the coverage and to validate whether the calculated R SRs are enough to maintain high coverage; in other words, whether using fewer than R SRs decreases the expected coverage while using more than R SRs does not increase it significantly. The experiments were conducted for c = 10, c = 20, and c = 50; the results are shown in Figures 12, 13, and 14, respectively. As we can see, using R SRs ensures high coverage for all selected network configurations, with an expected coverage between 96.5% and 99% in all cases, which is generally a good range. On the other hand, the expected coverage using fewer than R SRs drops significantly, while using more than R SRs only slightly enhances the expected coverage of the LSN. These results validate the developed model.
Randomly Deployed LSN
In the previous sections we developed and validated the cost-effective design framework for robot-assisted, fault-tolerant, uniformly deployed LSNs. However, in some practical scenarios the sensor nodes of an LSN are deployed in a random fashion. One example of such a deployment is when the sensor nodes are dropped from an aircraft, ground vehicle, or underwater vehicle along a given path [17]. A deployment of this kind can form an LSN that monitors the area surrounding a long pipeline in an unattended region, providing rapid monitoring against external intentional and unintentional threats [4]: the deployed sensors can report, for example, any movement, sound, or hazard around the pipeline so that fast safety actions can be taken, including sending security and emergency vehicles to the area [3].
Another example is when an autonomous underwater vehicle drops underwater sensor nodes for sea-border monitoring [30]. In these examples, although the aim is to deploy sensor nodes uniformly to cover the complete linear structure, the nodes are most likely deployed randomly, with varying distances between neighboring nodes. Actual landing locations may deviate from the targeted locations because of environmental factors such as winds or waves, or geographic factors such as slopes and valleys. In this type of deployment, some sensor nodes may also be damaged, so the network connectivity and sensing coverage may be significantly impacted. In this section we customize the framework to cover randomly deployed LSNs. Two possible solutions to this problem are either to deploy more redundant fixed sensor nodes or, as we propose in this paper, to use a combination of fixed sensor nodes and reliable mobile sensing robots. In the latter case, a cost-effective design is needed to determine the numbers of fixed sensor nodes and mobile sensing robots in the network.
Assume that the plan is to deploy a node every D_1 meters to form a fully connected LSN with a minimum number of nodes, while deployment errors of at most ±e meters may occur, where |e| < D_1. The maximum distance between any two neighboring nodes is then D_1 + 2e meters, while the minimum distance is D_1 − 2e meters; these extremes arise when one of the two neighboring nodes lands at +e from its targeted location while the other lands at −e. If the maximum wireless range of each node is W, then full connectivity requires D_1 + 2e ≤ W. To minimize the number of sensor nodes, we set D_1 + 2e = W, which gives the planned distance

D_1 = W − 2e. (13)

This means that if one node is deployed every D_1 meters and no node is faulty, the network is fully connected, and the MJF can be considered to be 1. Let the length of the monitored area be L; the minimum number of fixed sensor nodes needed to maintain full network connectivity is then ⌈L/D_1⌉. Now, to make the MJF equal to 2, we need 3 nodes placed within a range of W meters. These 3 nodes are deployed with random errors in the range of ±e. The location of the middle node does not matter, since both side nodes can reach it regardless of its deployment error; however, the first node may land at −e and the last at +e, so the maximum distance between the first and the last is 2·D_2 + 2e, where D_2 is the planned deployment distance between nodes. Maintaining an MJF of 2 therefore requires 2·D_2 + 2e ≤ W, and minimizing the number of deployed nodes gives 2·D_2 + 2e = W. This keeps the MJF of 2 even in the presence of random deployment errors. Similarly, for MJF > 2 the deployment of the middle nodes does not matter, only the locations of the first and last node within each W meters; thus MJF·D_MJF + 2e ≤ W, and minimizing the number of nodes gives MJF·D_MJF + 2e = W. Hence, for different values of the MJF, the planned distance between any two neighboring nodes is

D_MJF = (W − 2e)/MJF, (14)

and the number of needed nodes for different MJF configurations is

n_MJF = ⌈L/D_MJF⌉. (15)

As an example, consider an LSN that must monitor an area of length 75,000 meters. The wireless communication range, W, of the nodes is 400 meters, f is 0.25, and the deployment error is in the range of ±19 meters. The planned deployment distances and the numbers of needed sensors for the different MJF configurations are shown in Table 3. The needed number of SRs, R, to maintain high reliability can be found from (9), while the total cost is

TC = n_MJF + c · R. (16)

The cost of each fixed sensor node is one unit and the cost of each SR is c units; in Table 3, c is equal to 20 units. As Table 3 shows, the most cost-effective configuration uses an MJF of 3: 625 fixed nodes placed every 120 meters plus 8 SRs, for a total cost of only 785 units. A sketch reproducing this planning arithmetic follows below.
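The following sketch (our illustration; identifiers are ours) reproduces the planning arithmetic of equations (14)-(16) for the example above. Rounding the planned spacing down to a whole meter is our assumption, chosen because it reproduces the 625-node, 120-meter figures quoted for MJF = 3:

```java
/** Planning sketch for the randomly deployed example; identifiers are ours. */
public class RandomDeploymentPlan {
    public static void main(String[] args) {
        double L = 75_000;  // length of the monitored area in meters
        double W = 400;     // wireless communication range in meters
        double e = 19;      // maximum deployment error in meters
        double c = 20;      // cost of one SR in fixed-node units

        for (int mjf = 1; mjf <= 4; mjf++) {
            double spacing = Math.floor((W - 2 * e) / mjf); // eq. (14), floored
            int nodes = (int) Math.ceil(L / spacing);       // eq. (15)
            System.out.printf("MJF=%d spacing=%.0f m fixed nodes=%d%n",
                    mjf, spacing, nodes);
        }
        // For MJF = 3: spacing = 120 m and 625 fixed nodes; with R = 8 SRs
        // from equation (9), TC = 625 + 20 * 8 = 785 units, eq. (16).
    }
}
```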
To validate this result, a simulation experiment was conducted to measure the sensing coverage of the most cost-effective configuration, the LSN with an MJF of 3. In this experiment, we measured the coverage with varying numbers of available SRs, both fewer and more than the number needed: while 8 SRs are needed according to Table 3, we experimented with 1 to 10 SRs. The simulation was developed using a Java program, in which 10,000 LSNs were generated randomly per experimental situation, with different types of faults and node deployment distances; the final results are the averages over all these LSNs. The sensor nodes in the experiments were distributed randomly using a Gaussian distribution, which is a good model for randomly deployed WSNs and has been used in many research papers in the field [17,[31][32][33].
The results are shown in Figure 15, where we see that using 8 SRs ensures high coverage: the expected coverage is around 97%, which is generally a good percentage. On the other hand, the expected coverage using fewer than 8 SRs drops significantly, while using more than 8 SRs only slightly enhances the expected coverage of the LSN. This result validates the developed model. Two more sets of experiments were conducted to measure the impact of the number of SRs on the coverage for two LSN configurations, with MJFs of 2 and 4. Both configurations provide good coverage if the right numbers of SRs are used (21 and 3, respectively); however, both cost more than the LSN with the MJF of 3. The results are shown in Figures 16 and 17. As the two figures show, using fewer than the calculated number of SRs can significantly drop the expected coverage, while using more than the calculated number only slightly enhances it.
Conclusion
This paper discussed the use of sensing robots to create fault-tolerant LSNs by enhancing the communication and coverage reliability of these networks. A reallocation strategy for the sensing robots was developed to recover the coverage lost to node faults. An analytical model was developed to estimate the number of sensing robots needed to provide high coverage. In addition, a cost-effective design for fault-tolerant LSNs combining fixed sensors and sensing robots was developed for both uniformly and randomly deployed LSNs. The developed model provides an effective solution for enhancing coverage in the presence of faults, making it suitable for critical applications on infrastructures located in areas or harsh terrains that cannot easily be reached, such as underwater pipelines that extend for hundreds of miles at depths that can exceed hundreds of meters. Incorporating the model at the design stage gives LSN designers an accurate estimate of the numbers of nodes and SRs needed to maintain high coverage during the lifespan of the LSN while minimizing the total cost.
Extracellular vesicles from monocyte/platelet aggregates modulate human atherosclerotic plaque reactivity
Abstract

Extracellular vesicles (EVs) are emerging as key players in different stages of atherosclerosis. Here we provide evidence that EVs released by mixed aggregates of monocytes and platelets in response to TNF-α display pro-inflammatory actions on endothelial cells and atherosclerotic plaques. Tempering platelet activation with Iloprost, Aspirin or a P2Y12 inhibitor affected the quantity and phenotype of the EVs produced. Proteomics of EVs from cells activated with TNF-α alone or in the presence of Iloprost revealed a distinct composition, with interesting hits like annexin-A1 and gelsolin. When added to human atherosclerotic plaque explants, EVs from TNF-α stimulated monocytes augmented the release of cytokines. In contrast, EVs generated by TNF-α together with Iloprost produced minimal plaque activation. Notably, patients with coronary artery disease requiring percutaneous coronary intervention had elevated plasma numbers of monocyte, platelet, and double-positive EV subsets. In conclusion, EVs released following monocyte/platelet activation may play a role in the development and progression of atherosclerosis. As attenuating platelet activation modifies the composition of the EVs released from monocyte/platelet aggregates, curbing their pro-inflammatory actions may offer therapeutic avenues for the treatment of atherosclerosis.
INTRODUCTION
Extracellular vesicles (EVs) are cell-borne particles that contain a complex biological cargo composed of nucleic acids, proteins and lipids. First described by Wolf in 1967, EVs were initially reported to have prothrombotic functions (Hargett & Bauer, 2013). Since then, EVs have been found to have several properties with functional consequences in physio-pathological processes, including modulation of the adaptive immune response (Raposo et al., 1996), tumorigenesis (Pucci et al., 2016), and coagulation (Wei et al., 2018). The majority of work conducted so far with EVs has focused on identifying markers of the cell of origin; however, we have proposed that their composition, and hence their properties, vary to reflect the environment surrounding the source cell (Dalli et al., 2013).
Atherosclerosis is the most prominent and common cause of cardiovascular diseases, responsible for ∼50% of all deaths in Europe (Nichols et al., 2012). A phenomenon identified in atherosclerosis and other vascular diseases is the formation of leukocyte and platelet aggregates within the circulation. In particular, increased numbers of monocyte/platelet aggregates are reported in patients with atherosclerosis (Furman et al., 1998, 2001) or atherothrombosis (Neumann et al., 1999), and animal models have unravelled some of the molecular mechanisms underlying this inter-cellular interaction (Del Conde et al., 2005; Patko et al., 2012). While the presence of these aggregates has been used as a possible biomarker, there has been little investigation of their possible function, especially in relation to the EVs they can release. This could be relevant, since EVs can cause endothelial dysfunction, vascular calcification, unstable plaque progression, rupture and thrombus formation.
In the context of plaque formation and destabilization, studies have focused on vesicles released from the plaques. For example, atherosclerotic plaque EVs expressed surface antigens of leukocyte origin (including major histocompatibility complex classes I and II) and promoted T-cell proliferation (Mayr et al., 2009). In terms of the effects of EVs added to the plaque, there is in vivo evidence that monocyte EVs promote leukocyte adhesion to post-capillary venules and T-cell infiltration within the plaque (Hoyer et al., 2012). The majority of these studies have been conducted with murine models and in vitro cellular assays; however, a better assessment of the inflammatory processes in human atherosclerosis can be attained through organ culture approaches, rather than less complex experimental settings.
On these bases, we hypothesized that monocyte/platelet aggregates could produce EVs with specific pro-inflammatory effects and that the composition of these vesicles could vary, and their properties be tempered, by regulating platelet activation. Thus, we defined the composition of these EVs and their biological functions once added to endothelial cells and human atherosclerotic plaques. Finally, we detected specific subsets of EVs in patients affected by coronary artery disease. Altogether these data suggest that EVs could represent long-term effectors of monocyte/platelet aggregates that exacerbate plaque activation, and that antiplatelet therapies could reduce their generation and downstream properties.
Monocyte purification and flow cytometry characterization
All healthy volunteers gave written, informed consent to blood collection and the procedure was approved by the Queen Mary Ethics of Research Committee (QMERC2014.61). Blood (30 ml) was drawn using a 19G butterfly needle with a tourniquet applied and anticoagulated with 0.32% w/v sodium citrate. To inhibit platelet activation, Iloprost (2 μM; stable prostacyclin analogue; Sigma-Aldrich, Gillingham, UK) was added to whole blood prior to cell separation. In separate experiments, aspirin (30 μM; Sigma-Aldrich) or clopidogrel (3 μM; Tocris, Bristol, UK) was used. Blood was centrifuged at 150 × g for 20 min and platelet rich plasma (PRP) removed and replaced with PBS + 1 mM EDTA. Unless otherwise indicated, all experiments were performed with PBS without calcium and magnesium. Following another centrifugation step, RosetteSep cocktail (15028, StemCell Technology, Vancouver, Canada) was added (50 μl/ml of blood) and samples rested at room temperature for 20 min. Blood was then diluted 1:1 with PBS + 1 mM EDTA, layered over 15 ml Histopaque 1077 (Sigma-Aldrich, Gillingham, UK), and centrifuged for 20 min at 1200 × g at room temperature to separate monocytes from other cells. The monocyte layer was harvested and washed at 300 × g for 10 min. Following another washing step, the monocyte pellet was re-suspended in phenol red-free RPMI (Gibco, Waltham, US) and the concentration adjusted as needed.
For peripheral blood mononuclear cell (PBMC) isolation, whole blood was centrifuged at 130 × g for 20 min and plasma removed. For every 30 ml of whole blood, erythrocytes were depleted by sequentially layering 10 ml PBS followed by 8 ml of 6% w/v dextran (high molecular weight, Sigma-Aldrich, in PBS) and gently inverting. After 15 min, the leukocyte-rich fraction was collected, layered over Histopaque 1077 and centrifuged for 30 min at 450 × g at room temperature to separate granulocytes from PBMC. PBMCs were washed once by centrifuging at 300 × g and re-suspended in RPMI for further use. Blood aliquots (100 μl) were stimulated with TNF-α (50 ng/ml; T0157-10UG, Sigma-Aldrich) or vehicle (PBS) for 1 h at 37 °C. After stimulation, erythrocytes were lysed with a lysing reagent kit (6602764, Beckman Coulter) and samples prepared for ISx analysis.
For platelet isolation, PRP was processed into washed platelets (WP) by addition of 2 μg/ml Iloprost and 0.02 U/ml apyrase (M0398S, NEB) prior to centrifugation at 1000 × g for 10 min. After one wash, platelets were counted and the concentration adjusted to 3 × 10⁸/ml before stimulation.
Fluorescent microscopy analysis of monocytes and platelets
Isolated monocytes containing platelets were spotted on Alcian blue-coated glass slides and fixed in cold 4% paraformaldehyde (4 °C, 30 min). After fixation, cells were washed with PBS and then blocked in PBS with 0.2% BSA (for surface staining) or PBS with 0.1% Triton and 0.2% BSA (T-PBS; for intracellular staining) for 30 min at room temperature with shaking. Following blocking, monocytes and platelets were incubated with primary specific antibodies against Annexin A1 (AnxA1; 5 μg/ml; clone 1B, generated in house) and Gelsolin (GSN;
Generation and isolation of monocyte/platelet and plasma EVs
Monocyte or platelet preparations, freshly prepared and with or without TNF-α stimulation, were centrifuged at 4400 × g at 4 °C for 15 min, followed by a second centrifugation at 13,000 × g at 4 °C for 2 min to remove remaining contaminants (e.g. apoptotic bodies). EVs were enriched by centrifuging at 20,000 × g at 4 °C for 30 min; supernatants were removed and the pellets re-suspended in filtered sterile PBS. An identical procedure was used for plasma samples. Full ethical approval was obtained from the Research Ethics Committee at The Beacon Hospital, Dublin, Ireland for the collection of blood samples from patients prior to cardiac angiogram (Study Reference Number: BEA0121). The study was carried out in accordance with the World Medical Association's Declaration of Helsinki. All patients gave informed written consent.
Scanning electron microscopy (SEM) of monocyte/platelet aggregates and EVs. The protocol was performed as previously published (Annaz et al., 2004). Briefly, monocyte/platelet aggregates (2 × 10⁵) were plated on glass coverslips, stimulated with 50 ng/ml of TNF-α for 30 min, and fixed in cold 4% paraformaldehyde (4 °C, overnight) to stop EV release. Samples then underwent secondary fixation in 1% osmium tetroxide buffered in 0.1 M sodium cacodylate for 1 h at room temperature, followed by three 5-min washes in 0.1 M sodium cacodylate. Samples were then stained in 1% tannic acid buffered in 0.05 M sodium cacodylate, followed by two 5-min washes in 0.1 M sodium cacodylate, and then dehydrated in a graded series of ethyl alcohol (20-100%). The samples were air-dried, mounted on aluminium stubs using conductive carbon tape, gold sputter coated for 30 s and imaged with an FEI Inspect F scanning electron microscope.
Proteomic analysis of EVs
EVs derived from monocytes treated with TNF-α in the presence or absence of Iloprost were pelleted at 20,000 × g for 30 min and resuspended in 20 μl ice-cold RIPA buffer containing protease inhibitor (Sigma-Aldrich). Protein content from five distinct EV preparations was measured by spectrophotometry (Nanodrop 2000, ThermoFisher Scientific, Waltham, USA) using the Protein A280 program, and 50 μg of protein was used for trypsin digestion. Mass spectrometry analysis of the proteins obtained from two technical replicates of EVs was performed on tryptic digests prepared using the Filter Aided Sample Preparation protocol as previously described (Wiśniewski et al., 2009). The EV proteome profile was determined by LC-MS/MS analysis as described above. All data and materials have been made publicly available at the PRIDE (Perez-Riverol et al., 2019) Archive (EMBL-EBI) with the dataset identifier PXD014325.
Experiments with human umbilical vein endothelial cells (HUVEC) and human aortic endothelial cells (HAoEC)
Umbilical cords were kindly donated by the midwifery staff of the Maternity Unit, Royal London Hospital (London, UK) under an approved protocol (East London & The City Local Research Ethics Committee reference 05/Q0603/34 ELCHA). HUVEC were isolated by collagenase digestion and used up to passage 4. HAoEC were donated by Dr Claudio Raimondi (William Harvey Research Institute) and used up to passage 6.
Experiments with the human atherosclerotic plaque
Isolation and ex-vivo culture. All five patients presented clinical and angiographic evidence of atherosclerosis prior to undergoing carotid or femoral endarterectomy, and gave written informed consent. The study was approved by the Ethics Committee of St. Vincent's University Hospital in Dublin, in accordance with international guidelines and the principles of the Declaration of Helsinki. Surgical atherosclerotic plaque samples were harvested in physiological saline. After dissection, they were stimulated in 24-well plates for 24 h at 37 °C, 5% CO2, in RPMI with 0.1% exosome-depleted Foetal Bovine Serum, with the different EV subsets (10 × 10⁶ per well) isolated from monocyte-platelet aggregates following stimulation with vehicle, TNF-α, or Iloprost+TNF-α as described above. After the 24 h incubation, tissue samples and supernatants were collected and snap frozen in liquid nitrogen for subsequent analysis by mass spectrometry and multiplex ELISA assay.
2.8.1. Multiplex ELISA analysis of plaque and endothelial cell supernatants

GM-CSF, IL-6 and IL-8 concentrations in HUVEC or HAoEC conditioned media, or GM-CSF, IFN-γ, IL-1β, IL-4, IL-6, IL-10, IL-13, MIP-1α and TNF-α concentrations in the centrifuged conditioned media of the human plaques, were measured by enzyme immunoassay using a commercially available human 96-well multiplex kit for tissue culture samples (MSD, Gaithersburg, USA) according to the manufacturer's guidelines. Cytokine release from plaque was normalized to the total protein content measured in the supernatant by Nanodrop at 280 nm. The same cytokines, plus MCP-1 instead of TNF-α, were quantified in the monocyte conditioned media following removal of EVs by centrifugation.
STATISTICAL ANALYSIS
All statistical analysis and graphing were performed in GraphPad Prism 6, with IDEAS 6.2 for ImageStream plots and FlowJo V6 for LSRFortessa plots. Data are expressed as mean ± standard error of the mean (SEM) unless stated otherwise. The analyses applied to the different experimental data are indicated in each figure legend. A P value of < 0.05 was considered significant to reject the null hypothesis.
Figure 1: Iloprost controls platelet, but not monocyte, activation. Whole blood aliquots or monocytes isolated using the RosetteSep purification protocol were incubated with or without 2 μM Iloprost (PGI2), 30 μM Aspirin or 3 μM P2Y12 inhibitor. CD14 and CD41 were used as markers for monocytes and platelets, respectively, for flow cytometry and ImageStream X analyses.
Monocyte-derived EVs are regulated by aggregation with platelets
In order to determine the functional relevance of monocyte-derived EVs in the context of atherosclerosis, an enriched population of monocytes was first prepared from human blood using a negative selection procedure. Flow cytometry analysis demonstrated a high degree of monocyte/platelet aggregation, whereby 56.5±5.1% of CD14+ events were also positive for the platelet marker CD41 (n = 10; Figure 1a and 1b). This phenomenon was also visualized by ISx, where representative images show single monocytes, single platelets and monocyte/platelet aggregates (Figure 1c). In order to determine whether platelet-monocyte interactions were dependent on platelet activation, we introduced Iloprost (a stable analogue of prostacyclin), Aspirin, or the P2Y12 inhibitor clopidogrel into our isolation protocol. Addition of any of the three drugs during the isolation procedure affected CD41 MFI values on the aggregates (Figure 1e), with no effect on CD14 levels (Figure 1d). Notably, aggregation was not detected when monocytes were analyzed after lysing whole blood (Figure 1d,e and Figure S1). When monocytes were isolated by density gradient isolation, following platelet rich plasma (PRP) removal, the formation of monocyte/platelet aggregates was minimal, yet aggregates could be obtained following stimulus addition, especially with platelet-activating factor (PAF; Figure S1). Furthermore, increased aggregate formation correlated with a reduction of free platelets in the samples, suggesting a degree of sequestration to form the mixed aggregates (Figure S1). Similar results were also observed when whole blood was stimulated with the same concentrations of TNF-α and PAF (Figure S1). Together, these data suggest that monocyte/platelet aggregate formation results from the RosetteSep isolation procedure.
A degree of monocyte and platelet activation consequent to the purification procedure was confirmed by cell surface expression of P-selectin, compared with cells analyzed after whole blood cell lysis (Figure 1f). When cells were incubated with the P2Y12 inhibitor, P-selectin levels remained similar to baseline, suggesting an involvement of this receptor in aggregate formation. In comparison, P-selectin levels were only marginally decreased when aggregates were treated with Iloprost or Aspirin, suggesting these drugs may have an alternate mechanism for attenuating the interaction between the two cell types. Since monocyte/platelet aggregates are typical of several cardiovascular settings, including atherosclerosis (see Discussion), we decided to exploit this enriched monocyte preparation to study the formation and properties of EVs generated in these cell-to-cell crosstalk settings.

Figure 2: Cells (1 × 10⁶/ml) were incubated with vehicle (V) or TNF-α (50 ng/ml), in the presence or absence of Iloprost (2 μM; PGI2), Aspirin (30 μM) or P2Y12 inhibitor (3 μM) for 60 min. EV generation in cell-free supernatants was quantified following Bodipy staining for total vesicles; panels show total EV concentrations and the percentage of inhibition of total EV release upon stimulation with TNF-α in the presence of PGI2, Aspirin or P2Y12 inhibitor; monocyte CD14+ EV concentrations in response to TNF-α stimulation and % inhibition of CD14+ EV release; platelet CD41+ EV concentrations and % inhibition of CD41+ EV release; and double positive CD14+/CD41+ vesicle concentrations and % inhibition of CD14+/CD41+ EV release upon stimulation with TNF-α in the presence of PGI2, Aspirin or P2Y12 inhibitor. (*P < 0.05, **P < 0.01, ***P < 0.001; one-way ANOVA with Bonferroni post test, mean ± SEM, n = 3-5 distinct preparations.)
For a more detailed analysis of EVs, we implemented a validated protocol in which fluorescence triggering of EVs (labelled with BODIPY-FITC) allows better identification by ISx (Headland et al., 2015). Using a double gating strategy for CD14 and CD41 staining, EVs from platelets, EVs from monocytes and a subset bearing both markers were monitored, in the presence and absence of TNF-α and the drugs (Figure 2). TNF-α addition to monocytes almost doubled the number of total EVs compared with unstimulated cells (n = 5, P < 0.01) (Figure 2a). Addition of Iloprost reduced TNF-α induced EV release by approximately 15%, whereas Aspirin and the P2Y12 inhibitor reduced the total number of EVs by ∼50% and 30%, respectively (Figure 2b). Similarly, Aspirin and the P2Y12 inhibitor had a greater effect on platelet CD41+/CD14− EVs (more than 80% inhibition), while Iloprost reduced numbers by around 45% (Figure 2f). Comparatively, Iloprost had a more pronounced effect on both TNF-α stimulated CD14+/CD41− and CD14+/CD41+ EV subsets (∼30% and 20%, respectively) than Aspirin and the P2Y12 inhibitor (both reducing CD14+/CD41− by ∼20% and CD14+/CD41+ by ∼10%) (Figure 2d,h). Having established its predominant modulation of these particular EV subsets, further experiments were carried out only with Iloprost.
To assess whether detection of double positive EVs (Figure 2g) was an artifact of the swarming effect, a known problem during EV analysis by flow cytometry (Libregts et al., 2018), serial dilutions of EV samples isolated from monocyte/platelet aggregates treated with TNF-α were analyzed. Whilst total numbers of Bodipy positive EVs were reduced by the dilution of the samples, the percentages of the monocyte, platelet and, more importantly, double positive EVs were not affected (Figure S2), indicating swarming did not occur in our experiments.
Prior to testing the functional activity of these EVs, their physical characteristics were studied by nanoparticle tracking analysis, Western blot analysis, and scanning and transmission electron microscopy (SEM, TEM). SEM analysis confirmed the aggregation of monocytes and platelets (Figure 3a) and the release of EVs upon stimulation with TNF-α (Figure 3b). TEM revealed a similar size distribution for the EVs isolated from monocyte and platelet aggregates (Figure 3c). No particular difference in size could be observed in EVs released by monocyte/platelet aggregates upon application of the different stimuli (Figure 3c-d and Figure S3).
Figure 4: Monocyte/platelet EVs activate HUVEC in vitro.
EVs were collected from isolated monocyte/platelet aggregates, isolated platelets or THP-1 cells incubated with vehicle (V) or TNF-α (50 ng/ml) for 60 min, in the presence or absence of Iloprost (2 μM; PGI2). HUVEC were incubated with the different EV sets (10 × 10⁶/ml) overnight; cells were stained for flow cytometry analysis and supernatants collected and analysed for cytokine release.

Experiments performed by nanoparticle tracking analysis confirmed that vesicles produced in these settings ranged between 50 and 500 nm in diameter (Figure 3e-f) with a similar mode (around 108 nm) (Figure 3g). Furthermore, nanoparticle tracking analysis performed on labelled EVs confirmed expression of CD14 on the majority of them, while a smaller proportion were found to express CD41 (Figure 3h). These data are in line with the ISx analysis (Figure 3l and 3m).
To check the quality and purity of the EV preparations, Western blot analysis of the different subsets of isolated EVs was performed side-by-side with both platelet and monocyte/platelet aggregate extracts (Figure 3i). Here, the ISEV-approved Calnexin as well as Galectin-9 (Gal-9) were used as negative controls; the latter was chosen since, as described later, it was not identified in our proteomics results. TSG101 was used as a known EV marker, and AnxA1 as a recognized marker for both monocytes and plasma membrane-derived EVs. As expected, the results showed that the EVs contained no Gal-9 or Calnexin, but were positive for CD9, AnxA1 and TSG101 (Figure 3i). On the other hand, CD9, Gal-9 and AnxA1 were detected in monocyte/platelet aggregates, while platelets appeared to express only Gal-9 and CD9 (Figure 3i). Moreover, none of the cell types were enriched in TSG101, the EV marker (Figure 3i). These data suggest no contamination of the EV preparation due to the isolation method.
EVs differentially activate HUVEC and HAoEC
Since EVs from different cellular sources can activate endothelial cells (Kuravi et al., 2019; Wang et al., 2011), a major cellular player in blood vessel angiogenesis and plaque formation (Aharon et al., 2008; Dalvi et al., 2017), we queried whether EVs derived from monocyte/platelet aggregates could impact HUVEC reactivity. An overnight protocol was applied, initially testing a concentration range of 1 to 20 EVs per endothelial cell (data not shown). These experiments, combined with published data (Tang et al., 2016; Wang et al., 2011), indicated that a ratio of 10 EVs per cell was optimal for our experimental approach. Indeed, microscopy imaging showed significant uptake of these EVs by cells after 24 h incubation with TNF-α (Figure 4a). We next measured release of IL-6, a pivotal cytokine associated with atherosclerosis (Libby & Rocha, 2018). Cell incubation with EVs released by TNF-α stimulated monocyte/platelet aggregates significantly augmented IL-6 levels, to a similar extent as the positive control, recombinant TNF-α (10 ng/ml) (Figure 4b). When EVs were produced in the presence of Iloprost, a significantly lower amount was quantified (Figure 4b). Furthermore, no significant IL-6 release was observed when HUVEC were stimulated with EV subsets isolated from either platelets or the monocytic cell line THP-1 (utilised as a surrogate for pure monocytes with no platelets present). These data suggest a distinct pro-inflammatory effect of EVs generated from enriched monocyte/platelet preparations under conditions chosen to mimic vascular inflammation.
Next, we quantified the endothelial cell activation markers ICAM-1 and VCAM-1, which were significantly upregulated in response to recombinant TNF-α (10 ng/ml) used as a positive control (Figure 4c-d). Similar increases were recorded when HUVEC were stimulated with EVs isolated from TNF-α activated monocyte/platelet preparations (Figure 4c-d). Of note, only negligible amounts of residual TNF-α were detected in any of the vesicle preparations used (110.65±10.52 fg/ml; n = 3-4; data not shown). When EVs were generated in the presence of TNF-α+Iloprost, more modest responses were observed, with minimal changes in ICAM-1 and VCAM-1 expression (Figure 4c-d). When HUVEC were treated with the same concentrations (10 × 10⁶) of washed-platelet EVs or THP-1 derived EVs, levels of ICAM-1 and VCAM-1 were modulated to a much lower extent; only EVs from TNF-α-activated THP-1 cells significantly increased ICAM-1 expression (Figure 4c-d). These data suggest a synergistic role of monocyte/platelet aggregates in releasing functional EVs upon TNF-α stimulation (Figure 4c-d).
Comparable results were obtained when monocyte/platelet derived EVs were added to HAoEC. In this set of experiments, a significant increase in both ICAM-1 and VCAM-1 cell-surface expression was quantified only when cells were treated either with 10 ng/ml of TNF-α or with EVs isolated from monocyte and platelet aggregates stimulated with TNF-α in the absence of Iloprost (Figure S4). The same held true for the release of IL-6 (Figure S4).
EVs trigger differential activation of human atherosclerotic plaque
Having confirmed that EVs derived from monocyte/platelet aggregates can activate endothelial cells, we next tested whether they might be a functional determinant in atherosclerosis. Thus, we assessed their function on atherosclerotic plaque using an ex-vivo organ culture protocol (Figure 5a and b; Table S1). Here we compared overnight incubations with EVs generated by the different cellular activation protocols, using the same concentration of EVs as described in the previous section, to mimic settings of vascular inflammation. We then quantified cytokines and proteins released into the supernatants from the plaque fragments. Cytokine multiplex analyses revealed that treatment of the plaques with EVs released by monocyte/platelet aggregates stimulated with TNF-α augmented concentrations of TNF-α, IL-6, IL-13, IFN-γ and GM-CSF in the culture media (Figure 5c and Supplementary Table S2). As stated above, negligible amounts of residual TNF-α were detected in any of the vesicle preparations used. When EVs were generated in the presence of Iloprost, a much milder regulation of the general cytokine response was noted (Figure 5c). These findings confirm that EVs acquire a pro-inflammatory phenotype not only in vitro but also ex vivo when monocyte/platelet preparations are stimulated with TNF-α. This effect was markedly attenuated when EVs were generated by Iloprost+TNF-α treatment, a finding corroborated by further quantification of IL-6 and IL-13 in the supernatants (Figure 5d and e). Of importance, the use of 0.1% FBS in the culture media to maintain the viability of plaque fragments did not affect the experimental outcome, as shown by the cytokine release data, where basal levels were significantly lower than when plaques were treated with the EV subsets.
Characterization of monocyte EV subsets reveals differential protein expression associated with regulation of vascular inflammation and plaque formation
The experimental data presented so far are indicative of different pharmacodynamic properties of EVs obtained from TNF-α-treated monocyte/platelet preparations as compared to vesicles generated following treatment with Iloprost+TNF-α. To verify whether these effects were mediated by differences in EV composition, we performed a proteomic characterization of TNF-α and Iloprost+TNF-α EVs. This set of experiments identified 681 proteins in EVs by LC-MS/MS (Table S3), of which 32 were significantly altered (P < 0.05) when comparing TNF-α EVs to Iloprost+TNF-α EVs (Figure 6a). Of these, 19 proteins were upregulated and 13 downregulated following cell incubation with Iloprost (Figure 6b). Moreover, uniquely expressed proteins were also identified: 10 proteins for TNF-α EVs and only two for Iloprost+TNF-α EVs (Figure 6a). Of interest, we detected AnxA1, a faithful marker for membrane-derived vesicles (Jeppesen et al., 2019). Gelsolin (GSN), which was augmented in Iloprost+TNF-α EVs, was an interesting hit as it is involved in actin filament assembly and organization (Sun et al., 1999), and has been described to maintain the cytoskeleton structure in arteries (see Discussion).
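As a rough illustration of the statistical comparison described above, the sketch below applies a per-protein two-sample t-test at P < 0.05 to two hypothetical intensity matrices; the data, group sizes and preprocessing are placeholders, not the study's actual pipeline.

```python
import numpy as np
from scipy import stats

# Hypothetical intensity matrices: rows = proteins, columns = replicates.
rng = np.random.default_rng(0)
tnf = rng.lognormal(mean=10, sigma=1, size=(681, 4))      # TNF-a EVs
ilo_tnf = rng.lognormal(mean=10, sigma=1, size=(681, 4))  # Iloprost+TNF-a EVs

# Per-protein two-sample t-test on log2-transformed intensities.
t_stat, p_val = stats.ttest_ind(np.log2(tnf), np.log2(ilo_tnf), axis=1)

# log2 fold change of Iloprost+TNF-a relative to TNF-a.
log2_fc = np.log2(ilo_tnf.mean(axis=1) / tnf.mean(axis=1))

# Split significantly altered proteins by direction, mirroring the
# up-/down-regulated counts reported in the text.
sig = p_val < 0.05
n_up = int((sig & (log2_fc > 0)).sum())
n_down = int((sig & (log2_fc < 0)).sum())
print(f"{sig.sum()} altered: {n_up} up, {n_down} down with Iloprost")
```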
Next, to further validate these data, we confirmed the relative abundance of a selected group of proteins by Western blotting and ISx (ImageStream). To this end, equal numbers of monocyte/platelet EVs of each subset were loaded and immunostained for GSN, AnxA1 and HSPB1, employing ACTB as a loading control (Figure 6c). The blots confirmed that GSN was enriched in Iloprost+TNF-α EVs (Figure 6c-d), whereas HSPB1 and AnxA1 were mildly regulated across the two EV subsets (Figure 6c-d). ISx analyses revealed that GSN and AnxA1 were also detected on the surface of the EVs (Figure 6e), visualizing a selective enrichment of GSN in EVs isolated from monocytes stimulated with Iloprost and TNF-α (Figure 6f), with no major changes in AnxA1 surface detection. When immunogold labelling experiments with anti-AnxA1, GSN or CD41 antibodies were analyzed by TEM, EVs expressing both markers could be identified (Figure 6g-r). These EVs were heterogeneous in size, with both larger and smaller particles positive for the markers.
FIGURE: Monocyte/platelet EVs activate human atherosclerotic plaque ex-vivo. [Monocytes were obtained as in Figure 2 and incubated with vehicle (V) or TNF-α (50 ng/ml), in the presence or absence of Iloprost (2 μM; PGI2) for 60 min. Human atherosclerotic plaque fragments were incubated with the reported EVs (10 × 10^6/ml) overnight. Supernatants were collected and used for ELISA analysis.]
To determine the cellular source of these exemplar proteins, surface and intracellular staining of human monocyte/platelet aggregates was performed and analyzed by microscopy. While AnxA1 was selectively and abundantly expressed by monocytes (Figure 7a and b), the majority of GSN appeared to be expressed by platelets, both intracellularly and on their surface (Figure 7b); only a small amount was associated with monocytes, likely because of the adherent platelets. Similar results were obtained by Western blotting when the same proteins were investigated in platelet or monocyte (the latter containing residual platelets) lysates. Loading decreasing concentrations of monocyte and platelet whole lysates revealed GSN to be highly expressed in platelets; in line with the immunofluorescence results, only a small amount was detected in monocyte lysates, possibly because of platelet contamination (Figure 7c). Conversely, monocyte lysates contained a consistently higher amount of AnxA1, with the characteristic 38 kDa and 34 kDa bands (Figure 7c). Platelet extracts displayed only minimal amounts of the cleaved form of AnxA1 (Figure 7d). These data, together with the immunogold labelling TEM results, confirmed the ability of monocyte/platelet aggregates to release EVs bearing markers of both monocyte and platelet origin.
Having established that EVs from monocyte/platelet aggregates are heterogeneous and could promote inflammation in the context of atherosclerosis, we next assessed whether similar EV subsets could be detected in patients with atherosclerotic plaques. We analyzed a cohort of 24 patients with coronary artery disease (CAD), 12 of whom required percutaneous coronary intervention (PCI). Baseline characteristics of the patients are reported in Table S5. A significant increase in overall concentrations of EVs (Bodipy+; Figure 8a) as well as of EVs from monocytes (CD14+/CD41−; Figure 8b), platelets (CD14−/CD41+; Figure 8c) and double-positive EVs (CD14+/CD41+; Figure 8d) was quantified in the plasma of CAD patients who needed stenting. No difference in size was observed between the two groups of patients when EVs were analyzed by TEM (Figure 8e-f) or nanoparticle tracking analysis (Figure 8g). The nature of the EVs was confirmed by Western blot, where patient EVs expressed EV markers such as CD9 and AnxA1 and showed no Calnexin contamination (Figure 8h). In the same blot, EVs from monocyte/platelet aggregates were loaded as a positive control (Figure 8h).
DISCUSSION
In this study we provide evidence that monocyte/platelet-derived EVs are pro-inflammatory and activate endothelial cells and the human atherosclerotic plaque. We identify some subtlety in relation to the mode of activation of the monocyte, with particular attention to the presence of an aggregated and/or adherent platelet. Using a pharmacological approach to attenuate platelet reactivity, we could produce EVs with a lower impact on atherosclerotic plaque activation. These different outcomes were not related to the physicochemical features of the EVs but rather to their composition, as indicated by the proteomic analysis. EVs similar to those investigated here, bearing markers of both monocytes and platelets, were increased in patients suffering from coronary artery disease who needed percutaneous coronary intervention. Since transient aggregates between monocytes and platelets can form in settings of vascular inflammation, these data lead us to propose that this inter-cellular cross-talk can generate EVs which may extend the patho-physiological relevance of this event. Clinical management with anti-platelet therapies may have additional beneficial effects through modulation of the quality of EVs released from monocytes. Furthermore, these EV subsets could be exploited as potential biomarkers of worsening atherosclerosis.
Monocyte/platelet aggregates are a feature of vascular inflammation, being identified in several settings, both in man and in experimental animals. For instance, following kidney transplantation, addition of a 4-week anti-platelet therapy to immunosuppressive drugs reduced monocyte/platelet aggregates as well as other markers of vascular inflammation (Graff et al., 2005). These aggregates have also been reported in stroke (Franks et al., 2010) and in heart failure (Wrigley et al., 2013). In heart failure, the presence of monocyte/platelet aggregates negatively correlated with better prognosis, indicating a role for the aggregates in sustaining damage of the cardiac tissue. Finally, circulating monocyte/platelet aggregates have been detected in hypertension, where systemic blood pressure was an independent predictor of their formation (Gkaliagkousi et al., 2009), and in coronary artery disease. In the latter condition, monocyte/platelet aggregates are increased in patients compared to healthy controls (Czepluch et al., 2014; Sarma et al., 2002), an increase quantified to be more than two-fold (Furman et al., 1998). In all these studies, pro-atherogenic properties of the aggregates have been suggested.
FIGURE: Proteomic analysis of monocyte/platelet EVs and validation. [Monocytes were obtained as in Figure 2 and incubated with TNF-α (50 ng/ml), in the presence or absence of Iloprost (2 μM; PGI2) for 60 min, prior to EV purification. Targeted analysis highlighting differences between TNF-α and TNF-α+PGI2 EVs identified 32 proteins that were significantly altered (Table S3). (a) Venn diagram showing the proteins that are differentially expressed between TNF-α versus PGI2+TNF-α EVs. The intersection of the diagram reports the proteins that are significantly altered between the two EV populations (P < 0.05; red represents up-regulated proteins while blue depicts down-regulated proteins in TNF-α+PGI2 EVs). In black, proteins uniquely expressed in either EV population.]
FIGURE: Elevated pro-inflammatory monocyte/platelet EVs in patients with coronary artery disease. [EVs were isolated by differential centrifugation from plasma samples, re-suspended in PBS and analysed by ImageStream, Nanosight, TEM and Western blot. (a) Concentrations of total (Bodipy+) EVs, (b) CD14+ EVs, (c) CD41+ EVs and (d) CD14+/CD41+ EVs, analysed by case status. Representative TEM images of EVs from a patient not in need of percutaneous coronary intervention (e) and from a patient who needed the intervention (f). (g) Mode of EV size. (h) Western blot analyses of monocyte/platelet aggregates* and of distinct EV preparations from monocyte/platelet EVs (used as internal control) and from patients, used to detect immunoreactivity for CD9, Annexin A1 (AnxA1), β-actin (ACTB) and Calnexin. Three distinct EV preparations were tested. Data are from N = 12 patient samples per group and are reported as median ± SEM (*P < 0.05, **P < 0.01, ***P < 0.001; unpaired Student's t-test). *Monocyte refers to samples containing monocyte/platelet aggregates.]
It is important to note that monocyte/platelet aggregates are transient in their association and dissociation. As a relevant example, Furman et al. demonstrated that the number of circulating monocyte/platelet aggregates in patients with acute myocardial infarction was highest within the first 4 h of acute coronary symptoms and gradually returned to basal values in the 4-8 h post-infarct period (Furman et al., 2001). In view of the transient nature of the cellular aggregates, we reasoned that EVs could represent a viable way to monitor longer-term effects of aggregate formation, either as a biomarker or as bona fide effectors of pathogenesis. To address this hypothesis, we took advantage of the presence of platelets in the preparations of monocytes purified from human whole blood.
A subtle and sophisticated role for the platelet emerged in these experimental conditions, whereby attenuation of platelet activation (with virtually all three therapies tested) only partially affected platelet adhesion to the monocyte, while reducing the generation of EVs, including the CD14+/CD41+ double-positive subset. Here, TNF-α stimulated predominantly the monocyte, but with a 'co-stimulatory' action attained by the adherent platelets. These data are in agreement with a proposed cross-talk in plaque formation and progression, whereby the platelet adherent to the monocyte favours migration of the leukocyte into the plaque, where it would then develop into a macrophage (Da Costa Martins et al., 2004; Huo et al., 2003; Mayr et al., 2009). Platelet delivery of cholesterol could feed forward the process of macrophage differentiation into foam cells (Badrnya et al., 2014). We reasoned that one downstream result of platelet/monocyte aggregate formation would be the production of pro-inflammatory EVs.
The genuine nature of the EVs was confirmed through multiple experimental approaches, including Nanosight and electron microscopy analyses, which revealed a similar size of these structures, unaltered by cell exposure to TNF-α or the prostacyclin analogue. In all cases, an average diameter of > 100 nm was quantified, suggesting that the EVs produced are predominantly membrane-derived vesicles and not exosomes (Jeppesen et al., 2019). This evidence was corroborated by the proteomic analysis and further validated by Western blot, which identified AnxA1 in all subsets of EVs: this protein is a genuine marker for membrane-derived vesicles (Jeppesen et al., 2019). More interesting to us is the emerging evidence that the same cell can generate EVs which differ, at least in part, in relation to the stimulus applied or the microenvironment, with previous work focusing on neutrophil-derived EVs (Dalli et al., 2013, 2014). Here, we observed that monocyte-derived EVs isolated from mixed platelet/monocyte aggregates in inflammatory conditions (TNF-α) bind to and are internalized by HUVECs. Before exploring the differences in composition, we tested the potential differential effector functions of TNF-α and Iloprost+TNF-α EVs in more complex settings, using an organ culture protocol of the atherosclerotic plaque.
There has been considerable interest in EVs and atherosclerosis, mainly with a focus on vesicles released from the plaque, possibly as a downstream determinant of pathogenic processes operative within the diseased vessel. Several studies showed that EVs are mainly derived from leukocytes and i) are endowed with thrombogenic activities (Leroyer et al., 2007), ii) can increase intra-plaque neovascularization and plaque vulnerability, and iii) enhance proliferation of endothelial cells and angiogenesis (Leroyer et al., 2008) through the presence of tissue factor activity (Morel et al., 2006). The leukocyte origin was further confirmed by Mayr et al., who identified the myeloid EV fraction, as well as EVs from smooth muscle cells and erythrocytes. Metabolomic analyses showed an increase in taurine, an expression of a monocyte-produced oxidative microenvironment within the atherosclerotic plaque (Mayr et al., 2009). Here we revealed marked modulatory functions of monocyte EVs applied to the plaque, with selective changes in specific cytokines. Indeed, the increase in IL-6 is remarkable and agrees with the importance of this cytokine in atherosclerosis (Libby & Rocha, 2018). Exogenously administered IL-6 enhances the development of fatty lesions in mice (Huber et al., 1999), while in man this cytokine enhances endothelial dysfunction and aortic stiffness: rheumatoid arthritis patients treated with anti-IL-6 therapy displayed reduced articular inflammation and decreased endothelial dysfunction (Protogerou et al., 2011). A recent Mendelian randomization study, focusing on single nucleotide polymorphisms in the IL-6 receptor gene, highlighted loss of function as a viable approach for the prevention of coronary heart disease (Swerdlow et al., 2012). It was of great interest to us that the vesicles generated by the monocyte preparation stimulated with Iloprost+TNF-α displayed a different impact on the plaque, with a blunted cytokine response. We also obtained evidence for a broader alteration of the proteome released from the plaque (list of hits deposited in PRIDE under accession PXD014325; http://www.ebi.ac.uk/pride/archive/projects/PXD014325).
Collectively, these experiments justified an in-depth analysis of the potential differences between TNF-α EVs and Iloprost+TNF-α EVs. While we recognize that structural lipids, lipid mediator precursors (Norling et al., 2011), microRNA and other nucleic acids (Guduric-Fuchs et al., 2012) could vary between the two vesicle types, we analyzed their protein contents. In general, a lower number of significantly enriched proteins was detected in Iloprost+TNF-α EVs compared to TNF-α EVs, suggesting that attenuation of platelet activation not only reduced the number of CD14+/CD41− and CD14+/CD41+ EVs, but also modified their actual composition. Addition of Iloprost reduced the number of proteins exclusively identified in monocyte EVs from 10 to 2. Comparison with published proteomic lists revealed interesting overlaps (Table S4). As an example, THP-1 monocytic cells stimulated with lipopolysaccharide yield EVs that contain EEF1B2 (an elongation factor) and PSMC2 (a proteasome subunit) (Bernimoulin et al., 2009), two of the proteins uniquely identified here in TNF-α EVs. Out of six published studies on platelet EVs, we focused on the two where similar preparation protocols were used (Aatonen et al., 2014; Pienimaeki-Roemer et al., 2015): INA (a cytoskeleton component), uniquely identified here in Iloprost+TNF-α EVs, was identified by Pienimaeki-Roemer et al. in senescent platelet EVs (Pienimaeki-Roemer et al., 2015). In our hands, Iloprost+TNF-α EVs also express PSMC2 as well as AP2M1 (a vesicle transporter) and PSMB6 (another proteasome subunit) (Pienimaeki-Roemer et al., 2015). An interesting hit was gelsolin (GSN), more abundant in Iloprost+TNF-α EVs: this protein is anti-inflammatory and detected in resolving inflammatory exudates (Kaneva et al., 2017). Published proteomic analyses have reported gelsolin downregulation in atherosclerotic compared to pre-atherosclerotic coronary arteries (De La Cuesta et al., 2012). A reduction in gelsolin levels can deregulate the cytoskeleton within the human atherosclerotic coronary media layer and switch medial vascular smooth muscle cells from a contractile to a pro-inflammatory synthetic phenotype (De La Cuesta et al., 2013). Furthermore, circulating levels of gelsolin are reduced in patients with a diagnosis of asymptomatic carotid artery plaque (Bhosale et al., 2018). The translational relevance of these new findings derives from the detection of monocyte and platelet EVs, as well as EVs bearing both markers, in CAD patients. Importantly, their numbers were elevated when patients had confirmed atherosclerotic plaque and required surgical intervention.
The role of EVs in atherosclerosis, especially platelet EVs, has already been investigated and described in several reviews (Charla et al., 2020; Oggero et al., 2019; Van Der Vorst et al., 2018). As a relevant example, a recent publication by Zaldivia et al. reports changes in platelet-derived EVs in patients with hypertension undergoing renal denervation (Zaldivia et al., 2020). As such, it is relevant to mention that cardiovascular diseases other than atherosclerosis may be characterized by altered EV generation, including the vesicles that derive from monocyte/platelet aggregates (Zaldivia et al., 2017).
In conclusion, the activating effect of monocyte-derived vesicles on the reactivity of the atherosclerotic plaque reflects the contribution of platelet adhesion. Monocyte/platelet aggregates, accepted as a predictive marker of several cardiovascular pathologies including coronary artery disease, may have longer-lasting pathogenic effects through the generation of vesicles which may propagate pro-inflammatory actions. Modulation of platelet reactivity could help attenuate the detrimental properties of these vesicles.
ACKNOWLEDGEMENTS
We thank Mr Joseph Dowdall and Mr Stephen Sheehan, Department of Vascular Surgery, St Vincent's University Hospital for the provision of material. The authors acknowledge the support of the UCD Conway Institute Core Technology mass spectrometry facilities. We would like to thank all of the study participants and all of the Beacon Hospital Staff. We would like to thank Dr. Victoria McEneaney, Research Lead at the Beacon Hospital Research Institute and Tina Coleman, Beacon Hospital Lab Manager. We thank Dr Claudio Raimondi for providing the HAoEC.
DECLARATION OF INTERESTS
None.
AUTHOR CONTRIBUTIONS
Mauro Perretti devised the project, the main conceptual idea, planned and analyzed data and proof outline. Silvia
Development and Validation of an Instrument for Assessing Patient Experience of Chronic Illness Care
Introduction: The experience of chronic patients with the care they receive, fuelled by the focus on patient-centeredness and the increasing evidence of its positive relation with other dimensions of quality, is being acknowledged as a key element in improving the quality of care. There is a dearth of accepted tools and metrics to assess patient experience from the patient's perspective that have been adapted to the new chronic care context: continued, systemic, with multidisciplinary teams and new technologies. Methods: Development and validation of a scale through a literature review, an expert panel, and pilot and field studies with 356 chronic primary care patients, to assess content and face validity and reliability. Results: IEXPAC is an 11+1 item scale with adequate metric properties measured by Cronbach's Alpha and goodness-of-fit indices, and satisfactory convergent validity around three factors named: productive interactions, new relational model and person's self-management. Conclusions: IEXPAC allows measurement of the patient experience of chronic illness care. Together with other indicators, IEXPAC can determine the quality of care provided according to the Triple Aim framework, facilitating health systems' reorientation towards integrated patient-centred care.
Introduction
The prevalence of chronic conditions and multimorbidity is rising worldwide [1,2]. This growth has placed increasing demands on existing acute-oriented healthcare systems and has resulted in poor quality of care and deficient patient experiences as a consequence of the fragmentation and lack of coordination in the organisation and delivery of care for people living with chronic diseases [3].
Although many advances have been made in the treatment available to chronically ill patients, most patients are multimorbid with diverse clusters of chronic diseases, and current or potential complex health and social care needs. Management of these needs requires integral and proactive action and, in many cases, these complex patients do not receive the care recommended by evidence-based clinical practice guidelines [4].
Furthermore, there is also a lack of patients' involvement and collaboration in the design and co-creation of health services, especially for those with chronic illnesses. As a consequence, the delivery of effective, high-quality chronic care requires a systemic transformation [5] that goes beyond merely adding new isolated interventions to the existing acute-focused healthcare system [6]. It requires patient engagement, widespread use of quality improvement methods and innovations in chronic care.
Two decades ago, Wagner et al. developed the Chronic Care Model [7], a framework for delivering care to patients with chronic conditions and for guiding quality improvement in chronic care. This model is focused on providing proactive, planned, integrated and patient-centred care. There is evidence that the Chronic Care Model improves clinical outcomes and experiences of chronically ill patients receiving care [8,9,10].
Cramm and collaborators have demonstrated that, over time, quality of care and changes therein translate into more positive experiences for patients with chronic conditions [11]. Therefore, patient experience can, if appropriately measured, indicate the quality of chronic illness care and can provide important information to improve quality of care, patient safety and clinical effectiveness [12].
Consistent with the Chronic Care Model, Glasgow et al. [13] developed the Patient Assessment of Chronic Illness Care scale (PACIC) to assess patient experience with chronic care delivery. This scale has been used internationally among patients with a variety of chronic health conditions, and has been adapted and validated in many countries. In a systematic review, Vrijhoef et al. [14] identified it as the most applicable and relevant questionnaire for measuring the quality of integrated chronic care from the patient's perspective. Recently, Singer et al. [15,16] developed the Patient Perceptions of Integrated Care survey (PPIC), assessing a six-dimension model of integrated care. PACIC and PPIC focus on the experience between patients and the doctors and nurses who regularly provide their care. These instruments do not incorporate elements related to ICT developments in chronic care and do not directly assess the coordination between health care and social care providers.
Many new integrated care models are built to incorporate a patient's narrative of needs, preferences and expectations [17,18], acknowledging the essential role they have in their own care and the need for truly patientcentred care [19]. However, there is a dearth of accepted metrics [20].
Our group previously developed IEMAC-ARCHO, a self-assessment tool of readiness for chronicity in healthcare organisations [21,22]. Subsequently, the need to develop an instrument to assess patient experience of chronic integrated care was identified, for the following reasons:
- To incorporate new theories, frameworks and trends in health care, such as the Triple Aim [23], the narratives of 'person-centred coordinated care' [24] or coproduction approaches [25], which are emphasising the importance of patient experience;
- To incorporate a broad notion of integrated care, including social care and patient self-management;
- To include increasingly popular technological innovations that are transforming the interaction between patients and the system of care;
- To consider the epidemiological situation, characterised by a high prevalence of chronic conditions and multimorbidity [26];
- To take into account the interaction with a team (or network) of providers instead of focusing on separate professionals (interactions with doctors, nurses, etc.) [27];
- To specifically address the concept of "patient experience", separating it from that of patient satisfaction, considering the approaches and outcomes of Michelle Beattie [28], Cramm [29,30] and Wensing [31];
- To complement the aforementioned tool with another that incorporates the patient perspective.
Therefore, the purpose of this paper is to describe the process of development and validation of a new tool to measure self-reported patient experience of integrated chronic care.
Theory and methods
This is a design and validation study of a new instrument to assess the experience of patients with chronic conditions who, because of their health status, have continuous interactions with social and health care professionals and services. The new tool is theoretically based on the Chronic Care Model and is inspired by patient-centred integrated care approaches. The tool is intended to be used routinely to assess the patient experience of chronic illness care. For this reason, the following characteristics were prioritised in its design [28]: small size (affordable), focused on the areas which patients consider important (appropriate), elements that support the processes of transformation and attention to chronicity (sensitive), orientation to what happens during the interaction with professionals (relevant), easy to understand (simple), a limited selection of elements (feasible), suitable in any context (adaptable), and well-founded to ensure its psychometric properties (valid and reliable).
In this study, patient experience is defined as the information that the person provides on what has happened to her in her continued interaction with health and social care professionals and services, and on how she has experienced that interaction and its outcomes. Meanwhile, integrated care was conceptualised according to the Chronic Care Model and its subsequent adaptation by the WHO [3,7].
The steps followed in this study are shown in Figure 1.
Literature search: Characteristics that should be analysed
The literature was reviewed to identify existing tools, and their characteristics, for evaluating patient experience with integrated chronic care. A scoping review was carried out using MEDLINE and Web of Knowledge. Only studies in English or Spanish published in the last ten years (until January 2015) were included, considering the availability of a previous high-quality systematic review conducted by Vrijhoef et al. [14]. "Patient Experience" and "Patient Perceptions" with "Integrated Care" or "Chronic Care" were used as search terms. References from retrieved articles were examined to locate further studies. A total of 58 articles were found, their abstracts were revised and 18 of them were fully reviewed and incorporated as relevant sources of information. The results were initially analysed, structured and made available to all members of the research team.
Selection and formulation of reactive items
Based on the previous results, the research team developed, by consensus, a pool of 28 reactive items. This set included a minimum of two items for each characteristic of care identified as relevant. Items were elaborated by the research team in successive work-team sessions considering: patient experience dimensions identified in the literature, as well as IEMAC/ARCHO [21,22] dimensions and interventions.
Content validity: Expert panel
An expert panel was carried out from September to November 2014 using an online survey involving 15 professionals from primary care, public health, social services, management, quality and safety boards and research institutions. The selection of the participants was based on their knowledge and expertise, each having at least 15 years of experience in clinical or managerial positions. They were recruited by personal contact. Experts evaluated: content validity (redundancies, absences, misleading questions), face validity (understanding, friendliness, adequacy, ordinal structure), relevance to justify items' inclusion, adequacy of the type of response scale and the instrument as a whole. The results of their answers prompted some changes in the reactive items to be explored in this new instrument.
Pre-test
Two pilot groups, each of 18 patients with chronic conditions, were conducted in December 2014. Patients were recruited through patient organisations for a variety of health conditions and had different characteristics (age, gender, socioeconomic). These patients evaluated the 28 reactive items of the questionnaire, regarding appropriateness, readability, acceptability and necessary time of response for each item, as well as two possible types of response scales. These patients considered the comprehension of this 28-item questionnaire as very good (4.8/5) as well as appropriate (4.6/5). The average time of response was 15 minutes. As a result, the wording of seven items was modified to improve understanding.
Reliability and validity: Field study
To establish the psychometric properties of the instrument and select those items with the best behaviour, a field study was performed in April-May 2015 with the participation of 350 patients (sample size calculated for p = q = 0.50, a 5% margin of error and a two-tailed significance level of 0.05). These were patients older than 16 years of age, with at least one chronic disease, who attended general medicine or nursing consultations at 11 primary care centres of four regional health services in Spain (Catalonia, Madrid, Basque Country and Valencia). Among those who met the inclusion criteria, patients were recruited by the interviewer by random systematic cluster sampling with proportional allocation (K=3). The questionnaire was self-administered and, only at the request of the patient, applied by means of a personal interview. Patients interviewed were informed of the purpose of the study and informed consent was obtained. The demographic and clinical variables of the study were collected in a booklet of data collection (BDC) designed for this study. The interviewers received a briefing on the selection procedure of the patients to be interviewed and on the correct application of the questionnaire. Twelve patients declined to answer.
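For orientation, the quoted design parameters correspond to the standard sample-size formula for estimating a proportion. The sketch below computes it; the optional finite-population correction and the illustrative population size are our assumptions, since the paper does not state which, if any, adjustment produced the target of 350.

```python
from math import ceil

def sample_size(p=0.5, e=0.05, z=1.96, N=None):
    """Sample size for estimating a proportion with margin of error e.

    z = 1.96 corresponds to a two-tailed significance level of 0.05;
    p = q = 0.5 is the worst-case variance assumption. If a finite
    source-population size N is supplied, apply the standard
    finite-population correction.
    """
    n0 = z ** 2 * p * (1 - p) / e ** 2   # ~384.2 without correction
    if N is not None:
        n0 = n0 / (1 + (n0 - 1) / N)
    return ceil(n0)

print(sample_size())         # 385 for an effectively infinite population
print(sample_size(N=4000))   # ~351 once a finite population is assumed (hypothetical N)
```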
In the analysis, ceiling-floor effects and the correlation of each item with the scale total score were considered, with values above 0.30 deemed acceptable [32]. To establish construct validity, a preliminary study to determine the unidimensionality of the factors was conducted through an exploratory factor analysis (EFA) using principal components, with Varimax rotation of the resulting matrix. Factors were retained using the criterion of eigenvalues greater than or equal to 1 (after calculating the Kaiser-Meyer-Olkin statistic and Bartlett's test to determine the appropriateness of performing EFA). Factor loadings higher than 0.50 were considered acceptable [33]. The internal consistency reliability of a first version of the instrument was calculated using Cronbach's Alpha, assuming acceptable values equal to or greater than 0.70 [32,33]. Subsequently, a confirmatory factor analysis (CFA) was carried out, using all data and a random selection (N=115) of data from patients receiving health care in different health services, to confirm the hypotheses concerning the underlying structure generated by the exploratory factor analysis and to rate the goodness of fit. This analysis was performed using the three-factor model derived from the EFA to verify that the isolated factors had not changed their structure and that the statistics employed in the exploratory analysis remained satisfactory. To check the measurement model validity, the standardized root mean square residual (SRMR), the Jöreskog-Sörbom goodness of fit index (GFI), the normed fit index (NFI), and the comparative fit index (CFI) were used. Pearson correlations between factors were also calculated to check the factors' independence.
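To make the pipeline concrete, here is a minimal sketch of the EFA stage in Python. Bartlett's test is computed from its textbook formula; the factor extraction uses scikit-learn's maximum-likelihood FactorAnalysis with Varimax rotation, which only approximates the principal-components extraction described above, and the item-response matrix is simulated rather than the study's data.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.integers(1, 6, size=(338, 14)).astype(float)  # hypothetical 5-point item responses

# Bartlett's test of sphericity on the correlation matrix R:
# chi^2 = -(n - 1 - (2p + 5)/6) * ln|R|, with p(p - 1)/2 degrees of freedom.
n, p = X.shape
R = np.corrcoef(X, rowvar=False)
stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
p_value = chi2.sf(stat, p * (p - 1) / 2)
print(f"Bartlett chi2 = {stat:.1f}, p = {p_value:.3g}")

# Three-factor solution with Varimax rotation; retain items whose
# absolute loading on some factor exceeds 0.50.
fa = FactorAnalysis(n_components=3, rotation="varimax", random_state=0)
fa.fit(X)
loadings = fa.components_.T          # shape: (items, factors)
keep = np.abs(loadings).max(axis=1) > 0.50
```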
The quality criteria for measurement properties of health status instruments proposed by Terwee et al. [34] were considered to assess the acceptability of the questionnaire's elements.
The ability of the questionnaire to discriminate between isolated dimensions was tested by performing t-test, ANOVA or Chi-Square, because differences in scores were expected based on differences in care delivery.
Ethical approval
The protocol of the study was approved by the Ethics Committee of the University Miguel Hernández, the institution that coordinated the study, and by the Madrid Health Service Research Central Commission.
Participants
Three hundred thirty-eight patients responded to the questionnaire (response rate 96.6%). Table 1 depicts their characteristics.
Items analysis
Seven items were ruled out for their ceiling-floor effects. The remaining 21 were included in the subsequent analysis, after verifying that there was acceptable variability in the answers of the patients.
Cronbach's Alpha values were calculated by eliminating each item individually; no items were ruled out on the basis of these data. The item-total correlations ranged between 0.18 and 0.61. Three items with correlations below 0.30 were ruled out before applying the exploratory factor analysis.
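The two item statistics used here are straightforward to compute. The sketch below gives plain-NumPy implementations of Cronbach's Alpha and the corrected item-total correlation; we assume the corrected variant (each item against the total of the remaining items), as the paper does not specify which was used.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's Alpha for a respondents-by-items matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

def item_total_correlations(items):
    """Correlation of each item with the total of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

# Alpha-if-item-deleted: recompute cronbach_alpha on the matrix with column j
# removed, and rule out items whose item-total correlation falls below 0.30.
```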
Explorative analysis of dimensionality and reliability: Exploratory factor analysis (EFA)
A first factorial solution, with 15 items, converged in five first-order factors, with a principal factor explaining 51.6% of the variance and four items with significant loadings on more than one factor. In the following exploratory factor analysis, 11 items were included, each with factor loadings higher than 0.5 and communalities between 0.43 and 0.70. This factorial solution converged in three factors, explaining 57.5% of the common variance (Table 2). Based on our observation, Factor 1, named "Productive Interactions", refers to the characteristics and content of interactions between patients and professionals oriented to improving outcomes, for example 'the professionals who care for me listen to me and ask me about my needs/habits and preferences' and 'they are concerned with my quality of life'. Factor 2, named "New Relational Model", refers to new forms of patient interaction with the health care system, through the internet or with peers. Factor 3, named "Patient Self-Management", refers to the ability of individuals to manage their own care and improve their wellbeing based on professional-mediated interventions.
Confirmatory analysis of dimensionality and reliability: Confirmatory factor analysis (CFA)
The confirmatory factor analysis (CFA), in the second stage of the study, indicated an acceptable fit to the data. The estimates of the parameters and factor loadings of the model are shown in Table 3 and Figure 2. This figure also shows the optimised model of the questionnaire factorial structure based on confirmatory factor analysis.
Composite reliability and convergent validity
The analysis of convergent validity was satisfactory. All standardised loadings were significant for their respective factor and greater than 0.6. The average variance extracted was greater than 0.5 [35]. The composite reliability indexes were greater than 0.7 [36], indicating acceptable reliability for all factors (Table 4).
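These two indices are not defined in the text; assuming the conventional Fornell and Larcker formulas, for a factor measured by n items with standardized loadings λ_i and error variances θ_i = 1 − λ_i²:

\[
\mathrm{CR} = \frac{\left(\sum_{i=1}^{n}\lambda_i\right)^{2}}{\left(\sum_{i=1}^{n}\lambda_i\right)^{2} + \sum_{i=1}^{n}\theta_i},
\qquad
\mathrm{AVE} = \frac{1}{n}\sum_{i=1}^{n}\lambda_i^{2},
\]

so the thresholds quoted above (CR > 0.7, AVE > 0.5) are the usual acceptability cut-offs for these quantities.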
Discriminant analysis
In the discriminant analysis, Table 5 shows the inter-correlations between the three factors identified in the analyses. The factors showed acceptable independence from each other, with Productive Interactions (PI) and Patient Self-Management (PSM) showing the highest correlation.
Internal consistency
Cronbach's Alpha was 0.76 for the whole scale (0.79 for factor 1, 0.56 for factor 2 and 0.63 for factor 3).
Scale scores
The average score on the 11-item IEXPAC was 3.1 points (SD 0.7, 95% CI 3.0-3.2). The average score for each factor was 4.0 points (SD 0.9) for factor one, 1.7 (SD 0.9) for factor two and 3.7 (SD 0.9) for factor three.
Table 4: Reliability, dimensionality, and convergent validity of each factor of the questionnaire. (a) Data represent Student t-test values; differences were significant at P = 0.05. (b) CR: composite reliability. (c) AVE: average variance extracted.
The new tool was named IEXPAC, Instrument for Evaluation of the Experience of Chronic Patients (available online at http://www.iemac.es/iexpac/). Additionally, an item was included for the specific case in which a patient is hospitalised, to check the continuity of care at discharge and once the patient returns home. This item (number 12) is not included in the scale aggregate rating. The average score for this item was 2.5 (SD 1.7).
Discussion
The developed scale is an instrument to obtain a reliable and valid measure of the experience of patients with chronic conditions during their interaction with the system of care. IEXPAC has a condensed set of measured items so that it can be used routinely and systematically in care services to assess whether patients perceive that they are receiving integrated care, have a positive relationship with the set of professionals with whom they usually interact, feel more able to look after their health, and are involved in new ways of non-face-to-face interaction. Specifically, IEXPAC assesses patient experience in accordance with the Triple Aim framework. The Triple Aim highlights the importance of the 'experience of care', an inconsistently measured dimension. Most health systems do not regularly assess the experience of care. This instrument could be used online (http://iexpac.es), on paper or by phone to assess patient experience of care in several contexts, such as a health centre, health district or health service.
Patient experience represents a unique, encompassing dimension that is challenging to measure. Lacking a widely accepted definition [37], we rooted the development of this scale in the Chronic Care Model theoretical framework, enriched with approaches coming from patient-centred care [17,18] and service coproduction theories [25]. The relationship between patient experience and quality of care is not consistent in the literature. Major studies have rendered different, even opposite, results [12,38,39], a likely explanation for which has been analysed by Manary [40]. In many contexts, the established patient satisfaction terminology is being replaced by the rather new 'patient experience', as if they could be used interchangeably. Patient experience with chronic care, as captured with the IEXPAC scale, however, differs significantly from traditional patient satisfaction with episodes of care. This is why, in developing IEXPAC, we put emphasis on clarifying and delimiting the concept we want to measure, as formulated in our definition of patient experience. In the past, the concept of satisfaction has been used in a bilateral way, capturing the interaction of a patient with a single professional or organization. New measures should focus on gauging experience with more comprehensive and complex provision models where different organizations work collaboratively to provide patient-centred care.
The IEXPAC scale has several strengths from an integrated care point of view. First, it assesses experience of care beyond a punctual contact, episode or specific setting; it is expected to capture a continuous experience over six months. Second, it considers the health professionals as a team, rather than focusing solely on individual or isolated interactions with physicians, nurses or other staff. In countries like Spain, care delivery is not only focused on "doctors" but on a comprehensive team, where different primary care professionals interact with patients, together with different providers from hospitals, mental health networks, long-term care facilities or social care organisations. Other authors, such as Walker et al., have also acknowledged that patients "highly value a sense of all members of the care team being on the same page". Third, the scale captures the relationship between the patient and a system of care, where self-management and social care are also relevant. Fourth, the scale incorporates an active patient role, having a clear orientation towards improving outcomes by means of patients and professionals working together (coproduction). Fifth, IEXPAC is aligned with new evaluative frameworks of population health management based on the Triple Aim vision.
The psychometric analysis of IEXPAC renders three independent factors with items that converge around concepts: productive interactions, new relational model and patient self-management, all with literature supporting their adequacy and soundness [41].
This instrument also has some limitations that should be considered. There are still no data to support whether improvements in scale ratings are related to better clinical or health-related quality of life outcomes. Furthermore, as patients with different chronic diseases or in different settings have distinct experiences with chronic illness care, there may be a need for specific scales for certain chronic diseases or types of complex chronic patients, such as those in home care programmes and those who are assisted by caregivers. Finally, the way the scale is formulated does not allow responsibilities to be attributed to a specific care provider at the individual level, only to a team of providers or "system of care".
To the best of our knowledge, most national health systems are not capturing the integrated care experience of patients with chronic conditions in a regular, holistic and systematic way. Most countries have a range of measures related to health outcomes and cost or utilization of services, typically included in national or regional outcome frameworks. It is appropriate to also develop experience-of-care measures and incorporate them into national and regional-level integrated care models. There is a promising future for these metrics through commissioning by health and social care authorities and through performance assessments. Relevant initiatives are expected to appear in the coming years in this field of knowledge [42,43]. For example, domain 4 (Ensuring that people have a positive experience of care) of the English NHS Outcomes Framework 2015/2016 [44] may include the assessment of experience of care from an integrated care perspective. Tools like IEXPAC should contribute to this aim.
Conclusions
The IEXPAC scale measures the experience of patients with chronic conditions in their continued interaction with health professionals and services in regular practice. It can ascertain the quality of care experienced by patients, contributing to the 'experience of care' axis of the Triple Aim, and facilitate the adoption of patient-centred care approaches by health and social care organisations.
Measurement of patient experience may also facilitate the reorientation towards patient-centred care. Presently, with numerous processes of service integration being deployed, this might be of particular importance. It is necessary to generate results that consolidate this type of measurement, showing its correlation with other outcome indicators whose relevance and usefulness are widely accepted in the literature.
Although IEXPAC has yet to prove it can be used in regular practice, it seems that new metrics like this will be welcomed and possibly incorporated into regular performance assessments or commissioning processes of health and social care.
Agent-based Dynamics of a SPAHR Opioid Model on Social Network Structures
Addiction epidemiology has been an active area of mathematical research in recent years. However, the social and mental processes involved in substance use disorders, versus contraction of a pathogenic disease, have presented challenges to advancing the epidemiological theory of substance abuse, especially within the context of opioids, where both prescriptions and social contagion have played a major role. In this paper, we utilize an agent-based modeling approach on social networks to further explore these dynamics. Using parameter estimation approaches, we compare our results to those of the Phillips et al. SPAHR model, which was previously fit to data from the state of Tennessee. Our results show that the average path length of a social network has a strong relationship to social contagion dynamics for drug use initiation, while other pathways to substance use disorder should not be constrained to social network interactions that predate the individual's drug use.
Introduction
Substance use disorder and overdose mortality related to opioid use continue to be a major public health issue in the United States [1]. There are signs that the COVID-19 epidemic has only exacerbated the problem, especially with regard to the risk of overdose death [2,3]. Illicitly manufactured fentanyl, a synthetic opioid that is up to 50 times more potent than heroin and up to 100 times stronger than morphine, is behind much of this mortality. Some recent fentanyl analogs are estimated to be even stronger, up to 10,000 times more potent than morphine in the case of carfentanil. These are often found mixed with other drugs, particularly heroin, cocaine, and methamphetamine, and made into pills which resemble other prescription opioids. Taken together, it is estimated that these non-methadone synthetic opioids account for 73% of all opioid-involved deaths and are the most common drugs involved in overdose deaths of any sort [3,4,5,6].
There have been numerous simulation and conceptual modeling studies targeting some form of opioid misuse, despite a noted lack of financial support for modeling studies on the opioid crisis from public health organizations [7,8,9]. Among simulation models, the most frequent approach is compartmental modeling, followed by Markov models, system dynamics models, and agent-based models. However, a recent review of opioid simulation models found that fewer than half presented model equations or provided access to model code and documentation, making it impossible to adequately interpret the findings, reproduce the results, or meaningfully establish how differences in mechanistic structure can lead to different qualitative conclusions [9].
While careful, well-justified mechanistic development and overall transparency is important in any modeling study, there are also trade-offs related to model complexity. This is particularly true for agent-based models (ABMs). Complex, detailed ABMs offer a high degree of realism that is attractive to policy makers and can provide a virtual laboratory for testing management strategies. On the other hand, they can be challenging to parameterize from data and lose generality and tractability to structural analysis that can be critical for advancing theory [10,11]. If the goal is to inform theory and advance mathematical results relating to model structure, parsimony is critical, and minimalist ABMs can be powerful tools for forging a connection between key, individual-level behaviors and population-level phenomena. This understanding can then be used to form mathematical models of population dynamics based on the individual-level mechanisms.
Our study takes this reductionist approach to agent-based modeling in order to study the role of social network structure on the theoretical results of an ordinary differential equation (ODE) compartmental model for opioid use disorder. The current US opioid epidemic is driven by a combination of prescribing practices and social factors [12,13,5]. Incorporating both of these mechanisms into an ODE compartmental model results in a different model structure than typically seen in infectious disease and exclusively socially-driven drug use disorder settings, with the result that typical approaches for analysis relying on R_0 no longer apply [12,5]. The effect of social networks in this mathematical setting has yet to be explored, but there is plenty of evidence that it plays a key role. Multiple studies reveal a connection between friend and family opioid use and opioid use initiation, and there are strong arguments in favor of applying social contagion theory to opioid use disorder [14,15,16,13].
Given the importance of prescription opioids and fentanyl to the current state of the US opioid epidemic, our social-network ABM study is based on a recent, data-driven ODE model which focuses on the interconnected dynamics of both of these factors. This Phillips et al. model, like many ODE models, inherently assumes the well-mixing principle. A population is described as well-mixed if every individual in the population interacts with all of the others, but models often assume that the well-mixing principle reasonably describes phenomena in populations that may only be approximately well mixed. However, these models may break down upon consideration of populations whose social network structure significantly deviates from well-mixed [17]. In this study, we will seek to determine the relative influence of social-network-based contagion on the spread of illicit- and prescription-sourced opioids [13].
The remainder of this paper is organized as follows. First, we briefly describe the Phillips et al. model and our mathematical approach to considering it as the mean-field, population-level model for an individual-level stochastic process. Next, we describe the construction of several social network models that are used for comparison purposes in our methods, including how vertices are to be removed and added in during the course of a model simulation. We then describe our procedure for comparing ABM parameterization to that of the Phillips et al. ODE model. Results are presented showing how different social network metrics are related to substance use disorder outcomes, with average path length showing the strongest relationship. We then use a parameter estimation procedure to show that prescription opioid based heroin and fentanyl initiation must be independent of the social network in order to reproduce the results of Phillips et al., with direct rates of substance use disorder (from the S class directly into A and H) left to adjust for the model's social network structure. Finally, we relate average path length directly to the value of the S to H rate, suggesting that social network dynamics almost exclusively affect social contagion dynamics whereby susceptible individuals acquire a heroin or fentanyl use disorder directly, and not through the use of prescription opioids first.
Methods
The agent-based model (ABM) developed in this study is based on a study from Phillips et al. [5], which described a five-class SPAHR model for prescription- and illicit-based opioid use disorder. Their model is formulated as a system of ordinary differential equations (ODEs) and serves as the underlying model for our extensions here; therefore, we shall often refer to the Phillips model as the "ODE model" versus our ABM-based, stochastic work. A consequence of this is that many of the modeling assumptions used in this project are inherited from the Phillips et al. ODE model. We refer the reader to the Phillips et al. study [5] for a complete discussion of these assumptions and their consequences.
Both the ABM and the ODE model contain the same compartments representing the state of individuals in a given population. These can be described as follows (a minimal code representation appears after the list):
1. Susceptible (S): Individuals are not taking prescription opioids, heroin, or fentanyl, and they have not previously suffered from opioid use disorder.
2. Prescribed (P): Individuals are taking prescription opioids, but their use patterns do not qualify as a disorder.
3. Addiction to prescription opioids (A): Individuals have a use disorder related to prescription opioids, but they are not using heroin or fentanyl.
4. Heroin addiction (H): Individuals have an opioid use disorder which includes heroin or fentanyl. It may still also include prescription opioids.
5. Stably recovered (R): Individuals who quit taking opioids and/or complete treatment for opioid use disorder and do not relapse within 4 weeks.
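As a minimal sketch of how these mutually exclusive classes can be represented in an agent-based implementation (the class and attribute names are ours, not taken from the Phillips et al. code):

```python
from enum import Enum
from dataclasses import dataclass

class State(Enum):
    """Mutually exclusive agent states mirroring the SPAHR compartments."""
    S = "susceptible"
    P = "prescribed"
    A = "prescription_opioid_use_disorder"
    H = "heroin_or_fentanyl_use_disorder"
    R = "stably_recovered"

@dataclass
class Agent:
    state: State = State.S  # each agent is in exactly one class per time step
```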
Phillips et al. [5] considered these compartments as proportions of the entire population, so that S + P + A + H + R = 1. In the case of our ABM, we will assume that these classes are both mutually exclusive and exhaustive, with each agent belonging to only one of the classes at any given time step. The Phillips et al. ODE model can be expressed as a system of five coupled ordinary differential equations, referred to below as System 1 (see [5] for the full equations). A schematic of this model from Phillips et al. [5] is shown in Fig. 1. In addition to the compartments listed above, all of the ODE transition pathways will be represented in our ABM. The Phillips model consists of validated parameter values based on data from the state of Tennessee, so we will leverage these values and the model's time-series outputs for parameterization and evaluation of our ABM results. However, the time-varying parameters from the Phillips et al. model will be set at their initial values and made constant for the purpose of our study, both for simplicity and to focus on the network effects of the ABM versus the ODE model formulation.
Converting the ODE model to an agent-based model
Given the work conducted by Phillips et al. on the deterministic, mean-field model represented by System 1, switching to an agent-based formulation conveys certain advantages for further analysis. A primary benefit is the individual-level characterization of agent-to-agent interaction versus the population-level, mass-action formulation of the ODEs. Using an individual-level approach, it becomes natural to explicitly model preexisting social connections between individuals with a network and then directly consider both different network structures and different scenarios for agent-to-agent interaction that could depend on that network. We can also begin to quantify a certain degree of uncertainty due to individual-level effects by bootstrapping the results of our simulations, though this comes with a computational cost versus the deterministic, ODE approach.
Our process for converting the Phillips et al. ODE model to an agent-based model (ABM) proceeds as follows. In the ODE model, we assume that each compartment represents the mean expected fraction of a total population which belongs to that class, and that each term in the system of equations defines a mean rate of change between these compartments. In the case of a linear term, e.g. αS, the parameter (α) defines a constant, mean rate of transition (in this case, from S to P) per individual in the S compartment per unit time (years for the Phillips et al. model [5] and in our study). These transitions also occur independently of the time since the last event, as all information about individuals or past events in the ODE model is lost. Multiplying by the relevant compartment, which is always the compartment making the transition (S in our αS example), then gives the total expected number of individuals (or the population fraction in the units of Phillips et al. [5]) that make the transition. By definition, this also implies that individuals in the model are undergoing a Poisson process with rate parameter α, and we can therefore model the waiting time before an individual's transition between compartments with an exponential distribution.
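As a quick numerical sanity check on this equivalence (a minimal sketch; the rate value below is purely hypothetical), a per-step Bernoulli trial with probability 1 − e^(−αΔt) reproduces the exponential waiting time of a rate-α Poisson process:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.15  # hypothetical transition rate (per year), for illustration only
dt = 0.01     # time step in years, as used in our simulations

# Continuous-time view: waiting times of a rate-alpha Poisson process
# are exponentially distributed with mean 1/alpha.
waits = rng.exponential(scale=1 / alpha, size=100_000)
print(waits.mean())  # ~ 1/alpha ≈ 6.67 years

# Discrete-time view: a Bernoulli trial per step with probability
# 1 - exp(-alpha*dt) recovers the same mean waiting time as dt -> 0.
p_step = 1 - np.exp(-alpha * dt)
steps = rng.geometric(p_step, size=100_000)  # steps until first success
print((steps * dt).mean())  # also ~ 6.67 years
```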
In the case of the ODE model's nonlinear rates, this line of reasoning only changes slightly. As with the linear rates, each nonlinear term of the ODE model always includes the class that is making the transition. Setting aside the relapse rates momentarily, each of the other nonlinear rates also includes a second class which the transitioning class must interact with in some way, whether directly (e.g. via social contact) or indirectly (e.g. drug availability as a function of current demand, see Phillips et al. [5] for details). We can optionally relax the well-mixing assumption of the ODE model by assuming that these interactions take place according to a social network with average "infection" rate given by the coefficient of the corresponding ODE rate term.
For our study, we will assume that this network is an undirected, simple graph where the nodes are agents and the edges represent significant social interaction between the agents. The network will be fixed at the start of each simulation and will not change except to accommodate new agents which come into the network to replace a departing agent that underwent a death process during a time-step. This process is specific to the network generation algorithm chosen and will be explained in more detail later.
In the case of a social interaction via the network, the relative exposure of an individual agent to a substance using class, for example, the H-class, is given by the number of its network neighbors in H divided by its total number of neighbors. In the case of the relapse rates in the ODE model, the quotient terms are meant to determine the class into which individuals relapse. Since it is quite possible an individual in R has no neighbors in either A or H, the global quotient (using the total number of A and H in the entire network) is used instead for this transition.
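For concreteness, a minimal sketch of this neighbor-density calculation is given below; the function name and the `state` mapping are our own illustrative constructs, not identifiers from the released code:

```python
def neighbor_density(G, agent, klass, state):
    """Fraction of `agent`'s neighbors whose current class is `klass`.
    `G` is a NetworkX graph and `state` maps node -> class label.
    Returns 0 for an isolated node (no neighbors)."""
    nbrs = list(G.neighbors(agent))
    if not nbrs:
        return 0.0
    return sum(state[v] == klass for v in nbrs) / len(nbrs)
```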
In all cases, our agent-based model uses a two-step approach to determining agent transitions between classes. First, for a given agent with a given class attribute, all mean transition rates out of the current class (as given by the ODE model) are added together into a parameter λ,

λ = Σ_i a_i + Σ_j b_j N_j,

where the a_i denote all linear coefficients of rates out of the current class, the b_j denote all nonlinear coefficients out of the current class, and N_j is the density of the agent's neighbors in the corresponding contact class. Relapse is considered as a linear rate for this purpose, with σ the coefficient. Under the assumption that this sum defines the mean waiting time of a Poisson process, we model the probability that the agent transitions out of its class in a time step of length Δt as

P(transition in (t, t + Δt]) = 1 − e^(−λΔt).

During a simulation, comparing this probability to a uniform pseudorandom number establishes whether or not a given agent will make a transition during a time step of length Δt. Making more than one transition per time step is a higher-order transition probability (O(Δt²)) and is neglected in our model.
Assuming the agent will make a transition, the second step of the algorithm is to determine which class the agent transitions to among the various possibilities as determined by the directed connections in Fig. 1. Each individual transition rate contributing to λ is normalized by λ and treated as a probability. If relapse is an option, σ is divided up into A/(A + H) and H/(A + H) components, where A and H are the total numbers of prescription opioid addiction-class agents and heroin addiction-class agents in the simulation, and these two components define transition probabilities into the A and H classes, respectively. If both A and H are zero, we divide the probability σ/λ evenly between the two transition cases.
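A minimal sketch of this two-step update for a single agent follows; the class labels and rate values are hypothetical placeholders, the network-dependent rates are assumed to have been pre-multiplied by the relevant neighbor densities, and the relapse split described above is omitted for brevity:

```python
import numpy as np

def step_agent(rates, dt, rng):
    """One ABM transition step for a single agent. `rates` maps each
    destination class to the mean rate out of the agent's current class
    (network-dependent rates already include the neighbor-density factor).
    Returns the destination class, or None if no transition occurs."""
    lam = sum(rates.values())
    if lam == 0.0:
        return None
    # Step 1: does the agent leave its current class this time step?
    if rng.random() >= 1.0 - np.exp(-lam * dt):
        return None
    # Step 2: choose the destination with probability rate / lambda.
    dests = list(rates)
    probs = np.array([rates[d] for d in dests]) / lam
    return rng.choice(dests, p=probs)

rng = np.random.default_rng(1)
rates_out_of_S = {"P": 0.15, "A": 0.002, "H": 0.0005}  # illustrative values
print(step_agent(rates_out_of_S, dt=0.01, rng=rng))
```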
Lastly, since H = 0 defines an absorbing state for the model (as long as A > 0 or R = 0), our model artificially converts one random S agent into an H whenever H = 0. This avoids a discrepancy with the original ODE model due to discretization: if H(0) > 0 in the ODE system, H can asymptotically approach zero but never reach it. However, in a discrete, stochastic, agent-based model, it is quite possible to achieve H = 0 for nonzero initial conditions, especially when the initial count of H is relatively small. S was chosen as the reservoir class for this conversion because it tends to be the class with the greatest number of individuals by far when running simulations with parameters from Phillips et al. [5].
The total population of the model is determined prior to simulation and is constant between time steps. When agents undergo a death process in the model, we immediately introduce another agent and assign it the S class.
Social Networks
As mentioned before, preexisting social connections between discrete agents are modeled as a network. We define a network to be a graph G = (V, E), where V is a set of nodes (the agents) and E is a set of edges. For a network G = (V, E), if there exists an undirected edge (v_i, v_j) ∈ E then we say that v_j is a neighbor of node v_i and vice versa. The set of all neighbors of a node v_i is referred to as the neighborhood of v_i, denoted N(v_i) = {v_j ∈ V | (v_i, v_j) ∈ E}. The degree of a node is its number of neighbors and can be thought of as a function deg: V → Z≥0 with deg(v_i) = |N(v_i)|. Edges in our network represent social connections, meaning any relationship where there exists a potential to spread a behavioral practice or induce a behavioral change, a phenomenon that has been called a "social contagion" [18]. We consider each connection to be a homogeneous social interaction; that is, the strength of social interactions is considered to be equal for any pair of individuals in the network who are connected by an edge. By this definition, we can consider the neighborhood N(v_i) to be the epidemiologically relevant acquaintances of the agent v_i.

Table 1: Description of network-dependent rates in the ABM. Note that β_A, β_P, θ_1, θ_2, and θ_3 are constant scalars.
In terms of network structure, we can envision a well-mixed population as a fully connected or complete graph. Therefore, we expect the mean of a large set of ABM realizations conducted on a fully connected network to approximate the Phillips et al. model [5]. Any other network structure could potentially yield different results, though it only affects the model through the five rates described in Tbl. 1.
In addition to the fully connected network, we examined the effect of networks created from three common network generation algorithms: the Erdős-Rényi network [19], the Barabási-Albert network [20], and the Watts-Strogatz network [21]. In any given ABM simulation, one of these algorithms is specified (or the complete network) and the network is generated for the requested number of agents.
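All four network types are available in NetworkX, the library used for our implementation; a sketch with illustrative (not study-specific) parameter values:

```python
import networkx as nx

N = 1000  # number of agents (illustrative)

complete = nx.complete_graph(N)                       # well-mixed baseline
er = nx.erdos_renyi_graph(N, p=0.01, seed=42)         # Erdős–Rényi
ba = nx.barabasi_albert_graph(N, m=5, seed=42)        # Barabási–Albert
ws = nx.watts_strogatz_graph(N, k=8, p=0.2, seed=42)  # Watts–Strogatz

for g in (er, ba, ws):
    mean_degree = sum(d for _, d in g.degree()) / N
    print(g, mean_degree)
```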
However, when agents die in the model, we felt it overly artificial to reintroduce a susceptible agent with the same connections as before. Instead, if n individuals die in time step t, then those n individuals are removed entirely from the network, including any edges associated with those individuals. Then n new individuals are immediately introduced into the network, forming new connections and each being assigned class S. In order to preserve the properties of the original network generation algorithm as closely as possible, we implemented network generation-specific reintroduction algorithms that define how a new node is introduced into the network after the removal of an old node. Descriptions of each network generation algorithm and their corresponding reintroduction algorithms are given below.
Erdős-Rényi Model
The Erdős-Rényi random graph has served as a baseline model for constructing random networks [19]. In this network generation algorithm, n nodes are created, and every edge (v i , v j ) between two nodes v i and v j has an equal probability p of being included in the network.
When a new agent (node) v_{n+1} is introduced into the network during a simulation, each possible edge between the new node and an existing node is added with the same probability p as used in the original network generation.
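A minimal sketch of this reintroduction rule (the function name and signature are ours, for illustration; the node object is kept and only its edges are regenerated):

```python
import networkx as nx

def reintroduce_er(G, dead_node, p, rng):
    """Replace a dead agent in an Erdős–Rényi network: drop its old
    edges, then reconnect it to every other node independently with
    the original edge probability p."""
    G.remove_edges_from(list(G.edges(dead_node)))
    for v in G.nodes:
        if v != dead_node and rng.random() < p:
            G.add_edge(dead_node, v)
```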
Barabási-Albert Model
The Barabási-Albert network [20] is a scale-free network generation algorithm based on preferential attachment. Nodes are added in sequence to generate a graph, and for each newly added node, the probability Π of forming an edge connecting the new node with any existing node v_i depends on the degree of v_i:

Π(v_i) = deg(v_i) / Σ_j deg(v_j).

This network generation algorithm has been extensively studied, and the preferential attachment and resulting scale-free degree distribution have a basis in observations of real-world networks [22]. Since this network is built in a sequential manner, our reintroduction algorithm is defined in the same way: edge connections draw on the distribution described by Π just as if the new node were coming into the network at initialization.
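A sketch of the corresponding reintroduction step, drawing m attachment targets in proportion to the surviving nodes' degrees; the names and the parameter m are illustrative, and degenerate cases (e.g., an all-zero degree sequence) are not handled:

```python
import numpy as np
import networkx as nx

def reintroduce_ba(G, dead_node, m, rng):
    """Replace a dead agent in a Barabási–Albert network: drop its old
    edges, then attach m new edges with probability proportional to the
    degrees of the surviving nodes (preferential attachment)."""
    G.remove_edges_from(list(G.edges(dead_node)))
    others = [v for v in G.nodes if v != dead_node]
    degs = np.array([G.degree(v) for v in others], dtype=float)
    probs = degs / degs.sum()  # assumes at least one surviving edge
    targets = rng.choice(others, size=m, replace=False, p=probs)
    G.add_edges_from((dead_node, t) for t in targets)
```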
Watts-Strogatz Model
The Watts-Strogatz network generation algorithm, also known as a small-world network, has been shown to demonstrate properties consistent with real-world social networks [21]. The stochasticity of this model is also more controlled than in Barabási-Albert or Erdős-Rényi networks, with the amount of randomness being governed by a single parameter.
The Watts-Strogatz algorithm defines three parameters for building the network G = (V, E):

• N: Total number of nodes in the network, where |V| = N. In the ABM, this corresponds to the total population size.
• n: Known as the "neighborhood size." Each node is initially connected to the n closest nodes in the lattice structure (a process described below). This parameter also fixes the eventual mean degree of all nodes in the network to be n. By the definition given in [21], n must be an even number; note that in [21], n is synonymous with the k parameter.
• p: Known as the "rewire probability." This is the probability that any edge within the original lattice may be disconnected and reconnected to another randomly-chosen node in the network. One can think of p as measuring the level of disorder in a Watts-Strogatz network, with p = 0 resulting in a regular lattice and p = 1 resulting in a random graph, similar to the Erdős-Rényi network.
In the Watts-Strogatz network generation algorithm, N nodes are arranged in a lattice. An edge is then created between each node and the n nodes closest to it within the lattice structure. After creating this initial structure, a Bernoulli process is then performed on the set of edges with each edge having a probability p of one of the nodes on the edge being swapped for another. For more information on this algorithm, we direct the reader to [21].
We are not aware of any established algorithm for the introduction of a new node into a Watts-Strogatz network, so we will outline our method here. On a high level, this Watts-Strogatz reintroduction algorithm strives to reintroduce new agents in the place of dead agents at the end of some time step t i while approximately preserving the local network topology of the Watts-Strogatz network.
Consider a Watts-Strogatz network defined by a graph G = (V, E) generated with parameter values N , n, and p. In the description of this algorithm, we will consider the terms "nodes" and "agents" to be synonymous when referring to operations on the network G. Let D ⊆ V be the subset of agents who die as a result of the ABM transitions. The first step in the algorithm is to remove all dead agents from the network, i.e., all edges belonging to any node in D are removed from E. This leaves each dead agent with no edges connecting it to the network. In the second step of the algorithm, they will be reintroduced as new nodes with their class property set to S. We will assume that |V \ D| > 0 so that D is a proper subset of V . If this is not the case, the entire network is simply reconstructed using the standard Watts-Strogatz algorithm with the same parameters.
The reintroduction algorithm can be broken down into two separate portions:

1. Pre-rewire neighborhood identification: A stochastic process is performed to identify a set of nodes that is highly clustered and serves as an initial neighborhood for the reintroduced node. This is analogous to the initial Watts-Strogatz network generation, where nodes are connected to nearest neighbors within the lattice before edges are rewired.

2. Rewiring procedure: Rewiring, as in the original Watts-Strogatz algorithm, is performed on the pre-rewire edge set of the reintroduced node.
Pre-rewire neighborhood identification. Consider a "dead" node v d ∈ D with all edges removed. We will describe the construction of A, the pre-rewire neighborhood of v d . Since we wish to keep the average degree of all nodes in the network approximately equal to n, we will add n new edges connecting v d to the existing network.
A will be built through an iterative process. Let A_m = {v_1, ..., v_m | m < n, v_i ∈ V} ⊆ V be the intermediate version of A at some iteration m < n in the process. Let N_m be the set of nodes given by the union of all neighbors of nodes v_i ∈ A_m but excluding any nodes already contained in A_m, i.e.,

N_m = ( ⋃_{v_i ∈ A_m} N(v_i) ) \ A_m,

where N(v_i) denotes the neighborhood of v_i. Define a function deg_{A_m}: N_m → R such that for any node v ∈ N_m, deg_{A_m}(v) is the number of edges connecting v to any node in A_m; that is, deg_{A_m}(v) = |N(v) ∩ A_m|. To begin the algorithm, we choose a random node v_1 ∈ V \ D and set A_1 = {v_1}. At all successive iterations, we choose the next node v_{m+1} to be the one with the maximum number of edges connecting it to any nodes in the current collection A_m, i.e.,

v_{m+1} = argmax_{v ∈ N_m} deg_{A_m}(v).

If N_m is empty, then it is first checked whether A_m = V. If this is the case, a node from D is chosen at random to be assigned to v_{m+1} (this does not prevent the reintroduction algorithm from acting on this node in the future if it hasn't already). If not, the network is not connected, and a node is chosen at random from the nodes not yet reachable from A_m. If there is more than one choice for this v_{m+1}, we take one at random from the candidates. Adding this new node to A_m gives us A_{m+1}. Repeating this process for n iterations gives us the full set A.
Rewiring procedure. Once A is obtained for a given node v d , we then perform rewiring in an identical procedure to the one described in the original Watts-Strogatz algorithm but restricted to edges in the set {(v d , v i )|v i ∈ A}. That is, each edge connected to v d has a probability p of being rewired to a random node in V where p is the parameter used to generate the starting Watts-Strogatz algorithm network. This procedure concludes the reintroduction of v d into the network.
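Putting the two portions together, a condensed sketch of the full reintroduction procedure is given below. It simplifies the algorithm described above: the special cases for an empty candidate set and a disconnected network are omitted, the starting node is drawn from all surviving nodes rather than V \ D, and ties are broken with a random key:

```python
import networkx as nx

def reintroduce_ws(G, dead_node, n, p, rng):
    """Sketch of Watts–Strogatz reintroduction: grow a highly clustered
    pre-rewire neighborhood A of size n, connect the dead node to it,
    then rewire each new edge with probability p (simplified)."""
    G.remove_edges_from(list(G.edges(dead_node)))
    alive = [v for v in G.nodes if v != dead_node]

    # Pre-rewire neighborhood identification.
    A = {rng.choice(alive)}
    while len(A) < n:
        cand = {u for v in A for u in G.neighbors(v)} - A - {dead_node}
        # candidate with the most edges into A; ties broken at random
        best = max(cand, key=lambda u: (len(set(G.neighbors(u)) & A),
                                        rng.random()))
        A.add(best)

    # Rewiring procedure on the new edges.
    for v in A:
        target = rng.choice(alive) if rng.random() < p else v
        G.add_edge(dead_node, target)
```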
This reintroduction algorithm is repeated for all v_d ∈ D. Additionally, we repeat the reintroduction procedure for every disconnected agent at each time step t_i in the simulation so that there are never any isolated nodes, but the class property of these agents is preserved rather than reset to S. This reintroduction prevents disconnected agents from persisting within the network during a time step, a phenomenon that would disrupt the network-based rates in the ABM.
ABM parameter fitting
For parameters that were not network-dependent, we used estimated values from Phillips et al. derived from Tennessee data [5]. In Phillips et al. [5], α and μ_A were defined as time-varying parameters. However, in our work, we keep these parameters constant at their initial values in order to better focus on comparing the ODE model results, which represent a fully connected community, to the various social network structures used in the ABM.
In the context of the data-fitted ODE model of Phillips et al., a primary question for our study was whether or not the output of their fitted model could be reproduced with an agent-based model operating on a social network. However, since the agent-based model derived here is stochastic, it would be prohibitively expensive to solve a nonlinear optimization problem for new parameters based on mean trajectories obtained via an average of ABM simulations over the space of all feasible parameter values. Instead, we began by fitting the Phillips et al. ODE model to the mean realization of the ABM generated with fixed network parameters and model parameters from the optimized Phillips et al. model. Our motivation was to observe how the ODE model parameters would need to change in order to mimic an imposed network structure. Working backwards, we might then narrow our search for which of the ABM parameters would need to be adjusted to reproduce the ODE results.
Parameter estimation on the ODE model was conducted using the sequential least-squares quadratic programming (SLSQP) method through the Python library SciPy [23]. The objective function F for this optimization was the mean squared error between the ODE trajectory and the pointwise mean ABM trajectory of all compartments. To account for all compartments of the model, we constructed vectors at each discretized time t_i, resulting in a matrix of size (1000, 5) for 1000 time steps and 5 compartments. Writing M for the mean ABM trajectory matrix and O for the corresponding ODE trajectory matrix, F is then defined as the mean of the squared entrywise differences,

F(M, O) = (1 / (1000 · 5)) Σ_i Σ_c (M_{i,c} − O_{i,c})².

This optimization was then repeated for different types and parameterizations of network algorithms, each time searching for the optimal parameterization of the ODE model result O*(t) with the smallest value of F(M, O*).
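A sketch of this fitting loop is given below; it is not the study's released code, and the SPAHR right-hand side is passed in as an argument rather than reproduced:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

def fit_ode_to_abm(abm_mean, t_grid, ode_rhs, theta0, bounds):
    """Fit ODE parameters to the pointwise-mean ABM trajectory by
    minimizing the mean squared error over all five compartments.
    `abm_mean` has shape (len(t_grid), 5); `ode_rhs(t, y, theta)` is
    the right-hand side of the SPAHR system (not reproduced here)."""
    y0 = abm_mean[0]

    def loss(theta):
        sol = solve_ivp(ode_rhs, (t_grid[0], t_grid[-1]), y0,
                        t_eval=t_grid, args=(theta,))
        return np.mean((sol.y.T - abm_mean) ** 2)

    return minimize(loss, theta0, method="SLSQP", bounds=bounds,
                    options={"maxiter": 500, "ftol": 1e-20})
```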
All optimizations started with initial conditions described in Tbl. A.6 located in Appendix A. For all experiments, the optimization was run with a maximum of 500 iterations and a target tolerance of 1 × 10⁻²⁰ before the optimization algorithm stopped. Each reference ABM model was run 301 times with base parameters described in Tbl. D.10 and network parameters described in Appendix D, with the output trajectories averaged. After the ODE model was fit to the mean ABM trajectory, we compared the original parameter values to the newly fitted values and used an inversion formula to approximate new parameter values that might be necessary to make the ABM match the original, data-driven Phillips et al. ODE model trajectory. This works as follows: for a given parameter (e.g., α) in the ABM and ODE models, we first find the value k such that α_f = kα_o, where α_o is the original Phillips et al. value for α and α_f is the value of α when the ODE model is fit to the mean of a given ABM model. Inverting this relationship, we then have an approximation for the needed shift in the ABM parameter value to match the original ODE result, α_abm = α_o/k = α_o²/α_f. For our discussion, we will refer to parameters such as α_abm as "inverted" parameters. To improve convergence of the optimization algorithm, we also imposed bounds on the fit values of the ODE model. Following the example of α, we restricted the fit bound of α_f to [α_o × 10⁻³, α_o]. We use α_o as an upper bound for α_f based on the intuition that the well-mixed ODE model should only decrease its value of network-dependent parameters in order to fit the sparser network structure used in the ABM.
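A toy numerical example of the inversion (all values hypothetical):

```python
alpha_o = 0.15  # hypothetical original (Phillips et al.) value
alpha_f = 0.12  # hypothetical value after fitting the ODE to the mean ABM

k = alpha_f / alpha_o    # fitted value expressed as k * original
alpha_abm = alpha_o / k  # = alpha_o**2 / alpha_f
print(alpha_abm)         # 0.1875: the ABM rate is raised to offset the network
```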
Using these inverted parameters, we can compare visually how well the ABM might fit the ODE model results. This procedure was repeated for a variety of network topologies on the ABM. Statistical analysis was also conducted to compare how these inverted parameters varied according to network statistics relevant to the network generation algorithms we chose. These results will be described in Section 3.1.
Experimental details
In all cases, our ABM was run with a time step of ∆t = 0.01 where t is in years. In order to keep the model trajectory at 10 years (as was done in Phillips et al. [5]), each model was run for 1000 time steps. After each time step, the population of individuals belonging to each class is recorded.
In the case of all but the fully connected network model, we also record two primary network statistics: the average path length (APL) and the clustering coefficient (CC). These were used for network comparison. Other network statistics were recorded as well, including the mean and variance of node degree from each model. Each network statistic was recorded at the beginning of the simulation, before any alteration by the ABM. We found this recording strategy to be adequate after analysis of beginning and ending network statistics showed very little perturbation in values; we hypothesize that this is due to low death rates causing low reintroduction rates for each model (see base parameters in Tbl. A.6) as well as our chosen methods of node reintroduction.
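A sketch of how these statistics can be computed with NetworkX (the network parameters are illustrative):

```python
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(1000, p=0.01, seed=0)  # illustrative network

degrees = np.array([d for _, d in G.degree()])
stats = {
    "APL": nx.average_shortest_path_length(G),  # assumes G is connected
    "CC": nx.average_clustering(G),
    "DMean": degrees.mean(),
    "DVar": degrees.var(),
}
print(stats)
```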
Since our results rely on a mean trajectory of several ABM runs for a given network parameterization, we used a simple arithmetic mean to aggregate network statistics across multiple model realizations. These could then be compared between network generation algorithm parameterizations. These metrics were calculated using the NetworkX library in Python [24]. The model was originally constructed in the NetLogo programming language [25] but was later transferred to Python and implemented using the NetworkX library [24]. All code utilized in this research is publicly available on GitHub: the Phillips et al. model is available at https://github.com/mountaindust/Heroin_model and all ABM-related code used in this research is available with documentation at https://github.com/owencqueen/SPAHR_Model.
Parameter estimation
To analyze the effect of network structure and ABM stochasticity on the Phillips et al. ODE model, it is necessary to fit one of these models to the results of the other. Since agent-based model results are the realization of a stochastic process while the Phillips et al. ODE system is meant to represent a mean-field model, it is far easier to conduct parameter estimation on the ODEs, thereby fitting the Phillips et al. model to the ABM for a given social network structure. We then varied the network structure and examined how each parameter in the ODEs had to change in order to approximate the effect of the network on substance use dynamics. Details of the parameter estimation are described in Section 2.3.
Necessary network dynamics based on Phillips et al. results
Initial parameter estimation focused on the ER models described in Appendix D.
First, an optimization was attempted to find parameters that minimized Equation 2 for each tested network parameter set (Appendix D) of the Erdős-Rényi agent-based model, using a procedure as described in Section 2.3. During this initial attempt, all network-dependent parameters (Tbl. 1) were optimized in the ODE model, with all other parameters constant.
When attempting to optimize all network-dependent parameters, the optimization could not produce a solution that reasonably fit the ODE model to the mean ABM trajectory. An ad hoc ablation study was conducted to determine the source of the improper fitting. Network dependence was removed for previously network-dependent parameters, and the resulting fits were compared visually and quantitatively by measuring the final loss value (Equation 2) after convergence of the optimization algorithm. To convert parameters from network dependence to independence, we reverted to the Phillips et al. definition of the parameters, as described in Tbl. 1. By reverting to the ODE definitions, we observed that our final loss value would increase by up to two-fold, but parameter values were much more controlled, with θ_1 being the primary variable that controlled the goodness-of-fit. Tbl. 2 shows one example of this phenomenon observed when removing network dependence from θ_2 and θ_3; similar patterns were observed across WS, BA, and ER models for a variety of model construction parameters. Therefore, the decision was made to only allow the parameters θ_1, β_A, and β_P to retain network dependence and vary during the optimization procedure.
We interpret this result as indicative of a lack of social network influence (in the sense of presence or absence of H individuals in direct social contact) on initiation of heroin use for the P and A classes. Put another way, for individuals who may be developing an opioid use disorder based on prescription use (P class) or already have an opioid use disorder (A class), the social network simply has no bearing on how likely they are to move to the H class compared to the relative prevalence of H in the general population. Agents in these classes will develop heroin use disorder by seeking out their own, new access to heroin without the necessity of H contacts within their existing network.
However, the case of S is different: Individuals who are not actively using or recovering from opioids develop opioid or heroin use disorder through direct, social connections to P , A, or H agents. In terms of the model parameters, this means that β P , β A , and θ 1 remain network-dependent pathways in the ABM.
Analysis of Network Statistics
Various experiments were performed in order to analyze network statistics against baseline modeling outcomes derived from the Phillips et al. model.

Table 3: Pearson's correlation coefficient (r) between a variety of statistics for each network and modeling outcome for model simulations on an Erdős-Rényi random network (reference Appendix D for parameters of models tested). APL is average path length, CC is clustering coefficient, DMean is mean degree of all nodes in the network, DVar is variance of degree between all nodes in the network, and A_f and H_f are the final proportions of A and H individuals, respectively. The p-value is also given for the computation of r in each relationship.
To begin, various network statistics were tested against a variety of Erdős-Rényi (ER), Barabási-Albert (BA), and Watts-Strogatz (WS) models for their relative importance to the final number of A and H individuals, denoted A f and H f respectively. The parameter values for each model considered in this analysis are detailed in Appendix D. We repeated each simulation 300 times for each chosen parameter value, with each simulation starting from the initial conditions S 0 , P 0 , A 0 , H 0 , and R 0 as shown in Tbl. A.6 in the Appendix and then run for 1000 time steps. We found that the change in network statistics from beginning to end of each simulation run was insignificant (our reintroduction algorithms worked as intended in this regard); therefore, statistics for the network topology were recorded at the beginning of each simulation.
The resulting correlations for the ER network statistics versus final values of A and H are shown in Tbl. 3. Note that the network statistics are not independent of each other, so their correlations should be compared for relative importance rather than taken in isolation. One can see that on this relative basis, average path length (APL) appears to be very important for the ER models in strength of correlation with H_f and A_f + H_f. This relationship is negative, meaning that the more sparse the network becomes (higher APL), the more H_f is expected to decrease. Note that none of the statistics tested showed a strong correlation with A_f. We have visualized the data and regression line relating APL and H_f in Fig. 5.

Figure 5: H_f plotted against beginning APL value for each of several runs of ER models with varying p parameters. A least-squares regression line is also shown to emphasize the negative correlation between APL and H_f. Each model parameter was chosen to display a wide range of APL values. In the text below the legend, the Pearson's correlation coefficient (r) and the p-value for this r statistic are displayed. The p-value is very low (truncated to 0), meaning that this is a statistically significant correlation coefficient.
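A sketch of the correlation and regression computation underlying Fig. 5, with short placeholder arrays standing in for the recorded simulation outputs:

```python
import numpy as np
from scipy.stats import pearsonr, linregress

# apl[i] and h_final[i]: starting APL and final H proportion of run i
# (placeholder values; in our study these come from the ER simulations)
apl = np.array([3.1, 3.4, 3.9, 4.6, 5.2])
h_final = np.array([0.011, 0.010, 0.008, 0.007, 0.005])

r, p_value = pearsonr(apl, h_final)
fit = linregress(apl, h_final)  # least-squares line as drawn in Fig. 5
print(r, p_value, fit.slope, fit.intercept)
```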
Similar correlation analyses for BA and WS models are shown in Tbls. B.7 and B.8 in the appendix. Across all models, APL seems to produce a consistently strong correlation against H f values, but for BA and WS, this correlation is weaker than for ER models. We hypothesize that this is because the exact location of A and H nodes within the non-trivial network structure matters far more than in an ER network, where the network structure can be thought of as more homogeneous. For WS, the degree mean and degree variance show the strongest correlation to H f values by far. However, this can be explained by the nature of the WS generation algorithm. The WS algorithm starts with all nodes having identical degrees, and only by rewiring, which is more frequent with a higher value for the p parameter, would this degree change. Therefore, a higher p parameter would cause more variance in the degree and also tends to decrease APL and CC (clustering coefficient) [21]. Likewise, the mean degree over all nodes in the network is directly correlated with the n parameter, and this parameter also has a direct effect on the value of the APL and CC for that network [21].
Comparing Models with Similar APLs
In order to understand the significance of each of these statistics for predicting H f , an analysis was conducted using two network models with approximately equal APL. The goal was to hold APL constant and evaluate how changes in CC affect the modeling outcomes. APL was chosen as the fixed parameter for this analysis due to its strength in correlation for H f , as previously discussed.
The first models tested were Erdős-Rényi with p = 0.0026 and Watts-Strogatz with n = 8, p = 0.2. Model parameters were chosen in order to produce networks with APLs with a difference of less than 0.001. In addition, these Erdős-Rényi and Watts-Strogatz network structures were chosen due to strong differences in their structure, evidenced by the difference in mean and variance of the degrees of nodes within each of these networks as shown in Tbl. 4. Each model was run 2000 times as previously described. After each run, the H f and APL values were recorded.
Two statistical tests were performed to quantify similarity between distributions of H_f from these models: the two-sample Kolmogorov-Smirnov (KS) test and the two-sample Epps-Singleton (ES) test, which is the discrete analog of the KS test. The null hypothesis H_0 of each of these tests is that both samples have equivalent underlying distributions. Model results are visualized in Fig. 6 with the corresponding statistics given in Tbl. 4. APL and CC are both continuous values, thus they should be evaluated using the KS test.
However, H f is a discrete distribution for the fixed population size within the model, so it should be evaluated using the ES test. For completeness, both KS and ES tests were run on each of these statistics.
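Both tests are available in SciPy; a sketch with placeholder samples standing in for the 2000 recorded H_f values per model:

```python
import numpy as np
from scipy.stats import ks_2samp, epps_singleton_2samp

rng = np.random.default_rng(0)
# placeholder H_f samples in place of the 2000 ER and 2000 WS run outputs
h_er = rng.binomial(10_000, 0.008, size=2000) / 10_000
h_ws = rng.binomial(10_000, 0.008, size=2000) / 10_000

print(ks_2samp(h_er, h_ws))              # KS test (continuous statistics)
print(epps_singleton_2samp(h_er, h_ws))  # ES test (discrete H_f values)
```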
In the left plot of Fig. 6, the APL distributions seem very similar, but the KS test strongly suggests different APL distributions between the ER and WS models (Tbl. 4). There is a weak statistical signal indicating possible correspondence between the H_f distributions produced by the various runs of these two models. As expected, there is strong statistical evidence against similarity in clustering coefficient distributions across these two models, a result of the differences in generation of the networks underlying these models.
Determining how average path length relates to model parameters
Using a parameter sweep, we analyzed the effect of average path length (APL) on the outcome of the parameter estimation procedure described in Section 2.3. APL is the focus of this analysis due to the strong correlations discovered between APL and model outcomes in the analyses described in Section 3.2.
We found that θ_1 is the only parameter which exhibits any significant change with respect to APL. Tbls. 5, D.10, and D.11 show optimization results for changes in parameters for a variety of ER, BA, and WS models, respectively (the BA and WS tables are located in the appendix). Results for the ER models are visualized in Fig. 7; the same analysis is shown for BA models in Fig. D.11 and for WS models in Fig. D.12. The Pearson's correlation coefficient for the BA network models was found to be 0.719206 with a p-value of 0.008382, and for the WS network models, it was 0.845941 with a p-value of less than 10⁻⁶.
Conclusions
Our study utilized an assumption that the rates of change given in the Phillips et al. [5] model represent the mean of an underlying Poisson process. This assumption was used to convert the ordinary differential equations into a stochastic, individual-based model which could then be combined with a social network. Social networks were stochastically generated from three different algorithms and were static for each simulation except for when a node underwent a death process and was subsequently added back into the network using a method similar to the original generation algorithm. The result is a random process which can be used to bootstrap a time series distribution (see Fig. 4) and explore the effects of different social network dynamics and structures.
Since relaxation of the well-mixed assumption is the core feature of the ABM, we fit the ODE model to mean ABM trajectories for each network structure and computed "inverted" parameter values as previously described in Section 2.3; the final loss is the value of the mean squared error between the mean ABM trajectory and the ODE model trajectory for every class. These relationships are visualized in Fig. 7. We quickly discovered that our original assumption that the P → H and A → H transitions (with parameters θ_2 and θ_3) depended on H neighbors in the social network resulted in outlandish values for θ_2 and θ_3 when compared to data. Upon further reflection, it made sense that individuals who were already using prescription opioids in one way or another would likely not require existing social contacts using heroin or fentanyl in order to initiate heroin or fentanyl use. The model was changed so that these transitions were based on the total proportion of H in the network rather than just neighbors, and θ_2 and θ_3 were fixed at their Phillips et al. reported values. The resulting ABM was capable of reproducing the Phillips et al. result quite well. Additionally, we discovered that θ_1 (representing the rate for the S → H transition) was the only parameter needing adjustment due to adding a social network.

Figure 7: APL vs. inverted θ_1 values for optimizations run on ER models with varying p values. Each point represents the derived θ_1 value plotted against the average APL over 300 model runs, with each network in these 300 runs being generated with the same p parameter. Horizontal bars are shown around each point corresponding to a 95% confidence interval for that APL value. The Pearson's correlation coefficient (r) and the p-value for that r calculation are also displayed. Separate analyses are conducted with WS (Fig. D.12) and BA (Fig. D.11) models.
Seeking to further explore the relationship between social network structure and θ 1 , we leveraged different network generation algorithms in order to vary a common collection of social network statistics, including average path length (APL), clustering coefficient (CC), degree mean (DMean), and degree variance (DVar). We found that a linear relationship with average path length was capable of explaining a large portion of the variance in θ 1 due to the social network (r = 0.989 for Erdős-Rényi, r = 0.947 for Barabási-Albert, and r = 0.846 for Watts-Strogatz). We hope that this information may be used to infer the "infectivity" of heroin and fentanyl (specifically, heroin or fentanyl initiation by opioid naive individuals caused by social contact with heroin or fentanyl users) in communities where average path length of social contact can be estimated. Quantifying this information across different types of communities may shed significant light on factors that raise or lower risk for heroin and fentanyl use, thereby providing targets for management and further quantitative study.
CRediT author statement

Figure D.12: APL vs. inverted θ_1 values for optimizations run on WS models with varying p values. Each point represents the average APL for the derived θ_1 value over 300 model runs, with each network in these 300 runs being generated with the same p parameter. Horizontal bars are shown around each point corresponding to a 95% confidence interval for that APL value. The Pearson's correlation coefficient (r) and the p-value for that r calculation are also displayed. The corresponding analysis for BA models is shown in Fig. D.11.
Fluid warming with parylene-coated enFlow cartridge: Bench and pilot animal study of aluminum extraction due to prolonged use

Objectives: Intravenous fluid warming devices with surface heating systems transfer heat using aluminum blocks, which if uncoated elute toxic levels of aluminum into the infusate. This study examined extractable aluminum detected from prolonged use of the updated version of the enFlow® cartridge, which uses a parylene-coated aluminum heating block. Methods: In dynamic bench tests, we measured the concentration of aluminum that leached into three solutions (Sterofundin ISO, Plasma-Lyte 148, and whole blood) that were continuously pumped (0.2 and 5.5 mL min⁻¹) and warmed to 40°C by the enFlow cartridge (parylene-coated) for 5 h. Prolonged quasi-static bench tests measured aluminum concentration in 16 solutions which were gently rocked within the enFlow cartridge (parylene-coated) for 72 h at 40°C. Aluminum concentrations were measured using inductively coupled plasma mass spectrometry and matrix blank corrected. Measured aluminum concentrations were compared to a Tolerable Exposure limit to calculate Margins of Safety based on the US Food and Drug Administration maximum recommended concentration in parenteral fluids (25 μg L⁻¹). A parallel pilot in vivo animal study was performed using mice injected with fluids warmed for 72 h by the enFlow cartridge (parylene-coated). Results: The enFlow cartridge (parylene-coated) demonstrated low toxicological risks in all tests. Sterofundin ISO resulted in the highest aluminum concentration after simulated prolonged use of the enFlow cartridge (parylene-coated) (3.11 μg device⁻¹), which represents a 99.2% decrease from the enFlow cartridge (uncoated) and a Margin of Safety of 1.7. Dynamic tests at two different flow rates with three challenge solutions resulted in concentrations less than the method detection limits (20.6 or 41.2 μg L⁻¹) of the analysis method. The animals in the in vivo study showed no evidence of toxicity. Conclusion: Observed toxicological risk levels associated with the enFlow cartridge (parylene-coated) intravenous fluid warmer were below those set by the Food and Drug Administration and suggest that the use of the enFlow cartridge (parylene-coated) is safe with a variety of intravenous solution types and in different therapeutic scenarios.
Objectives: Intravenous fluid warming devices with surface heating systems transfer heat using aluminum blocks, which if uncoated elute toxic levels of aluminum into the infusate. This study examined extractable aluminum detected from prolonged use of the updated version of the enFlow® cartridge, which uses a parylene-coated aluminum heating block. Methods: In dynamic bench tests, we measured the concentration of aluminum that leached into three solutions (Sterofundin ISO, Plasma-Lyte 148, and whole blood) that were continuously pumped (0.2 and 5.5 mL min−1) and warmed to 40°C by the enFlow cartridge (parylene-coated) for 5 h. Prolonged quasi-static bench tests measured aluminum concentration in 16 solutions which were gently rocked within the enFlow cartridge (parylene-coated) for 72 h at 40°C. Aluminum concentrations were measured using inductively coupled mass spectroscopy and matrix blank corrected. Measured aluminum concentrations were compared to a Tolerable Exposure limit to calculate Margins of Safety based on the US Food and Drug Administration maximum recommended concentration in parenteral fluids (25 μg L−1). A parallel pilot in vivo animal study was performed using mice injected with fluids warmed for 72 h by the enFlow cartridge (parylene-coated). Results: The enFlow cartridge (parylene-coated) demonstrated low toxicological risks in all tests. Sterofundin ISO resulted in the highest aluminum concentration after simulated prolonged use of the enFlow cartridge (parylene-coated) (3.11 μg device−1), which represents a 99.2% decrease from the enFlow cartridge (uncoated) and Margin of Safety of 1.7. Dynamic tests at two different flow rates with three challenge solutions resulted in concentrations less than the method detection limits (20.6 or 41.2 μg L−1) of the analysis method. The animals in the in vivo study showed no evidence of toxicity. Conclusion: Observed toxicological risk levels associated with the enFlow cartridge (parylene-coated) intravenous fluid warmer were below those set by the Food and Drug Administration and suggest that the use of enFlow cartridge (parylene-coated) is safe with a variety of intravenous solution types and in different therapeutic scenarios.
Introduction
Maintaining a constant body temperature during anesthesia prevents major complications and prolonged hospitalization. 1,2 Therefore, patient-specific temperature management is a major imperative during operative procedures. A drop of body core temperature below 36°C meets the criterion of hypothermia. 3 To help prevent such a decline in core temperature, several intravenous fluid warmers are used in the operating room to warm intravenous fluids. Most warming systems use a disposable cartridge containing a heating block to warm the fluid up to 40°C. 4,5 Some brands of intravenous fluid warmers using aluminum heating blocks have been shown to leach potentially significant amounts of aluminum into the infusate. One group 6 studied a fluid warming device with a coated aluminum heating block (Fluido ® Compact, The 37Company, Amersfoort, the Netherlands) and an uncoated device (enFlow ® cartridge (uncoated), Vyaire Medical, Inc., Mettawa, IL, USA). The researchers pumped two different infusion solutions (saline and a balanced electrolyte solution) through the two systems for 60 min and evaluated the leached aluminum. They found an increased and potentially unacceptably high level of leached aluminum when using the uncoated system. A second study examined the enFlow cartridge (uncoated) but with blood products as well as an electrolyte product for 60 min. 7 This study confirmed that the enFlow cartridge (uncoated) warmer also leached potentially dangerous levels of aluminum into the warmed intravenous (IV) fluids. Cabrera et al. 8 recently evaluated the aluminum leaching from the Level 1 ® H-1025 Fast Flow Fluid Warmer (Smiths Medical, Minneapolis, MN, USA). The authors tested three perfusion solutions: saline, Ringer's lactate, and heparinized whole blood, at a constant flow rate of 30 mL min −1 over 60 min. They found that the amount of aluminum leached from the system did not reach clinically significant levels, although their findings were subject to debate. [8][9][10] The original enFlow cartridge (uncoated) fluid warmer was recently redesigned with a parylene coating on the fluid-contacting portion of the aluminum heating block. This redesigned device, known as the enFlow cartridge (parylene-coated) (Figure 1), is identical to the original enFlow cartridge (uncoated) except for the parylene coating applied to the aluminum heating blocks.
Previous studies evaluated fluid warming devices only for a relatively short time (1 h) using only a few intravenous fluids. [6][7][8] This study evaluates potential aluminum leaching and its toxicity after a prolonged (5 or 72 h) exposure and for 16 different clinically relevant fluids to simulate a clinical scenario of a patient having multiple surgeries using multiple types of intravenous fluids. We evaluated both lipophilic and hydrophilic fluids with chronic exposures that exceed manufacturers' recommendations. In addition, available literature does not address in vivo correlates of toxicity or biological effects that may arise related to the fluid warmer, so we also performed a pilot in vivo preclinical study in mice using both hydrophilic and lipophilic heated fluids from the enFlow cartridge (parylene-coated). We hypothesized that the coated enFlow cartridge (parylene-coated) system does not result in a significant leaching of aluminum into heated fluids as measured by toxicity assessment.
Methods
We performed three different experiments for this study: dynamic flow fluid analysis, long-term quasi-static fluid analysis, and in vivo animal testing in mice.
Bench testing (dynamic and quasi-static testing)
Two different bench setups and durations were tested: "dynamic" and "quasi-static" (Figure 2). The experimental setup for dynamic testing was similar to previous studies: 6,7 challenge fluids were flowed through the enFlow cartridge (parylene-coated) at a fixed flow rate for 5 h and the outputted fluids were collected. Device warming was activated for the duration of the testing at the fixed temperature of 40°C. Each solution was tested at two different flow rates: 0.2 mL min −1 for neonates and 5.5 mL min −1 for adults. The aluminum concentration within the outputted fluids was measured and expressed as μg L −1 .
For quasi-static testing, methods based on the principles described in ISO 10993-1:2016 11 and ISO 10993-18:2005 12 were followed. Specifically, an enFlow cartridge (parylene-coated) was filled with one of the challenge solutions and capped closed. The cartridges were then placed inside a temperature chamber at 40°C and gently rocked continuously for 72 h. Following 72 h, the total aluminum content within the cartridge was quantified. Since there was no flow during the quasi-static tests, aluminum content was expressed as μg device⁻¹.
Quasi-static and dynamic bench testing was managed by Nelson Laboratories, LLC (Salt Lake City, UT, USA) and performed by American West Analytical Laboratories (AWAL, South Salt Lake City, UT, USA). AWAL is accredited by the National Environmental Laboratory Accreditation Program.
Analytical chemistry. In both dynamic and quasi-static bench testing, the post-warming aluminum concentration within challenge fluids was determined using inductively coupled plasma mass spectrometry (ICP/MS). Samples were first digested with a nitric acid (HNO₃) and hydrochloric acid (HCl) mixture. Following preparation, samples were forced through a nebulizer and converted into an aerosol. The resultant aerosol was then forced through a plasma, which ionizes the atoms. The ionized atoms were extracted from the plasma by a vacuum interface and directed through a quadrupole, which separates the ions by mass-to-charge ratio.
The aluminum preparation and analyses for each sample of challenge fluids used matrix spikes, matrix spike duplicates, and matrix blanks in addition to the typical analytical laboratory quality control samples. Matrix blanks followed the same procedure and analysis as matrix samples but were not incubated. Using these additional quality control samples allows for evaluation of matrix effects on the detection of aluminum. Since the matrices used may have had inherent aluminum, all results were matrix blank corrected to determine device-related extractable aluminum amounts.
The method detection limit is defined by the US Environmental Protection Agency (EPA) 13 as the minimum concentration a substance can be measured with 99% confidence that the concentration is greater than zero. The method detection limit is dependent on the instrumentation, matrix, and skill of the operator. The reporting limit, also known as the practical quantitation limit, represents the smallest concentration of aluminum that can be detected within a sample and can be reported with a reasonable degree of accuracy. The reporting limit is typically two to five times larger than the method detection limit.
For the dynamic tests, aluminum concentration method detection limits were 20.6 μg L⁻¹ for Sterofundin ISO and Plasma-Lyte 148 solutions and 41.2 μg L⁻¹ for whole blood. The reporting limits were 50 μg L⁻¹ for Sterofundin ISO and Plasma-Lyte 148 solutions and 100 μg L⁻¹ for whole blood. For the quasi-static tests, aluminum concentrations and reporting limits were expressed as μg device⁻¹. For most solutions, approximately 5 mL of matrix was recovered from the cartridge after warming; therefore, a reporting limit of 50 μg L⁻¹ is equivalent to 0.250 μg device⁻¹. Aluminum concentrations reported by the analytical methods used are within 10% of the true value based on the quality control requirements of the laboratory.
Establishing acceptance criteria for bench testing. To determine the toxicological hazard of the enFlow cartridge (parylene-coated), we compared the measured aluminum concentrations to the Tolerable Exposure (TE) levels for aluminum estimated based on guidelines described in ISO 10993-17:2002. 14 TE levels represent the maximum dose at which exposure to the substance does not produce adverse events or pose an unacceptable risk to human health. 15 In this study, we estimated the worst-case TE as a chronic exposure beyond 24 h in neonatal populations. Specifically, TE was defined based on the FDA's recommended maximum concentration of aluminum for large volume parenteral products (25 μg L⁻¹). 16 For the quasi-static bench tests, the total aluminum leached into the fluid within the cartridge was quantified over the 72-h duration. We therefore calculated the minimum TE assuming the lowest parenteral nutrition volume (0.060 L kg⁻¹ day⁻¹) for the standard infant weight specified in ISO 10993-17:2002 (3.5 kg): 14

TE = 25 μg L⁻¹ × 0.060 L kg⁻¹ day⁻¹ × 3.5 kg = 5.25 μg day⁻¹.
To characterize the hazard associated with each substance, the Margin of Safety was quantified as the ratio of the TE to the measured aluminum concentration:

Margin of Safety = TE / (Measured aluminum concentration).
Margin of Safety is a unit-less index which indicates a fold-level difference between the threshold and measured exposure level. A Margin of Safety greater than 1.0 indicates low toxicological risk. 14 The worst-case Margin of Safety was calculated as the ratio between the TE and the challenge solution with the highest concentration of aluminum leached from the enFlow cartridge (parylene-coated). To calculate Margin of Safety, the aluminum content that accumulated within the enFlow cartridge (parylene-coated) over the 72-h period was compared to a 24-h TE.
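As a worked example of these definitions, comparing the worst-case quasi-static result reported in the Results (3.11 μg device⁻¹ for Sterofundin ISO, Table 2) against the 24-h TE of 5.25 μg device⁻¹ reproduces the Margin of Safety of 1.7:

```python
TE = 5.25          # tolerable exposure, ug per device per 24 h
worst_case = 3.11  # Sterofundin ISO, ug per device over 72 h (Table 2)

margin_of_safety = TE / worst_case
print(round(margin_of_safety, 1))  # 1.7 > 1.0, i.e. low toxicological risk
```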
For the challenge solutions which were tested with both the enFlow cartridge (uncoated) and the enFlow cartridge (parylene-coated), the percent decrease in measured aluminum concentration from the enFlow cartridge (uncoated) to the enFlow cartridge (parylene-coated) was quantified for each solution:

Percent decrease = (C_uncoated − C_coated) / C_uncoated × 100,

where C_uncoated and C_coated are the aluminum concentrations measured with the uncoated and parylene-coated cartridges, respectively.
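Applied to the Sterofundin ISO values reported in Table 2 (376 μg device⁻¹ uncoated versus 3.11 μg device⁻¹ parylene-coated), this formula reproduces the 99.2% decrease:

```python
uncoated = 376.0  # ug per device, enFlow cartridge (uncoated)
coated = 3.11     # ug per device, enFlow cartridge (parylene-coated)

percent_decrease = (uncoated - coated) / uncoated * 100
print(round(percent_decrease, 1))  # 99.2 (Sterofundin ISO, Table 2)
```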
In vivo animal testing in mice. In addition to the dynamic and quasi-static bench testing, pilot preclinical acute systemic toxicity testing was conducted on mice using the enFlow cartridge (parylene-coated) to assess the potential health hazards associated with acute exposure to the warmed fluids from the device. Animal testing was managed by Nelson Laboratories, LLC and performed by American Preclinical Services, LLC (Minneapolis, MN, USA) in compliance with ISO 10993-12:2012, 15 ISO 10993-11:2017, 17 and FDA Good Laboratory Practice 21 CFR Part 58. 18 The study protocol was reviewed and approved by the Institutional Animal Care and Use Committee of American Preclinical Services, LLC (APS Study ID: PRF922-ST10). A total of 20 male albino outbred strain mice (10 test mice and 10 negative control mice) were used in the study. Use of the enFlow cartridge (parylene-coated) was simulated by injecting the mice with a solution (saline or sesame seed oil) that was previously heated and agitated inside an enFlow cartridge (parylene-coated). The test extracts were prepared according to ISO 10993-12:2012. 15 Specifically, a total solution volume of 73.6 mL was used based on an extract ratio of 3 cm² mL⁻¹ and a total enFlow cartridge (parylene-coated) surface area of 220.8 cm². An enFlow cartridge (parylene-coated) was filled with the solution and submerged in the remaining volume and then continuously agitated on an orbital shaker at 60 r/min at 50°C for 72 h. The solutions were then extracted from the cartridges and injected into the test mice within 24 h without alteration. The concentration of Al in the extract was not quantified. Preparation of control extracts was identical to that of the test extracts but without the enFlow cartridge (parylene-coated). Test mice received a single 50 mL kg⁻¹ injection of the test extract and control mice received a single 50 mL kg⁻¹ injection of control extract on Day 0. For each group, five mice received extracts using normal saline via IV injection and five received extracts using sesame seed oil via intraperitoneal injection. Animal bodyweight was measured immediately prior to injection (Day 0) and then daily for the next 3 days. Overall animal health and signs of acute toxicity were monitored at 4 ± 0.25 h, 24 ± 2 h, 48 ± 2 h, and 72 ± 2 h post-injection by comprehensive clinical observations by trained personnel. Specifically, the following were monitored: changes in skin and fur; eyes and mucous membranes; respiratory, circulatory, autonomic, and central nervous systems; and somatomotor activity and behavior patterns.
Data and statistical analyses. Mean values and standard deviations of the animal weights for the control and test groups were quantified for each injection solution at each measurement time. Percent change in bodyweight from time 0 to 72 h post-injection was calculated for each animal. Two success criteria were defined prior to the start of the preclinical study. The first success criterion was that no animals in either five-animal test group showed greater biological reactivity than the corresponding control animals during the 3-day observation period. The second success criterion was that all of the following were met for each five-animal test group: fewer than two animals died; fewer than two animals experienced convulsions or prostration; and final bodyweight changed by less than 10% in fewer than three animals. This in vivo study was a first-in-animal pilot study. Therefore, no a priori sample size justification was performed and five animals per group were chosen to provide pilot data for future studies.
Independent and dependent variables. For the bench testing, the independent variables were protocol type (i.e. dynamic vs quasi-static), challenge solution, and flow rate. The dependent variable was measured aluminum concentration. To understand the hazard associated with each substance, we calculated Margins of Safety by comparing these measured aluminum concentrations to TE limits. For the in vivo testing, we compared the physiological responses of mice that received injections simulating the use of the enFlow cartridge (parylene-coated) to control injections without the enFlow cartridge (parylene-coated). We tested both intravenous and intraperitoneal injections. The dependent variables were animal bodyweight, overall animal health, and signs of acute toxicity at 4, 24, 48, and 72 h post-injection. All data analyses were performed using MATLAB (MathWorks, Natick, MA, USA).
Results

Dynamic bench testing
The concentration of aluminum in the solutions following dynamic testing using the enFlow cartridge (parylene-coated) was less than the method detection limit for both flow rates and all three solutions. Specifically, the concentrations were less than 20.6 μg L⁻¹ for Sterofundin ISO and Plasma-Lyte 148 solutions and less than 41.2 μg L⁻¹ for whole blood.
Quasi-static bench testing
For quasi-static testing of the enFlow cartridge (parylene-coated), the derived Margin of Safety values for aluminum were all above 1.0. Table 1 shows the uncorrected, matrix blank, and blank corrected aluminum concentrations for the 16 challenge IV solutions. Blank corrected aluminum concentrations represent the aluminum added to the solution from the enFlow cartridge and were calculated by subtracting the matrix blank concentration from the uncorrected aluminum concentration. The method detection limits and reporting limits are also tabulated and varied between the different challenge solutions. The reporting limits for single donor human whole blood, 5% dextrose solution, and 3% sodium chloride injection USP were raised due to sample matrix interferences. Note that blank corrected concentrations can be less than the reporting limits because they are calculated by subtracting the matrix blank from the uncorrected concentration.

Table 2 compares the amount of aluminum detected in the 16 challenge IV solutions heated at 40°C (104°F) for 72 h with the enFlow cartridge (uncoated) and the enFlow cartridge (parylene-coated). For the 10 challenge solutions that were tested with both cartridges, the aluminum concentration decreased by at least 98.9% for all solutions except 3% sodium chloride injection USP (36.4% decrease). Margin of Safety estimates for the enFlow cartridge (parylene-coated) based on a TE of 5.25 μg device⁻¹ are also included in Table 2 for each of the challenge solutions. The aluminum content for the most commonly used fluids in clinical practice was 0.090 μg device⁻¹ (human packed cells), 0.731 μg device⁻¹ (human platelet lysate), 0.833 μg device⁻¹ (single donor human whole blood), 1.32 μg device⁻¹ (Plasma-Lyte 148), 2.62 μg device⁻¹ (Ringer's lactate in 5% dextrose), and 3.11 μg device⁻¹ (Sterofundin ISO). The highest aluminum content for all challenge solutions tested was in Sterofundin ISO. The aluminum content that leached into the Sterofundin ISO using the enFlow cartridge (parylene-coated) (3.11 μg device⁻¹) represents a 99.2% decrease compared to the aluminum content leached when using the enFlow cartridge (uncoated) (376 μg device⁻¹). The Margin of Safety for the enFlow cartridge (parylene-coated) when using Sterofundin ISO is 1.7. The total volume of Sterofundin ISO extracted from the enFlow cartridge (parylene-coated) after the 72-h incubation period was approximately 5 mL; therefore, the final concentration of aluminum that accumulated over 72 h was approximately 622 μg L⁻¹.
For both dynamic and quasi-static bench testing, results for the laboratory control samples, laboratory control sample duplicates, matrix spikes, and matrix spike duplicates were all within the quality control limits for percent recovered and relative percent difference.
In vivo animal study
All 20 animals survived the preclinical testing and were in overall good health over the course of the study. Test animals, which received injections simulating use of the enFlow cartridge (parylene-coated), showed no greater reaction to the injection compared to the control animals. The animals weighed between 25.5 and 34.4 g at the start of the study and none developed weight loss greater than 10% over the course of the study (Table 3).
Discussion
Table 1. Uncorrected, matrix blank, and blank corrected aluminum concentrations from solutions heated at 40°C (104°F) for 72 h with the enFlow cartridge (parylene-coated). Results are sorted from the lowest to the highest enFlow cartridge (parylene-coated) aluminum concentration; solutions marked with an asterisk (*) are commonly used in clinical practice.

This study found that the enFlow cartridge (parylene-coated), when used in both acute and chronic exposures, resulted in minimal aluminum elution and derived Margin of Safety values above 1.0, correlating with safe patient exposure levels that are below those set by the FDA. 16,19 Dynamic tests at two different flow rates with three challenge solutions resulted in concentrations less than the method detection limits (20.6 or 41.2 μg L−1) of the analysis method, levels comparable with other marketed warming devices. This result is consistent with the finding from a previous study conducted on a different coated IV fluid warmer. 6 In that study, the exposure was limited to 1 h, compared to 5 h in this study. The two flow rates tested in the dynamic bench testing were chosen to simulate the range of typical clinical conditions. Specifically, 0.2 mL min−1 is a typical rate for maintenance fluids in a neonate: standard practice dictates 4 mL kg−1 h−1 for the first 10 kg of body weight, which for a 3-kg neonate equates to 0.2 mL min−1. The 5.5 mL min−1 flow rate was selected to represent an adult patient undergoing a major surgery under anesthesia. For adults, standard practice recommends 4 mL kg−1 h−1 for the first 10 kg, 2 mL kg−1 h−1 for the second 10 kg, and 1 mL kg−1 h−1 for the remaining body weight. For a 70-kg patient, this corresponds to 110 mL h−1. We further assume the patient has fasted for 8 h (i.e. a deficit of 880 mL), which will be replaced evenly over a 4-h surgery (i.e. 220 mL h−1). Therefore, 330 mL h−1 (i.e. 5.5 mL min−1) was used to represent a typical adult infusion rate.
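The flow-rate rationale above can be reproduced with a short sketch; the 4-2-1 maintenance rule, fasting duration, and surgery length are the values given in the text.

```python
# Reproduces the clinical flow-rate assumptions used for the dynamic bench tests.

def maintenance_rate_mL_per_h(weight_kg):
    """4-2-1 maintenance rule: 4 mL/kg/h for the first 10 kg,
    2 mL/kg/h for the second 10 kg, 1 mL/kg/h for the remainder."""
    first = 4 * min(weight_kg, 10)
    second = 2 * min(max(weight_kg - 10, 0), 10)
    rest = 1 * max(weight_kg - 20, 0)
    return first + second + rest

# Neonate (3 kg): maintenance fluids only
neonate_mL_per_min = maintenance_rate_mL_per_h(3) / 60             # 0.2 mL/min

# Adult (70 kg): maintenance plus an 8-h fasting deficit replaced over a 4-h surgery
maintenance = maintenance_rate_mL_per_h(70)                         # 110 mL/h
fasting_deficit_mL = 8 * maintenance                                 # 880 mL
adult_mL_per_h = maintenance + fasting_deficit_mL / 4                # 330 mL/h
adult_mL_per_min = adult_mL_per_h / 60                               # 5.5 mL/min

print(neonate_mL_per_min, adult_mL_per_min)
```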
The quasi-static bench testing enabled us to simulate prolonged use of the enFlow fluid warming system. We quantified the total aluminum that leached into the enFlow cartridge over a 72-h period while the cartridge was gently rocked in a 40°C temperature chamber. Since we only quantified the aluminum at the end of the 72-h period, we do not know if the aluminum leached at a continuous rate over this period. It is plausible that the rate of leaching was the highest at the beginning of the incubation period and then was lower for the remaining time. We therefore decided to calculate Margin of Safety by comparing the 3-day aluminum content to a single-day TE threshold. This calculation assumes that the total amount of aluminum was extracted within the first 24 h.
The highest aluminum content for the quasi-static testing was 3.11 μg device−1, with Sterofundin ISO. Since there was approximately 5 mL of solution within the device, the final concentration of aluminum within the solution was approximately 622 μg L−1. However, it is unlikely that this amount of aluminum would leach into the challenge solution while it is flowing through the device during clinical use. The dynamic bench test presented in this study showed aluminum concentrations were <20.6 μg L−1 for Sterofundin ISO at both the 0.2 and 5.5 mL min−1 flow rates. Furthermore, previous studies have shown the chemical reactions between the aluminum ions and solutions are rate limited. Specifically, the maximum aluminum concentration that leached into Plasma-Lyte 148 from the enFlow cartridge (uncoated) was an order of magnitude lower at a high flow rate (16.6 mL min−1 and 658 μg L−1) compared to a low flow rate (2 mL min−1 and 6028 μg L−1). 7 In addition, the amount of aluminum that leached into the solution increased over the period of an hour. While the mechanisms by which aluminum binds to a challenge solution are unknown, one hypothesis is that the aluminum ions form ionic complexes through carboxyl groups of certain organic anions such as acetate within balanced salt solutions. 20 This hypothesis could explain the large variation in the amount of aluminum that leached into the different challenge solutions.
We estimated TE based on the FDA recommendation of 25 μg L−1 for large-volume parenteral injections. To achieve the most conservative estimate of TE, we used the flow rate of 60 mL kg−1 day−1 (i.e. 210 mL day−1 for a 3.5-kg infant), which represents the minimum flow rate typically used for parenteral nutrition. The same FDA standard also recommends a maximum parenteral aluminum level of 4-5 μg kg−1 day−1 for a patient with impaired kidney function. 21,22 For a 3.5-kg infant, this limit corresponds to a TE of 14 μg day−1, which is a higher threshold than the 5.25 μg day−1 we selected.
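A minimal sketch of the TE and Margin of Safety arithmetic described in this and the preceding sections, using only the limits and parameters quoted in the text:

```python
# Tolerable Exposure (TE) and Margin of Safety derivations from the text.

FDA_LIMIT_UG_PER_L = 25.0       # large-volume parenteral aluminum limit, ug/L
INFANT_WEIGHT_KG = 3.5          # standard infant weight (ISO 10993-17)
FLOW_ML_PER_KG_PER_DAY = 60.0   # minimum parenteral nutrition flow rate

daily_volume_L = FLOW_ML_PER_KG_PER_DAY * INFANT_WEIGHT_KG / 1000.0   # 0.21 L/day
te_ug_per_day = FDA_LIMIT_UG_PER_L * daily_volume_L                    # 5.25 ug/day

# Alternative limit for impaired kidney function (4 ug/kg/day)
te_impaired_kidney = 4.0 * INFANT_WEIGHT_KG                            # 14 ug/day

# Margin of Safety: TE divided by the worst-case per-device aluminum content,
# conservatively treating the 72-h content as a single day's exposure.
worst_case_content_ug = 3.11   # Sterofundin ISO, enFlow cartridge (parylene-coated)
margin_of_safety = te_ug_per_day / worst_case_content_ug               # ~1.7

print(te_ug_per_day, te_impaired_kidney, round(margin_of_safety, 1))
```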
In clinical use, IV fluid and blood warmers are used with a wide range of solutions such as saline and electrolyte solutions as well as blood and blood products. Each solution has unique thermochemical properties and interactions with the warmer's cartridge and may therefore result in different amounts of aluminum leaching into the solution. The previous studies mentioned above examined aluminum exposure when using IV fluid warmers with a small subset of potential solutions. This study expanded on these studies and measured aluminum elution after prolonged exposure to 16 different clinically relevant challenge fluids when using the enFlow cartridge (parylene-coated). For 10 of these 16 fluids, the aluminum exposure was also tested using the enFlow cartridge (uncoated) and compared to the exposure when using the enFlow cartridge (parylene-coated). Recent bench testing from other laboratories revealed potentially unsafe levels of aluminum leaching into solutions from the enFlow cartridge (uncoated). 6,7 Specifically, Perl et al. 6 found an aluminum concentration of approximately 6000 μg L−1 when flowing Sterofundin through an enFlow cartridge (uncoated). Taylor et al. 7 expanded upon this study, examining aluminum concentrations for a total of five solutions, and found levels of aluminum exposure in Plasma-Lyte 148 and compound sodium lactate solutions that were comparable to those found in Sterofundin. The results of this study confirmed that the parylene coating on the cartridge significantly reduced the amount of aluminum leaching into a wide range of clinically relevant challenge solutions (see Table 2).
There are three significant strengths of this research. The first is that we tested the enFlow cartridge (parylene-coated) for prolonged exposure to fluids. In general, fluid warmers in an individual patient could potentially be required for time intervals of 1-12 h, depending on the extent of the surgery and resuscitative efforts. Difficult individual cases may then require repeated surgeries with renewed need for fluid warming. Because of this, we chose to study the effects of 72 h of exposure to enFlow, roughly analogous to fourteen 5-h surgeries. Second, most studies have only addressed a few representative fluids, limiting the generalizability of their research. We sought to evaluate the safety of this device over a much broader spectrum of fluids that might actually be used in direct patient care. We chose 16 different IV solutions that were clinically relevant in anesthesiology, from simple (saline) to complex (whole blood), encompassing lipophilic fluids (such as whole blood, platelet lysate, and buffy coat) and hydrophilic fluids (such as Sterofundin, saline, and dextrose in water), and found minimal aluminum leaching into the fluid when using the enFlow cartridge (parylene-coated). Finally, most published studies lack in vivo assessments of aluminum levels resulting from fluid warmers. We therefore added the studies in mice to evaluate the potential impact of aluminum toxicity resulting from the use of the enFlow cartridge (parylene-coated). This study was specifically designed to address the major concerns of practicing clinicians.
A limitation of this study is that the minimum detection limit of our analysis method for aluminum was 41.2 μg L−1 in whole blood. Therefore, in the dynamic testing using whole blood, it was not possible to differentiate between the measured aluminum concentration and the most stringent FDA standard of 25 μg L−1. Regardless, the results of quasi-static testing show aluminum concentrations under this threshold for all challenge solutions. We did not compare the enFlow cartridge (parylene-coated) to devices other than the original enFlow cartridge (uncoated) using the same experimental setup, so no assertions can be made about comparisons between other brands of devices. As mentioned, our test methods, while standard, were different from those previously used with other warming devices, making direct comparisons to other studies impossible. To determine our TE, we used the standard infant weight (3.5 kg) specified in ISO 10993-17:2012, 15 which exceeds values typical of premature neonates (e.g. 500 g to 2.5 kg).
The in vivo testing in mice provides a first-in-animal pilot study of acute systemic toxicity exploring the potential health hazards associated with use of the enFlow cartridge (parylene-coated) following the procedures described in ISO 10993-11:2017 17 and ISO 10993-12:2012. 15 All animals in the study survived, remained in overall good health, and showed no weight loss. The preclinical study had several limitations which should be addressed in future studies.
First, this study tested only saline and sesame seed oil solutions; future studies should use a balanced salt solution such as Sterofundin ISO, since our bench testing has shown that it results in the highest concentration of aluminum leaching. In addition, future studies should measure the concentration of aluminum within the injected extract to quantify the aluminum dose. Finally, a preclinical study in large animals would enable simulated use of the device at clinically relevant flow rates as well as quantification of the change in blood plasma aluminum concentration of the animal resulting from the infusion.
Conclusion
The results of these experiments indicate that the observed aluminum exposure levels associated with the enFlow cartridge (parylene-coated) intravenous fluid warmer were below the limits set by the FDA and other regulatory bodies, and suggest that the use of the enFlow cartridge (parylene-coated) is safe with a variety of IV solution types and in different therapeutic scenarios. The enFlow cartridge (parylene-coated) showed a marked improvement in safety compared with its predecessor, the enFlow cartridge (uncoated).
Ethical approval
Ethical approval for this study was obtained from the Institutional Animal Care and Use Committee of American Preclinical Services, LLC (APS Study ID: PRF922-ST10).
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: M.J.P. is the medical director of Vyaire Medical. Vyaire Medical, the manufacturer of this device, funded this research but had no role in study design, data acquisition, or analysis. A.D.W. is an employee of Vyaire Medical, and E.A.R. is a paid external consultant for Vyaire Medical.
THE PROMISE AND CHALLENGE OF THERAPEUTIC GENOME EDITING
Genome editing, involving precise manipulation of cellular DNA sequences to alter cell fates and organism traits, offers the potential to both understand human genetics and cure genetic disease as never before. Scientific, technical and ethical aspects of employing CRISPR technology for therapeutic applications in humans are discussed, focusing on specific examples that highlight both opportunities and challenges. Genome editing is or will soon be in the clinic for several diseases, with more applications in the pipeline. The rapid pace of the field demands active efforts to ensure responsible use of this breakthrough technology to treat, cure and prevent genetic disease.
Correcting disease-causing DNA sequences in patients is a goal with tremendous potential to save and improve lives, representing a convergence of technical and medical advances that could eventually eradicate many genetic diseases.
Although methods for genome engineering and gene therapy have been of interest for decades, the development of engineered and programmable enzymes for DNA sequence manipulation has driven a biotechnological revolution [1][2][3][4][5] . In particular, fundamental research showing how clustered regularly interspaced short palindromic repeats (CRISPRs) and CRISPR-associated (Cas) proteins provide microbes with adaptive immunity has propelled transformative technological opportunities afforded by RNA-guided proteins. CRISPR-Cas9 and related enzymes have been used to manipulate the genomes of cultured cells, animals and plants, vastly accelerating the pace of fundamental research and enabling breakthroughs in agriculture and synthetic biology (reviewed in refs. [6][7][8][9]. Building on past gene therapy efforts 10 , we are entering an era in which genome editing tools will be used to inactivate or correct disease-causing genes in patients, offering life-saving cures for people facing genetic disorders.
In this review I discuss therapeutic opportunities of genome editing, the ability to alter the DNA in cells and tissues in a site-specific manner. In addition to presenting current capabilities and limitations of the technology, I also describe what it will take to apply therapeutic genome editing in the real world. Comparison of somatic cell and germline editing highlights the importance of open public discussion about, and regulation of, this powerful technology.
THE SCOPE OF GENOME EDITING APPLICATIONS
Although the genetics of human disease are often complex, some of the most common genetic disorders stem from mutations in a single gene. Cystic fibrosis, Huntington's chorea, Duchenne muscular dystrophy and sickle cell anemia each represent diseases resulting from defects in just one gene in the human genome; on a global scale such monogenic diseases, of which ~5,000 are known, affect at least 250 million individuals. DNA sequencing in affected families has provided detailed information about the mutations that lead to each disorder, as well as correlations between specific genetic changes (genotype) and disease severity. These data in turn reveal DNA sequence alterations or corrections that could provide a genetic cure by either disrupting function of a toxic or inhibitory gene or restoring function of an essential gene.
Sickle cell disease and muscular dystrophy, two common human genetic disorders, provide instructive examples of diseases that could be treated or cured by genome editing in the foreseeable future. Sickle cell disease results from a single base pair change in DNA that in turn generates a defective protein with destructive consequences in red blood cells. Duchenne muscular dystrophy belongs to a set of muscle-wasting diseases resulting from DNA sequence changes that disrupt normal production of a protein required for muscle strength and stability. A closer look at each of these diseases illustrates the ways that genome editing could offer therapeutic benefit to patients.
Sickle cell disease occurs in people that have two defective copies of the gene encoding β-globin, the protein required to form oxygen-carrying hemoglobin in adult blood cells.
Described originally by Linus Pauling and colleagues 11 and mapped to a genetic locus in the 1950s 12 , a single A to T mutation results in a glutamate-to-valine substitution in β-globin (Fig. 1). This seemingly small change causes the defective protein to form chain-like polymers of hemoglobin, inducing red blood cells to assume a sickled shape that leads to occluded blood vessels, pain and life-threatening organ failure. Although bone marrow transplantation can cure the disease, it requires using cells from an individual whose immune profile matches that of the patient. In principle, sickle cell disease could be cured by removing blood stem cells -hematopoietic progenitors -from a patient and using genome editing to either correct the disease-causing mutation in β-globin or activate expression of ɣ-globin, a fetal form of hemoglobin that could substitute for defective β-globin (Fig. 1).
The edited stem cells could then be transplanted back into the patient, where their progeny would produce normal red blood cells.
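The single-base nature of the mutation described above can be made concrete with a small sketch. The GAG→GTG change shown is the canonical glutamate-to-valine substitution in β-globin; the helper function and the two-entry codon table are included only for illustration.

```python
# Illustration of the sickle-cell point mutation: a single A -> T change converts
# the beta-globin glutamate codon GAG into the valine codon GTG.

CODON_TABLE = {"GAG": "Glu", "GTG": "Val"}  # only the two codons needed here

def substitute(codon, position, new_base):
    """Return the codon with a single-base substitution at the given position."""
    bases = list(codon)
    bases[position] = new_base
    return "".join(bases)

normal_codon = "GAG"                             # glutamate codon in normal beta-globin
sickle_codon = substitute(normal_codon, 1, "T")  # A -> T at the middle position

print(normal_codon, "->", CODON_TABLE[normal_codon])  # GAG -> Glu
print(sickle_codon, "->", CODON_TABLE[sickle_codon])  # GTG -> Val
```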
The ability to conduct the editing in cells extracted from sickle cell patients makes their disease -and other blood disorders -some of the more tractable pathologies that could be treated by genome editing in the near term. Most genetic diseases, however, will require genome editing of cells in the body (in situ) to correct a genetic defect associated with disease. Muscular dystrophy exemplifies this type of disorder because it involves weakening and disruption of skeletal muscles over time (reviewed in refs. 13,14). The most common type, Duchenne muscular dystrophy (DMD), affects one in 5,000 males at birth who inherit mutations in the gene encoding dystrophin, a scaffolding protein that maintains the integrity of striated muscles (Fig. 1). Over time these patients lose the ability to walk and eventually succumb to respiratory and heart failure, typically causing death by the third decade of life. In contrast with therapies to delay disease progression, genome editing offers the possibility of permanent restoration of the missing dystrophin protein. Although >3000 different mutations can cause DMD, most occur at hotspots within the dystrophin gene. Notably, restoration of a small percentage (~15%) of normal dystrophin expression levels can provide a clinical benefit 15 .
To treat or cure monogenic disorders like sickle cell disease and DMD, it will be important to match the underlying genetic defect with the best genome editing approach. In each case this involves multiple considerations including the type of editing needed, the mode of cell or tissue delivery required and the extent of gene knockout or correction that will provide therapeutic value.
The next section describes current genome editing technologies that offer the potential of curative human genome editing.

CURRENT GENOME EDITING TECHNOLOGIES

Genome editing has already demonstrated clinical promise, including editing of hematopoietic stem cells 17,18 and engineering of immune system cells to treat childhood cancer 19 . To realize this promise, the development of CRISPR-Cas9 for genome editing offers a simpler technology that has been adopted widely due to the ease of programming its DNA binding and modifying capabilities. Cas9 is a protein that assembles with a guide RNA, either as separate crRNA and tracrRNA components or as a chimeric single-guide RNA (sgRNA), to create a molecular entity capable of binding and cutting DNA 1 . Importantly, DNA binding occurs at a 20-base pair DNA sequence that is complementary to a 20-nucleotide sequence in the guide RNA and can be readily altered by the experimenter 1,20 (Fig. 2). The DNA recognition site must be adjacent to a short motif (protospacer adjacent motif, PAM) which acts as a switch, triggering Cas9 to make a double-stranded DNA break within the targeted sequence 1,20 . In cells of all multicellular organisms, including humans, such double-stranded DNA breaks induce DNA repair by endogenous cellular pathways that can introduce alterations to the DNA sequence, including small sequence changes or genetic insertions 21,22 . Although CRISPR-Cas9-induced genome editing is effective in virtually all cell types, controlling the exact editing outcome remains a challenge in the field, as discussed later in this review.
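The targeting rule just described (a 20-bp protospacer read out by the guide RNA, lying immediately 5' of a PAM) can be sketched as a simple sequence scan. The NGG motif used below is the S. pyogenes Cas9 PAM; the example sequence is invented purely for illustration.

```python
# Minimal sketch of SpCas9 target-site selection on one strand: find 20-nt
# protospacers that lie immediately upstream of an NGG PAM.
import re

def find_spcas9_targets(sequence):
    """Return (protospacer, pam, protospacer_start) tuples for NGG PAM sites."""
    targets = []
    for match in re.finditer(r"(?=[ACGT]GG)", sequence):
        pam_start = match.start()
        if pam_start >= 20:  # need a full 20-nt protospacer upstream of the PAM
            protospacer = sequence[pam_start - 20:pam_start]
            pam = sequence[pam_start:pam_start + 3]
            targets.append((protospacer, pam, pam_start - 20))
    return targets

# Invented example sequence, for illustration only
seq = "ATGCGTACCGGATCATCTGAGCTAGCTAACGTTGCAGGTACCATGGCCTAGG"
for protospacer, pam, start in find_spcas9_targets(seq):
    print(f"{start:2d}  {protospacer}  PAM={pam}")
```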
Although the S. pyogenes Cas9 (SpCas9) is the CRISPR-Cas enzyme most commonly used for genome editing and genetic manipulation, a growing collection of natural and engineered Cas9 homologs and other CRISPR-Cas RNA-guided enzymes is expanding the genome manipulation toolbox 6,23,24 . It is the intrinsic programmability present in this diversity of enzymes that underscores the utility of CRISPR-Cas technology for genome editing and other applications including gene regulation and diagnostics (Fig. 2).
For safe and effective clinical use ex vivo and in vivo, genome editing needs to be accurate, efficient and deliverable to desired cells or tissues. CRISPR-Cas9-induced DNA cleavage induces genome editing during double-strand DNA break repair by non-homologous end joining and/or homology-directed repair (Fig. 2). Homology-directed repair, requiring the presence of a DNA template, is in most cases used by the cell less frequently than nonhomologous end joining. Furthermore, both types of repair can happen in the same cell, creating different alleles of an edited gene. Two concurrent double-strand DNA breaks can induce chromosomal translocations. For these reasons, an active area of CRISPR-Cas technology development involves controlling DNA repair outcomes to ensure that the desired genetic change is introduced.
Alternatives to DNA cleavage-induced editing include using CRISPR-Cas9 for direct chemical sequence alteration (base editing) 25,26 , providing RNA templates for gene alteration (prime editing) 27,28 , and for transcriptional control (CRISPR interference, CRISPRi; CRISPR activation, CRISPRa) 29,30 (Fig. 3). In addition, it may be possible to control gene outputs through Cas9-mediated epigenetic modification (reviewed in refs. 31, 32). While these methods have been used in cultured cells, they are not yet ready for clinical use until matters of specificity 33,34 and delivery are addressed.

Two strategies to mitigate or cure sickle cell disease take advantage of demonstrated strategies for site-specific genome editing (Figs. 1, 2). The first involves restoration of the normal β-globin gene sequence by homology-directed repair 35 . The second approach is to activate expression of ɣ-globin, the fetal form of hemoglobin typically silenced in adult cells, by disrupting ɣ-globin repressors [36][37][38][39][40][41] or their binding sites in the ɣ-globin gene promoter 40,42,43 . These genome-editing strategies require harvesting a patient's hematopoietic progenitor/stem cells (HPSCs), either to correct the β-globin mutation or to restart expression of ɣ-globin, and then re-introducing the edited cells into the bone marrow. Major progress in delivering editing reagents to HPSCs 44 and in handling these cells has resulted in formidable efficiencies of mutation correction or mitigation 18,[45][46][47] that are expected to be curative.
Such an approach, while requiring bone marrow transplantation, would remove the need for a compatible bone-marrow donor and thus provide a path for treating and potentially curing many more people than can be treated at present. As discussed below, improvements in in vivo delivery technology may one day enable treatment without requiring bone marrow transplantation, which would reduce both expense and patient hardship.
While in vivo editing may resolve some of the issues with ex vivo sickle cell therapies, studies in muscular dystrophy illustrate that other challenges arise when attempting in situ gene correction. Three reports highlight both the tremendous potential and the significant remaining challenges to using genome editing to treat or cure muscular dystrophy in humans. In the first study, a DMD mouse model was created using CRISPR-Cas9 to generate a common deletion (ΔEx50) in the dystrophin gene that occurs in DMD patients 48 . The severe muscle dysfunction in the ΔEx50 mice was corrected by systemic delivery of adeno-associated virus (AAV) encoding CRISPR-Cas9 genome editing components, restoring up to 90% of dystrophin protein expression throughout skeletal muscles and the heart of ΔEx50 mice. The second study used CRISPR-Cas9-mediated genome editing to remove a mutation in exon 23 in the mdx mouse model of DMD, providing partial recovery of functional dystrophin protein in skeletal myofibers and cardiac muscle 25,26,49 . In the third study, dogs harboring the ΔEx50 mutation corresponding to a mutational "hotspot" in the human DMD gene were treated using CRISPR-Cas9 50 . After virus-mediated systemic delivery in skeletal muscle, dystrophin levels were restored to 3-90% of normal, and the muscle tissue appearance in treated dogs was improved. Although promising, these reports, as well as early-stage data from patients treated with in vivo gene editing using ZFNs, highlight the gap between animal studies and applications in humans [51][52][53] and underscore the need for improved methods for in situ delivery, as discussed in the next section. An early stage clinical trial using in vivo CRISPR-Cas9 delivery to the eye to treat congenital blindness 54 and a close-to-the-clinic program for liver gene editing 55 will shortly provide key first-in-human data to inform the direction of that effort.
TOWARDS TISSUE-SPECIFIC DELIVERY
For any of these genome editing methods to be useful clinically, the CRISPR-Cas enzymes, associated guide RNAs and any DNA repair templates must make their way into the cells in need of genetic repair. To produce a functional genome editing complex, Cas9 and sgRNA can be introduced into cells in target organs in formats including DNA/DNA, mRNA/sgRNA, or protein/sgRNA. All three formats are currently, or shortly to be, used in the clinic, using viral vectors, nanoparticles and electroporation of protein-RNA complexes, and each has distinct benefits and limitations (Fig. 4). The currently favored form of ex vivo delivery to primary cells is electroporation of Cas9 as a preformed protein-RNA (ribonucleoprotein, RNP) complex 44,56 . In vivo delivery, which is much more challenging, is currently conducted using viral vectors (typically adeno-associated virus, AAV) or lipid nanoparticles bearing Cas9 mRNA and an sgRNA. The difficulty of ensuring efficient, targeted delivery into desired cells in the body currently limits the clinical opportunities of in vivo genome editing, although this is an area of increasing research and development.
Viral delivery vehicles, including lentivirus, adenovirus and adeno-associated virus (AAV), offer advantages of efficiency and tissue selectivity (Fig. 4). AAV is attractive due to its reduced risk of genomic integration, inherent tissue tropism and clinically manageable immunogenicity. In addition, long-term expression of transgenes encoding Cas9 and sgRNA from the episomal viral genome could help boost genome editing efficiency in patients, such as those with Duchenne muscular dystrophy as discussed below 57 . Notably, the FDA has approved AAV for gene replacement therapy in spinal muscular atrophy and congenital blindness, and clinical trials are in progress 58 .
There are significant challenges to using AAV for therapeutic delivery of CRISPR-Cas components, however. First, the AAV genome can only encode ~4.7 kb of genetic cargo, less than other viral vectors and not much larger than the 4.2 kb length of the gene encoding S. pyogenes Cas9. As a result, in applications calling for corrective gene insertion, a second AAV vector encoding the sgRNA and/or a template sequence for homology-directed DNA repair must be used, reducing efficiency due to the need for cells to acquire both AAV vectors at once 59,60 . Smaller genome editing proteins, such as the S. aureus Cas9, C. jejuni Cas9 and newly identified CRISPR-Cas enzymes, may circumvent this issue 23,61-65 . Second, long-term expression of genome editing molecules may expose patients to undesired off-target editing or immune reactions 66,67 . Third, the production of AAV at scale and the employment of good manufacturing practice (GMP) methods at affordable cost for clinical use remains a formidable challenge [68][69][70] .
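A back-of-the-envelope sketch of the packaging constraint just described. Only the ~4.7-kb AAV capacity and the ~4.2-kb SpCas9 coding sequence come from the text; the promoter, polyA and sgRNA cassette sizes are rough assumptions added for illustration.

```python
# Rough check of whether an all-in-one SpCas9 + sgRNA expression cassette fits
# within the packaging capacity of a single AAV vector.

AAV_CAPACITY_KB = 4.7  # from the text

cassette_kb = {
    "SpCas9 coding sequence": 4.2,   # from the text
    "promoter (assumed)": 0.5,
    "polyA signal (assumed)": 0.2,
    "sgRNA cassette (assumed)": 0.4,
}

total_kb = sum(cassette_kb.values())
print(f"cassette total: {total_kb:.1f} kb vs AAV capacity: {AAV_CAPACITY_KB} kb")
print("fits in a single AAV" if total_kb <= AAV_CAPACITY_KB
      else "exceeds capacity -> dual-AAV design needed")
```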
Nanoparticles offer an alternative to viral-based delivery of Cas9 and sgRNAs and are suitable for delivering genome editing components in the form of DNA, mRNA, or ribonucleoprotein (RNP) (Fig. 4). For example, lipid nanoparticle (LNP)-mediated delivery has been used to transport CRISPR-Cas components in the form of either mRNA/sgRNA or preassembled RNPs into tissues [71][72][73][74] . When combined with the highly anionic sgRNA, the cationic Cas9 protein forms a stable RNP complex that has anionic properties suitable for encapsulation by cationic lipid nanoparticles, potentially enabling delivery into cells through endocytosis and macropinocytosis. Cationic lipid-based delivery is a relatively easy, low-cost process to deliver CRISPR components into cells 75 . This approach has been used for one-shot delivery of Cas9 RNPs into mice to achieve therapeutically useful levels of genome editing in the liver 55 . Disadvantages of this approach include significant toxicity of the LNPs 76 and the sometimes undesired selectivity of cell-type-specific uptake of the particles.
Inorganic nanoparticles are another type of delivery vehicle with advantages including tunable size and surface properties. Gold nanoparticles, in particular, are attractive materials for molecular delivery because of the intrinsic affinity of gold for sulfur, enabling functionalized molecules to be coupled to the gold particle surface. Gold nanoparticles were used originally for nucleic acid delivery by conjugating to thiol-linked DNA or RNA (reviewed in ref. 77). Cas9 protein-sgRNA complexes can be incorporated by assembly with DNA-linked particles 78,79 . Such assemblies, complexed with polymers capable of disrupting endosomes and including DNA templates for homology-directed repair, were found to promote correction of dystrophin gene mutations in mice 80 . Ongoing research continues to advance nanoparticle delivery technology, such as for endothelial cells that could enable access to lung and other organs 81 .
Strategies for nonviral cellular delivery of CRISPR-Cas components include electroporation, which involves pulsing cells with high-voltage currents that create transient nanometer-sized pores in the cell membrane. This process allows negatively-charged DNA or mRNA molecules or CRISPR-Cas RNPs to enter the cells. Although this method is a primary method of Cas9-sgRNA delivery to cells ex vivo, electroporation has also been used successfully for Cas9 delivery to animal zygotes 82,83 , and to introduce CRISPR-Cas constructs directly into mouse skeletal muscle, resulting in restoration of dystrophin gene expression 84 . Electroporation will likely be of limited utility for most in vivo genome editing applications, however, because it is impractical to apply to most tissues in the body.
Another non-viral delivery method is direct application of pre-assembled CRISPR-Cas RNPs, with or without chemical modifications to assist cell penetration, to cultured cells or organs. This delivery mode can reduce possible off-target mutations relative to delivering Cas9-encoding DNA or mRNA due to the short half-life of RNPs 76,85-87 . New strategies for direct delivery of CRISPR-Cas9 RNP complexes continue to emerge, including those using molecular engineering to enhance targeting of specific cell types 88 .
ACCURACY, PRECISION AND SAFETY OF GENOME EDITING
The clinical utility of genome editing depends fundamentally on accuracy and precision. Accuracy refers to the ratio of on-versus off-target genetic changes, whereas precision relates to the fraction of on-target edits that produce the desired genetic outcome. Inaccurate (off-target) genome editing occurs when CRISPR-induced DNA cleavage and repair happens at genomic locations not intended for modification, typically sites that are close in sequence to the intended editing site (reviewed in ref. 93). Imprecise genome editing results from different modes of DNA repair after on-target DNA cleavage, such as a mixture of nonhomologous end joining and homology-directed recombination events that produce different sequences at the desired editing location in different cells. In addition, large deletions and complex genomic rearrangements have been observed after genome editing in mouse embryonic cells, hematopoietic progenitors and human immortalized epithelial cells [94][95][96] .
Although these events occur at low frequency, they could be significant in a clinical setting if rare translocations led to cancer [97][98][99] . Careful testing will be required to detect and monitor both the accuracy and precision of genome editing in clinical settings and ultimately to reduce or eliminate undesired events by controlling target site recognition and DNA repair outcomes. The National Institute of Standards and Technology (NIST) manages a scientific consortium aimed at measuring and standardizing such outcomes as genome editing technology advances 100 .
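The accuracy and precision definitions at the start of this section map directly onto simple ratios of edit-outcome counts; a minimal sketch with invented counts:

```python
# Accuracy and precision of a genome editing experiment, following the
# definitions given above. The counts are invented for illustration.

def accuracy_on_vs_off(on_target_edits, off_target_edits):
    """Accuracy: ratio of on- versus off-target genetic changes."""
    return on_target_edits / off_target_edits

def precision_fraction(desired_on_target, total_on_target):
    """Precision: fraction of on-target edits with the desired outcome."""
    return desired_on_target / total_on_target

on_target = 950   # edits detected at the intended site
off_target = 12   # edits detected at unintended, sequence-similar sites
desired = 600     # on-target edits carrying the intended sequence change

print(f"accuracy (on:off ratio): {accuracy_on_vs_off(on_target, off_target):.1f}")
print(f"precision (desired fraction): {precision_fraction(desired, on_target):.2f}")
```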
The risks intrinsic to DNA cleavage-induced genome editing have spurred development of CRISPR-Cas9-mediated genome regulation or editing methods that do not involve double-stranded DNA cutting. CRISPR interference (CRISPRi) and CRISPR activation (CRISPRa) employ catalytically deactivated forms of Cas9 (dCas9) that are fused to transcriptional repressors or activators 29,101 . Similarly, CRISPR-Cas9-mediated epigenetic modification to control gene expression is also under development 102 . An alternative approach is to use CRISPR-Cas9 coupled to DNA editing enzymes that catalyze targeted A-to-G or C-to-T genomic sequence changes without inducing a break in DNA, potentially reversing pathogenic single-nucleotide changes or disabling genes via the introduction of a stop codon 25,26 . CRISPR-Cas9 can also be linked to reverse transcriptase and deployed for targeted template-directed sequence alterations 103 . All of these strategies, though elegant in principle, involve large chimeric proteins that pose additional challenges for delivery into primary cells or animals. The specificity of action, both at the target site and genome-wide, remains an area of active investigation. Issues of delivery, potency, and specificity of CRISPRi, CRISPRa and CRISPR-mediated base editing and prime editing will need to be thoroughly addressed before they are ready for clinical use.
Other factors affecting clinical applications of genome editing include the immunogenicity of bacterially-derived editing proteins, the potential for pre-existing antibodies against CRISPR components to cause inflammation and the unknown long-term safety and stability of genome editing outcomes. Immunogenicity of CRISPR-Cas proteins could be managed by high-efficiency one-time editing treatments and by using different editing enzymes. Pre-existing Cas9 antibodies and reactive T-cells have been detected in humans exposed to pathogenic bacteria harboring CRISPR systems, although it is unknown whether these are present at concentrations sufficient to trigger an immune response to the genome editing enzymes 66,104 . Notably, genome editing therapies that involve ex vivo editing, such as for sickle cell disease, are not as affected by either immunogenicity or pre-existing CRISPR-Cas antibodies, with residual Cas9 protein in the ex vivo edited cells being a manageable issue. The potential for inadvertent selection of genome-edited cells with undesired genetic changes came to light with the observation that selection for inactivation of the p53 pathway, which is associated with rapid cell growth and cancer, can occur during laboratory experiments on cells that are not used clinically 105,106 . Subsequent experiments showed that p53 inactivation can be controlled or avoided through protocol optimization 47,107 . As for long-term safety and efficacy of genome-edited cells in vivo, much remains to be determined. However, the recent report of a single HIV-positive patient who received CRISPR-Cas9-edited hematopoietic progenitor cells showed that although the number of edited cells was too low to mitigate HIV infection, no adverse outcome was detected over 19 months after transplantation of the edited cells 108 . Together, these findings suggest that there are, at present, no known insurmountable hurdles to the eventual development of safe and effective clinical applications of genome editing in humans.
THERAPEUTIC GENOME EDITING
The clinical potential of genome editing exemplified by applications in sickle cell disease, muscular dystrophy and other monogenic disorders could be stymied by extreme pricing of such next-generation therapeutics. Although CRISPR technology itself is a democratizing tool for scientists, extension of its broad utility in biomedicine requires addressing the costs of development, personalization for individual patients and the intrinsic difference between a chronic disease treatment and a one-and-done cure (reviewed in ref. 103).
Current clinical trials using the CRISPR platform aim to improve chimeric antigen receptor (CAR) T-cell effectiveness, treat sickle cell disease and other inherited blood disorders, and stop or reverse eye disease 109 . In addition, clinical trials using genome editing for degenerative diseases, including in muscular dystrophy patients, are on the horizon. For sickle cell disease, the uniform nature of the underlying genetic defect lends itself to correction by a standardized CRISPR modality that could be used in many if not most patients. This simplifies clinical testing but also makes the need to address patient cost and access more acute, given that the ~100,000 US patients and millions more in African and Asian countries will be candidates for treatment.
For muscular dystrophy, the genetic diversity among patients lends itself to personalization that is an inherent strength of the CRISPR genome editing platform, yet also complicates clinical testing strategies. In addition, progressive diseases like muscular dystrophy require early treatment to be most effective, raising questions about coupling diagnosis and treatment. Beyond these examples, many rare genetic disorders will be treatable in principle if a streamlined strategy for CRISPR therapeutic development can be implemented 103 . With its potential to address unmet medical needs, clinical use of genome editing will ideally spur changes to regulatory guidelines and cost reimbursement structures that will benefit the field more broadly as these therapies continue to advance.
Notably, all of the genome editing therapeutics under development aim to treat patients through somatic cell modification. These treatments are designed to affect only the individual who receives the treatment, reflecting the traditional approach to disease mitigation. However, genome editing offers the potential to correct disease-causing mutations in the germline, which would introduce genetic changes that would be passed on to future generations. The scientific and societal challenges associated with human germline editing are distinct from somatic cell editing and are discussed in the next section.
HERITABLE GENOME EDITING
Human germline genome editing can introduce heritable genetic changes in eggs, sperm or embryos. Germline genome editing is already in widespread use in animals and plants and has been employed in human embryos for research purposes. A report of alleged use of human embryo editing resulting in the birth of twin baby girls with edited genomes has focused global attention on an application of genome editing that must be rigorously regulated, as underscored by international scientific organizations.
Human germline editing differs from somatic cell editing because it results in genetic changes that are heritable if the edited cells are used to initiate a pregnancy (Fig. 5). Germline editing has been used for years in animals, including mice, rats, monkeys and many others, and experiments show that it can be done in both nonviable and viable human embryos as well [110][111][112][113] . Although none of the published work involves implantation of the edited embryos to initiate a pregnancy, such work was reported at a conference on human genome editing in November 2018, leading to international condemnation in light of clear violations of ethical and scientific guidelines.
This work and the accompanying discussion around human germline editing have raised important questions that affect the future direction of the science as well as the societal and ethical issues that accompany any such applications. First, research using CRISPR-Cas9 in human embryos has challenged current understanding of DNA repair mechanisms and developmental pathways that occur in these cells. A report of inaccurate CRISPR-Cas9-based genome editing in non-viable human embryos 110 was not substantiated by later publications, but the mechanism by which double-stranded DNA breaks are repaired in early human embryos remains under debate. Some results were interpreted to indicate repair of a CRISPR-Cas9-targeted gene allele by homology-directed repair with the cell's other allele as the donor template 114 . Other scientists argued that such repair would be impossible given the apparent physical separation of sister chromatids early in embryogenesis, and suggested the data could also be consistent with large deletions in the embryo genomes 94,115 . Resolving this fundamental question will require further experiments. Second, human embryo editing has also begun to reveal differences in the genetics of early development in mice versus humans 111 , underscoring the potential value of research that will be enabled by precision genome modification. The third question raised by applications of CRISPR-Cas9 in human embryos is how to move the technology forward while ensuring responsible use. At the time of this writing, international commissions convened by the World Health Organization (WHO) and by the US National Academy of Sciences and National Academy of Medicine, together with the Royal Society, are drafting detailed requirements for any potential future clinical use. Medical needs must be defined so that risks versus possible benefits can be evaluated. Most importantly, procedures by which patients could be informed about the technology, its risks and a process for monitoring health outcomes must be determined.
OUTLOOK
Therapeutic genome editing will be realized, at least for some diseases, over the coming 5-10 years. This profound opportunity to change healthcare for many people requires scientists, clinicians and bioethicists to work with healthcare economists and regulators to ensure safe, effective and affordable outcomes. The potential impact on patients is too important to wait.
A discussion about LNG Experiment: Irreversible or Reversible Generation of the OR Logic Gate?
In a recent paper M. Lopez-Suarez, I. Neri, and L. Gammaitoni (LNG) present a concrete realization of the irreversible Boolean OR gate which, contrary to the standard Landauer principle, operates with an arbitrarily small dissipation of energy. A good Popperian falsification! In this paper we discuss a theoretical description of the LNG device which is in fact a 3in/3out self-reversible realization of the involved OR gate, in this way satisfying the Landauer principle of no dissipation of energy, contrary to the LNG conclusions. The different point of view is due to a different interpretation of the two outputs corresponding to the inputs 10 and 01, which LNG consider indistinguishable, thus producing an irreversible realization of the standard 2in/1out gate. On the contrary, even when these two outputs are considered indistinguishable, under a suitable normalization function of the cantilever angles the experimental results obtained by the LNG device coincide with the OR connective obtained from the third output of the self-reversible 3in/3out CL gate by the Inputs-Ancilla → Garbage-Output procedure. Thus, by self-reversibility, this realization involves no dissipation of energy, in accordance with the Landauer principle. Furthermore, using the self-reversible Toffoli gate it is possible to obtain from the LNG device the realization of the AND connective by adopting another normalization function on the cantilever angles. Finally, by other suitable normalization procedures on the cantilever angles it is possible to obtain the NOR and NAND connectives as well, and in a more sophisticated way the XOR and NXOR connectives, always in a self-reversible way. All this leads to the introduction of a universal logic machine consisting of the LNG device plus a memory containing all the angle normalization functions needed to produce, by choosing one of them, the logic connectives listed above in a self-reversible way.
Introduction
This paper discusses a recent result obtained by M. López-Suárez, I. Neri, and L. Gammaitoni (LNG) in [LSNG16] regarding the link between irreversibility of some logic gates and energy dissipation due to information loss. Quoting LNG from their paper [LSNG16] "Popular gates like And, Or and Xor, processing two logic inputs and yielding one logic output, are often addressed as irreversible logic gates, where the sole knowledge of the output logic value is not sufficient to infer the logic value of the two inputs. Such gates are usually believed to be bounded to dissipate a finite minimum amount of energy determined by the input-output information difference." From this point of view, "a way to understand irreversibility is to think of it in terms of information erasure. If a logic gate is irreversible, then some of the information input to the gate is lost irretrievably when the gate operates -that is, some of the information has been erased by the gate. Conversely, in a reversible computation, no information is ever erased, because the input can always be recovered from the output. Thus, saying that a computation is reversible is equivalent to saying that no information is erased during the computation" [NC00,pag. 153].
The connection between energy consumption and irreversibility is provided by the so-called Landauer principle, which can be formulated in two forms.

Landauer's principle (first form): If a computer erases a single bit of classical information, the amount of energy dissipated into the environment is at least k_B T ln 2, where k_B is Boltzmann's constant and T is the absolute temperature of the environment of the computer (the dissipation typically taking the form of waste heat).

An alternative formulation can be given, according to the laws of thermodynamics, not in terms of energy dissipation but rather in terms of entropy.

Landauer's principle (second form): If a computer erases a single bit of information, the entropy of the environment increases by at least k_B ln 2, where k_B is the Boltzmann constant.

The interesting result of the LNG paper is that they claim to present "an experiment where an Or logic gate, realized with a micro-electromechanical cantilever, is operated with energy well below the expected limit [i.e., k_B T ln 2], provided the operation is slow enough and frictional phenomena are properly addressed." This, if true, is a really interesting falsification of the above formulations of Landauer's principle, of great interest to the scientific community working on these questions.
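For scale, the bound quoted in the first form of the principle can be evaluated numerically; at an assumed room temperature of 300 K it amounts to roughly 2.9 × 10⁻²¹ J (about 0.018 eV) per erased bit.

```python
# Numerical value of the Landauer bound k_B * T * ln 2 per erased bit.
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
EV = 1.602176634e-19    # joules per electronvolt
T = 300.0               # assumed room temperature, K

landauer_J = K_B * T * math.log(2)
print(f"{landauer_J:.3e} J per bit  (= {landauer_J / EV:.4f} eV) at T = {T} K")
```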
In the present paper we take into account this experimental device, giving a possible interpretation/description of it as a reversible 3in/3out logic gate, and so in agreement with the Landauer principle (or better, with its companion statement that a reversible operation, involving no erasure of information, requires no dissipation of energy into the environment), operating with energy below the expected limit. The 2in/1out Or gate can be recovered by fixing one input as an ancilla set to the bit 0, considering two of the outputs as garbage, and taking the remaining output as producing the expected Or. Furthermore, since our gate is not only reversible but also self-reversible, the serial cascade of two of them produces the identity gate, which furnishes as global output exactly the same input, with no real dissipation of information.
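A minimal sketch of the Inputs-Ancilla → Garbage-Output construction outlined above, using the three-bit map (a, b, c) ↦ (a, b, c ⊕ (a ∨ b)) as one concrete self-reversible example; this is an illustrative choice, not necessarily the exact CL gate developed in the following sections.

```python
# One self-reversible 3in/3out gate whose third output realizes Or when the third
# input is an ancilla fixed to 0. Illustrative example only; the exact CL gate of
# this paper is formulated in later sections.
from itertools import product

def gate(a, b, c):
    """(a, b, c) -> (a, b, c XOR (a OR b)); an involution on three bits."""
    return a, b, c ^ (a | b)

# Self-reversibility: the serial cascade of two copies is the identity.
assert all(gate(*gate(a, b, c)) == (a, b, c) for a, b, c in product((0, 1), repeat=3))

# Inputs-Ancilla -> Garbage-Output: with the ancilla c = 0, the third output is
# a Or b, while the first two outputs (the garbage) preserve the inputs.
for a, b in product((0, 1), repeat=2):
    g1, g2, out = gate(a, b, 0)
    print(f"inputs {a}{b}, ancilla 0 -> garbage {g1}{g2}, output {out} (= {a | b})")
```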
The LNG realization of the OR Logical Gate
The device constructed by LNG and described in [LSNG16] "consists of a logic switch made with a Si3N4 elastic cantilever L that can be bent by applying electrostatic forces with two electrical probes P_1 and P_2 close to the cantilever tip."

Figure 1. Schematic experimental situation consisting of the two electrical probes P_1 and P_2 which can act by electrostatic forces on the cantilever L.
Under the initial condition of the cantilever L in the vertical position we can have the following two experimental transitions of the physical system "Probes+Cantilever":

(Ex1) If no electrode voltage is applied to the two probes (V = 0), the cantilever remains in the vertical position.

(Ex2) If on the contrary an electrode voltage V ≠ 0 is applied to at least one probe, the position of the cantilever is changed as a consequence of the electrostatic force.

Let us stress the following interpretation assumed by the authors of [LSNG16], which is the main argument of our analysis:

(LNG) The input of the logic gate is associated with the voltages V(P_1) and V(P_2) of the respective electrical probes P_1 and P_2. The position of the cantilever tip, measured by its deviation angle α_o with respect to the vertical position, encodes the output of the logic gate.

In a first approximation, this corresponds to a 2in/1out gate G_LNG formalized by the correspondence (V(P_1), V(P_2)) → α_o. But in the quoted paper a drastic convention is assumed: each voltage V(P_i) of the probe P_i is associated with its "normalized" value [V(P_i)] = 1 iff V(P_i) ≠ 0 and [V(P_i)] = 0 otherwise, i.e., iff V(P_i) = 0, corresponding to the logic truth value 1 if the probe is on (V(P_i) ≠ 0) and the truth value 0 if the probe is off (V(P_i) = 0). Similarly, for the deviation angle we put [α] = 1 iff α ≠ 0, and [α] = 0 otherwise. With these conventions the authors consider their device as a 2in/1out Boolean gate G_LNG: ([V(P_1)], [V(P_2)]) → [α_o]. As a consequence of all these conventions, i.e., interpreting absence or presence of electric voltage on the probes as logic values 0 and 1, respectively, this device realizes the Or logic gate according to the following table, which collects all the above remarks:

Table 1. The LNG 2in/1out Boolean realization of the Or gate.
[V(P_1)]  [V(P_2)]  ->  [α_o]
   0         0            0
   0         1            1
   1         0            1
   1         1            1

Our position about the LNG realization of the Or gate can be exposed in the following considerations:

(CL) In the transition depicted in Fig. 2 the input state is applied to the physical system "Probes+Cantilever", but in order to describe the output state one must take into account that the device continues to be the whole physical system "Probes+Cantilever". Therefore, in principle, there is no contra-indication to setting the initial position of the cantilever at any possible angle α_i (of course considering also the particular case α_i = 0), and the input configuration must take into account not only the Boolean pair [V(P_1)], [V(P_2)], but also the Boolean value [α_i]. But, since during the transformation the physical device continues to be "Probes+Cantilever", in order to detect the generated output without erasing information about the potentials V(P_1) and V(P_2), which produce the output angle α_o of the cantilever, the real output of the device is the configuration ([V(P_1)], [V(P_2)], [α_o]).

This leads to the following table describing the physical transition, different from the previous one, relative to the input cantilever angle α_i = 0:

Table 2. The same transitions described as a 3in/3out gate (input cantilever angle α_i = 0).
[V(P_1)]  [V(P_2)]  [α_i]  ->  [V(P_1)]  [V(P_2)]  [α_o]
   0         0        0           0         0        0
   0         1        0           0         1        1
   1         0        0           1         0        1
   1         1        0           1         1        1

Of course, this corresponds to a partial reversible non-conservative 3in/3out gate, whose complete formulation will be the argument of the forthcoming sections. According to this point of view there is no contradiction with the above discussed Landauer principle: the gate is reversible and so we can expect that, according to the experimental results presented in Fig. 3(a) of [LSNG16], the dissipated heat can be reduced below k_B T.

Toffoli's box representation of the Or realization according to the CL position.
Note that the official Toffoli terminology is the following: input = argument, output = result, ancilla = source, garbage = sink.

2.1. A formal analysis of the LNG device behavior and an interesting metatheoretical contraposition.

In order to better understand the above (LNG) and (CL) two different points of view about the LNG experimental results, let us introduce a formalization of its behavior. First of all let us denote by V the collection of possible voltages applied to the two probes P_1 and P_2, and by A the collection of all possible angles assumed by the cantilever with respect to the vertical axis. From a more general point of view we can formalize the functioning of the physical device "Probes+Cantilever" by a function assigning to the input (V_1, V_2, α_i), consisting of the voltage V_1 (resp., V_2) applied to the probe P_1 (resp., P_2) and the initial angle α_i of the cantilever, the output

F(V_1, V_2, α_i) = (F_1(V_1, V_2, α_i), F_2(V_1, V_2, α_i), F_3(V_1, V_2, α_i)),

where, as it happens in any multivalued function, the three component functions are put in clear evidence. Precisely, in order to describe the behavior of the LNG device synthesized by the previous points (Ex1)-(Ex2), the component functions are F_1(V_1, V_2, α_i) = V_1, F_2(V_1, V_2, α_i) = V_2, and F_3(V_1, V_2, α_i) = α_o, where the output angle α_o is determined by the electrostatic action of the two probe voltages on the cantilever. In the particular case depicted in Fig. 2, in which the initial angle is α_i = 0, we have F_3(V_1, V_2, 0) = 0 iff V_1 = V_2 = 0, and F_3(V_1, V_2, 0) ≠ 0 otherwise. Now with respect to this formalization there are at least two possible descriptions:

(Po1) According to (LNG), in considering the output results one can neglect what happens in the two probes, that is one disregards the two component functions F_1 and F_2, considered as hidden, and so in describing the experiment one takes into account the sole function F_3(V_1, V_2, α_i) = α_o, in which the output α_o uniquely depends on V_1 and V_2, producing the gate of Table 1 for realizing the Or connective in an irreversible way.

(Po2) According to (CL), also in describing the output results one must take into account the whole V × V × A situation of the experimental apparatus "Probes+Cantilever", that is all three component functions F_1 and F_2 besides F_3, leading to the gate described by Table 2 for realizing the Or connective in a reversible way.

Relatively to the above discussion we have
• the theoretical Landauer principle, whose validation or falsification can be obtained by experiments;
• the LNG experimental device, which realizes the connective Or with an arbitrarily small dissipation of energy.

So, there are two possible contradictory positions:
(a) If a priori one is against the Landauer principle, then one accepts the above position (Po1), claiming that the experimental LNG results falsify the Landauer principle.
(b) If a priori one accepts the Landauer principle, then one agrees with the above description (Po2) of the experimental LNG results as a corroboration of the Landauer principle.

A very interesting situation for an epistemological/philosophical debate, where the experimental results, according to one assumption or its opposite, lead to a falsification or a corroboration of the same theoretical principle.
We of course support the position (Po2). It is beyond any doubt that the experimental LNG device is formed by the physical system "Probes (P₁, P₂) + Cantilever (L)", and so a formal description of its input state must consist of a triple (V₁, V₂, α_i) formed by the two input probe voltages V₁ and V₂ and the initial input angle α_i (left side of Figure 2). After the interaction the physical LNG device continues to consist of the pair "Probes (P₁, P₂) + Cantilever (L)", and so in order to describe this physical situation the output state must be formed by the complete information not only about the output angle α_o, but also about the probe voltages (right side of Figure 2), producing the reversible transition described by Table 2. In other words, also in the output case we must have a complete description of the physical state of the "Probes+Cantilever" device.
On the contrary, as supported by LNG, if one decides that in the case of the output the physical system collapses onto the cantilever component, disregarding what happens to the two probes, then the output state consists of the unique variable "output angle" α_o, i.e., an incomplete hidden-variables description, corresponding to the irreversible transition described in Table 1.
Comparing these two positions, we can say that our description is a reversible completion of the incomplete (with hidden variables) irreversible LNG description. This is the reason that leads us to adopt the reversible completion of the incomplete, hidden-variables, irreversible one as the argument of investigation of the forthcoming sections. In particular, in the next subsection we confirm the soundness of this choice on the basis of some experimental results obtained by the LNG device.
2.2. A first reversible version of the LNG device. Coming back to the LNG device described at the beginning of section 2 and depicted in Figs. 1 and 2, owing to the fact that the input voltages V(P₁) = 0 and V(P₂) = 0 produce the trivial output α_o = 0, the main interesting results regard the measure of the deviation angle α_o of the cantilever in the three cases of input interest [V(P₁)][V(P₂)] = 01, 10, 11. Since the cantilever is really very small, the position change of its tip, as a consequence of the bending, is also very small and subject to thermal fluctuations. Hence, the statistical distribution of the cantilever tip position is a random quantity well reproduced by a Gaussian curve. It is experimentally observed (see Fig. 2(c) of [LSNG16]) that:
• logic inputs corresponding to the states 01 and 10 produce very similar results, distributed in a range between 0.8 nm and 1 nm,
• whereas the logic input 11 produces a larger displacement, around 1.1 nm.
Of course, if one agrees with these considerations then one can reach the following LNG conclusion: (LNG-1) If the similarity of the results obtained by the inputs 01 and 10 is assumed as an element of their indistinguishability, considering them as producing the same output 1, and if the larger displacement produced by the input 11 is also associated with the same output 1, one can conclude that "the cantilever-based gate performs like an Or gate that is a logical irreversible device [see Table 1]: in fact there is at least one case [i.e., 01, 10, 11] where, from the sole knowledge of the logic (and physical) output [i.e., 1], it is not possible to infer the status of the logic inputs." [LSNG16]. This is a possible metatheoretical position, which can be seen as the joke of a dark night in which all the cows [the inputs 01, 10, 11] turn out to be of colour black [the same output 1]. On the basis of the obtained results, our position is on the contrary quite different, and in agreement with the previous considerations collected in the above statement (CL). Precisely, referring to the experimental results of Fig. 3(a) of [LSNG16], for any fixed protocol time τ_p (ms) the average produced heat gives three different results, always interpreted as the output 1 but relative to the inputs 10 (symbol •), 01 (symbol △), and 11 (symbol ■). Once adopted the convention of identifying the symbols with the generating inputs,

• := 10,  △ := 01,  ■ := 11,    (2)

it is really true that the experimental outputs produced by 10 and 01 are very near ("very similar", as said before) to each other; but, as evident from Figure 4 (a reproduction of the original Figure 3(a) from [LSNG16]), the • values are always lower than the △ values, and furthermore the ■ output always furnishes a value resolutely greater than these two. This behavior is confirmed by the histograms of Fig. 2(c) of [LSNG16, pag. 2], in which the one corresponding to the input 10 (•) attains its maximum close to the value 0.8 nm, whereas the one corresponding to the input 01 (△) shows its maximum close to the value 1.0 nm, in any case greater than the previous one. Lastly, the histogram of the input 11 (■) has a maximum between 1.0 and 1.2 nm, near this latter value and in any case clearly distinguishable from the other two.
In this paper we assume that they correspond to three "different" values of the logic value 1, according to the following statement: (CL-1) Borrowing the usual terminology of fuzzy set theory according to Zadeh, we can think of three different logic values 1, each of them characterized by a "membership degree" represented by the three symbols •, △, and ■, formalized as the ordered pairs (•, 1), (△, 1), and (■, 1). Then, according to this interpretation, the LNG gate realizing the logic connective Or is formalized by the transitions

(0,0) → (*, 0),  (0,1) → (△, 1),  (1,0) → (•, 1),  (1,1) → (■, 1),    (3)

where for formal completeness we used the symbol * associated with the logic value 0 in order to obtain the output (*, 0), omitting its precise determination, which will be made in the sequel. In this way we obtain a reversible gate, since the knowledge of any output allows one to uniquely determine the corresponding input generating it. In this way we lose any possible ambiguity, and with respect to this result we can make the following remark.
It is very interesting to note that these purely experimental results are completely and correctly described by (indeed, totally coincide with) our Table 2, once the above conventional substitutions given by equation (2) are adopted in the latter. In this way the experimental results of Fig. 2(b) from [LSNG16] confirm the soundness of our previous assumptions, formalized in (Po2), relative to a 3in/3out reversible gate description of the LNG experiment, and so without any contradiction with respect to the experimental result of arbitrarily small dissipation of energy, in accordance with the Landauer principle. At any rate we do not develop this interesting analysis further, since in the next section we formalize a realization of the LNG device as a 3in/3out self-reversible gate, avoiding any discussion about the dichotomy "01 and 10 distinguishable or indistinguishable", but which satisfies the Landauer principle owing to its reversibility.
3. The self-reversible 3in/3out Cattaneo-Leporini (CL) gate

In order to obtain this result, first of all we have analyzed the main 3in/3out gates which one can find in the literature: the conservative self-reversible Fredkin gate, and the self-reversible but not conservative Toffoli and Peres gates (see [FT82, Per85]), realizing that none of them has as derived gate — that is, fixing one of the inputs as ancilla and considering two of the outputs as garbage — the description of the LNG device. As a consequence we have autonomously constructed a gate of this kind, arriving at the Cattaneo-Leporini (CL) 3in/3out gate G_CL described by the following functional representation (discovering some time after our formalization that the same gate had been introduced in [KTR12] under the name of TNor gate):

x₁' = x₁,  x₂' = x₂,  x₃' = x₃ ⊕ (x₁ ∨ x₂).    (4)
This functional definition is represented by Table 3. Obviously, this is a reversible non-conservative gate (for instance, in the transition (010) → (011) the number of 1 bits in the input is not preserved). Moreover, it is self-reversible in the sense that G_CL • G_CL = id. Fig. 6 represents the role played by self-reversibility in producing as global output the same input, as a consequence of the transitions (a, b, c) → G_CL(a, b, c) → (a, b, c). The FanOut (FO) reversible, but non-conservative, gate allows the duplication (cloning) of the signal of the third line after the first output; one duplicate is inserted as third input of the second CL gate, whereas the other duplicate is extracted as overall output of the cascade.
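As a mechanical sanity check of the functional representation (4), the following short Python sketch (ours) verifies self-reversibility and non-conservativity:

```python
def G_CL(a, b, c):
    """CL gate: first two lines unchanged, third line XOR-ed with (a OR b)."""
    return a, b, c ^ (a | b)

triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]

# Self-reversibility: G_CL composed with itself is the identity on {0,1}^3.
assert all(G_CL(*G_CL(a, b, c)) == (a, b, c) for a, b, c in triples)

# Non-conservativity: (0,1,0) -> (0,1,1) does not preserve the number of 1 bits.
print(G_CL(0, 1, 0))  # (0, 1, 1)
```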
From the functioning of the CL gate described in Table 3 it follows that, if as usual some suitable inputs are fixed as ancilla, one has the following possible cases:

(a, b, 0) → (a, b, a ∨ b)   (Or),
(a, b, 1) → (a, b, ¬(a ∨ b))   (Nor),
(0, b, c) → (0, b, b ⊕ c) and (a, 0, c) → (a, 0, a ⊕ c)   (Xor).

There are also different possibilities of realizing the logic connective Not ¬, of which we show some below:

(1, b, c) → (1, b, ¬c),  (a, 1, c) → (a, 1, ¬c).    (6a)

Moreover, (a, 0, 0) → (a, 0, a) and (0, b, 0) → (0, b, b) realize the FanOut gate. Summarizing, the primitive logic connectives which can be obtained from this 3in/3out self-reversible CL gate can be collected in the following list:

G_CL = {Or, Nor, Xor, Not, FanOut}.

Let us note a behavior of this CL gate which in some sense is dual with respect to the Toffoli gate, as discussed later by a comparison with the latter, and whose analogies we will study in the forthcoming section. This behavior consists in the transitions

(a, b, c) → (a, b, c) if a ∨ b = 0,   (a, b, c) → (a, b, ¬c) if a ∨ b = 1,    (7)

in which we stress that if the first and the second lines are both fixed at the input 0 (a ∨ b = 0) then the identity acts on the third line, whereas in all the other cases, in which at least one of the two control lines (the first and the second ones) is fixed at the input 1 (a ∨ b = 1), it is the Not gate which acts on the third line. It is a kind of Controlled-Controlled-(multiple)Not (CCmN) in which a and b are known as the first and second control bits, while c is the target bit.
In conclusion, the gate leaves both control bits unchanged, flips the target bit if at least one of the control bits is set to 1, and otherwise leaves the target bit alone. We speak of multiple-Not since we have seen, in the second transition of equation (7), how it is possible to generate the Not logic gate in three different modes when at least one of the inputs a or b is fixed at the bit 1 (see also equations (6a)).
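All the derived connectives listed above can be verified in the same mechanical way; the sketch below (again our own illustration) fixes the ancillas as described and reads off the third output:

```python
def G_CL(a, b, c):
    return a, b, c ^ (a | b)

for a in (0, 1):
    for b in (0, 1):
        assert G_CL(a, b, 0)[2] == a | b          # ancilla x3 = 0: Or
        assert G_CL(a, b, 1)[2] == 1 - (a | b)    # ancilla x3 = 1: Nor

for x in (0, 1):
    for c in (0, 1):
        assert G_CL(0, x, c)[2] == x ^ c          # ancilla x1 = 0: Xor
        assert G_CL(1, x, c)[2] == 1 - c          # ancilla x1 = 1: Not on the third line

assert all(G_CL(0, b, 0) == (0, b, b) for b in (0, 1))  # FanOut-like cloning
print("all derived CL connectives verified")
```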
Let us now analyze the behavior of our self-reversible CL gate when the third input is fixed to x₃ = 0, which turns out to be useful in the sequel for a comparison with the LNG realization of their Or gate. We extract this behavior from Table 3 when the third input is set to 0, obtaining the partial Table 4 (which is identical to Table 2 under the obvious identifications).

Table 4. Realization of the Or connective by the output x₃' = x₁ ∨ x₂ from the CL self-reversible 3in/3out gate, fixing the input x₃ = 0.

This situation produces the self-reversible transitions (a, b, 0) → (a, b, a ∨ b) → (a, b, 0), represented in the block scheme of Figure 7.

4. The self-reversible 3in/3out Toffoli gate

In the literature one can find an interesting reversible, non-conservative gate introduced by Toffoli, with the functional representation

x₁' = x₁,  x₂' = x₂,  x₃' = x₃ ⊕ (x₁ ∧ x₂),

whose comparison with the functional representation (4) of the CL gate stresses the difference: in the third equation of the latter the connective ∨ is substituted by the connective ∧. The Toffoli gate formulation in terms of a truth table is given in Table 5. From it, by constraining one of the inputs as ancilla, it is possible to obtain some familiar standard logic primitives, according to the correspondences

(a, b, 0) → (a, b, a ∧ b)   (And),   (a, b, 1) → (a, b, ¬(a ∧ b))   (Nand),   (1, b, c) → (1, b, b ⊕ c)   (Xor),

whereas, by constraining two of the inputs, we may get the FanOut and the Not gates, for instance

(1, b, 0) → (1, b, b)   (FanOut),   (1, 1, c) → (1, 1, ¬c)   (Not).

Summarizing, the set of logic primitives generated by the self-reversible Toffoli gate is the following one: G_T = {Xor, And, Nand, Not, FanOut}. Let us recall that this Toffoli gate is the one that Feynman in [Fey96] defines (and which is presently recognized as such) Controlled-Controlled-Not (CCN): "in which the lines x₁ and x₂ act as control lines, leaving x₃ as it is unless both are one, in which case x₃ becomes Not(x₃)." This behavior can be described by the correspondences

(a, b, c) → (a, b, c) if a ∧ b = 0,   (a, b, c) → (a, b, ¬c) if a ∧ b = 1,

which can be compared with the analogous behavior (7) of the CL gate. Similarly to the CL gate, fixing the input x₃ = 0 as ancilla, the Toffoli gate reduces to the partial Table 6, producing the output x₃' = x₁ ∧ x₂ with the pair x₁', x₂' as garbage, i.e., a reversible realization of the And logic connective.

Table 6. Realization of the And connective by the output x₃' = x₁ ∧ x₂ from the Toffoli self-reversible 3in/3out gate, fixing the input x₃ = 0.

This situation of the Toffoli gate produces the self-reversible transitions (a, b, 0) → (a, b, a ∧ b) → (a, b, 0). The logic connective Nand, ¬(a ∧ b), is obtained from the Toffoli gate if we input x₁ = a and x₂ = b as control bits and fix as ancilla the third input x₃ = 1. The Nand of x₁ and x₂ is output as the target x₃', considering the pair formed by the first and the second outputs (x₁' = a, x₂' = b) as garbage. Formally, this can be formalized by the transitions, where the second one stresses the self-reversibility of the gate:

(a, b, 1) → (a, b, ¬(a ∧ b)) → (a, b, 1).

Quoting Peres: "It is well known that the Nand gate is a universal primitive (Nand is the relational operator which gives the "true" value as output if one or both input values are "false.") Therefore the reversible gate [of Table 5] is universal."
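For comparison, the same kind of check applies to the Toffoli gate and its set of primitives G_T; here is a sketch of ours:

```python
def G_T(a, b, c):
    """Toffoli (CCN) gate: third line XOR-ed with (a AND b)."""
    return a, b, c ^ (a & b)

triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert all(G_T(*G_T(*t)) == t for t in triples)            # self-reversible

for a in (0, 1):
    for b in (0, 1):
        assert G_T(a, b, 0)[2] == a & b                    # ancilla x3 = 0: And
        assert G_T(a, b, 1)[2] == 1 - (a & b)              # ancilla x3 = 1: Nand
for x in (0, 1):
    for c in (0, 1):
        assert G_T(1, x, c)[2] == x ^ c                    # ancilla x1 = 1: Xor
assert all(G_T(1, b, 0) == (1, b, b) for b in (0, 1))      # FanOut
assert all(G_T(1, 1, c) == (1, 1, 1 - c) for c in (0, 1))  # Not
print("Toffoli primitives verified")
```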
5. LNG Or and And connectives implementation by self-reversible gates
Let us first consider the Or gate realization using the device proposed by López-Suárez et al. in [LSNG16] (schematically depicted in Fig. 1 of our section 2). As previously described, the device consists of two electrical probes (denoted by P₁, P₂) close to a cantilever L. An electrode voltage V can be applied to each probe P_j. If V = 0, we will say the probe is off (denoted by D), while we will say the probe is on (denoted by A) when V ≠ 0. The corresponding cantilever tip displacement is determined by the angle α formed with respect to the vertical reference axis.
The experimental results obtained by the LNG device, represented by Fig. 2(c) from [LSNG16], can be summarized in the following points: (Exp1) Under the fixed initial input angle α_i = 0 of the cantilever, a sufficiently large number of single tests, each relative to one of the three non-"trivial" inputs DA, AD, and AA of the voltages applied to the two probes, experimentally produces three histograms I_DA, I_AD, and I_AA as mappings α ∈ R⁺ → I_hk(α) ∈ N, for hk ranging over {DA, AD, AA}. (Exp2) Each histogram has a support (the collection of α ∈ R⁺ for which the corresponding I_hk(α) ≠ 0), denoted by supp(I_hk), which is a bounded interval on R⁺. (Exp3) There is a bounded value of angle α_B such that both supports supp(I_DA) and supp(I_AD) lie below α_B, whereas supp(I_AA) lies above α_B. (Exp4) The two supports of I_DA and I_AD are quite similar, supp(I_DA) ≃ supp(I_AD), and so, according to the LNG assumption described in the point (LNG-1) of section 2.2, they can be submitted to the stronger condition of being equal: supp(I_DA) = supp(I_AD). (Exp5) All the above histograms I_hk correspond to statistical distributions well reproduced by Gaussian curves. Denoting by ᾱ₁ = ⟨I_DA⟩, α̃₁ = ⟨I_AD⟩, and α₂ = ⟨I_AA⟩ the corresponding mean values of the involved Gaussians, and, as usual in physics, setting the boundary angle α_B = 1, the experimental results give the following chain of values: 0 < α̃₁ ≃ ᾱ₁ < 1 < α₂. (Exp6) According to the strong assumption of point (Exp4), this last result can be formalized by the chain of inequalities

0 < α₁ < 1 < α₂.    (11)

We recall that all these experimental results follow from the initial condition on the input angle α_i = 0, and so they can all be collected in the following table where, according to the strong assumption (Exp6) formalized in the chain of inequalities (11), we set α₁ := ᾱ₁ = α̃₁.

Table 7. Physical behavior of the LNG device when the cantilever is in the initial vertical position α_i = 0, with the ordering of the final angles α_o given by 0 < α₁ < 1 < α₂.

Assuming now the convention of putting A = 1 (probe on, i.e., active) and D = 0 (probe off, i.e., inactive), these results can be translated into a Boolean context under particular assumptions about a normalization function of the angle α, according to possible cases which we will now discuss.
5.1. First normalization function: the Cattaneo-Leporini case for generating Or. In this first case the normalization function of the angle, denoted by u₁(α), is the following one:

u₁(α) := 0 if α < α₁, and u₁(α) := 1 otherwise.

From this choice of normalization function, the physical behavior of the LNG device described in Table 7 when the initial cantilever position is vertical, α_i = 0, is translated into the table of equation (12) at the left side; the Boolean table at the right is nothing else than this latter, setting A = 1, D = 0, P₁ = x₁, P₂ = x₂, u₁(α_i) = x₃, and u₁(α_o) = x₃':

(D,D,0) → (D,D,0)    (0,0,0) → (0,0,0)
(D,A,0) → (D,A,1)    (0,1,0) → (0,1,1)
(A,D,0) → (A,D,1)    (1,0,0) → (1,0,1)
(A,A,0) → (A,A,1)    (1,1,0) → (1,1,1)    (12)

Looking at these experimental results from the LNG device, we can set down the following interesting considerations: • The table at the right, giving the Boolean formalization of the LNG device behavior when the initial cantilever angle is 0, coincides with the Table 4 discussed in section 3 as the self-reversible CL gate realization of the classical Or connective under the ancilla choice x₃ = 0, the third output x₃' = x₁ ∨ x₂ producing the required Or, with the two outputs x₁' and x₂' as garbage (see also the description given by Figs. 3 and 7). • That is, all the experimental results obtained by the LNG concrete device and collected in the above Table 7 can be described by a 3in/3out self-reversible Boolean gate, in complete agreement with the Landauer principle of arbitrarily small energy dissipation. • In other words, there is no experimental contradiction, as claimed by LNG in their paper.
Furthermore, encoding the output (x₁', x₂') as follows: 00 = *, 01 = △, 10 = •, 11 = ■, the table (12) at the right side becomes the following (which can be compared with the transitions (3), according to our point of view about the experimental results obtained from the LNG device described in section 2.2):

(0,0,0) → (*, 0)   (0,1,0) → (△, 1)   (1,0,0) → (•, 1)   (1,1,0) → (■, 1)    (13)

Therefore, if in the left table (12) we consider the cases where at least one of the probes is active, we will have the following three transitions:

(D,A,0) → (D,A,1),   (A,D,0) → (A,D,1),   (A,A,0) → (A,A,1).

In these cases, when the result of the interaction of the probes P₁, P₂ on the cantilever L is observed, one cannot disregard the two probes and only look at the tip position: one has to consider the whole apparatus "probes + cantilever tip" (see Fig. 2). Following the right table (12), we have the whole transitions

(0,1,0) → (0,1,1),   (1,0,0) → (1,0,1),   (1,1,0) → (1,1,1),

where the output in {0,1}³ (whether it is (01|1) or (10|1) or (11|1)) uniquely determines the input in {0,1}³ generating it, and so, stressing this conclusion once more, in accordance with the Landauer principle without any necessary dissipation of energy. All this has nothing to do with the 2in/1out irreversible Or logic gate, where the output 1 does not allow one to determine the generating input in {0,1}², as claimed by LNG in their paper. As a summary of all the above discussion we can state the

1st Conclusion: The LNG device, under the assumptions of initial cantilever angle α_i = 0 and the normalization function on angles u₁, is a concrete realization of the CL 3in/3out self-reversible gate with the third input fixed to 0, producing in the third output the connective Or.
5.2. Second normalization function: the Toffoli case for generating And. As said before, the Boolean formulation of the experimental results collected in Table 7 depends on an arbitrary choice of a normalization function assigning, in a conventional way, Boolean values to the cantilever angles. In the present subsection we take into account another possible conventional choice, formalized by the normalization function

u₂(α) := 0 if α < α₂, and u₂(α) := 1 otherwise.

In this case Table 7, with the usual adopted conventions, leads to the following two tables:

(D,D,0) → (D,D,0)    (0,0,0) → (0,0,0)
(D,A,0) → (D,A,0)    (0,1,0) → (0,1,0)
(A,D,0) → (A,D,0)    (1,0,0) → (1,0,0)
(A,A,0) → (A,A,1)    (1,1,0) → (1,1,1)    (14)

The table at the right coincides with the Table 6 obtained from the self-reversible 3in/3out Toffoli gate fixing the input ancilla x₃ = 0, with the pair x₁' x₂' considered as garbage and the output x₃' = x₁ ∧ x₂ furnishing the logic And of the first and second inputs. So also in this case we have a "reversible" generation of the required And gate with arbitrarily small dissipation of energy. This result, involving the self-reversible Toffoli gate, has been achieved adopting a normalization function for describing the experimental results produced by the LNG device different from the one adopted in the CL case. But let us stress that these two different choices correspond to quite arbitrary Boolean assignments to the involved cantilever angles, none of which can be considered as a privileged choice from the experimental point of view. This behavior has also been acknowledged by LNG when, in the discussion of their Fig. 2(c), they assert that "the threshold value for the Or gate is represented by the dashed line [around 0.1 nm]. By changing the position of the dashed line [around 1.0 nm], the gate can be operated also as an And gate." Precisely, the dashed line around 0.1 nm puts the three situations 01, 10, and 11 as associated to the output state 1, whereas the dashed line around 1.0 nm groups the three situations 00, 01, and 10 as associated to the output state 0.
As a summary of the discussion performed in this subsection we can state the

2nd Conclusion: The LNG device, under the assumptions of initial cantilever angle α_i = 0 and the normalization function on angles u₂, is a concrete realization of the Toffoli 3in/3out self-reversible gate with the third input fixed to 0, producing in the third output the connective And.
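Both conclusions can be reproduced from the physical Table 7 alone; in the following sketch (ours) the final angles are represented by arbitrary numerical representatives respecting the ordering 0 < α₁ < 1 < α₂, and the two normalization functions are applied to the same data:

```python
# Symbolic final angles from Table 7, with the ordering 0 < a1 < 1 < a2.
a1, a2 = 0.5, 1.5            # any representatives respecting 0 < a1 < 1 < a2
table7 = {(0, 0): 0.0, (0, 1): a1, (1, 0): a1, (1, 1): a2}

u1 = lambda alpha: 0 if alpha < a1 else 1    # first normalization (subsection 5.1)
u2 = lambda alpha: 0 if alpha < a2 else 1    # second normalization (subsection 5.2)

for (x1, x2), alpha_o in table7.items():
    assert u1(alpha_o) == x1 | x2            # CL case: third output realizes Or
    assert u2(alpha_o) == x1 & x2            # Toffoli case: third output realizes And
print("u1 -> Or, u2 -> And, both from the same physical behavior")
```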
6. Nor and Nand connectives implementation by the LNG device
Let us recall that, as stressed before, the assignment of a Boolean bit value, either 0 or 1, to a deflection angle formed by the cantilever is purely a matter of convention. There is no physical reason to state that, for instance, a particular angle α must be labelled with the bit value 1 instead of the bit value 0. This is the reason that allowed us to consider two different normalization functions, u₁ and u₂, to treat the above cases of subsections 5.1 and 5.2, respectively, in order to prove that the LNG device, suitably normalized, describes the Or and the And logic connectives by the self-reversible CL and Toffoli gates.
6.1. Third normalization function: the Cattaneo-Leporini case for generating Nor. Let us apply to the functioning of the LNG device described by Table 7 the conventional normalization function ū₁ := 1 − u₁, explicitly written as

ū₁(α) := 1 if α < α₁, and ū₁(α) := 0 otherwise.

With the usual conventions, Table 7 is translated into the Boolean form

(0,0,1) → (0,0,1),  (0,1,1) → (0,1,0),  (1,0,1) → (1,0,0),  (1,1,1) → (1,1,0).    (15)

That is, the fixed bit x₃ = 1 is the ancilla input, whereas x₃' = ¬(x₁ ∨ x₂) is the required Nor connective produced by the LNG device under the new normalization function. But this is just the same situation described by the self-reversible CL gate depicted in Fig. 8, corresponding to the transitions (a, b, 1) → (a, b, ¬(a ∨ b)) → (a, b, 1) discussed in section 3. Therefore we can state the following

3rd Conclusion: The LNG device, under the assumptions of initial cantilever angle α_i = 0 and the normalization function on angles ū₁, is a concrete realization of the CL 3in/3out self-reversible gate with the third input fixed to 1, producing in the third output the connective Nor.
6.2. Fourth normalization function: the Toffoli case for generating Nand. Analogously to the previous case, one can apply to the LNG device described by Table 7 the conventional normalization function ū₂ := 1 − u₂, explicitly written as

ū₂(α) := 1 if α < α₂, and ū₂(α) := 0 otherwise.

In this case, with the usual conventions, Table 7 is translated into the Boolean form

(0,0,1) → (0,0,1),  (0,1,1) → (0,1,1),  (1,0,1) → (1,0,1),  (1,1,1) → (1,1,0).    (16)

That is, fixing the third input at the bit x₃ = 1, the third output x₃' = ¬(x₁ ∧ x₂) is the required Nand connective produced by the LNG device under the new normalization function. But this is just the same situation described by the self-reversible Toffoli gate, corresponding to the transitions (a, b, 1) → (a, b, ¬(a ∧ b)) → (a, b, 1) discussed at the end of section 4. Therefore we can state the following

4th Conclusion: The LNG device, under the assumptions of initial cantilever angle α_i = 0 and the normalization function on angles ū₂, is a concrete realization of the Toffoli 3in/3out self-reversible gate with the third input fixed to 1, producing in the third output the connective Nand.
7. The particular case of the LNG realization of the connective Xor
In this section we will take into account the self-reversible realization of the Xor logic connective by the LNG device, making use, as usual, of a suitable normalization function. But first of all let us note that if we take into account the CL self-reversible gate of section 3, this connective can be realized either by fixing the first input to 0 (which in the LNG device corresponds to fixing the voltage of the first probe) or by fixing the second input to 0 (and also in this case it is the voltage of the second probe of the LNG device which must be fixed). The same considerations hold in the Toffoli case, where either the first or the second input must be fixed to 1. This gives rise to a problem, as we now discuss.
Let us consider the case of the CL gate with fixed input x₁ = 0, which produces as third output the Xor logic connective of the second and third inputs, x₃' = x₂ ⊕ x₃; the table describing this case is the following:

Table 8. (0, x₂, x₃) → (0, x₂, x₂ ⊕ x₃), for x₂, x₃ ∈ {0, 1}.

But trying to implement the CL gate inputs x₁ x₂ x₃ in the LNG device, applying the usual physical behaviors described by (Exp1)-(Exp6) in section 5 and making use of the normalization function u₁ of subsection 5.1 relative to the CL case, one obtains the following two tables (the physical table at the left and the corresponding Boolean one at the right):

(D,D,α_i) → (D,D,α_i)      (0,0,x₃) → (0,0,x₃)
(D,A,α_i) → (D,A,α_o ≠ 0)  (0,1,x₃) → (0,1,1)    (17)

As an immediate comparison, the Boolean LNG output x₃' of the table (17) at the right has nothing to do with the x₂ ⊕ x₃ obtained in Table 8. A possible way out is to take as third output, instead of u₁(α_o), the quantity |u₁(α_i) − u₁(α_o)|, i.e., the (normalized) change of the tip position. Let us see whether this choice has some interest or possible physical realization. We will have two cases, each with its two subcases:

|u₁(α_i) − u₁(α_o)| = 0:
  00, i.e., if the tip is initially vertical, it remains vertical;
  11, i.e., if the tip is deflected by α_i ≠ 0, it remains deflected, although the two angles might be different, α_i ≠ α_o.
|u₁(α_i) − u₁(α_o)| = 1:
  01, i.e., if the tip is initially vertical, after processing it is deflected;
  10, i.e., if the tip is deflected by α_i ≠ 0, eventually it returns to the vertical position.

These are, at least in principle, all physically observable, but one has to change (or rather, complete) the left LNG table (17) accordingly: setting x₃' = |u₁(α_i) − u₁(α_o)|, one obtains a table whose third output is the Xor of the second and third inputs, while the first and second lines retain their values unchanged:

(0, x₂, x₃) → (0, x₂, |u₁(α_i) − u₁(α_o)|) = (0, x₂, x₂ ⊕ x₃).

Note that even in this case one has to do with the realization of the self-reversible Xor (each output is generated by a single input) as the third output in a 3in/3out gate.

7.2. Coherence of the CL through LNG with the present approach. The question arises whether what was obtained in subsection 5.1 is still valid when the left table (12), in which only the results u₁(α_o) are considered, is completed with a further final column relative to the outputs |u₁(α_i) − u₁(α_o)|. Obviously, since there α_i = 0, both x̂₃ = u₁(α_o) and x₃' = |u₁(α_i) − u₁(α_o)| always give the same value, confirming what was achieved in subsection 5.1, where no reference was made to this fourth output.
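That the quantity |u₁(α_i) − u₁(α_o)| realizes the Xor of the second and third inputs can be checked directly; the sketch below (ours) encodes the completing physical assumption discussed above, namely that for x₁ = 0 the tip ends deflected iff the probe P₂ is active:

```python
# Completed physical behavior for x1 = 0 (probe P1 off), per the four cases above:
# the tip ends deflected (u1(alpha_o) = 1) iff probe P2 is active; otherwise it
# (eventually) returns to, or stays in, the vertical position.
for x2 in (0, 1):
    for x3 in (0, 1):          # x3 = u1(alpha_i), the normalized initial angle
        u1_ao = x2             # our reading of the completing assumption
        x3_prime = abs(x3 - u1_ao)
        assert x3_prime == x2 ^ x3
print("x3' = |u1(alpha_i) - u1(alpha_o)| realizes x2 XOR x3 when x1 = 0")
```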
8. A second solution for the Xor generation by the LNG device and the induced NXor connective
Let us now consider the second solution for generating the Xor connective, consisting in introducing a third self-reversible 3in/3out gate, besides the previously treated CL and Toffoli ones, whose functional representation is given by

x₁' = x₁,  x₂' = x₂,  x₃' = x₃ ⊕ (x₁ ⊕ x₂).

The corresponding tabular representation is given by Table 9.

Table 9. Tabular representation of the 3in/3out self-reversible X-gate.

This 3in/3out gate is trivially self-reversible: G_X • G_X = id. Moreover, from Table 9, choosing as usual some input as fixed ancilla, the following possible cases follow:

(a, b, 0) → (a, b, a ⊕ b)   and   (a, b, 1) → (a, b, ¬(a ⊕ b)),

corresponding to the realization of the logic connectives Xor ⊕ and NXor ¬⊕. From these results we obtain the following realizations of the FanOut gate and of the negation connective Not, respectively:

(0, a, 0) → (0, a, a)   (a, 0, 0) → (a, 0, a)
(1, a, 0) → (1, a, ¬a)   (a, 1, 0) → (a, 1, ¬a)   (a, 0, 1) → (a, 0, ¬a)

Also in this X-gate case we have a controlled-controlled behaviour, in the sense that if the two control lines x₁ and x₂ are equal, then the identity acts on the third target line, while if the two control lines x₁ and x₂ are different, then the negation acts on the third target line:

(a, a, c) → (a, a, c)   and   (a, ¬a, c) → (a, ¬a, ¬c).

Now, let us give the full partial table from Table 9 corresponding to the generation, as third output, of the Xor connective when the third input is fixed as ancilla to the bit 0:

Table 10. Generation of the Xor connective as third output from the self-reversible 3in/3out X-gate G_X, when the third input is fixed to 0: (a, b, 0) → (a, b, a ⊕ b).

Now, taking into account the experimental behavior of the LNG device described by Table 7 under the condition of initial cantilever angle α_i = 0, we will try to realize the Xor connective of the above Table 10, obtained from the 3in/3out self-reversible X-gate. For this purpose we need to consider a peculiar normalization function assigning the Boolean values 0 and 1 to the possible cantilever angles. The required normalization function is the following one:

u₃(α) := 1 if α₁ ≤ α < α₂, and u₃(α) := 0 otherwise.

We obtain from Table 7 the following two "normalized" tables: the one on the left is the physical behavior of the LNG device with the normalization of the cantilever angles, and the one on the right is its Boolean version under the usual conventions D = 0 and A = 1:

(D,D,0) → (D,D,0)    (0,0,0) → (0,0,0)
(D,A,0) → (D,A,1)    (0,1,0) → (0,1,1)
(A,D,0) → (A,D,1)    (1,0,0) → (1,0,1)
(A,A,0) → (A,A,0)    (1,1,0) → (1,1,0)    (22)

The LNG device behavior presented by the table at the right coincides with the Table 10 obtained from the 3in/3out self-reversible X-gate under the assumption x₃ = 0. This leads to the following

5th Conclusion: The LNG device, under the assumptions of initial cantilever angle α_i = 0 and the normalization function on angles u₃, is a concrete realization of the 3in/3out self-reversible X-gate with the third input fixed to 0, producing in the third output the connective Xor.
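As for the CL and Toffoli cases, the X-gate behavior can be verified mechanically; here is a minimal sketch of ours:

```python
def G_X(a, b, c):
    """X-gate: third line XOR-ed with (a XOR b); controls unchanged."""
    return a, b, c ^ (a ^ b)

triples = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
assert all(G_X(*G_X(*t)) == t for t in triples)          # G_X . G_X = id

for a in (0, 1):
    for b in (0, 1):
        assert G_X(a, b, 0)[2] == a ^ b                  # ancilla x3 = 0: Xor
        assert G_X(a, b, 1)[2] == 1 - (a ^ b)            # ancilla x3 = 1: NXor
print("X-gate: Xor and NXor verified")
```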
8.1. The NXor connective generation by LNG. Let us note that if in Table 9, giving the tabular representation of the X-gate, the third input is fixed as ancilla to the bit 1, then one obtains the partial table corresponding to the generation, as third output, of the NXor connective:

Table 11. Generation of the NXor connective from the self-reversible 3in/3out X-gate as third output, when the third input x₃ is fixed to 1: (a, b, 1) → (a, b, ¬(a ⊕ b)).

Now, if instead of the angle normalization function u₃ one considers its "negation" ū₃ := 1 − u₃, explicitly defined by the rules

ū₃(α) := 0 if α₁ ≤ α < α₂, and ū₃(α) := 1 otherwise,

then the output x₃' = ¬(x₁ ⊕ x₂) of the resulting Boolean table is the NXor connective of the inputs x₁ and x₂, the negation of the Xor connective described by the tables (22), leading to the further

6th Conclusion: The LNG device, under the assumptions of initial cantilever angle α_i = 0 and the normalization function on angles ū₃, is a concrete realization of the 3in/3out self-reversible X-gate with the third input fixed to 1, producing in the third output the connective NXor.
9. Conclusions
In this paper we discussed the experimental device proposed by López-Suárez et al. in [LSNG16], whose essential description has been synthesized by us in section 2, and especially their conclusion that it realizes, by a micro-electromechanical cantilever, the classical irreversible Or logic gate, operating with energy below the expected limit stated in the literature as the Landauer principle.
Our analysis of the LNG experimental device, performed first in section 2.2 and then treated more deeply in section 3, arrives at the conclusion that the LNG experimental device can be described as the realization of a 3in/3out self-reversible gate whose Or logic connective is obtained by the usual procedure of fixing the third input as ancilla of logic value 0, considering the first two outputs as garbage and obtaining in this way, as third output, the required connective, as shown in Figs. 3 and 7. This is obtained as a Cattaneo-Leporini (CL) 3in/3out self-reversible gate if one adopts a normalization of the experimental angles by a suitable function, as discussed in subsection 5.1. Owing to the self-reversibility of this gate, there is no contradiction with the results of arbitrarily small dissipation of energy, i.e., well below k_B T, experimentally obtained by the LNG device. On the other hand, on the basis of the Toffoli (T) 3in/3out self-reversible gate, making use of another suitable angle normalization function as discussed in subsection 5.2, from the LNG device it is possible to obtain the And logic gate with the usual "ancilla-inputs-garbage-output" procedure, and so, also in this case, with arbitrarily small energy dissipation, without any contradiction with the Landauer principle.
This procedure, consisting of the results of the given LNG device with initial cantilever angle equal to 0 together with suitable functions normalizing the cantilever angles, can be extended to obtain also the logic connectives Nor, Nand, Xor and NXor in a self-reversible way.
This leads us to consider the pair formed by the LNG device plus a "memory" containing all the necessary normalization functions {u₁, u₂, ..., u_j, ..., u₆} as a universal logic machine, in the sense that, based on the LNG device, by the input of a suitable angle normalization function u_j it is possible to obtain any one of the logic connectives from the collection

LC_LNG = {Or, And, Xor, Nor, Nand, NXor}.

This universal logic machine is schematized in Figure 9 below.

Figure 9. Schema of the LNG-Machine with the memory of the cantilever angle normalization functions.

Let us recall that all these logic connectives generated by the LNG device are obtained under the assumption, introduced in section 5, that the two output cantilever angles ᾱ₁ and α̃₁, being experimentally very similar, are considered as equal (indistinguishable). Let us now suppose, as discussed in section 2.2, that after some technological development these two angles can be detected as different. In this case we must modify Table 7 in order to take this difference into account in the following way:

Table 12. Physical behavior of the LNG device when the cantilever is in the initial vertical position α_i = 0 and with the final angles α_o ordered as 0 < α̃₁ < ᾱ₁ < 1 < α₂, with α̃₁ ≠ ᾱ₁.

Let us now introduce the further cantilever angle normalization function u₄, whose simplified version is the following one:

u₄(α) := 0 if α = α̃₁, and u₄(α) := 1 otherwise.

Using this normalization function, the above Table 12 assumes the Boolean form under the usual conventions D = 0 and A = 1:

Table 13. Boolean form, under the normalization function u₄, of the above LNG experimental behavior given by Table 12,

where the third output x₃' = ¬x₁ ∨ x₂ = x₁ → x₂ is the implication connective of the first two inputs x₁ and x₂. But if one considers the self-reversible 3in/3out I-gate, with functional representation x₁' = x₁, x₂' = x₂, x₃' = x₃ ⊕ (x₁ ∧ ¬x₂), whose tabular representation is the following one:

Table 14. Tabular representation of the 3in/3out self-reversible I-gate,

the partial table corresponding to the third input x₃ = 1 just coincides with Table 13, which thus turns out to be a realization of this self-reversible gate by the LNG device with the normalization u₄. So also the implication connective can be realized in a self-reversible way, i.e., with arbitrarily small dissipation of energy, by the LNG device, when the two angle outputs ᾱ₁ and α̃₁ can be detected as different from each other.
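The universal logic machine of Figure 9 admits the following toy Python rendering (ours), under the identification ᾱ₁ = α̃₁ of Table 7 and with arbitrary numerical representatives of the angle ordering:

```python
# A toy version of the LNG universal logic machine of Figure 9: one physical
# behavior (Table 7), six stored normalization functions, six connectives.
a1, a2 = 0.5, 1.5                                   # representatives of 0 < a1 < 1 < a2
table7 = {(0, 0): 0.0, (0, 1): a1, (1, 0): a1, (1, 1): a2}

memory = {
    "Or":   lambda a: 0 if a < a1 else 1,           # u1
    "And":  lambda a: 0 if a < a2 else 1,           # u2
    "Xor":  lambda a: 1 if a1 <= a < a2 else 0,     # u3
    "Nor":  lambda a: 1 if a < a1 else 0,           # 1 - u1
    "Nand": lambda a: 1 if a < a2 else 0,           # 1 - u2
    "NXor": lambda a: 0 if a1 <= a < a2 else 1,     # 1 - u3
}
expected = {
    "Or":   lambda x, y: x | y,        "And":  lambda x, y: x & y,
    "Xor":  lambda x, y: x ^ y,        "Nor":  lambda x, y: 1 - (x | y),
    "Nand": lambda x, y: 1 - (x & y),  "NXor": lambda x, y: 1 - (x ^ y),
}
for name, u in memory.items():
    assert all(u(alpha) == expected[name](x1, x2) for (x1, x2), alpha in table7.items())
print("LNG machine reproduces:", ", ".join(memory))
```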
In conclusion, even if the interpretation of the LNG device as an irreversible generator of the connective Or in an experimental situation of arbitrarily small energy dissipation (contrary to the Landauer principle) is erroneous, the LNG device is a very powerful tool as the essential component of a universal logic machine able to produce, with a suitable choice of a normalization function stored in the memory, a great number of logic connectives.
Class-C Linearized Amplifier for Portable Ultrasound Instruments
Transistor linearizer networks are proposed to increase the transmitted output voltage amplitudes of class-C amplifiers, thus increasing the sensitivity of the echo signals of piezoelectric transducers, which are the main components in portable ultrasound instruments. For such instruments, class-C amplifiers could be among the most efficient amplifier schemes because, compared with a linear amplifier such as a class-A amplifier, they could critically reduce direct current (DC) power consumption, thus increasing the battery life of the instruments. However, the reduced output voltage amplitudes of class-C amplifiers could deteriorate the sensitivity of the echo signals, thereby affecting the instrument performance. Therefore, a class-C linearized amplifier was developed. To verify the capability of the class-C linearized amplifier, typical pulse-echo responses using focused piezoelectric transducers were tested. The echo signal amplitude generated by the piezoelectric transducers when using the class-C linearized amplifier was improved (1.29 Vp-p) compared with that when using the class-C amplifier alone (0.56 Vp-p). Therefore, the class-C linearized amplifier could be a potential candidate to increase the sensitivity of echo signals while reducing the DC power consumption of portable ultrasound instruments.
Introduction
Ultrasound instruments have been used widely to obtain anatomical information from targets in automotive, semiconductor, structural health monitoring, renewable energy, and medical applications [1-3]. In particular, portable ultrasound instruments have recently been highlighted as medical instruments used in ambulances and emergency rooms because they provide real-time, nonionizing, and noninvasive characteristics for patient diagnosis before other medical instruments, such as X-ray, computed tomography, and positron-emission tomography, are utilized to obtain structural and physiological information [2].
The performance of ultrasound instruments using array transducers is fundamentally affected by nonlinear acoustic properties, which generate grating lobes and speckle patterns in the images [4,5]. Additionally, portable ultrasound instruments suffer from unwanted heat generated by the large battery consumption of the transmitter [2]. Therefore, efficient battery management is one of the key factors in evaluating portable ultrasound instruments. To reduce battery consumption, the DC power consumption needs to be reduced while sustaining reasonable performance, and DC power consumption is largely determined by amplifier performance. Similar to conventional ultrasound instruments, portable ultrasound instruments are composed of a transmitter, a piezoelectric transducer, and a receiver [6].
Most of the DC power consumption comes from the amplifier and the digital-to-analog converter in the transmitter and from the analog-to-digital converter in the receiver [6]. The piezoelectric transducer is the most important electromechanical device producing the acoustic or electrical waveforms in the instruments [7]. The amplifier triggers the piezoelectric transducers, generating acoustic waveforms, and the reflected acoustic waveforms are then converted into electrical signals by the piezoelectric transducers [8,9]. Therefore, the amplifier is also a crucial design factor for portable ultrasound instruments. Compared with linear amplifiers, nonlinear amplifiers have been shown to reduce DC power consumption in ultrasound instruments [4,10,11]. However, the reduced signal amplitudes of the echo signals caused by nonlinear amplifiers have limited the widespread use of portable ultrasound machines because of low sensitivity. To reduce signal loss, a proper nonlinear amplifier design is very important because the amplifier is the last-stage electronic component in the transmitter exciting the piezoelectric transducers [6].
Several nonlinear amplifiers have been developed for ultrasound applications. A push-pull class-B amplifier was implemented for a 50-kHz ultrasonic transducer [12]. Class-D amplifiers were developed for a 41.27-kHz Langevin sample transducer and for a high-power piezoelectric load [13,14]. A class-E amplifier was used for a 40.25-kHz inductive piezoelectric transducer [15]. The improved performance of these nonlinear amplifiers could enhance the piezoelectric transducer performance if the transmit output signals generated by the nonlinear amplifiers are improved. Additionally, the class-C amplifier is one of the most efficient amplifiers among the nonlinear amplifiers; thus, it could be useful to minimize unwanted heat generation [16,17]. However, the class-C amplifier suffers from nonlinear operation due to its low DC operating point. Therefore, a linearizer scheme to increase the voltage gain of class-C amplifiers could be useful to improve the piezoelectric transducer performance in portable ultrasound instruments. Figure 1 shows the concept of the class-C linearized amplifier for portable ultrasound instruments. An amplifier typically uses a resistor divider network to set the DC bias voltages for amplifier operation [18]. However, a high-voltage environment affects the bias voltages of the resistor divider network, which can cause variance in the output performance of the amplifier [19]. Additionally, the piezoelectric transducer itself is a capacitive-type device, so its nonlinear behavior under a high-voltage environment is related to the amplifier performance [20]. In particular, class-C amplifiers are critically affected by the DC bias voltages because of their low DC operating points. Therefore, a transistor linearizer, dedicated to improving the class-C amplifier output performance, was developed by stabilizing the DC bias voltages under high-voltage environments. For the amplifier design, the available simulation libraries of the power metal-oxide-semiconductor field-effect transistors (MOSFETs) do not have signal-distortion accuracy at the sub-decibel level [21]. Additionally, the temperature model parameters for power MOSFETs in the simulation tool are sometimes unpredictable under high-voltage environments [22]. For power MOSFETs, hot-carrier injection effects generate inaccurate gate-source voltage variances under high-voltage environments [23]. Therefore, the amplifier design needs to be carried out at the hands-on printed-circuit-board level to produce proper amplifier performance. Section 2 describes the schematic diagrams and operating mechanisms of the class-C amplifier with a transistor linearizer scheme. Section 3 shows the measured results of the class-C amplifier with the resistor divider network and with the transistor linearizer network, including pulse-echo responses using the piezoelectric transducer. Section 4 provides the concluding remarks of the paper.
Figure 1. Concept of the class-C linearized amplifier for portable ultrasound instruments.
Materials and Methods

Figure 2 shows the fabricated printed circuit board of the class-C amplifier with the resistor divider network and the transistor linearizer network. The class-C amplifiers work in high-voltage environments, so power resistors, electrolytic capacitors, and high-power choke inductors were used. Cooling fan system noises may affect the performance of portable ultrasound instruments, and ultrasound probes have limited structures and sizes for containing cooling fan systems. Additionally, class-C amplifiers generate little heat [16]. Therefore, instead of cooling fan systems, 1-cm² heat sinks attached to the top of the power MOSFETs were used for the experimental measurements.

Figure 3 shows the schematic diagrams of the class-C amplifier with the resistor divider network and the transistor linearizer network. The class-C amplifiers were composed of two-stage amplifiers. In Figure 3, the typical resistor divider network was composed of the resistors R_L2 and R_L3. The resistor R_L1 was used to block the alternating current (AC) signals from the input port. The shunt choke inductor L_d1 was used to minimize the DC voltage drop, as the class-C amplifiers had a low maximum output voltage swing. The electrolytic capacitors (C_G1 = 10 µF and C_D1 = 220 µF) with three additional capacitors (C_G2 = C_D2 = 0.1 µF, C_G3 = C_D3 = 1000 pF, and C_G4 = C_D4 = 47 pF) were used to reduce the noise signals from the DC power supplies. Power MOSFETs (PD57018, STMicroelectronics, Geneva, Switzerland) were used because the operating frequency and drain-source voltage ranges of this power MOSFET are 1 GHz and 65 V, respectively. All electronic components were guaranteed to work under a high-voltage environment on the printed circuit board.
In Figure 3, a transistor linearizer network, instead of a resistor divider network, was used to improve the bias voltage conditions for the class-C amplifier. The transistor linearizer network was designed to handle large voltage amplitudes from the input port of the amplifier, as the class-C amplifier has a low DC operating point, resulting in reduced output voltage amplitudes for low input signal amplitudes. This phenomenon is undesirable for piezoelectric transducers with low sensitivity in portable ultrasound instruments [7]. Therefore, large-signal pulsed-sinusoidal inputs are needed for the class-C amplifier, and these could affect the DC bias voltages because the amplified large signals on the drain-source voltages of the power MOSFETs (P₁ and P₂) could reduce the maximum allowances of their gate-source voltages [21]. Therefore, a linearizer circuit is needed to improve the linearity performance of the amplifiers.
Figure 4 describes the operating mechanisms of the resistor divider and transistor linearizer networks for class-C amplifiers. In the designed class-C amplifiers, the DC bias circuits require the amplifiers to be capable of sustaining an output voltage because large-signal input voltage amplitudes of up to 5 V_p-p are used as the input signal. As shown in Figure 4a, the DC bias voltage of the resistor divider network is defined as:

V_G1 = V_DD · R_L3 / (R_L2 + R_L3),    (1)

where V_DD is the supply voltage of the class-C amplifier and class-C linearized amplifier.
The large input signal can pass through the blocking resistor R_L1, so large-signal inputs of around 5 V_p-p could affect the DC bias voltages of the class-C amplifier. Additionally, the resistance values R_L2 and R_L3 in the resistor divider network depend on the temperature variance. Therefore, the resistor divider network might be undesirable when using large-signal inputs of around 5 V_p-p for class-C amplifiers, especially because the developed class-C amplifiers might not include cooling fan systems in portable ultrasound instruments. Additionally, hot-carrier injection could make it difficult to maintain constant DC bias voltages for the amplifier because power MOSFET devices are sensitive to large input signals [22].

In Figure 4b, the high-frequency, large input signal is filtered out by the low-pass filter network, which is composed of the 120 pF capacitor C_b1 and the 1 µH inductor L_b1. The cut-off frequency of this low-pass filter network is defined as:

f_c = 1 / (2π √(L_b1 · C_b1)).    (2)

The calculated and measured cut-off frequencies of the low-pass filter were 14.54 MHz and 14.15 MHz, respectively, and the input signals are 25 MHz, such that input signal components above 15 MHz could be suppressed in the transistor linearizer network. The resistance values R_b2 and R_b4 were selected to be higher than the parasitic impedances of the power MOSFET (T₁, SQ2318AES-T1, Vishay Intertechnology, Malvern, PA, USA) in order to reduce the undesirable parasitic effects and provide a DC bias voltage. Figure 4c shows the equivalent circuit model of the transistor linearizer network using the large-signal power MOSFET library model [24]. The parasitic resistances and inductances of the power MOSFET could be removed from the model because of their small values [24]. Figure 4d shows the simplified equivalent circuit model of the transistor linearizer network. Because the inverse of the transconductance of the power MOSFET (1/g_mT1) was smaller than the combined resistances R_b2 and R_b4, the DC bias voltage for the transistor linearizer network (V_G2) is simplified as follows:

V_G2 ≈ V_DD · (1/g_mT1) / (R_b3 + 1/g_mT1) = V_DD / (1 + g_mT1 · R_b3),    (3)

where g_mT1 is the transconductance value of the power MOSFET T₁.
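As a numerical check of the cut-off frequency of Equation (2) with the component values quoted above, one may compute (a small Python sketch of ours):

```python
import math

L_b1 = 1e-6     # 1 uH inductor
C_b1 = 120e-12  # 120 pF capacitor

f_c = 1 / (2 * math.pi * math.sqrt(L_b1 * C_b1))
print(f"cut-off frequency: {f_c / 1e6:.2f} MHz")  # ~14.5 MHz, consistent with the quoted 14.54 MHz
```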
where g mT1 is the transconductance value of the power MOSFET (T 1 ). In Figure 4b, the high frequency and large input signal could be filtered out by the low-pass filter network, which is composed of the 120 pF capacitor (Cb1) and 1 μH inductor (Lb1). The cut-off frequency of this low-pass filter network is defined as: The calculated and measured cut-off frequencies of the low-pass filter were 14.54 MHz and 14.15 MHz, respectively, and the input signals are 25 MHz, such that input signals above 15 MHz could be suppressed in the transistor linearizer network. The resistance values (Rb2 and Rb4) were selected to be higher than the parasitic impedances of the power MOSFET (T1, SQ2318AES-T1, Vishay Intertechnology, Malven, PA, USA) to reduce the undesirable parasitic effects and provide a DC bias voltage. Figure 4c shows the equivalent circuit model of the transistor linearizer network using the large signal power MOSFET library model [24]. The parasitic resistances and inductances of the power MOSFET could be removed because they were small values [24]. Figure 4d shows the simplified equivalent circuit model of the transistor linearizer network. Because the inverse value of the transconductance of the power MOSFET (1/gmT1) was smaller than the combined resistances (Rb2 and Rb4), the DC bias voltage for transistor linearizer network (VG2) is simplified as follows. The DC bias voltages were dependent on the transconductance (g mT1 ) of the power MOSFET and resistance (R b3 ). Additionally, the power MOSFET transconductance was relatively less dependent on temperature variances and was constant with large signal inputs [25]. In the resistor divider network, two resistors (R L2 and R L3 ) could be sensitive to temperature variances because the temperature variances were dependent on the resistance values. In the transistor linearizer network, one resistor could be sensitive to temperature variances. Compared with the resistor divider network, the transistor linearizer network might be less susceptible to temperature variances. Therefore, the transistor divider network might be stable for temperature variances caused by large input signals.
To reduce the temperature variances caused by the large input signals, a digitally programmed lookup-table memory using analog-to-digital and digital-to-analog converter electronics is another solution [26,27]. However, these electronics are not desirable when using array transducers for portable ultrasound instruments because they are bulky and power consuming; instruments with limited size and architecture are preferred instead. For the amplifier measurement, the initially measured DC bias voltages were both 2.70 V for the resistor divider network and the transistor linearizer network of the class-C amplifiers. As described in the introduction, the expected and simulated data of the amplifier do not contain accurate temperature parameters, so the measured bias voltages of the class-C amplifier and class-C linearized amplifier at each hour are presented in Table 1. These amplifiers were implemented to reduce the system size for portable ultrasound instruments; therefore, the temperature dependences would affect the performance of the class-C amplifiers. The DC bias voltage of the class-C amplifier was reduced from 2.70 V to 2.41 V, whereas that of the class-C linearized amplifier was reduced from 2.70 V to only 2.67 V. Therefore, the class-C linearized amplifier was less temperature dependent. In the results section, all amplifier performances were measured after 4 h. Because of the higher DC bias voltage, higher voltage gains and power consumptions could be expected for the class-C linearized amplifier.
Compared with class-A amplifiers, class-C amplifiers conduct output current only during part of each cycle [16]. In addition, the output current and conduction angle differ between the class-C amplifier and the class-C linearized amplifier, so the different conduction angles and output currents must be taken into account when calculating voltage gain and DC power consumption. In the standard reduced-conduction-angle form, the output peak-to-peak voltage of the amplifiers can be represented as [11,16,23]

V_out(p-p) = i_out × R_load × (θ − sin θ) / (π(1 − cos(θ/2))),

where i_out is the output current, R_load is the load resistance, and θ is the conduction angle of the class-C amplifier. The voltage gain of the class-C amplifiers (G) is the output peak-to-peak voltage divided by the input peak-to-peak voltage:

G = V_out(p-p) / V_in(p-p).

The DC power consumption (P_DC) of the class-C amplifier and class-C linearized amplifier is [16,18,22]

P_DC = V_DC × i_out × (2 sin(θ/2) − θ cos(θ/2)) / (2π(1 − cos(θ/2))),

where V_DC is the drain supply voltage; the output currents (i_out) and conduction angles (θ) differ between the two amplifiers. The power MOSFET transistor library provided by the manufacturer is inaccurate in high-voltage environments for generating theoretical model parameters, so measured performances are presented to characterize the class-C amplifier and class-C linearized amplifier.
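To make the conduction-angle dependence concrete, the following Python sketch evaluates the relations above; all numerical values (output current, load, supply voltage, input amplitude, conduction angle) are illustrative assumptions, not the measured parameters of the designed amplifiers.

import math

def fundamental_current(i_out, theta):
    """Amplitude of the fundamental component of the drain-current pulse."""
    return i_out * (theta - math.sin(theta)) / (2 * math.pi * (1 - math.cos(theta / 2)))

def dc_current(i_out, theta):
    """DC (average) component of the drain-current pulse."""
    return i_out * (2 * math.sin(theta / 2) - theta * math.cos(theta / 2)) / (
        2 * math.pi * (1 - math.cos(theta / 2)))

# Assumed example values: 0.5 A peak current, 50 ohm load, 5 V supply, 1 Vp-p input
i_out, r_load, v_dc, v_in_pp = 0.5, 50.0, 5.0, 1.0
theta = math.radians(120)  # class-C operation: conduction angle below 180 degrees

v_out_pp = 2 * fundamental_current(i_out, theta) * r_load
gain_db = 20 * math.log10(v_out_pp / v_in_pp)
p_dc = v_dc * dc_current(i_out, theta)
print(f"Vout(p-p) = {v_out_pp:.2f} V, G = {gain_db:.1f} dB, P_DC = {p_dc:.2f} W")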
Results
The performance of the class-C amplifiers with the resistor divider and transistor linearizer networks needs to be verified because the small piezoelectric transducers used in portable ultrasound instruments have low sensitivity [28]. Since the class-C amplifier was designed for portable ultrasound instruments, its voltage gain and DC power consumption were evaluated, and because class-C amplifiers are intended to reduce unwanted heat, all performances were also measured with the temperature-variation effects taken into account. Figure 5a shows the measurement setup, and a photograph of it, for the voltage gain, gain deviation, and DC power consumption versus input voltage. Two power supplies provided the DC bias voltages, including the gate-source bias, for the class-C amplifiers. A function generator (DG5071, Rigol Technologies, Beijing, China) fed pulsed sinusoidal waveforms of up to 5 V p-p into the designed class-C amplifiers. The amplified signals were attenuated and recorded on an oscilloscope (MSO2024B, Tektronix Inc., Beaverton, OR, USA). Figure 5b-d shows the voltage gain, gain deviation, and DC power consumption versus input voltage of the class-C amplifier and class-C linearized amplifier, respectively. As shown in Figure 5b, the lowest voltage gain of the class-C linearized amplifier (17.14 dB) was still higher than the maximum voltage gain of the class-C amplifier (14.80 dB), confirming that the transistor linearizer network is less dependent on temperature variations and increases the voltage gain for the same input voltage. In Figure 5c, the absolute voltage gain deviation of the class-C linearized amplifier (−2.82 dB) is lower than that of the class-C amplifier (8.78 dB) at a 5 V p-p input voltage; the linearizer therefore improves the linearity of the class-C amplifier. In Figure 5d, the DC power consumption of the class-C linearized amplifier (0.975 W) is slightly higher than that of the class-C amplifier (0.775 W); however, both remain below 1 W, so both amplifiers are suitable for portable ultrasound instruments. Figure 6 shows the voltage gain, gain deviation, and DC power consumption versus frequency of the class-C amplifier and class-C linearized amplifier, respectively, because some piezoelectric transducers have relatively wide bandwidths. As shown in Figure 6a, the maximum voltage gains of the class-C amplifier and class-C linearized amplifier were 14.80 dB and 17.14 dB, respectively. As shown in Figure 6b, the maximum voltage gain deviation of the class-C linearized amplifier (1.95 dB) is lower in magnitude than that of the class-C amplifier (−15.93 dB) at 50 MHz. The transistor linearizer therefore also improves the linearity of the class-C amplifier, with less temperature dependence, over a wide frequency range.
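The gain and gain-deviation figures above follow from the recorded input and output amplitudes; a minimal sketch using one common definition of gain deviation (relative to the gain at the smallest input) is given below, with placeholder amplitudes rather than the recorded oscilloscope data.

import math

def voltage_gain_db(v_out_pp, v_in_pp):
    """Voltage gain in dB from peak-to-peak amplitudes."""
    return 20 * math.log10(v_out_pp / v_in_pp)

# Illustrative input sweep (V p-p) and corresponding outputs (V p-p);
# these numbers are placeholders, not the recorded measurement data.
v_in = [1.0, 2.0, 3.0, 4.0, 5.0]
v_out = [7.2, 14.3, 21.0, 27.5, 33.4]

gains = [voltage_gain_db(o, i) for o, i in zip(v_out, v_in)]
reference = gains[0]  # gain at the smallest input as the linearity reference
for vi, g in zip(v_in, gains):
    print(f"Vin = {vi:.0f} Vp-p: gain = {g:.2f} dB, deviation = {g - reference:+.2f} dB")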
As shown in Figure 6c, the DC power consumption of the class-C amplifier is constant at 0.775 W between 10 MHz and 50 MHz, while that of the class-C linearized amplifier rises slightly from 0.975 W at lower frequencies to 0.987 W at 50 MHz. The DC power consumption between 10 MHz and 50 MHz nevertheless remains below 1 W, so the class-C amplifiers are still useful for reducing the DC power consumption of portable ultrasound instruments. However, because of their lower voltage gains, the designed class-C amplifiers were operated at higher frequencies and higher input voltages; this raises the temperatures of the power MOSFETs and power resistors, which degrades the performance of the class-C amplifiers as the frequency increases, as shown in Figure 6.
The theoretical equations of the class-C amplifiers were presented above. However, the theoretical and modeled data of the amplifiers are unreliable when electronic amplifier devices are used in high-voltage environments, because the power MOSFET models do not capture signal distortion accurately even at sub-decibel levels [21], and the temperature model parameters for power MOSFETs in the simulation model are inaccurate under high-voltage conditions [22]. Therefore, pulse-echo measurements were performed to evaluate the characteristics of the amplifiers. Figure 7 shows a typical pulse-echo measurement setup for evaluating class-C amplifier performance with a piezoelectric transducer [29]. Five-cycle sinusoidal waveforms generated by the function generator (DG5071) were fed into the class-C amplifiers, with DC bias voltages provided by DC power supplies. An expander circuit was constructed to reduce fluctuation of the amplified signals generated by the amplifiers, and a limiter circuit was used to protect the preamplifier (AU-1525, L3 NARDA-MITEQ Inc., Hauppauge, NJ, USA) and the oscilloscope (MSO2024B). The amplified sinusoidal waveforms drive the focused transducer (Olympus NDT, Waltham, MA, USA) to generate acoustic waves, which are reflected from the target. The reflected acoustic waveforms were converted by the piezoelectric transducer into electrical waveforms, while the discharged electrical sinusoidal waveforms were suppressed by the limiter circuit. The electrical waveforms received by the piezoelectric transducer were amplified by the preamplifier and recorded on the oscilloscope. The echo amplitude when using the class-C linearized amplifier (1.29 V p-p) was higher than that when using the class-C amplifier (0.569 V p-p).
The −6 dB bandwidth of the normalized echo spectrum when using the class-C linearized amplifier (18.19%) is similar to that when using the class-C amplifier (17.88%), and the center frequency of the normalized spectrum when using the class-C linearized amplifier (23.35 MHz) is similar to that when using the class-C amplifier (24.12 MHz). Therefore, the linearizer circuit increases the sensitivity of the piezoelectric transducer without altering its spectral characteristics, which is useful for portable ultrasound instruments because miniaturized array-type transducers typically have low sensitivity.
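The center frequency and −6 dB bandwidth figures above can be extracted from the recorded echo spectrum; the sketch below shows one way to compute them, using a synthetic tone burst as a stand-in for the measured echo (echo_spectrum_metrics is an illustrative helper, not code from the paper).

import numpy as np

def echo_spectrum_metrics(echo, fs):
    """Center frequency and -6 dB fractional bandwidth of an echo waveform."""
    spectrum = np.abs(np.fft.rfft(echo, n=4096))  # zero-padded for finer resolution
    freqs = np.fft.rfftfreq(4096, d=1.0 / fs)
    norm = spectrum / spectrum.max()
    band = freqs[norm >= 0.5]                     # -6 dB = half the peak amplitude
    f_low, f_high = band[0], band[-1]
    f_center = (f_low + f_high) / 2.0
    return f_center, 100.0 * (f_high - f_low) / f_center

# Synthetic 5-cycle tone burst near 25 MHz as a stand-in for a recorded echo
fs = 500e6                          # 500 MS/s sampling rate (assumed)
t = np.arange(0, 0.2e-6, 1.0 / fs)  # 0.2 us window -> 5 cycles at 25 MHz
echo = np.sin(2 * np.pi * 25e6 * t) * np.hanning(len(t))

fc, bw = echo_spectrum_metrics(echo, fs)
print(f"center frequency = {fc / 1e6:.2f} MHz, -6 dB bandwidth = {bw:.1f} %")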
Conclusions
The class-C amplifier is one of the most efficient amplifier classes and can therefore extend the battery life of portable ultrasound instruments. However, the reduced linearity of the class-C amplifier can degrade the sensitivity of the piezoelectric transducer, a main component of the instrument; this is a critical issue because the echo signals generated by piezoelectric transducers are relatively weak. Therefore, a transistor linearizer scheme for the class-C amplifier was proposed to increase sensitivity while maintaining low DC power consumption and reducing unwanted heat.
Compared with the resistor divider network, the transistor linearizer network for class-C amplifiers effectively attenuates large pulsed sinusoidal signals through its low-pass filter network and provides a DC bias voltage that is stable against temperature variations. DC bias adjustment using a programmed lookup-table memory, an analog-to-digital converter, and a digital-to-analog converter could be another way to reduce the temperature variations caused by large input signals, but it is unsuitable for piezoelectric transducers in portable ultrasound instruments because such electronics are bulky and power-consuming.
To verify the capability of the transistor linearizer network for the class-C amplifiers, the voltage gain and DC power consumption were measured versus input voltage amplitude. The measured maximum voltage gains of the class-C amplifier and class-C linearized amplifier were 14.80 dB and 17.14 dB, respectively; the class-C linearized amplifier thus achieves higher voltage gain over a wide input voltage range, improving the sensitivity performance of the piezoelectric transducers. The DC power consumptions of the class-C amplifier and class-C linearized amplifier were 0.775 W and 0.975 W, respectively, both still below 1 W.
To assess suitability for portable ultrasound instruments, the performance of the class-C amplifiers with the resistor divider and transistor linearizer networks was tested. The echo amplitude when using the class-C linearized amplifier (1.29 V p-p) was improved compared with that when using the class-C amplifier (0.569 V p-p), while the −6 dB bandwidth of the normalized echo spectrum (17.88% for the class-C amplifier versus 18.19% for the class-C linearized amplifier) and the center frequency of the normalized spectrum (24.12 MHz versus 23.35 MHz) remained similar. The improved sensitivity afforded by the higher voltage gain of the class-C linearized amplifier could therefore benefit portable ultrasound instruments.
The Effect of Age on Mortality in Patients With COVID-19: A Meta-Analysis With 611,583 Subjects
Objectives Initial data on COVID-19 infection has pointed out a special vulnerability of older adults. Design We performed a meta-analysis with available national reports on May 7, 2020 from China, Italy, Spain, United Kingdom, and New York State. Analyses were performed by a random effects model, and sensitivity analyses were performed for the identification of potential sources of heterogeneity. Setting and participants COVID-19–positive patients reported in literature and national reports. Measures All-cause mortality by age. Results A total of 611,583 subjects were analyzed and 141,745 (23.2%) were aged ≥80 years. The percentage of octogenarians was different in the 5 registries, the lowest being in China (3.2%) and the highest in the United Kingdom and New York State. The overall mortality rate was 12.10% and it varied widely between countries, the lowest being in China (3.1%) and the highest in the United Kingdom (20.8%) and New York State (20.99%). Mortality was <1.1% in patients aged <50 years and it increased exponentially after that age in the 5 national registries. As expected, the highest mortality rate was observed in patients aged ≥80 years. All age groups had significantly higher mortality compared with the immediately younger age group. The largest increase in mortality risk was observed in patients aged 60 to 69 years compared with those aged 50 to 59 years (odds ratio 3.13, 95% confidence interval 2.61-3.76). Conclusions and Implications This meta-analysis of more than half a million COVID-19 patients from different countries highlights the determinant effect of age on mortality, with relevant thresholds at age >50 years and, especially, >60 years. Older adult patients should be prioritized in the implementation of preventive measures.
In December 2019, in South China, a new type of acute respiratory infection caused by a novel coronavirus (SARS-CoV-2) was discovered, known as coronavirus infectious disease-19 (COVID-19).
By the end of February 2020, it had become a pandemic and a public health emergency worldwide.1 The clinical severity of the infection ranges from asymptomatic or mildly symptomatic patients to critical illness with bilateral pneumonia leading to multiorgan failure2; it is therefore essential to identify prognostic factors related to the more severe forms of the disease and to mortality.
Initial data have pointed out a special vulnerability of older adults. Case series have identified age as an independent prognostic factor for mortality,3 and national registries have shown a high mortality rate among patients older than 80 years.4-6 Therefore, older adults seem to have a higher proportion of severe cases of COVID-19 and fatal outcomes. The present study aims to analyze the available data on mortality in the older adult population compared with its younger counterparts. The authors declare no conflicts of interest.
Methods
We performed a systematic search [using PubMed, Embase, the Cochrane Central Register of Controlled Trials (CENTRAL), and Google Scholar], without language restriction, for papers using the Medical Subject Headings terms "Coronavirus," "Covid-19," "Mortality," "Clinical outcomes" and "Clinical course" up to May 7, 2020. We also searched for national reports on the official health services' websites of all European countries. The primary outcome was all-cause death. Of the 17 studies that reported clinical features of patients who died versus survivors, most were hospital registries,3,7-9 4 were national reports (from China,4 Italy,5 Spain,6 and the United Kingdom10), and 1 was a publication from Northwell Health, the largest academic health system in New York State.11 Hospital registries did not include age distribution and, therefore, could not be included. Older adult patients were defined as those aged 80 years or older.
We performed a meta-analysis in line with recommendations from the Cochrane Collaboration and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement.12 Clinical features and mortality rates were available in all studies. Relative risk reductions and percentage incidences were used. The study-specific standard errors of the estimated odds ratios were used to model the within-study variation. The percentage of variability across studies attributable to heterogeneity beyond chance was estimated using the I2 statistic. Once heterogeneity was observed, and assuming that the study effect sizes differed and that the collected studies represented a random sample from a larger population, all analyses were performed with a random effects model. Sensitivity analyses were performed to identify potential sources of heterogeneity between studies, with meta-regression analyses and the Harbord test to assess small-study effects.13 All analyses were performed using Stata, release 14.3 (StataCorp LP, College Station, TX).
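As an illustration of the pooling step (the analyses themselves were run in Stata), a minimal DerSimonian-Laird random-effects sketch in Python is shown below; the study-level log odds ratios and standard errors are placeholders, not the values extracted from the five reports.

import math

def random_effects_pool(log_ors, ses):
    """DerSimonian-Laird pooling of log odds ratios.

    Returns the pooled OR, its 95% CI, and the I^2 heterogeneity statistic.
    """
    w = [1.0 / se**2 for se in ses]                       # fixed-effect weights
    pooled_fe = sum(wi * y for wi, y in zip(w, log_ors)) / sum(w)
    q = sum(wi * (y - pooled_fe) ** 2 for wi, y in zip(w, log_ors))
    k = len(log_ors)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
    w_re = [1.0 / (se**2 + tau2) for se in ses]           # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_ors)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, 100.0 * (q - (k - 1)) / q) if q > 0 else 0.0
    lo, hi = pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled
    return math.exp(pooled), math.exp(lo), math.exp(hi), i2

# Placeholder per-registry values, not the extracted registry data
log_ors = [math.log(x) for x in (2.8, 3.4, 3.0, 3.6, 2.5)]
ses = [0.10, 0.08, 0.12, 0.09, 0.15]
or_pooled, lo, hi, i2 = random_effects_pool(log_ors, ses)
print(f"pooled OR = {or_pooled:.2f} (95% CI {lo:.2f}-{hi:.2f}), I^2 = {i2:.0f}%")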
Results
A total of 611,583 subjects were analyzed; the mean age was 61.3 years and 192,786 (31.5%) were male (Table 1). A total of 141,745 patients (23.2%) were aged ≥80 years; the percentage of older adults was different in the 5 reports, the lowest being in China and the highest in the United Kingdom and New York State. The overall mortality rate was 12.10% and it varied widely between countries, the lowest being in China and the highest in the United Kingdom and New York State.
According to age, mortality was <1% in patients aged <50 years, and it increased exponentially after that age (Figure 1). As expected, the highest mortality rate was observed in patients aged ≥80 years. All age groups had significantly higher mortality compared with the immediately younger age group (Figure 2). The largest increase in mortality risk was observed in patients aged 60 to 69 years compared with those aged 50 to 59 years. Patients aged ≥80 years had a 60% higher risk of death compared with patients aged 70 to 79 years, but the risk was 6-fold higher (odds ratio 6.25, 95% confidence interval 5.38-7.25; P < .001) when they were compared with all patients aged <80 years.
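For readers who wish to reproduce such age-group comparisons, the odds ratio and its 95% confidence interval follow from a simple 2 × 2 table; the counts in the sketch below are illustrative only, not the registry data.

import math

def odds_ratio(deaths_a, total_a, deaths_b, total_b):
    """Odds ratio of death in group A vs group B, with a Woolf 95% CI."""
    a, b = deaths_a, total_a - deaths_a
    c, d = deaths_b, total_b - deaths_b
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Illustrative counts for two adjacent age groups (not the registry counts)
or_, lo, hi = odds_ratio(900, 10000, 300, 10000)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")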
Significant heterogeneity (P < .001) was observed. The funnel plot is presented in Figure 2. Meta-regression identified sample size (P = .002), country (P = .001), and mean age (P = .001) as significant sources of heterogeneity; a small-study effect was also observed (Harbord test, P = .013).
Discussion
The meta-analysis of currently available national and regional reports of patients with COVID-19 infection highlights the effect of age on mortality. These results have important clinical implications, such as for specific preventive measures and the clinical management of COVID-19 patients.
Since the start of the pandemic, age has been outlined as the key determinant of prognosis in COVID-19 patients. Based on the early statistical data from China, the case-fatality rate (CFR) increases markedly from the age of 60, reaching 14.8% in those older than 80 years.14 Initial data from Italian patients also showed that mortality increased significantly in septuagenarian patients and almost tripled in octogenarians.15 In a Chinese cohort study, age was identified as an independent predictor of mortality, with an odds ratio of 1.1 (95% confidence interval 1.03, 1.17) for each year.3 Our analysis of 611,583 patients shows a mortality increase related to age; this is evident from patients aged ≥60 years onwards, increasing significantly with each decade of life. The highest mortality therefore occurs in patients aged ≥80 years, in whom it was 6 times higher than in younger patients.
These findings are consistent with the higher susceptibility to the infection and the severe clinical manifestations observed in older adult patients.3,16 This could be influenced both by the physiological aging process and, especially, by the greater prevalence among older adults of frailty and comorbidities, which contribute to a decrease in functional reserve that reduces intrinsic capacity and resilience and hinders the fight against infections.17 In this line of thought, comorbidities such as cardiovascular disease, hypertension, and diabetes are highly prevalent in older adults and have been associated with worse outcomes in COVID-19.3 Many mechanisms underlying this worse prognosis in older adults with COVID-19 might explain our results and might lead to further research.18 Our study has several limitations, mainly derived from the data source. National reports might be designed and performed with different methodologies in each country, and population characteristics might also be quite different, especially between Europe and China.19 Specifically, in older adults, the percentage of those infected and deceased in nursing homes and socio-sanitary centers is not published; therefore, the real incidence and mortality of COVID-19 may be underestimated.
Future studies are necessary to analyze the factors that, beyond age, make this population especially susceptible and vulnerable to having a serious infection with complications and a higher mortality rate.
Conclusions and Implications
The meta-analysis of currently available data suggests a determinant effect of age on the mortality of COVID-19 patients, with relevant thresholds at age >50 years and especially >60 years. Nevertheless, more clinical and basic research is needed to elucidate the mechanisms involved in COVID-19 infection in older adults and to develop strategies to improve outcomes in these patients.
Social Enterprises of Immigrants: A Panacea for the Finnish Labour Market?
Research questions: The objectives of this study are to identify the need for Social Enterprises (SEs) as an alternative form of working in the Finnish labour market, to explore what alternative forms of co-operation between such SEs could be, and to examine how larger local companies can support the sustainable operation of these small SEs by employing immigrants and the long-term unemployed. Theory: This article draws on the corporate social responsibility (CSR) theory of traditional enterprises to better explain the factors that can facilitate co-operation between SEs and private enterprises, thereby achieving the sustainable operation of SEs that are run by socially disadvantaged groups of people in Lapland. Phenomenon studied: SE is a new phenomenon in Lapland. The few SEs in the region are struggling to maintain their existence in a sustainable manner. It is harder for the members of such SEs to run businesses, to work and to become successful in Lapland; however, obtaining employment or running a proprietorship is not an alternative (Yeasmin, 2016) for these disadvantaged people. Case context: The article contributes to the studies on the economic integration and labour market sustainability of immigrants and long-term job seekers, and particularly to the socio-economic integration of the long-term unemployed, by focusing on the necessity-driven social entrepreneurship networking model in a sparsely populated region, that being Lapland in Northern Finland. Findings: The disadvantaged groups need access to the labour market in Lapland, and social alliances between various partners (e.g., private, public and SE) under different social circumstances (e.g., CSR) can generate alternative options for co-operation to sustain the existence of such SEs in Lapland. The analysis yields recommendations for co-operation that might sustain SEs' existence and development and might also increase long-term prospects for the targeted SEs. Discussion: Lapland-based SEs operated by immigrants or the long-term unemployed raise issues and themes that do not fall within the responsibility of any single authority or sector. Successfully running SEs demands positive interaction and social innovation strategies among many social partners. Networking with a variety of public stakeholder groups alongside the private sector requires an investment of social resources for mapping the phenomena of the social economy, a social innovation process that might enable such SEs to achieve successful outcomes in Lapland.
There are a few SEs in the region that are struggling to maintain their existence. It is harder for the members of such SEs to run businesses or to work and to become successful in Lapland; however, obtaining employment or running a proprietorship is not an alternative (Yeasmin, 2016) for these disadvantaged people. Consequently, these disadvantaged groups need access to the labour market in Lapland, and social alliances between various partners (e.g., private, public and SE) under different social circumstances (e.g., CSR) can generate alternative options for co-operation to sustain the existence of such SEs in Lapland. SEs can also bring potential taxpayers into municipalities in the future.
There is no generally accepted definition of social responsibility. In Europe, the European Commission defines CSR as companies voluntarily incorporating social and environmental considerations into their business and their interaction with stakeholders (World Economic). The main objective of this study is to explore what kind of help and support measures can be expected from companies, and how larger local companies can support the sustainable operation of these small SEs by employing immigrants and the long-term unemployed.
The philanthropic responsibility of companies is to do common good for society, for example by co-operating with local SEs to enable the long-term unemployed to start low-threshold work. The study gathered companies' views on their CSR policies and aimed to ascertain whether companies comprehend that this kind of consideration towards SEs supports the employment opportunities of people with disabilities and the long-term unemployed, which offers greater social relief for sustaining the local labour market. The aim was to ascertain whether companies see that they have a responsibility towards vulnerable immigrants and for the sustainable development of the surrounding society, and how they integrate CSR into their procurement activities.
Theoretical Background
In the Scandinavian context, there is a question of whether CSR or any traditional or economic appraisal supports or hinders societal well-being (Stiglitz et al., 2009; Strand et al., 2015). In Scandinavia, CSR is often generalized as a concept that focuses on social issues, while social sustainability is related only to environmental issues (Carroll, 1999; Dahlsrud, 2008; Dyllick & Hockerts, 2002; Schwartz & Carroll, 2008; Strand et al., 2015). CSR, indeed, includes expressions of stakeholder engagement (Freeman et al., 2010; Rhenman, 1968) and of creating shared value (Porter & Kramer, 2011) for the development of society as a whole. Though, theoretically, stakeholder engagement has a long tradition in Scandinavia (Rhenman, 1968), it needs to practically demonstrate its commitment to reducing the social exclusion of those groups of people in Scandinavia who are at risk of labour market marginalization. Arctic countries such as Finland, Sweden and Denmark have a good reputation for institutional influences on CSR, which can undoubtedly facilitate socially responsible corporate behaviour (Strand, 2013, 2014; Strand et al., 2015). CSR policies should also create a dialogue between business and civil society (MEEF, 2020), which has recently been taken into account in Finland and Denmark, along with other Nordic countries. However, supporting social entrepreneurship by utilizing CSR is a relatively new issue in Finland, since SE per se is a new phenomenon in Finland (European Commission, 2014). The SE idea emerged in the 1970s in many European countries; in the Finnish literature, SE has been identified as an institution that supports people who are in a disadvantaged position in the labour market by Pättiniemi (2006), who linked SEs to the integration of such groups into society.
The share of long-term unemployed immigrants in Finland was 27.6% in 2018. The share was higher among those groups of immigrants who had resided in Finland for more than 10 years (see Table 1), due to immigrants' participation in integration courses and training during their first 3-5 years of residence. After finishing those training courses, immigrants usually obtain and complete an internship. Even after all this training, unemployment remains higher among immigrants, and the rate also differs according to country of origin and gender (see Figure 1). SEs still need visibility among business societies through CSR practices. If the disadvantaged group could receive employment or business support through SEs, this could decrease the risk of poverty (see Figure 2) among foreign-born people in Finland, especially males, who are at higher risk than females.
Although CSR has increased significantly over the past decade, there are a variety of attitudes towards CSR in companies in Lapland. There are also various definitions of the concept of social responsibility at the theoretical level.
Many opinions are in line with Frooman's (1997, p. 227) view that companies have different forms of CSR aimed at increasing social well-being. McWilliams and Siegel (2001, p. 117) state that CSR, done well, can also support companies' own goals; according to them, social responsibility goes beyond compliance with the law. Demonstrating CSR requires that companies strive to improve the ethics of their work by increasing social, environmental and economic well-being. When companies respect the ethical values of the community, the communities themselves and the surrounding natural environment, local people have confidence in the company while the business continues to thrive.
In our corporate interviews, views on CSR turned out to be very different (see the Discussion section). SEs in Lapland need more co-operation with the private sector, and some very tangible and practical perceptions can ease the workload of both SEs and the private sector. Broadening mindfulness of CSR can develop the corporate world's understanding of the positive value of SEs. Engaging corporations with SEs can build cross-sector partnerships and can fulfil the philanthropic responsibilities of the corporate world. Such co-operation will meet the goals of both parties, which will, in turn, facilitate social benefit (Szegedi et al., 2016). Here, we define CSR rather as the social aspects that need to be integrated into the operation of SEs.
CSR can create a model for social innovations as well as new solutions for market products and services (The young foundation social innovation overview, 2012). CSR will build a new market for the traditional business of immigrants and provide immigrants and the long-term unemployed access to the local labour market, which can improve vulnerable people's capabilities to work and increase the target group's networking possibilities. CSR can open visions for the private sector by merging social and commercial value creation (SIG, 2015;Szigeti & Csiszárik-Kocsir, 2014). In the study, CSR is a measure for remedying the social problems of the disadvantaged groups of people in a particular society. These remedies can increase the economic sustainability of the long-term unemployed by establishing new markets and services via SEs. This integrated model can reinforce initiatives to tackle the objectives of both parties, which could have a follow-on social impact (Szegedi et al., 2016).
In the literature, Beckman et al. (2006) demonstrate the social dynamics of CSR more concretely than before, showing how CSR can lead in a different direction that influences business to further explore ways of supporting society and societal agendas in the Arctic. Some works, indeed, state that CSR is two-way communication and that a community should invite the corporate world into building that community; networking and co-creating CSR need a response from all stakeholders to bridge the relational gap between business and society (Crane & Glozer, 2016; Nielsen & Thomsen, 2012). CSR-driven innovation has been highlighted in Nordic studies as a way of strengthening the unique co-operation between the five Nordic countries that can transform society and achieve economic success (Norden, 2010). However, this concept of CSR-driven innovation leads businesses towards growth, such as small and medium enterprise (SME) development. A similar concept of CSR-driven innovation in terms of social development has been embedded in recent Nordic literature: CSR has been applied as a lens to address the role of entrepreneurs, not only in economic and environmental development but also in a societal development context, such as supporting sports clubs in Sweden and Finland (Ahonen & Persson, 2020).
Many unsolved problems of societies can be solved through social entrepreneurship (Dees, 2007; Thompson & Doherty, 2006). SEs are described as new engines that can reform society (Dees, 2007), and they need to tempt the corporate world into adopting a different approach to co-operation that supports social values. Such co-operation does not necessarily require CSR theory; however, applying CSR theory can build a common understanding between the parties so that they reach a common goal by creating shared values, thereby fulfilling their respective missions.
Method
The study mapped the CSR policies of a variety of Lapland-based companies, ranging from micro-sized businesses with a sole entrepreneur and annual sales of less than €2 million to large-scale companies with over 250 employees and an annual turnover exceeding €40 million.
According to the Lapland regional authority (see Table 2), there are four different categories, based on the number of employees and turnover in 2018.
We made a random comparison to ascertain what the social responsibility of 15 companies (involved in hospitality and tourism, mining, energy and water supply, construction, manufacturing, transportation and warehousing, and banking services) in Lapland looks like and whether they have a strategy that covers social responsibility by supporting the long-term unemployed. In addition to these comparisons, we conducted three focus group discussions, each lasting 2-3 h. The questionnaires we created (see Table 3) helped us to discuss the phenomenon. The discussions were held in Finnish and were translated in the later phases of analysis by the author, who has a good command of both Finnish and English. Certain criteria were set up for involving participants: we invited those who would be beneficial to our research, such as companies, relevant stakeholders, SEs and three company leaders (CLs) from three different companies. These invited participants took part in the discussions, and the whole group comprised different stakeholders (S) along with four representatives (RS) from four different SEs. Two of the CLs were interviewed by the stakeholders (S) and representatives of the SEs (RS) on two different occasions, and two CLs were interviewed by the researchers: one face to face and the other over the phone. In total, seven CLs were interviewed in the study.
In addition to the focus group discussions, we conducted in-depth interviews with four different company owners (chief executive officers, CEOs) using a similar semi-structured set of questions regarding their CSR strategies and their support measures for SEs operated by the vulnerable members of society. The sampling strategies were influenced by the stakeholder groups (12 participants), who were invited based on professional tasks directly related to the study topic.
Some of the interviews were recorded, and some were noted on paper. The study followed a systematic coding of the data (see Table 4), based on the research insights.
A relational approach to coding (Josselson, 2013) was used in the analysis phase. To ensure the validity and reliability (Lincoln & Guba, 1985; Shenton, 2004) of the research, the findings were re-checked by the stakeholder groups. [Table 3, questionnaire excerpt: "...what concrete support would they like to provide to SEs so that the SEs can operate sustainably? 6. How do companies operating in Lapland see the features of socially responsible procurement?"]
[Table 5 note: "Networking support" row; ü = feasible, x = co-operation not yet considered feasible within a quick time frame. Source: Authors' elaboration.]
Results
The companies were asked what concrete support they would like to provide to the co-ops so that SEs could operate sustainably. The responses to the in-depth interviews revealed that many companies support the extracurricular activities of children and young people as part of their philanthropic responsibility (CL, 2019). For example, some companies have an annual allocation of €3,000 towards such activities, and the use of the funds is redefined each year (FGD:S, 2019). Thus, support can be focused on an acute emerging need, like the fight against climate change (CL, 2019). Some companies are sceptical about the concept of CSR and feel that Finnish business taxes are high; according to them, by paying high taxes they are already fulfilling their societal responsibility (CL, 2019). With that tax money, the government supports the welfare of the long-term unemployed, which these companies consider to be sufficient support (CL, 2019). Such companies do not see that they have to take on any philanthropic responsibility other than paying taxes; it is the responsibility of the state to respond to societal challenges and to allocate tax revenue to support SEs or co-ops. Based on the interviews, societal work very much lies with the state, not the companies (FGD:CL, 2019). As the enterprises revealed, specific forms of co-operation (see Table 5) between traditional enterprises and SEs are a significantly important subject that relevant actors need to develop, although the enterprises held that such co-operation should not necessarily be driven by CSR policy.
Traditional enterprises already have many strategies for running a business, taking care of their employees and delivering quality services to their customers, and they believe that providing social services is not one of their primary responsibilities, nor is it one of the main criteria towards fulfilling their philanthropic responsibilities (CL, 2019). According to our interviewees, co-operating with SE is possible in various manners, all of which could be part of CSR and do not necessitate that enterprises have sufficient or particular CSR strategies (CL, 2019).
At a general level, municipalities do not support companies; rather, entrepreneurship is their responsibility. Business representatives believe that the government must have policies in place to support the sustainable business activities of private companies (CL, 2019). The role of companies is to improve the well-being and working conditions of their employees and to strengthen their skills through further training (FGD:CL, 2019).
Some of the interviewees believe that their companies use a lot of resources to protect the environment and act in many ways to reduce environmental pollution (CL, 2019). Businesses are also adequately equipped for the rapidly growing need to solve environmental problems. Environmental protection is costly, and an environmentally friendly company is part of the CSR of these companies, which is simultaneously classed as practical sustainable economic management (FGD:CL, 2019).
In addition to demonstrating environmental responsibility, companies allocate resources towards the well-being of children and young people. Some companies also regularly support positive social campaigns and events, provide advice on promoting small entrepreneurship and increase the skills and entrepreneurial skills of small entrepreneurs. There are also entrepreneurship lectures for school-aged children (CL, 2019;FGD:CL, 2019).
None of the companies we interviewed had previously supported co-ops in Lapland. This opportunity within the project was a new way for them to support, inspire and improve the living conditions of their members.
We collected some random data via a web-based open survey of about 15 traditional enterprises in Lapland. We found only five traditional enterprises that maintain a CSR policy. A more evolved form of using CSR was not revealed through the web-based survey. The adoption of a CSR strategy is not the main concern of many traditional enterprises (FGD, 2019), and, indeed, according to a recent OECD report, Finnish companies have not focused on CSR as much as their European counterparts have (European Commission/OECD, 2018).
Also, according to the study, CSR strategies could add significant encouragement for enterprises to co-operate with local SEs but are not something that can accelerate suitable policy measures soon (FGD:RS, 2019). While discussing increasing corporate charity to support CSR and entrepreneurship, few companies felt that this could be possible in the coming years. Such co-operation could help bridge the gap between doing business and the society. Companies expect to be presented with a variety of ways in which to collaborate so that they can more easily make decisions to join with SEs. New ideas for cooperation with SEs could lead companies to pursue socially responsible policies in the future. Some actors or sectors need an incentive to establish co-operation between enterprises and SEs (FGD:RS, 2019).
It always demands more resources to thoroughly investigate a community before supporting them. Enterprises, to a great extent, lack the resources to increase their understanding of a particular problem in a certain social sector (FGD:CL, 2019). The generation of measurable objectives of co-operation between an SE and traditional business is possible only if a third party can stimulate incentives for possible measures to enhance the perception of social innovation between these two parties. Enterprises demand a precise recommendation of specific forms of co-operation from any party, be it SE or a third party. Co-operation could be highly circumstantial and requires regular updating (FGD:CL, 2019).
This explorative research included interviews that sought to ascertain which services small and medium-sized businesses could obtain from a co-op or SE, whether companies that take social responsibility are interested in supporting co-ops, and, if so, how they would like to help them operate. The aim was also to ascertain whether companies see that they have a responsibility towards vulnerable immigrants and the surrounding society, and how they integrate CSR with their procurement activities.
Though our respondents' views on the concept of CSR were varied, both the respondents and stakeholder groups generated some recommendations for future co-operation. The aforementioned recommendations could support co-op activities in an immediate manner (Table 6).
Discussion
Based on the findings of the study, social alliances between these two parties require the involvement of a third-party mediator to advance the practices of CSR. The third-party mediator works like an advisory board to ease the social alliances between SEs and traditional enterprises. According to Figure 3, the role of the advisory board is to enable efficient internal processes and to structure the co-operation model, encompassing a professional way of working with a focus on well-implemented forms that enable the partners to manage a sustainable relationship and their responsibilities to each other (see Figure 3). The role of the third party is to discover shared value-oriented objectives with both parties.
Many companies lack a CSR department, and establishing CSR departments in traditional enterprises is important but demands resources. If there are concrete policy measures aimed at accelerating regional social innovation and partnership, these have an added value for the co-operation.
Based on our findings, every party seeks a sustainable way of co-operating that has a better social impact and win-win circumstances for all parties involved. This sustainable co-operation can broaden the positive impact of their business growth, and this kind of business counselling can support the business growth of small SEs and can also create a willingness for partners to work together to develop social co-operation. The third-party involvement maximizes the spread of social innovations by negotiating with companies on what kind of support the company can provide to the SEs based on their social responsibility. Subsequently, the third party will manage conflicting situations and share good experiences among social partners (such as private companies and the public sector) about the actions and needs of SEs.
The factors that could impede co-operation are identified in Figure 3, and according to the findings, third-party involvement with these factors can either enable or hinder co-operation, which can, in turn, shape proper private social policies regarding the opportunities for business growth and creating jobs.
As many traditional enterprises lack explicit mindfulness of CSR strategies, support is required to rethink the emergence of merging CSR with the social economy. Third-party involvement could easily improvise a set of values on the social economy-factors that could be driven by CSR. This might be the way to take initiative towards mutual learning, sharing responsibilities and raising awareness on the social economy.
Conclusion
According to the results of the project, co-operation between SEs and enterprises of all sizes could be successful; co-operation with a larger enterprise could be possible, but not within a quick time frame. Larger enterprises usually correspond with multinational partners and lack policy interventions to protect such small SEs by procuring services from them. However, arguing in favour of supporting local and small SEs is not challenging; rather, framing the co-operation takes time. Conversely, in all cases, co-operation with enterprises of various sizes would require SEs to be able to strike a balance between social and economic gain.
An SE should first ascertain what kind of concrete co-operation model they are expecting with other enterprises. If necessary, a joint advisory group of SEs or a third-party could draw up guidelines for cooperation, which would show what kind of co-operation serves both parties.
According to the results of the study, co-operation with large companies could bring greater common benefits and social impact. On the other hand, larger companies also have internal barriers to working with co-ops or SEs at the individual level, as many already have close subcontracting relationships with other enterprises.
In contrast, it is easier for a co-op to enter into a contract or subcontract with a smaller business owner, who often has an urgent need for a service. Sometimes, a common understanding quickly emerges with co-ops on how to work sustainably and successfully in the short and/or long term.
It is also easier to communicate with small enterprises and receive approval for proposals, as small entrepreneurs are themselves responsible for profit generation and do not have to finalize decisions through a governing board. Small or micro-entrepreneurs do not necessarily need to maintain a governing board in the way that larger business corporations do. Micro and small business enterprises are free to make decisions on establishing social co-operation with SEs, whereas larger organizations usually handle all kinds of procurement through a procurement department with resources allocated for yearly procurement. Deciding on co-operation types is not always easy for larger enterprises, since they have to follow the strategy of the governing body.
On the other hand, SEs do not always have sufficient skills to produce the services that large companies require. SEs must be empowered, and their skills should be strengthened.
SEs should be seen as an alternative model for self-employment and for marketing one's own products. To sustain their economic integration, co-operation is needed not only between SEs and the private sector but also with the public sector. The role of municipalities in supporting the emergence of social entrepreneurship in Lapland is therefore argued for, with support coming in the form of the provision of resources. The role of municipalities as a third party was discussed by the respondents. Every regional government can play a key role in developing the activities of SEs alongside the social economy. Support can be either direct material support or intangible support that increases responsibilities and builds trust between actors. Civil society actors, such as the municipal administration, can help social innovation among SEs by highlighting the challenges they face. By creating a model of co-operation in which various actors become acquainted with one another, SEs can gain credibility with the private sector in dealing with social alliances. This will encourage SEs and other actors to meet and find common economic interests.
Creating a model of social co-operation requires capable parties and networkers from SEs as well as the private and public sectors. Social alliances can potentially bring together all actors and social economy perspectives; effective social alliances, however, require further social impact analysis.
Research Limitations and Further Research
There is a need for further research on this topic on a larger scale in Finland, as the study focuses on the Finnish Arctic region. The rates of unemployment, underemployment and long-term unemployment among immigrants vary between Finnish cities and among immigrant populations because of differences between cities in population share, socio-economic opportunities and amenities. The population of the Arctic is also shrinking as it ages, among other factors. The results of the research are therefore not necessarily representative of other cities in Finland.
The Cationic Antimicrobial Peptide LL-37 Modulates Dendritic Cell Differentiation and Dendritic Cell-Induced T Cell Polarization
Dendritic cells (DC) are instrumental in orchestrating an appropriately polarized Th cell response to pathogens. DC exhibit considerable phenotypic and functional plasticity, influenced by lineage, Ag engagement, and the environment in which they develop and mature. In this study, we identify the human cationic peptide LL-37, found in abundance at sites of inflammation, as a potent modifier of DC differentiation, bridging innate and adaptive immune responses. LL-37-derived DC displayed significantly up-regulated endocytic capacity, modified phagocytic receptor expression and function, up-regulated costimulatory molecule expression, enhanced secretion of Th-1 inducing cytokines, and promoted Th1 responses in vitro. LL-37 may be an attractive therapeutic candidate for manipulating T cell polarization by DC.
Dendritic cells (DC) are uniquely potent sentinel leukocytes that can capture Ag in the peripheral tissues and then initiate and orchestrate appropriate primary Th cell responses (1). This process is critical in generating a successful defense against harmful microbial nonself while maintaining tolerance to self, and is dependent upon the Ag-capturing and -presenting capabilities of DC.
Immature DC (iDC) are highly effective Ag-capturing cells, derived from circulating hemopoietic precursor cells and pre-DC populations (monocytes and plasmacytoid cells) under the influence of specific cytokines and growth factors (2,3). Following Ag uptake, these DC are activated into Ag-processing and -presenting mature DC (mDC), causing them to migrate to the secondary lymphoid organs and interact with naive T lymphocytes (4). The activation characteristics of the mDC define the nature and consequences of this interaction, resulting in proliferation and differentiation, or deletion of T cells, and determine the polarization of the Th response (5). Recognition of pathogens by receptors of the innate immune system is an important activating signal for DC maturation, thus directly linking the innate and adaptive immune systems (6).
It has been proposed that, in contrast to the steady-state conditions that maintain tolerance, generation of an effective T cell proliferative response requires the sustained trafficking of large numbers of highly stimulatory mDC to the T cell areas in the lymphatic tissue (5). Extensive, repeated recruitment of circulating pre-DC to the peripheral tissues, and differentiation to replace the first-line resident iDC are then required. These second-line DC must be capable of sustained Ag sampling and highly stimulatory presentation to generate a robust immune response against pathogens. The host factors and mechanisms involved in the development of these enhanced second-line DC at a site of inflammation have not yet been defined. The stimulatory nature of mDC is subject to dynamic temporal regulation (7) and influenced by lineage, the Ag captured, the receptors engaged during Ag capture, and the developmental and maturational microenvironment (2,3,8,9).
Cationic peptides, found in abundance at sites of inflammation, might represent one factor in second-line DC development. These naturally occurring cationic peptides with potent, broad-spectrum antimicrobial activities contribute to the innate host defenses of animals, insects, and plants (10-14). LL-37 is a human (h) cationic peptide derived from the cathelicidin hCAP-18 (15). hCAP-18 is constitutively expressed by neutrophils (~630 μg per 10^9 cells), lymphocytes, macrophages, and a range of epithelial cells (16-19), and LL-37 can be detected at 1 μM (~5 μg/ml) in both adult sweat and bronchoalveolar lavage fluid of healthy infants (20,21). Expression is significantly up-regulated in inflamed skin (22), with a median concentration of 304 μM (~1.5 mg/ml) in skin lesions from patients with psoriasis (23), and increased by 2- to 3-fold in bronchoalveolar lavage fluid from infants with either systemic or pulmonary inflammation (21). Expression of LL-37 has also been reported in the Langerhans cells of infants with erythema toxicum (24). In addition to its antimicrobial and antiendotoxic activities, it has been reported to be chemotactic for monocytes, T lymphocytes, neutrophils, and mast cells (25,26), and capable of modulating the expression profile of chemokines, chemokine receptors, and additional genes in macrophages and other mammalian cells (27).
Thus, pre-DC recruited to sites of inflammation are likely to be exposed to high levels of LL-37 that has been produced by recruited neutrophils and resident epithelial cells. We propose that exposure to this gradient of LL-37 alters the gene expression profile and differentiation of these cells. The consequent phenotypic modifications of potential second-line DC derived in this inflammatory milieu would then alter the nature of the T cell response. To test this hypothesis, we studied the impact of LL-37 exposure on the development of monocyte-derived DC morphology, Ag uptake, maturation, Ag presentation, and T cell-stimulatory capacity.
Cell purification and culture
Monocyte-derived DC were prepared based upon standard techniques (29). Briefly, 100 ml of fresh human venous blood was collected in sodium heparin Vacutainer collection tubes (BD Biosciences) from volunteers according to University of British Columbia Clinical Research Ethics Board protocol C02-0091. The blood was mixed, at a 1:1 ratio, with RPMI 1640 medium (supplemented with 10% v/v FCS, 2 mM glutamine, and 1 nM sodium pyruvate) in an E-toxa-clean (Sigma-Aldrich)-washed, endotoxin-free bottle. PBMC were separated using Ficoll-Paque Plus (Amersham Pharmacia Biotech, Baie D'Urfé, Quebec, Canada) at room temperature and washed with PBS. Monocytes were enriched with the removal of T cells by rosetting with fresh SRBC (PML Microbiologicals, Wilsonville, OR) pretreated with Vibrio cholerae neuraminidase (Calbiochem Biosciences, La Jolla, CA) as described (30) and repeat separation by Ficoll-Paque Plus. The enriched monocytes were washed with PBS and then cultured (1 × 10^6 per well) for 1 h at 37°C, followed by the removal of nonadherent cells; monocytes thus purified were >95% pure as determined by flow cytometry (data not shown). Cells were cultured in Falcon tissue culture 24-well plates (BD Biosciences) or, for immunohistochemistry, on 0.4-μm pore, 24-mm Transwell-Clear culture chamber inserts (Corning Costar, Cambridge, MA). The adherent monocytes were cultured in 1 ml of medium supplemented with LL-37 (or other peptides) dissolved in endotoxin-free water (Sigma-Aldrich), or the same volume of endotoxin-free water as a control, and incubated for 1 h at 37°C before the addition of 100 ng/ml IL-4 and 100 ng/ml GM-CSF to establish differentiation to the DC phenotype. Unless otherwise stated, LL-37 was used at 50 μg/ml, previously described as optimal for monocyte chemotaxis (25). For studies using pertussis toxin, adherent monocytes were cultured for 1 h at 37°C with 100 ng/ml toxin and then washed twice with medium before being treated as described above. For studies using WKYMVM, this synthetic peptide agonist of FPRL1 and FPRL2 was used at a concentration equimolar to 50 μg/ml LL-37 (10 μM), or at 1 μM, a dose previously shown to induce maximal neutrophil NADPH oxidase activity (31). No difference was observed between these doses. Cells were cultured at 37°C in a humidified incubator for 7 days before analysis or stimulation. Adherent cells were harvested with gentle cell scraping. LL-37-pulsed studies were performed as above; however, the medium was removed and the cells were washed 24 h after the addition of LL-37, followed by culture in fresh IL-4- and GM-CSF-supplemented medium (preincubated for 24 h in the absence of cells) for a further 6 days.
Monocyte-derived macrophages were generated from fresh monocyte-enriched PBMCs isolated as described and cultured in medium supplemented with 10% v/v autologous serum. Adherent cells were cultured for 7 days in Transwell-Clear culture chamber inserts. Monocyte-derived macrophages were not exposed to antimicrobial peptides in this study.
For the isolation of T lymphocytes, PBMC were isolated as described above and resuspended at 5 × 10^7 cells/ml. T cells were then isolated using StemSep with the Human T Cell Enrichment mixture (StemCell Technologies, Vancouver, British Columbia, Canada). Purified T cells were resuspended at 2 × 10^6 cells/ml. T cells were >95% pure as determined by flow cytometry (data not shown).
Analyses of cytotoxicity and cell viability
Peptide cytotoxicity was assessed by collecting culture supernatants after 24 h and 7 days of culture in which the concentration of lactate dehydrogenase-1 was quantified using a Cytotoxicity Detection kit (Roche Diagnostics, Laval, Quebec, Canada) according to the manufacturer's instructions. Following the removal of nonadherent cells the number of viable adherent cells was quantified using the WST-1 assay (Roche Diagnostics) according to the manufacturer's instructions.
Cytology and immunohistochemistry
The immunohistochemical analyses of adherent monocyte-derived macrophages and DC were performed using cells cultured on semipermeable Transwell-Clear culture chamber inserts (Corning Costar) as described. After 7 days of culture, adherent cells were washed twice in PBS at 4°C, submerged in blocking buffer (PBS, 0.1% w/v sodium azide, 0.1% v/v mixed human serum, 10% v/v FCS), stained with FITC-labeled mAb according to the manufacturer's instructions, washed, and fixed with 2% formaldehyde solution at 4°C. For labeling of filamentous actin (F-actin), nonadherent DC were harvested, washed, cytospun onto glass slides, fixed with a 1:1 mix of acetone and methanol at −20°C, and labeled with Oregon Green 488 phalloidin according to the manufacturer's instructions. Specimens were all mounted in Vectashield (Vector Laboratories, Burlington, Ontario, Canada) with 1 μg/ml 4′,6-diamidino-2-phenylindole. Imaging was performed using an Axioplan 2 fluorescent microscope (Carl Zeiss, Thornwood, NY), a DXC-390P digital camera (Sony, Tokyo, Japan), and Northern Eclipse, version 6.0, software. To assess cell size, nonadherent DC were harvested, washed, cytospun onto glass slides, fixed, stained with Diff-Quik (Dade Behring, Newark, DE), and examined by light microscopy. Imaging was performed as described above; 30 cells per sample were measured along an identical axis using Northern Eclipse, version 6.0, software.
Scanning electron microscopy
DCs cultured in 24-well plates or on 0.4-μm pore, 24-mm Transwell-Clear culture chamber inserts were washed with PBS, fixed in 2.5% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4) for 1 h, and washed three times in phosphate buffer. Nonadherent cells were syringe filtered onto 0.4-μm Nuclepore filters (Whatman, Clifton, NJ), which were transferred to Petri dishes, while adherent samples on culture chamber inserts were processed directly. Samples were fixed using 1% osmium tetroxide in 0.1 M phosphate buffer for 1 h, washed three times in distilled water, and dehydrated through a graded ethanol series. Following critical-point drying, the specimens were mounted on aluminum stubs, sputter coated with gold/palladium, and examined with a Hitachi (Tokyo, Japan) S4700 scanning electron microscope.
FACS
Cells were harvested, counted by hemocytometer, washed twice in PBS at 4°C, and resuspended in FACS buffer (PBS, 0.1% w/v sodium azide, 0.1% v/v pooled human serum, and 10% v/v FCS). Aliquots of 1 × 10^5 cells were labeled with fluorescently labeled mAb or the appropriate isotype controls, according to the manufacturer's instructions, in the dark at 4°C for 1 h, washed twice in PBS, and resuspended in 2% formaldehyde in PBS. Analysis was performed based on a minimum of 10,000 cells for each condition using a FACSCalibur system and CellQuest, version 3.1, software (BD Biosciences). Data were analyzed using WinMDI 2.8 software. The mean fluorescence intensity (MFI) was established and corrected by subtraction of the MFI for the appropriate isotype control.
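As a concrete illustration of the isotype correction described above, the following minimal Python sketch (numpy assumed; the event arrays and variable names are hypothetical placeholders, not data from this study) computes a background-corrected MFI from per-event fluorescence intensities.

import numpy as np

def corrected_mfi(stained_events, isotype_events):
    # MFI of the stained sample minus the MFI of the matched isotype
    # control, as described in the text
    return float(np.mean(stained_events)) - float(np.mean(isotype_events))

# Hypothetical per-event fluorescence intensities for >=10,000 cells
rng = np.random.default_rng(0)
events_stained = rng.lognormal(mean=5.0, sigma=0.4, size=10_000)
events_isotype = rng.lognormal(mean=3.0, sigma=0.4, size=10_000)

print(f"corrected MFI: {corrected_mfi(events_stained, events_isotype):.1f}")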
Endocytosis and phagocytosis assays
For quantitative analysis of the endocytic activity of both LL-37-derived and control monocyte-derived DC, 1 × 10^5 cells were resuspended in HBSS and incubated with 1 mg/ml FITC-labeled dextran (molecular mass, 40,000 Da) for 1 h at either 37 or 4°C. The reaction was stopped by washing with ice-cold PBS, and mean FITC fluorescence intensity was determined by flow cytometry. CD11b-mediated and FcγR-mediated adhesion and phagocytosis were assessed using complement-coated SRBC (IgMC SRBC) and IgG-coated sheep erythrocytes (IgG SRBC), respectively, prepared as previously described (32). DC were suspended in HBSS as above, with 0.1% w/v gelatin. IgMC SRBC or IgG SRBC were added at a ratio of 20:1 and incubated, gently rotating, at 37°C for 1 h. PBS at 4°C was added to stop the reaction, and the cells were washed and resuspended in 200 μl of PBS. Three drops of distilled water were added rapidly to half of each sample to lyse the exposed erythrocytes, followed immediately by 5 ml of PBS to prevent DC lysis. Samples were washed, fixed in 2% formaldehyde solution, cytospun onto glass slides, and stained with Diff-Quik (Dade Behring). The number of particles associated with each DC was counted by light microscopy for 60 cells per sample, performed in triplicate for every condition.
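To make the scoring step concrete, here is a small Python sketch (numpy assumed; the particle counts are simulated placeholders) of how the fraction of cells associated with at least 1, 5, or 10 particles could be computed from per-cell counts of 60 cells per sample, in triplicate, matching the presentation of the results below.

import numpy as np

def association_fractions(counts, thresholds=(1, 5, 10)):
    # Fraction of cells associated with at least t particles, for each t
    counts = np.asarray(counts)
    return {t: float(np.mean(counts >= t)) for t in thresholds}

# Hypothetical triplicate samples: particle counts for 60 cells each
rng = np.random.default_rng(1)
triplicates = [rng.poisson(lam=4.0, size=60) for _ in range(3)]

per_sample = [association_fractions(c) for c in triplicates]
for t in (1, 5, 10):
    vals = [s[t] for s in per_sample]
    sem = np.std(vals, ddof=1) / np.sqrt(len(vals))
    print(f">={t} particles: {np.mean(vals):.1%} +/- {sem:.1%} (SEM)")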
Induction of DC maturation
Monocyte-derived DC were stimulated at day 7 of culture. Cells were harvested, washed twice with PBS, resuspended in fresh medium (without IL-4, GM-CSF, or peptides), and counted. A total of 1 × 10^6 cells per well was incubated for 24 h in Teflon vials (Savillex, Minnetonka, MN) in medium containing 200 ng/ml S. typhimurium LPS (repurified as previously described (28)) or the same volume of endotoxin-free water as a control. Alternatively, 5 × 10^4 cells per well were incubated in 24-well tissue culture plates for 48 h for ELISA analysis of supernatants.
Chemotaxis assays
DC chemotaxis to the recombinant human chemokine MIP-3β was performed using a Transwell chemotaxis assay with cells preincubated for 24 h with S. typhimurium LPS or endotoxin-free water as described above. A total of 5 × 10^4 cells was added in 100 μl of RPMI 1640 medium, supplemented with 0.5% w/v filtered BSA, to the apical compartment of a 5-μm pore, 24-mm Transwell polycarbonate culture chamber insert (Corning Costar). A volume of 600 μl of the same medium, additionally containing 100 ng/ml MIP-3β, or the same volume of 0.1% w/v BSA in PBS (as a carrier control), was added to the basal compartment. After a 2-h incubation at 37°C, the inserts were removed, their basal surfaces were washed, and the number of cells in the lower chamber was assessed by light microscopy, counting five defined fields of view per well. Studies were performed in triplicate for each condition.
DC-derived cytokine analysis
Following 7 days of culture, 5 × 10^4 monocyte-derived DC per well were exposed to S. typhimurium LPS or endotoxin-free water in triplicate as described above. Supernatants were collected after 48 h and stored at −70°C for analysis by ELISA. Supernatants were analyzed using commercial ELISA kits for IL-12 p70, IL-4, IL-6, TNF-α, and IL-10 (BD Biosciences), performed according to the manufacturer's instructions and read using a Model 3550 Microplate reader (Bio-Rad Laboratories, Mississauga, Ontario, Canada).
T cell proliferation assays
Assays were set up in triplicate using round-bottom 96-well plates containing 200 μl of complete medium (RPMI 1640 (Biofluids, Rockville, MD) supplemented with 10% FBS (Invitrogen), 20 mM HEPES, and 2 mM L-glutamine). A total of 1 × 10^4 DC in each well was coincubated with T cells over a range of ratios, performed in triplicate with additional T cell-alone and DC-alone negative controls.

T cell-derived cytokine analysis

T cells (1 × 10^5) and DC (1 × 10^4) were coincubated in each well as described above. A volume of 100 μl of supernatant was removed from each well of the proliferation assay plates immediately before the addition of [3H]thymidine and stored at −70°C for analysis by cytometric bead array (BD Biosciences) following the manufacturer's recommended protocol.
Statistical analysis
All data are expressed as mean ± SEM. Statistical significance of differences between groups was established using paired Student's t tests comparing matched control and treated DC populations generated simultaneously from the same donors. A value of p < 0.05 was taken to denote statistical significance.
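For illustration, a minimal Python sketch of the paired Student's t test used here (scipy assumed; the per-donor values are invented placeholders, not data from this study):

import numpy as np
from scipy import stats

# Hypothetical paired measurements: one value per donor for control and
# LL-37-derived DC generated simultaneously from the same donors
control = np.array([12.1, 13.4, 11.8, 12.9, 14.0, 12.5])
treated = np.array([16.5, 17.9, 15.8, 17.1, 18.2, 16.0])

t_stat, p_value = stats.ttest_rel(treated, control)  # paired Student's t test
diff = treated - control
sem = diff.std(ddof=1) / np.sqrt(len(diff))

print(f"mean difference = {diff.mean():.2f} +/- {sem:.2f} (SEM), p = {p_value:.4f}")
print("significant at p < 0.05" if p_value < 0.05 else "not significant")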
LL-37 modifies iDC morphology
Freshly purified human monocytes were cultured for 7 days with IL-4 and GM-CSF (29) to derive iDC, in the presence or absence of 50 μg/ml LL-37 or Bac2a (a related cationic peptide derived from the bovine cathelicidin bactenecin (33)). No cytotoxicity resulted from peptide exposure under these conditions, as measured using a lactate dehydrogenase-1 detection assay (data not shown). As expected, control cells differentiated into nonadherent iDC (Fig. 1a). In contrast, a proportion of LL-37-derived iDC were strongly adherent, with a confluency of ~30% at day 7, after the removal of nonadherent cells (Fig. 1b). This phenotype was observed with cells from all 11 donors evaluated. Bac2a-derived iDC did not develop the adherent phenotype. A WST-1 tetrazolium salt cleavage assay confirmed the viability of adherent LL-37-derived iDC and the absence of viable adherent cells on washed wells of control iDC and Bac2a-derived iDC (data not shown).
Monocytes from the same donors were cultured to generate either LL-37-derived adherent iDC or untreated monocyte-derived macrophages (cultured in autologous serum). Adherent LL-37-derived iDC were strongly positive for cell surface CD1a, but negative for CD14, whereas those cultured in autologous serum were strongly positive for cell surface CD14, but negative for CD1a (Fig. 1, c-f). This suggested that the former were indeed DC, and that LL-37 was not inducing the development of macrophages. Scanning electron microscopy revealed LL-37-derived iDC to be larger cells with more numerous surface filopodia (Fig. 1, j and k), in contrast to the control iDC, on which small lamellae were more prominent (i). F-actin labeling also demonstrated the increased size of LL-37-derived iDC, with punctate staining that could represent the filopodia observed by scanning electron microscopy (Fig. 1, g and h). The mean cell size by light microscopy of stained cytospins of pooled harvested adherent and nonadherent LL-37-derived iDC was significantly greater than that of control iDC (p = 0.008), with a mean difference of 33 ± 6% (16.9 ± 1.8 and 12.9 ± 1.7 μm, respectively; n = 4 donors). FACS analysis of these cells also clearly demonstrated a significant increase in both forward scatter (FSC) (p = 7 × 10^−5; Fig. 2, a and b) and side scatter (SSC) (p = 2 × 10^−5; Fig. 2, a and b) in comparison with controls, with mean increases of 20 ± 9 and 75 ± 8%, respectively (n = 13 from 6 donors). These effects were dose dependent, with cell size (as indicated by FSC) significantly increased by exposure to as little as 5 μg/ml LL-37 and maximally increased by 50 μg/ml, whereas SSC (perhaps representing the increased surface structure complexity) increased significantly in a dose-dependent manner over the range of 25-100 μg/ml LL-37 (Fig. 2c).
Cell surface receptor expression is altered on LL-37-derived iDC
To further characterize LL-37-derived iDC and confirm their expression of surface markers characteristic of iDC, FACS analysis was performed using a panel of specific mAbs (Table I, Fig. 3b). As previously reported, control iDC expressed CD1a, but little, if any, CD14, CD83, or CCR7 (3). The level of expression of these surface markers on LL-37-derived iDC was not significantly different from controls, suggesting that these were indeed iDC. In contrast, LL-37-derived iDC expressed significantly enhanced surface levels of CD86, CD11b, CD11c, and CD18, and significantly decreased surface expression of CD209, CD16, and CD32 (Table I). No significant differences were observed in the expression of CD80, CD40, CD206, HLA-DR, or CD54. Changes in surface marker expression were dose dependent over the range of 1-100 μg/ml LL-37, with significant effects observed for some markers even at 5 μg/ml (Fig. 3a). Surface expression of the costimulatory molecule CD86 on LL-37-derived iDC increased dramatically with increasing LL-37 concentration. The increases in expression of CD11b and CD18 (co-components of complement receptor (CR)3) were observed to be proportional when comparing the percentage change in individual donors. The percentage decreases in CD16 closely replicated those in the more highly expressed CD32, with the maximal effect upon these FcγRs observed at 25 μg/ml.
Ag uptake is altered in LL-37-derived iDC
To address the functional significance of the altered receptor expression on LL-37-derived iDC, the Ag uptake capabilities of these cells were studied. The majority of both control and LL-37-derived iDC associated with at least one complement-coated SRBC (IgMC SRBC); however, the proportion of LL-37-derived iDC associated with ≥5 or ≥10 particles was significantly greater than for control iDC (p = 0.01 and 0.03, respectively; Fig. 4a). No significant internalization of IgMC SRBC was observed for either control or LL-37 iDC, confirming that LL-37 did not activate these iDC. In contrast to this increased IgMC SRBC binding, the proportion of LL-37-derived iDC associated with at least 1, or ≥5, IgG-coated SRBC (IgG SRBC) was significantly reduced in comparison to control iDC (p = 0.02 and 0.01, respectively; Fig. 4b). Although the percentage of LL-37-derived iDC that had internalized IgG SRBC was lower than that of control iDC (23 ± 11 and 38 ± 13%, respectively), this did not reach statistical significance. In addition, the endocytic capacity of these cells was studied by examining the binding and uptake of FITC-labeled dextran. A significantly greater uptake was observed in LL-37-derived iDC (p = 0.005), with a 105 ± 15% increase in mean FITC-labeled dextran internalization in comparison with control iDC (Fig. 4, c and d). A trend toward greater binding of FITC-labeled dextran was also observed (at 4°C) but did not reach statistical significance. Thus, these data indicate that LL-37-derived iDC have a functionally modified profile of Ag uptake, as predicted by the alterations observed in their surface receptor expression.
Cell surface receptor expression is altered on LL-37-derived mDC
To examine the maturation of LL-37-derived DC, iDC were stimulated with LPS. Both the control and LL-37-derived mDC thus generated had a normal maturation profile by FACS, with increased expression of CD86, CD80, CD83, HLA-DR, CD54, and CCR7 in comparison with iDC (Table I, Fig. 3c). However, LL-37-derived mDC displayed significantly greater expression of CD11b, CD86, and CD83 in comparison with controls. No significant differences were observed in the expression of CD80, HLA-DR, CD54, or CCR7.
Chemotaxis of LL-37-derived mDC is normal
In response to maturation, DC alter their expression profile of chemokine receptors, down-regulating expression of CCR5 and CCR6, but up-regulating expression of CCR7 (4). No surface expression of CCR7 was observed on control iDC or LL-37-derived iDC by FACS analysis (Table I, Fig. 3b). A significant increase in expression was observed following maturation (p < 0.05; n = 5 donors), with no significant difference between control mDC and LL-37-derived mDC (Table I, Fig. 3c). CCR7 up-regulation was also demonstrated functionally in both LL-37-derived and control mDC as chemotaxis across a gradient of the chemokine MIP-3β. Chemotaxis was induced by MIP-3β in mDC, but not in iDC. No significant difference was observed between LL-37-derived and control mDC or iDC (Fig. 5a).
LL-37-derived mDC produce a characteristic Th-1-inducing cytokine profile
The release of cytokines following maturation with LPS was quantified by ELISA (Fig. 5, b-f). LL-37-derived mDC secreted significantly more IL-12 and IL-6 (p < 0.05; n = 10 donors) and significantly less IL-4 (p < 0.05; n = 10 donors) than paired control mDC from the same donors. In addition, assessed on an individual donor basis, LL-37-derived mDC secreted significantly more TNF-α (p < 0.05) in 9 of the 10 donors evaluated; however, considerable variation in absolute levels of cytokine expression was observed between different donors, as previously reported (7). No consistent relationship between IL-10 secretion and LL-37 derivation was observed. In contrast to these LPS-matured DC, LL-37-derived and control immature cells did not demonstrate substantial expression of any of the cytokines studied.
LL-37-derived mDC stimulate enhanced proliferation of IFN-γ-secreting T cells
The capacity of LL-37-derived mDC to activate and induce the proliferation of T lymphocytes, and the functional significance of their altered cytokine and CD86 expression, were studied using allogeneic T cells. Both LL-37-derived and control DC induced proliferation, but no significant difference was observed over a range of DC/T cell ratios (Fig. 6a). However, T cells stimulated with LPS-matured LL-37-derived mDC produced significantly more IFN-γ than controls (p = 0.03; Fig. 6b). This difference was observed for all five donors tested. No significant T cell IL-4 production was detected. No significant IFN-γ was detected from mDC alone.
LL-37-induced DC modulation occurs early in differentiation, via a Gi protein-coupled receptor
To establish the temporal contribution of LL-37 to the modulation of DC development, monocytes were exposed to a pulse of LL-37 for only the first 24 h of culture. These LL-37-pulse-derived iDC displayed the same adherent phenotype and a significant cell size increase (p = 1 × 10^−6) comparable with LL-37-derived iDC, with intermediate SSC, significantly greater than controls (p = 2 × 10^−8) but less than LL-37-derived iDC (p = 0.01). Significantly enhanced expression of CD86 (Fig. 7) and CD11b (data not shown) was also replicated using this pulse exposure, with almost identical magnitude. These data suggest that many of the modifications observed in LL-37-derived DC result from peptide interaction with the pre-DC in the first day of differentiation. To begin dissecting the mechanism underlying these observations, monocytes were pretreated with pertussis toxin to inhibit Gi protein-coupled receptor activity before a 24-h LL-37 pulse. Pertussis toxin inhibited LL-37-dependent up-regulation of both CD86 (p = 0.04) and CD11b (p = 0.02) significantly, but incompletely, with a significant degree of up-regulation still observed in comparison to pertussis toxin-treated control cells (p = 0.04 and 0.005, respectively; Fig. 7 and data not shown). LL-37-induced changes in FSC and SSC were also partially, but significantly, inhibited (data not shown). There were no significant changes in any of these markers in cells treated only with pertussis toxin in comparison with control cells. A previous study implicated formyl peptide receptor-like (FPRL)1 as a Gi protein-coupled receptor for LL-37 (25). However, a synthetic peptide activator of FPRL1 and FPRL2 (WKYMVM; Ref. 31) substituted for LL-37 had no significant effects on iDC expression of CD86 (Fig. 7) or CD11b, nor on FSC or SSC (data not shown). Finally, characterization of Bac2a-derived iDC suggests peptide specificity, with no significant increase in CD86 (Fig. 7) or CD11b expression, nor in SSC (data not shown), although cells were larger, with significantly increased FSC (p = 0.03). These data suggest that the development of iDC can be specifically modulated by the interaction of pre-DC with LL-37, and that at least some of these modifications result from the activation of an as-yet-undefined Gi protein-coupled receptor.
Discussion
The ability of DC to perform their physiological role is dependent upon appropriate development from pre-DC, Ag capture, maturation, chemotaxis, and Ag presentation to T cells. We have demonstrated that LL-37-derived DC had significantly up-regulated endocytic capacity, modified expression of phagocytic receptors, enhanced costimulatory molecule expression and secretion of Th-1 inducing cytokines, and generated an enhanced Th1 response in vitro. These modifications were superimposed upon retention of basic DC phenotype and appropriate maturational modifications, including changes in chemokine receptor expression that facilitate mDC migration to the T cell areas. Thus, we have demonstrated that the cationic peptide LL-37 is a multipotent, tissue microenvironmental modifier of DC differentiation, capable of affecting all temporal stages of the DC life cycle.
Endocytic and phagocytic Ag capture are crucial sentinel functions of iDC. We found that LL-37 significantly and selectively altered the processes of endocytosis and phagocytosis and the expression of phagocytic receptors by iDC. The endocytic capacity of LL-37-derived iDC was significantly enhanced. This is thought to increase the density of Ag presentation, which will enhance T cell stimulation (7). Thus, iDC differentiation in the presence of high concentrations of LL-37 at sites of inflammation may directly impact on the Ag loading and presentation capabilities of DC.
LL-37 profoundly affected the expression and function of several phagocytic receptors. The decreased expression of DC-specific ICAM-3-grabbing nonintegrin (CD209) on LL-37-derived iDC may have important consequences for pathogen clearance (7). This DC-specific lectin has been implicated as a receptor used by various microorganisms associated with chronic infection, including HIV and Mycobacterium tuberculosis (34). Thus, LL-37-induced down-regulation at sites of inflammation might be advantageous to the host, by denying pathogens a protected niche within mononuclear cells. We also demonstrated marked LL-37-induced alterations in the expression and function of CR3 and CR4, and FcγRII and -III. Both CR3 and CR4 are important cell adhesion molecules and also function as competent opsonic and nonopsonic phagocytic receptors (35). The dramatic enhancement of β2 integrin expression by LL-37 could substantially impact upon DC migration (36). It may also enhance their capacity to phagocytose complement-opsonized and unopsonized pathogens, with consequences for maturation and activation of such cells in vivo. In contrast to CR3 and CR4, FcγR expression and activity were significantly reduced. However, the consequences of Ag recognition by FcγRII, the predominant FcγR on iDC, depend on the relative contributions of the activating and inhibitory cytoplasmic regions (37). Thus, further studies are required to assess the functional implications of this decrease.
The effective Ag-presenting function of mDC requires the establishment of an immunological synapse with the T cell, and three primary signals, as follows: 1) cognate presentation of Ag by MHC class II molecules; 2) expression and engagement of costimulatory molecules, namely CD80 (B7.1) and CD86 (B7.2), amplifying the signaling processes by up to 100-fold (5); and 3) the production of specific polarizing cytokines predisposing to a Th1, Th2, Th3, or regulatory T (Tr) cell response (1,38). LL-37-derived iDC displayed normal maturation in response to LPS, with an increase in HLA-DR expression on LL-37-derived mDC comparable with control cells. This suggests that the capacity of these cells to present Ag, and hence provide signal 1, was normal. In contrast, the expression of CD86 (signal 2) was significantly altered. Whereas control iDC normally only express high levels of CD86 upon maturation (4), LL-37-derived iDC showed a dramatic, dose-dependent enhancement of CD86 expression in all donors, without changes to other markers associated with maturation. CD86 expression by these cells was further up-regulated by exposure to LPS, confirming that these DC were immature before activation. This also resulted in the enhanced CD86 phenotype observed in LL-37-derived iDC being carried across to maturation. These observations are in marked contrast to the recently described activity of another cationic peptide, murine β-defensin-2, reported to directly mature DC in a Toll-like receptor-4-dependent manner (39), although it should be noted that this peptide was studied in the form of fusion proteins constructed from murine β-defensin-2 and tumor Ags and does not represent an endogenous ligand. Thus, LL-37 constitutes a modifier of DC differentiation but does not alter maturation, nor directly activate and mature iDC.
Enhanced CD86 expression on mDC would be expected to confer amplified T cell stimulatory capacity to these cells (5) and possibly favor a Th2 response (40). However, LL-37-derived mDC secreted significantly enhanced levels of IL-12, a key cytokine in the generation of Th1 responses (1). This is in contrast to IL-4, a cytokine antagonistic of Th1 responses and down-regulated in LL-37-derived mDC, and IL-10, a cytokine involved in the generation of a Tr cell response and capable of acting on iDC to prevent full maturation (1,38,41). Although the balance of IL-12 vs IL-4 levels is likely to be critical in determining Th cell polarization, the expression of other polarizing cytokines, such as IL-18 and IL-23, remains to be determined. In addition to enhanced IL-12, LL-37-derived mDC consistently produced increased levels of IL-6. This cytokine is known to enhance B cell proliferation and might block the suppressive effects of Tr cells (42). Furthermore, LL-37-derived mDC had consistently enhanced production of TNF-α, a proinflammatory cytokine known to influence many innate immune responses, including the induction of DC maturation. Enhanced expression of these cytokines might therefore provide additional mechanisms for LL-37-derived DC modulation of adaptive immune responses.
Thus, LL-37-derived DC exhibited enhancement of costimulatory molecule expression and Th1-promoting cytokine release, two of the three primary signals required for Ag presentation and stimulation of a Th1 response. These effects of LL-37 are in marked contrast to the various developmental modifiers previously described, including PGE2 and IL-10, which all inhibit iDC maturation and IL-12 production, and consequently promote tolerogenic or Th2 responses (43,44). That the up-regulation of MHC class II molecules (signal 1) in LL-37-derived mDC was not significantly different from that observed in controls suggests that the process may function independently of Ag, and LL-37 may therefore constitute a novel adjuvant.
It should be noted that, although LL-37-derived DC significantly enhanced T cell IFN-γ responses, they did so only after exposure to an LPS maturation signal. Thus, differentiation in the presence of LL-37 augments DC-induced Th1 responses but does not initiate them. The consequences of LL-37-induced modifications to the critical DC signals remain to be determined in the context of a broader range of Ag-dependent responses. This will establish whether LL-37-derived DC amplify the expression of polarizing cytokines of a nature defined by the maturing stimulus, or are primed to skew the magnitude and the nature of the cytokine response, and consequent T cell polarization. In addition, the T cell-polarizing capacity of DC is temporally controlled. LPS-matured DC produce an initial IL-12, IL-6, and TNF-α response with Th1-generating capacity, but over time, this IL-12 release has been shown to diminish, with an increased IL-10 response, and these same exhausted mDC then promote Th2 polarization (7). In our study, DC supernatants were collected 48 h after LPS stimulation to assess patterns of change in total cytokine production over that period, and the temporal control of DC cytokine release remains to be established. Finally, the role of T cell-DC interactions in stimulating LL-37-derived DC cytokine production, and thus the Th polarization, remains to be explored, including the effects of CD40 ligation and IFN-γ. Nevertheless, it is evident from our study that this innate host defense peptide, LL-37, has the capacity to influence adaptive immunity via modulation of DC differentiation. Further studies are required to develop this model in vivo.
The recruitment of pre-DC to sites of inflammation is likely to be a rapid event, and thus, any potential DC modulator must also act quickly. We demonstrated that LL-37 modulation of DC required only a short exposure at an early stage of differentiation from pre-DC to manifest a wide spectrum of phenotypic changes. Furthermore, the concentrations of LL-37 at which we observed these effects were consistent with those observed in vivo during inflammation (21-23), probably produced predominantly by neutrophils and epithelial cells (16,19,22). Although LL-37 expression has also been reported in Langerhans cells, which might also contribute (24), no LL-37 expression was evident at the protein or RNA level in monocyte-derived DC in the immature or mature state (data not shown). It seems likely that this can be attributed to differences between Langerhans cells and monocyte-derived DC, and the cellular milieu. Modulation of DC differentiation was not a nonspecific consequence of exposure to a cationic peptide, but rather was mediated, at least partly, by a specific Gi-coupled receptor or receptors. This suggested a role for FPRL1, the only LL-37 receptor identified to date (25). However, FPRL1 stimulation failed to induce a similar DC phenotype, suggesting the involvement of as-yet-unidentified receptors. Future studies are required to define these receptors and the downstream signaling cascades responsible for the LL-37-dependent DC modulation.

[FIGURE 4 caption, displaced in extraction: (c and d) The percentage of cells associated with IgMC SRBC or IgG SRBC after a 1-h incubation was assessed by counting 60 cells per sample by light microscopy, performed in triplicate for every condition. The x-axis indicates the proportion of cells categorized as associated with a specified minimum number of particles. The uptake of FITC-labeled dextran was performed at 37°C to establish total cell association and internalization, and at 4°C, at which temperature internalization will not occur. Uptake was determined by flow cytometry (d shows a representative FACS plot; the solid gray area represents control iDC at 37°C, the black line LL-37-derived iDC at 37°C, the gray line control iDC at 4°C, and the broken gray line LL-37-derived iDC at 4°C) and displayed as the mean percentage increase in LL-37-derived iDC compared with control iDC in c. Values represent mean ± SEM. *, p < 0.05; **, p < 0.01; n = 4 donors for each study.]
Interestingly, overexpression of GM-CSF in mice has been shown to recruit DC secreting high levels of TNF-α and IL-6, with increased Ag capture and enhanced T cell and NK cell stimulatory capacities (45). In our in vitro human model, repeated medium supplementation with GM-CSF failed to replicate the LL-37-derived DC phenotype, and GM-CSF receptor expression was unaltered (data not shown). Nevertheless, given the critical nature of GM-CSF in DC differentiation and the similarity between LL-37-derived DC and this murine DC subset, it seems likely that LL-37 impacts upon the GM-CSF pathway. Indeed, recent data demonstrate that LL-37 and GM-CSF act synergistically to induce phosphorylation and activation of the mitogen-activated protein kinases extracellular signal-regulated kinase 1/2 and p38 in human peripheral blood-derived monocytes.

In conclusion, we propose that LL-37-derived DC may represent highly stimulatory second-line DC, generated in an LL-37-rich inflammatory milieu in vivo. This modification of DC differentiation may enhance DC production of Th1 cytokines in response to maturational stimuli, establish prolonged T cell stimulation, and generate a more robust Th1 response to harmful Ags. Our data implicate LL-37 as a potent modifier of DC differentiation. Thus, it appears to function as a bridge between the innate and adaptive immune systems, indirectly facilitating the generation of an enhanced Th1 response. This endogenous host modification could be very valuable in defending against potential pathogens, particularly at sites where LL-37 has been shown to be concentrated in inflammation. LL-37 has tremendous therapeutic potential in the development of DC-based immunotherapies for infectious diseases and cancer.

[FIGURE 5 caption, displaced in extraction: Chemotaxis and cytokine production by LL-37-derived mDC and control cells. a, Chemotaxis of LL-37-derived mDC and control mDC was not significantly different in response to 100 ng/ml MIP-3β in a Transwell assay. Minimal chemotaxis was observed using iDC or toward a BSA carrier control. Values represent mean ± SEM, n = 2 donors. b-f, Cytokine production in triplicate wells of 5 × 10^4 LL-37-derived DC or control cells was assessed by ELISA after 48-h incubation with 200 ng/ml repurified S. typhimurium LPS. Box plots represent the median, 25th percentile, 75th percentile, and range of cytokine concentrations from LL-37-derived and control DC (n = 10 donors). Paired t tests were performed comparing LL-37-derived and control DC derived from the same donor (n = 10 donors); *, p < 0.05. Donor-specific variation in absolute values required logarithmic y-axes to display IL-12 and TNF-α (b and e).]

[FIGURE 7 caption, displaced in extraction: Modulation of CD86 expression. DC were derived from monocytes over 7 days in the presence of 50 μg/ml LL-37, 50 μg/ml Bac2a, or the FPRL1 agonist WKYMVM (10 μM), or over 7 days with a 50 μg/ml LL-37 pulse exposure for the first 24 h, with or without pertussis toxin (PTx) pretreatment. iDC were fluorescently labeled with specific mAb and analyzed by flow cytometry. Mean CD86 surface expression is shown and compared with the appropriate matched control iDC prepared in parallel from the same donors. Statistical comparison of the MFI was by paired t test. *, p < 0.05; **, p < 0.005; n = 11 from 6 donors (control, LL-37 pulse, and LL-37 study); n = 4 from 4 donors (control, LL-37 pulse, PTx, PTx LL-37 pulse study); n = 5 from 3 donors (control, Bac2a study); and n = 3 from 3 donors (control, WKYMVM study).]
Toward Content-Based Hyperspectral Remote Sensing Image Retrieval (CB-HRSIR): A Preliminary Study Based on Spectral Sensitivity Functions
With the emergence of huge volumes of high-resolution Hyperspectral Images (HSI) produced by different types of imaging sensors, analyzing and retrieving these images requires effective image description and quantification techniques. Compared to remote sensing RGB images, HSI data contain hundreds of spectral bands (varying from the visible to the infrared ranges), allowing one to profile materials and organisms in a way that only hyperspectral sensors can provide. In this article, we study the importance of spectral sensitivity functions in constructing discriminative representations of hyperspectral images. The main goal of such representations is to improve image content recognition by focusing the processing on only the most relevant spectral channels. The underlying hypothesis is that, for a given category, the content of each image is better extracted through a specific set of spectral sensitivity functions. Those spectral sensitivity functions are evaluated in a Content-Based Image Retrieval (CBIR) framework. In this work, we propose a new HSI dataset for the remote sensing community, specifically designed for hyperspectral remote sensing retrieval and classification. Exhaustive experiments have been conducted on this dataset and on a literature dataset. The retrieval results obtained show that the physical measurements and optical properties of the scene contained in the HSI contribute to a more accurate description of image content than the information provided by the RGB image representation.
Introduction
Content-Based Remote Sensing Image Retrieval (CBRSIR) has been an active research field during the last decade [1][2][3][4]. Indeed, data content extraction and quantification are key steps for CBRSIR approaches, requiring high quality images and efficient processing methodologies. Traditional RGB-based image representation has been widely used for earth remote sensing scene retrieval and classification [5][6][7]. Nevertheless, such an image representation lacks the precision needed to profile the physical content and properties of the scene.
Hyperspectral Imaging (HSI) is an emerging imaging modality that contains hundreds of contiguous narrow spectral bands covering a wide range of the electromagnetic spectrum from the visible to the infrared domain [8]. Combining the spectral resolution in the visible range (optical properties) and in the infrared range (physical properties) with a high spatial resolution makes it possible to establish a direct relationship between the spectral image and the physical content of the surface [9] (i.e., vegetation, water, soil, etc.). In fact, the materials of the various components of a scene reflect, absorb, and emit electromagnetic radiation depending on their physical and chemical composition. This radiance measurement data extracted from HSI (here, we consider hyperspectral measurement data as a spectral and regular sampling of the captured spectrum, with reduced overlap between the spectral samples/bands, rather than multispectral images, which are based on the product of the acquired spectrum by some overlapped spectral functions) requires the use of new methodologies that process and analyze such massive amounts of data appropriately. HSI processing techniques have developed rapidly, leading to new emerging and active research trends and applications, e.g., remote sensing [10], medical diagnosis [11], and cultural heritage [12]. Therefore, HSI technologies have helped remote sensing Earth Observation (EO) stride forward in the past few decades [13].
Extracting deep features from earth remote sensing data has recently been investigated for data classification [14] and retrieval [15]. However, most of the existing approaches were proposed for RGB data description and quantification. Recently, Zhou et al. [16] proposed two deep feature extraction schemes for high-resolution remote sensing image retrieval. In the first scheme, they extracted features from a pre-trained CNN, and in the second they trained a novel CNN architecture on a large remote sensing dataset in order to learn low-dimensional remote sensing features. They concluded that deep features achieve better performance compared to state-of-the-art hand-crafted features. Deep metric learning for remote sensing image retrieval from large data archives was investigated in [17]. The authors trained a hashing network using a triplet loss for compact binary hash code representation with a small number of annotated training images (a minimal sketch of such a triplet-loss step is given below). In addition to RGB data, multispectral data, which involves the acquisition of visible, near infrared, and short-wave infrared images in a relatively small number of spectral bands, has attracted much attention for spectral content extraction from remote sensing data. For example, Li et al. [18] proposed deep hashing convolutional neural networks to automatically extract semantic features for multispectral data. In [19], the authors proposed a content-based retrieval framework for large-scale multispectral data. They used a public satellite image dataset, where each image contains four RGB-Near Infrared (NIR) spectral channels over four land cover categories.
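As a generic illustration of the triplet-loss objective behind such deep-metric-learning retrieval systems (PyTorch assumed; the embeddings are random placeholders and this is not the exact network of [17]), the following Python sketch shows one loss computation over a batch of (anchor, positive, negative) embeddings:

import torch
import torch.nn.functional as F

# Placeholder embeddings for a batch of 8 triplets (anchor, positive from
# the same class, negative from a different class); in a real system these
# would be produced by a CNN over remote sensing images and later binarized
# into compact hash codes.
emb = torch.randn(3, 8, 512, requires_grad=True)
anchor, positive, negative = emb.unbind(0)

# The triplet margin loss pulls positives toward the anchor and pushes
# negatives at least `margin` farther away in embedding space.
loss = F.triplet_margin_loss(anchor, positive, negative, margin=0.2)
loss.backward()
print(f"triplet loss: {loss.item():.3f}")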
However, even if multispectral data provides additional information compared to RGB data, it usually lacks the spectral resolution necessary to identify the chemical and physical structures of a remote sensing scene. Indeed, a higher level of spectral detail in remote sensing images gives a better capability to detect such structures. Hyperspectral data contains hundreds of spectral bands (varying from the visible to the infrared ranges), hence allowing one to profile materials and organisms in a way that is not possible with multispectral data. Recently, when applied to HSI data analysis and description, Deep Neural Networks (DNN) achieved promising results [20]. In order to deal with high-dimensional HSI data and the correlations between spectral bands, one group of traditional approaches first reduces the data dimension or selects a subset of bands before learning or extracting features with a Convolutional Neural Network (CNN) [21][22][23]. Another group of methods processes the full-band data to extract features from HSI data [20,24,25]. Yet, such band-selection methods can lead to significant information loss, whereas the full-band ones extend the CNN training time and the feature extraction time.
In the domain of color vision, the process of image content discrimination involves so-called "Spectral Sensitivity Functions" (SSFs), akin to the sensitivities of animal visual systems [26]. Spectral data projections onto a set of spectral sensitivity functions have been successfully used for HSI data dimensionality reduction and feature extraction [27]. A recent work by Ying et al. [28] designed a CNN-based method with a selection layer that selects the optimal camera spectral sensitivity functions for HSI data recovery.
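To make the projection idea concrete, the following Python sketch (numpy assumed; the Gaussian SSF shapes, band count, and wavelength range are illustrative assumptions, not the functions used in this paper) projects a hyperspectral cube onto a bank of K sensitivity functions, yielding a K-channel image; with K = 3, this produces the trichromatic images discussed below.

import numpy as np

def gaussian_ssf(wavelengths, center, width):
    # A simple Gaussian spectral sensitivity function over the given
    # wavelength sampling, normalized to unit area
    s = np.exp(-0.5 * ((wavelengths - center) / width) ** 2)
    return s / s.sum()

def project_hsi(cube, ssfs):
    # Project an (H, W, B) hyperspectral cube onto a (K, B) bank of SSFs,
    # contracting the spectral axis to yield an (H, W, K) image
    return np.tensordot(cube, ssfs, axes=([2], [1]))

# Hypothetical data: a 64x64 cube with 224 bands sampled 400-2500 nm
wl = np.linspace(400, 2500, 224)
cube = np.random.rand(64, 64, 224).astype(np.float32)

# Three hypothetical SSFs; centers and widths would be selected per category
ssfs = np.stack([gaussian_ssf(wl, c, w) for c, w in
                 [(650, 40), (550, 40), (450, 40)]])
trichromatic = project_hsi(cube, ssfs)  # (64, 64, 3) pseudo-RGB image
print(trichromatic.shape)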
In this paper, we present two main contributions. The first one is the study of the discriminating power of spectral sensitivity functions in a content-based hyperspectral image retrieval framework. The first hypothesis to validate is that, for a given category, the content of each image can be better extracted through a specific set of SSFs. The second hypothesis is that the whole spectral range is required for image category recognition. To do so, we take advantage of recent advances in Convolutional Neural Networks [29], and particularly deep feature methods [30], to represent a hyperspectral image as a signature. The second contribution of this paper is the introduction of a new hyperspectral image dataset for the remote sensing community. To evaluate the performance of the proposed framework on our dataset, we first focus our study on a multi-level selection of one SSF. This first study highlights the bandwidths that best discriminate the scene content. The second step of our study consists of building trichromatic images by combining three SSFs. This makes it possible to use an RGB-based pre-trained CNN for feature extraction and also to display a color image for later interpretation and understanding of the results. The remainder of the paper is organized as follows: Section 2 gives a brief overview of recent studies linked to our research. Section 3 presents the proposed framework for HSI data representation and introduces our HSI dataset, ICONES-HSI. Section 4 gives the experimental results of our two studies. The first one analyzes the multi-level behavior of only one selected SSF. Then, we apply our proposed approach to obtain trichromatic images and study the performance and behaviour of such an image representation compared with a retrieval system based on the RGB color space. Finally, Section 5 concludes the paper and gives some perspectives.
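As a rough sketch of the retrieval step in such a pipeline (PyTorch/torchvision assumed; the image tensors are random placeholders and ImageNet preprocessing is omitted for brevity, so this is an illustration rather than the paper's exact system), a pre-trained RGB CNN can serve as a fixed feature extractor for trichromatic SSF projections, with retrieval performed by cosine similarity between signatures:

import torch
import torch.nn.functional as F
import torchvision.models as models

# Pre-trained RGB CNN used as a fixed feature extractor; replacing the
# classifier head with Identity exposes the 512-D penultimate features
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

@torch.no_grad()
def deep_signature(batch):
    # L2-normalized deep features for a batch of 3-channel images,
    # e.g., trichromatic SSF projections resized to 224x224
    return F.normalize(backbone(batch), dim=1)

# Placeholder database of 100 trichromatic images and one query image
database = torch.rand(100, 3, 224, 224)
query = torch.rand(1, 3, 224, 224)

scores = deep_signature(query) @ deep_signature(database).T  # cosine similarity
ranked = torch.argsort(scores, descending=True).squeeze(0)
print("top-5 retrieved indices:", ranked[:5].tolist())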
Related Work
With the development of remote sensing acquisition techniques and the rapid growth of earth observation data, remote sensing image retrieval technology has drawn more and more attention in recent years [31]. Indeed, Content-Based Image Retrieval (CBIR) systems have been developed for archive management of remote sensing data [10]. Several kinds of features have been investigated to represent image content and retrieve remote sensing images from a database, such as spectrum signatures [14], texture [32], and spectral patterns [1]. Despite the important progress of CBIR for RGB and multispectral remote sensing imagery [33], few works have addressed the hyperspectral image retrieval problem. Most of the existing works are based on spectral unmixing of the HSI data [34,35]. For instance, Veganzones et al. [36,37] extracted end-members as spectral features using end-member induction algorithms and then defined an end-member-based image distance to measure the similarity between two hyperspectral images. The most similar images are retrieved based on the similarity of each end-member-based signature pair from the query and target images. They used their own HSI dataset to evaluate the performance of the proposed method; however, this data is not yet available for public use. Ömrüuzun et al. [38] proposed to describe the image as a bag of end-member image descriptors for similarity retrieval, and presented a new dataset for HSI data retrieval: the HSI ANKARA Benchmark (http://bigearth.eu/datasets.html). To the best of our knowledge, only these two HSI datasets have been proposed for HSI data retrieval.
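For intuition only, the following Python sketch (numpy assumed) illustrates a generic end-member-set distance of the kind used in such unmixing-based retrieval: each end-member is matched to its closest counterpart by spectral angle, and the best-match angles are averaged symmetrically. This is an illustrative stand-in, not the exact metric of [36,37] or [38].

import numpy as np

def spectral_angle(a, b):
    # Spectral angle (radians) between two spectra
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def endmember_set_distance(E1, E2):
    # Symmetric distance between two end-member sets (rows = spectra):
    # average best-match spectral angle in both directions
    d12 = np.mean([min(spectral_angle(e, f) for f in E2) for e in E1])
    d21 = np.mean([min(spectral_angle(e, f) for f in E1) for e in E2])
    return 0.5 * (d12 + d21)

# Hypothetical end-member sets from two images (4 and 5 end-members, 200 bands)
rng = np.random.default_rng(2)
E_query, E_target = rng.random((4, 200)), rng.random((5, 200))
print(f"image distance: {endmember_set_distance(E_query, E_target):.3f}")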
Hyperspectral remote sensing images contain both spatial and spectral information. Some recent works proposed to combine textural features with spectral [14] or color features to improve the performance of HSI retrieval. Alber et al. [39] used spectral (mean and variance) and textural (local orientation) features for HSI spatio-spectral data description. The extracted features were used to retrieve spatial locations of hurricane eyes in GOES satellite images with a relevance feedback loop. Recently, Tekeste et al. [40] presented a comparative study of Local Binary Pattern (LBP) descriptors for remote sensing data retrieval. They adapted the properties of LBP variants to different types of remote sensing data (multispectral, hyperspectral, and SAR images). However, extracting both spectral and spatial discriminating features to improve hyperspectral image retrieval remains a challenging task. Recently, deep learning approaches have emerged to address this issue and extract more effective deep features for hyperspectral data classification [15,20,24] using both spatial and spectral information. In the work of Zhao et al. [41], Convolutional Neural Networks have also been used to encode pixels' spectral and spatial information. Santara et al. [21] presented a deep neural network architecture that learns band-specific spectral-spatial features for land cover classification in HSI data, while Mei et al. [23] designed supervised and unsupervised learning models to learn sensor-specific spatial-spectral features from HSI data. The work of Zhang et al. [24] focused on spectral-spatial context modelling in order to address the problem of spatial variability of spectral signatures. Lee et al. [42] proposed a deep CNN that learns local spectral and spatial information embedded in hyperspectral images by using a multi-scale filter bank at the initial stage of the network.
In [20], the authors proposed a hyperspectral data classification method using deep features extracted by Stacked Autoencoders. Chen et al. [43] introduced Deep Belief Networks (DBN) to extract the deep and invariant features of hyperspectral data. More recently, in [25], a three-dimensional (3D) CNN model was proposed in order to extract the spectral-spatial features from HSI data.
Deep feature extraction for HSI description is still a challenging problem because of the high-dimensional nature of HSI data, the lack of hyperspectral image datasets, and the spectral correlations that exist between bands. In the literature, various works have been carried out to overcome the high-dimensional and highly correlated feature space issues for HSI data classification [44]. Many HSI dimensionality reduction techniques, including Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), have been used for spectral band selection and reduction [21-23]. For example, Zhao et al. [22] proposed a framework that joins dimension reduction and deep learning for hyperspectral image classification based on spectral-spatial features. However, such linear transformation-based methods are not suitable for analyzing inherently nonlinear hyperspectral data. Some works proposed alternative solutions for band selection by designing a band selection layer within the neural network architecture. In [45], the authors proposed a supervised CNN architecture based on a Siamese learning loss scheme to learn a reduced HSI data representation. Lee and Kwon [42] proposed a contextual deep CNN that optimally explores contextual interactions by jointly exploiting local spatial-spectral relationships of neighboring pixels; specifically, the joint exploitation of spatial-spectral information is achieved by a multi-scale convolutional filter bank.
Most of the aforementioned deep learning works are pixel-based approaches that process each pixel using its reflectance values from different spectral bands. Those approaches perform pixel-wise detection and classification followed by a post-processing step to group pixels or to segment an image into regions. Yet the rich spectral information contained in hyperspectral images makes them well suited for accurate computer vision tasks like scene understanding or object recognition from remote sensing data. Moreover, the lack of available annotated HSI data makes it challenging to train such CNNs from scratch. In addition, most state-of-the-art CNN architectures are designed and trained for extracting features from RGB data, and hence their direct use on HSI data might lead to sub-optimal results for analyzing spatial-spectral data.
Proposed Framework
In this section, we present the proposed framework for HSI content representation, illustrated by Figure 1. First, we introduce the proposed trichromatic image construction scheme based on the scalar product of the hyperspectral image with three spectral sensitivity functions. Then, we detail the CNN feature extraction process from the obtained images. Next, we explain the objective of our study using our HSI representation. Finally, we present the HSI dataset used to evaluate the proposed approach.
Spectral Sensitivity Functions-Based HSI Content Representation
A spectrum is mathematically defined as a continuous function F(λ) over the wavelengths expressing the acquired energy coming from a surface, a scene, or a source [46]. In all three cases, the spectrum is directly related to the physical and optical properties of the acquired object. In remote sensing, some physical discontinuities are induced by the atmosphere, meaning that F is not continuously differentiable. The Spectral Sensitivity Function (SSF) is at the core of any spectral or color sensor specification and construction. The sensitivity defines the relative efficiency of the sensor at a given wavelength, expressed as a percentage. The radiance measurement associated with a particular spectrum F is then defined as the accumulation of energy weighted by the sensitivity S(λ) at each wavelength λ (Equation (1)). From a signal processing point of view, the SSF can be considered a spectral sampling function defining the spectral range to acquire and sample. The measured quantity value [47] m is defined by:

m = ∫_{λ_min}^{λ_max} F(λ) S(λ) dλ,   (1)

where [λ_min, λ_max] defines the spectral support of F and S.
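To make Equation (1) concrete, here is a minimal discrete sketch in Python/NumPy, assuming the band center wavelengths are known; the sampling and SSF values below are illustrative, not those of the paper:

```python
import numpy as np

def measured_quantity(spectrum, ssf, wavelengths):
    """Discrete form of Equation (1): accumulate the energy of spectrum F
    weighted by the sensitivity S at each sampled wavelength lambda."""
    return np.trapz(spectrum * ssf, wavelengths)

# Illustrative example: 224 AVIRIS-like bands between 365 and 2497 nm
wavelengths = np.linspace(365.0, 2497.0, 224)
spectrum = np.random.rand(224)                            # stand-in for F
ssf = np.exp(-0.5 * ((wavelengths - 800.0) / 60.0) ** 2)  # a Gaussian SSF
m = measured_quantity(spectrum, ssf, wavelengths)
```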
For display purposes, three SSFs are associated with the Red, Green, and Blue channels. The spectral sensitivity functions corresponding to these channels are based on the standard CIE Color Matching Functions (CMF). Nevertheless, constraints and limits in sensor fabrication have led to several hundred different sets of trichromatic functions and, consequently, as many color spaces.
A hyperspectral image I(x, λ) associates to each pixel location x a spectrum F. This spectrum characterizes the sum of signals transmitted by the pixel. We propose to transform the hyperspectral image I(x, λ) into a trichromatic measured quantity value M_i(x) without the constraints of the CMF foundations or the limits of the visible range. The trichromatic measured quantity value M_i(x) is formed by the ordered triplet (m_i^0(x), m_i^1(x), m_i^2(x)) obtained from a sequence S_i of three spectral sensitivity functions (S_i^0, S_i^1, S_i^2) using Equation (1). At the end of this process, the hyperspectral image is transformed into a trichromatic image denoted M_i.
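A minimal sketch of this transformation, assuming an HSI cube of shape (H, W, B) whose B bands share the wavelength sampling of the three SSFs; the per-channel normalization is our assumption for producing a displayable image:

```python
import numpy as np

def trichromatic_image(cube, ssf_triplet, wavelengths):
    """Transform an HSI cube I(x, lambda) into a trichromatic image M_i by
    applying Equation (1) with each of the three SSFs (S_i^0, S_i^1, S_i^2)."""
    channels = []
    for ssf in ssf_triplet:  # ordered by spectral range
        # Weighted accumulation along the spectral axis for every pixel
        channels.append(np.trapz(cube * ssf, wavelengths, axis=-1))
    m = np.stack(channels, axis=-1)              # shape (H, W, 3)
    # Normalize each channel to [0, 255] for display / CNN input
    m -= m.min(axis=(0, 1), keepdims=True)
    m /= np.maximum(m.max(axis=(0, 1), keepdims=True), 1e-12)
    return (255.0 * m).astype(np.uint8)
```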
Deep features F = {df_0, df_1, ..., df_n} of the different trichromatic images M_i are extracted using a deep feature approach based on a CNN (ResNet) [30] pre-trained on ImageNet [48], without the fully connected layer originally tailored to the image classification task. We use the ResNet-50 version, which consists of 16 bottleneck structures, each composed of 3 convolutional layers followed by batch normalization (BN) layers. The output of the last global average pooling layer is used as the feature vector for HSI data indexing and retrieval. The obtained signatures are 2048-dimensional vectors. The Euclidean distance is used to compute the similarity between a given pair of image signatures.
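The following sketch reproduces this signature-extraction step with PyTorch/torchvision, which is our assumption (the paper does not name its deep learning framework); the preprocessing pipeline is the standard ImageNet one:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# ResNet-50 pre-trained on ImageNet, truncated after global average pooling
resnet = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone = torch.nn.Sequential(*list(resnet.children())[:-1]).eval()

preprocess = T.Compose([
    T.ToPILImage(), T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def signature(trichromatic_uint8):
    """Return the 2048-D deep feature of one trichromatic image."""
    x = preprocess(trichromatic_uint8).unsqueeze(0)
    return backbone(x).flatten(1).squeeze(0)   # shape (2048,)

def distance(sig_a, sig_b):
    """Euclidean distance used to rank database images against a query."""
    return torch.linalg.norm(sig_a - sig_b).item()
```

Dropping the fully connected layer keeps the network task-agnostic: the pooled 2048-D activations act as a generic visual descriptor rather than class scores.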
To preserve the optical and physical properties, we do not restrict our approach to only three spectral bands among the several hundred acquired ones. In this work, we assume that each category presents a particular spectral response and can therefore be associated with a particular set of SSFs. We are thus looking for the sequence of N SSFs that achieves the best retrieval results for each category. In the current work, we first study the use of only one SSF and then set N to 3 to compare the retrieval results with the RGB-based results.
The ICONES Hyperspectral Satellite Imaging Dataset (ICONES-HSI)
In this paper, we present our dataset of hyperspectral satellite data, ICONES-HSI. Images were generated from several HSI acquired by the NASA Jet Propulsion Laboratory's Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) (https://aviris.jpl.nasa.gov/). Spectral radiance measurement data are sampled in 224 contiguous spectral channels (bands) between 365 and 2497 nanometers. We extracted a dataset of 486 patches of 300 × 300 pixels from AVIRIS data. We grouped the obtained HSI cubes, by visual inspection and Google Maps content verification, into 9 categories: Agriculture (50), Cloud (29), Desert (54), Dense-Urban (73), Forest (69), Mountain (53), Ocean (68), Snow (55), and Wetland (35). For all patches, we added their corresponding RGB images (considered as the baseline in our experiments), extracted from the RGB images provided by AVIRIS.
To ensure the correct annotation of the patches, an interactive interface was developed. This interface allows the user to select the patch to annotate. In parallel, using the GPS coordinates included in the AVIRIS metadata and the (x, y) position in the whole image, the interface extracts the local Google map. Thus, aerial views with different angles and Google Street views help the user annotate each patch correctly. Figure 2 presents some examples of patches from all categories of the ICONES-HSI dataset. The ICONES-HSI also includes an RGB version of all patch images extracted from the AVIRIS RGB full image. The ICONES-HSI dataset is available for download from http://xlim-sic.labo.univ-poitiers.fr/datasets/ICONES-HSI/index.php?lang=en.
SSFs Analysis for HSI Content Discrimination: A Multi-Level Study
Before searching for the triplet of SSFs that yields the best retrieval accuracy, we first study each SSF individually using a multi-level analysis scheme.
Multi-Level SSFs Construction
To evaluate the image content description with a specific band selection using one SSF, we define the selected spectral bands by Gaussian and Complementary Error function windows along the spectral range. Thus, 5 hierarchical spectral levels with 31 SSFs are used to filter the HSI spectra.
Figure 3 shows the selected SSFs by level. Level 1 contains a single SSF that covers the whole spectral range, and each subsequent level results from dividing the windows of the level above by 2. Thus, similarly to the first level, the SSFs of the other levels, when combined, also cover the whole spectral range. The goal here is to see whether one optimal SSF is enough to build a discriminative and efficient image signature compared to an RGB-based content description approach.
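A minimal sketch of such a hierarchical construction is given below; the Gaussian window widths are an assumption, since the exact cut-off parameters of the paper are not specified:

```python
import numpy as np

def multilevel_ssfs(wavelengths, n_levels=5):
    """Build a pyramid of Gaussian SSF windows: level 1 covers the whole
    spectral range with one window; each subsequent level halves the
    windows of the level above (1 + 2 + 4 + 8 + 16 = 31 SSFs for 5 levels)."""
    lo, hi = wavelengths.min(), wavelengths.max()
    ssfs = []
    for level in range(1, n_levels + 1):
        n_windows = 2 ** (level - 1)
        width = (hi - lo) / n_windows
        for k in range(n_windows):
            center = lo + (k + 0.5) * width
            sigma = width / 4.0  # assumed cut-off steepness
            ssfs.append(np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2))
    return ssfs  # 31 SSFs for n_levels=5, as in Figure 3
```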
Results and Discussion
Figure 4 presents, for each class, the RGB version of the hyperspectral image, its corresponding spectra, the best SSF selected according to the retrieval performance for the category, and the resulting monochromatic image obtained with that SSF. We note that for most categories, the best SSF is from level 5. This means that the best descriptive wavelengths for most categories are contained in a small window, and using a wider one introduces noise that decreases the accuracy of the image description. Let us focus on some categories and explain the possible reasons for the observed performance. Only the Agriculture category uses a level 4 SSF covering the visible range. A focus on the patch content of this category may explain this result. First, agriculture fields contain repeated geometric shapes with specific gradients induced by the limits of the cultivated fields; a larger spectral bandwidth is needed to better describe this topological characteristic. Moreover, as acquisitions were performed throughout the whole year, the germination stage of the fields may differ, so multiple colors may be useful for an accurate description of the patches. This of course includes the specific band of minimal absorption of chlorophyll, i.e., the maximum of its reflectance (500 nm), but not only this, as the Agriculture class includes surfaces with and without vegetation, and part of the observed spectra is induced by the soil reflectance, which is slightly higher than the chlorophyll reflectance.
To conclude on the Agriculture category, we observed that the radiance around 500 nm is lowest for the Forest, Ocean, and Snow categories and highest for the others. It is also interesting to note that the Forest class is not best represented by the first peak of chlorophyll reflectance at 500 nm, but by the third peak around 1700/1800 nm, probably reflecting a difference between cultivated vegetation and forest.
The Dense-Urban and Desert categories are better discriminated by a spectral bandwidth around 600-700 nm, which corresponds to the limit of the "red edge" specific to the chlorophyll concentration assessment used in the NDVI (Normalised Difference Vegetation Index) criteria. This indicates that this specific bandwidth allows these categories to be discriminated from the others by their lack of vegetation.
Table 1 gives an overview of the retrieval performance, in particular the precision at top 20 (P@20) and the Mean Average Precision (MAP), for the best studied SSFs at the different sampling levels. Cells in bold font represent the best results over the different levels. At first glance, we observe that for most categories, level 5 SSFs perform better than the remaining levels; the few exceptions have retrieval accuracies very close to those obtained with level 5 SSFs. We may conclude from this first set of results that a tiny spectral window contains enough information to achieve better retrieval performance than a wider one, and that the information contained in the rest of the spectrum is misleading or noisy. These conclusions are based only on the best SSF per category. However, compared with the best level 5 SSF results for each category, the RGB results are clearly better: on average, RGB shows an 11% increase in MAP (see Table 2). Exceptions appear for the Cloud and Wetland categories. For the Cloud category, the best spectral bandwidths are located in the infrared bands, which cannot be captured by the RGB image. For the Wetland category, retrieval from RGB is harder as this category covers a large variety of image contents (lake, river, or swamp mixed with a portion of land); more information than RGB is needed to improve the results. All these observations show the importance of using more than one SSF to describe the patches.
In the next section, we show that combining information from the whole spectrum using more spectral sensitivity functions leads to better retrieval performance.
Trichromatic Image Content Description for HSI Retrieval
In this section, we first present our rules for generating a triplet of SSFs; we then detail and discuss our experimental results, comparing the retrieval performance of three SSFs covering the whole spectral range against deep features extracted from RGB images.
Rules of Spectral Sensitivity Function Generation
In order to ensure a complete use of the acquired spectral range, we define two constraints for the definition of the SSFs. First, all the acquired wavelengths must be taken into account. Second, the three sensitivity functions must be ordered following their spectral ranges to construct a trichromatic image preserving the physical and optical properties. We propose to define the sensitivity functions from combinations of Gaussian, Error, and Complementary Error functions [49]. These combinations construct spectral windows with unitary sensitivity and Gaussian-based cut-offs (a construction sketched in code after the list below). Two spectral sampling cases are considered:
• Whole spectral range: We consider both the visible and the IR ranges as a whole.
• Partial spectral range: We reduce the sampling process to a selected spectral range and consider the visible range and the IR range separately.
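As announced above, here is a sketch of one such window with unitary plateau sensitivity and Gaussian cut-offs built from Error function combinations; the edge positions and steepness values are illustrative assumptions:

```python
import numpy as np
from scipy.special import erf

def flat_top_ssf(wavelengths, rise_nm, fall_nm, steepness_nm=20.0):
    """Spectral window with unitary sensitivity between rise_nm and fall_nm
    and Gaussian-shaped cut-offs on both sides, built from erf functions."""
    rising = 0.5 * (1.0 + erf((wavelengths - rise_nm) / steepness_nm))
    falling = 0.5 * (1.0 - erf((wavelengths - fall_nm) / steepness_nm))
    return rising * falling

# Three ordered windows sampling the whole AVIRIS range (365-2497 nm)
wl = np.linspace(365.0, 2497.0, 224)
triplet = [flat_top_ssf(wl, 400, 700), flat_top_ssf(wl, 700, 1300),
           flat_top_ssf(wl, 1300, 2400)]
```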
Trichromatic Image Content Extraction
From each of the two spectral samplings detailed above, we obtain a set of triplets of SSFs. The extraction of the HSI content representation then follows the framework detailed in Section 3. Each possible combination of three SSFs is used to build trichromatic versions of all hyperspectral images. Finally, the signatures of those trichromatic images are extracted using the bottleneck layer of the pre-trained ResNet CNN, giving a 2048-dimensional vector per image.
Results and Discussion
To study the discriminating power of the selected sensitivity functions with respect to the data categories, we evaluate their retrieval performance on the presented HSI dataset. We compute the precision at the top N retrieved images, in particular top 10 and top 20 (denoted P@10 and P@20), as well as the Mean Average Precision (MAP) over the retrieved data. We summarize the obtained results in Table 3. Each cell of this table represents the results obtained for the best triplet of sensitivity functions S_i, which we compare with the results obtained with the RGB images (considered as the baseline). We present our results according to the 3 spectral ranges: Whole, Visible, and InfraRed (IR). Gray cells highlight the best global retrieval results among all reported results. Cells in bold font represent the best results that outperform the baseline (RGB).
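For reference, a minimal sketch of the two metrics, assuming each query returns a ranked list of category labels for the database images:

```python
import numpy as np

def precision_at_k(ranked_labels, query_label, k):
    """Fraction of the top-k retrieved images sharing the query's category."""
    top = np.asarray(ranked_labels[:k])
    return float(np.mean(top == query_label))

def average_precision(ranked_labels, query_label):
    """AP for one query: mean of P@i taken at every relevant rank i."""
    ranked = np.asarray(ranked_labels)
    hits = ranked == query_label
    if not hits.any():
        return 0.0
    precisions = np.cumsum(hits) / (np.arange(len(ranked)) + 1)
    return float(precisions[hits].mean())

# MAP is the mean of average_precision over all queries in the dataset.
```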
Table 3. Retrieval results for the original RGB images (baseline), the Partial spectral sampling case (Visible and Infra-red ranges), and the Whole spectral sampling case according to the best set of three spectral sensitivity functions for each category.For some categories, the visible range allows better discrimination, and for others, the infrared range is the dominant one.
(Table columns: P@10, P@20, and MAP (%) for the RGB (Baseline), Visible, InfraRed, and Whole spectral ranges; table body omitted.)

A brief overview of the grey cells of Table 3 highlights the importance of the whole spectral range, as it contains most of the best results. Only two categories have better results in the visible range, in particular the Agriculture (A) category, where the Visible MAP result shows a 20% increase compared to the Whole MAP. Our hypothesis for this category is that the IR range adds noisy information and that the similarity is mainly based on the visible color and shape information of agriculture fields, as previously explained. For the Wetland (W) category, the low retrieval results do not support any conclusion; this category is very heterogeneous, representing natural scenes containing lake, river, or swamp mixed with a portion of land. The Average line (Avg) of Table 3 presents the best results from a specific triplet of SSFs over the whole dataset; for the whole spectral sampling, it corresponds to the triplet presented in Figure 5. These results highlight the fact that many possible triplets of SSFs contain more discriminative information than the RGB image, even when using a CNN specifically trained on RGB images such as ResNet. The experiments also show that the baseline (RGB) result is ranked 7th in the list of possible SSF triplet combinations in terms of average MAP, i.e., six sets of three SSFs outperform the baseline in terms of average performance. Values with an asterisk (*) in Table 3 denote the best results obtained with the IR range compared to the visible range. A closer comparison between Visible and IR points out only two categories (Forest (F) and Mountain (M)) where IR presents significantly better results than Visible (marked with (*) in Table 3). This observation justifies the need for the IR spectral range. Figure 5 shows some representative examples of the best SSFs with respect to a category. It presents, from left to right, an example image from a selected category, a random set of its spectra, and the triplet of SSFs that best discriminates the image in terms of retrieval performance. From Figure 5, we note that the SSF triplets differ but mainly contain one function in the visible range, one in the IR range, and the last one in the short-wavelength IR. Hence, to obtain a discriminating image representation and thus good retrieval performance, the whole spectrum is mandatory. Moreover, we note that only one sensitivity function is needed to represent the visible range. Surprisingly, the color information is less relevant than the texture and shape content for the proposed retrieval.
In order to evaluate the performance of our method on an external dataset, we performed experiments on the hyperspectral ANKARA archive [38], which is, to the best of our knowledge, the only other HSI dataset available for CBIR. The dataset contains Land-Use and Land-Cover annotations for multi-class and single-class retrieval tasks, respectively. Since our work focuses on the single-label retrieval task, we used the Land-Use annotation to test our approach. The data is composed of 216 images of 63 × 63 pixels organized into 4 Land-Use categories: Rural Area (43), Urban Area (37), Cultivated Land (126), and Forest (10). Table 4 presents the retrieval results in terms of P@5 and MAP for both RGB and HSI data. We observe that our approach performs well compared to RGB. It is worth noting that the ANKARA dataset was originally acquired with 220 bands, and only 119 bands were retained after noisy band removal. Due to the lack of information about the wavelength and spectral ranges (Visible, IR), we could not perform experiments on the visible and IR ranges separately; therefore, we present retrieval results only for the Whole range. The missing wavelength information is also problematic for constructing the SSFs, which are defined over continuous wavelengths. From Table 4, one can see an improvement in terms of MAP and P@5 for all categories except Forest, which may be due to the limited number of samples in this category (10). The average results are better when using the Whole spectral range than with RGB images (81.03% versus 77.3% for P@5 and 66.54% versus 61.8% for MAP). Hence, we note 3.73% and 4.74% increases, respectively, for the P@5 and MAP average results.
One may ask about the performance (in terms of accuracy and computational time) of the proposed SSF-based HSI data description scheme compared to a baseline method in a content-based HSI retrieval task. Hence, to verify the merit of the presented method (SSF-based deep HSI description), we compare it with a popular method for HSI data transformation enabling deep feature extraction: the PCA method, which is widely used for HSI data classification [22]. The original HSI is reduced to a trichromatic image using the first three Principal Components. We then use the same retrieval framework, including ResNet deep feature extraction from the obtained trichromatic images and the Euclidean distance for signature comparison. All computations were performed on a system with an Intel Core i7 and 16 GB of RAM using Python. Table 5 presents the retrieval performance obtained by each image description approach and the computational time required for HSI signature generation (including trichromatic image construction and deep ResNet feature extraction). The proposed SSF-based content description approach outperforms the PCA-based baseline both in terms of retrieval precision and signature generation time: the SSF approach selects more discriminating information from the HSI data than the PCA method. It is also worth noting that the retrieval time is the same for the two approaches, equal to 0.3 s/image.
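A sketch of this PCA baseline, reducing the B spectral bands of a cube to its first three principal components (the eigendecomposition route below is one of several equivalent implementations):

```python
import numpy as np

def pca_trichromatic(cube):
    """Project the B-band HSI cube onto its first three principal components,
    yielding a trichromatic image comparable to the SSF-based construction."""
    h, w, b = cube.shape
    flat = cube.reshape(-1, b).astype(np.float64)
    flat -= flat.mean(axis=0)
    # Eigen-decomposition of the band covariance matrix
    cov = np.cov(flat, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pcs = eigvecs[:, np.argsort(eigvals)[::-1][:3]]  # top-3 components
    return (flat @ pcs).reshape(h, w, 3)
```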
Conclusions
In this paper, we have proposed two main contributions. The first is the study of the discriminating power of spectral sensitivity functions in a content-based hyperspectral image retrieval framework. The second is the introduction of a new hyperspectral image dataset to the remote sensing community. Our proposed framework focuses on image representation with the most relevant spectral bands using SSFs; deep features are then extracted from the obtained trichromatic representation of the HSI data to build a discriminating image signature. A first experiment highlights the best descriptive bandwidths for each category but also shows that one SSF is not enough to represent the scene content. Further results confirm that a hyperspectral image retrieval system has to take advantage of the information given by the whole image spectrum to improve its performance. The results also show that the physical measurements and optical properties of the scene contained in remote sensing HSI contribute to a more accurate image content description than the information provided by the RGB image representation. Our framework has also been tested on the ANKARA archive, showing its potential on an external hyperspectral dataset. Further improvements of the proposed approach will include the study of more complex sensitivity functions.
Figure 1. Block diagram of the proposed framework.
Figure 4. Best SSF for each class according to the retrieval performance.
Figure 5. Examples of RGB-display images (from left to right) of Mountain, Snow, Forest, and Dense-Urban categories (line 1) with their corresponding spectra (line 2) and best triplet of SSFs (line 3).
Table 1. Retrieval results for different sampling levels of Spectral Sensitivity Functions (SSFs) according to the best spectral sensitivity function for each category from the ICONES-Hyperspectral Satellite Imaging (HSI) dataset.
Table 2. RGB image retrieval results (baseline) compared to the level 5 best SSFs by category.
Table 4. Retrieval results for the ANKARA dataset (Land-Use categories).
Table 5. Performance evaluation of our hyperspectral image content description approach against a method based on Principal Component Analysis (PCA) on the ICONES-HSI dataset.
Effects of peak ankle dorsiflexion angle on lower extremity biomechanics and pelvic motion during walking and jogging
Objective Ankle dorsiflexion during walking causes the tibia to roll forward relative to the foot to move the body forward. Individuals with ankle dorsiflexion restriction may present altered movement patterns that cause a series of dysfunctions. Therefore, the aim of this research was to determine the effects of peak ankle dorsiflexion angle on lower extremity biomechanics and pelvic motion during walking and jogging. Method This study involved 51 subjects tested during both walking and jogging. A motion capture system and force platforms were used to synchronously collect kinematic and kinetic parameters during these activities. Based on the peak ankle dorsiflexion angle during walking, the 51 subjects were divided into an ankle dorsiflexion-restricted group (RADF group, angle <10°) and an ankle dorsiflexion-unrestricted group (un-RADF group, angle >10°). Independent-sample t-tests were performed to compare the pelvic and lower limb biomechanical parameters between the groups during the walking and jogging tests in this cross-sectional study. Results The parameters that were significantly smaller in the RADF group than in the un-RADF group at the moment of peak ankle dorsiflexion in the walking test were: ankle plantar flexion moment (p < 0.05), hip extension angle (p < 0.05), internal ground reaction force (p < 0.05), anterior ground reaction force (p < 0.01), and pelvic ipsilateral tilt angle (p < 0.05). In contrast, the external knee rotation angle was significantly greater in the RADF group than in the un-RADF group (p < 0.05). The parameters that were significantly smaller in the RADF group than in the un-RADF group at the moment of peak ankle dorsiflexion in the jogging test were: peak ankle dorsiflexion angle (p < 0.01), anterior ground reaction force (p < 0.01), and pelvic ipsilateral rotation angle (p < 0.05). Conclusion This study shows that individuals with limited ankle dorsiflexion experience varying degrees of altered kinematics and kinetics of the pelvis, hip, knee, and foot during walking and jogging. Limited ankle dorsiflexion alters the movement pattern of the lower extremity during walking and jogging, diminishing the body's ability to propel itself forward, which may lead to higher injury risks.
Introduction
The range of motion of ankle dorsiflexion is defined by the talus rolling forward relative to the leg while simultaneously gliding posteriorly (talocrural dorsiflexion) (1). Adequate ankle dorsiflexion range of motion is necessary for daily functional activities such as walking, jogging, landing, and walking up and down stairs (2). During the stance phase of gait, dorsiflexion reaches its peak just before heel rise. The magnitude of ankle dorsiflexion varies among individuals; it is generally in the range of 5-15 degrees, with a minimum of 10 degrees reported by Root et al. (3). Ankle dorsiflexion during walking causes the tibia to roll forward relative to the foot to move the body forward (1, 4). Jogging, on the other hand, requires a greater angle of ankle dorsiflexion to achieve this forward rolling (5).
Reduced ankle dorsiflexion is primarily caused by tightness in the gastrocnemius and soleus and insufficient posterior gliding of the talus, and is also associated with musculoskeletal injuries of the foot and ankle joint (6). Ankle dorsiflexion restriction has been identified as a risk factor for lower extremity injuries (7-10) and can lead to compensatory movements that alter lower extremity movement patterns and generate excessive stress. These biomechanical changes can result in injuries such as plantar fasciitis (8, 11), Achilles tendinitis (12), and knee injuries due to altered knee alignment (13, 14). Moreover, limited ankle dorsiflexion leads to changes in pelvic movement patterns (15). Studies have indicated that lumbar-pelvic movement patterns are altered in patients with low back pain compared to those without (16), and that inadequate control of gait and abnormal lower limb biomechanics can produce excessive stress on the upper lumbosacral region, leading to the development of low back pain (17-19). Previous studies have shown that a decrease in ankle dorsiflexion angle leads to an increase in foot progression angle during the gait cycle (20), an earlier heel-off time (21), and a shorter stride length (22). From the point of view of kinematic chain coupling, the movement of the ankle may affect the temporal and movement parameters of the knee, hip, and pelvis (23). Inadequate ankle dorsiflexion affects the ability to move forward (24), favoring landing on the arch and forefoot, which affects ground reaction forces and the torques of the lower extremity joints (25, 26). It also alters peak hip and knee flexion and pelvic movement patterns during the swing phase (15). Therefore, ankle dorsiflexion restriction can cause a series of dysfunctions, and clarifying the specific effect of ankle dorsiflexion angle on lower extremity biomechanics and pelvic movement is important for the prevention and treatment of functional impairment.
The influence of limited ankle dorsiflexion on certain lower extremity biomechanical parameters during walking has been investigated in the literature, but no study has yet investigated the effect of different angular ranges on overall lower extremity biomechanics and pelvic motion during jogging, or further compared these with lower extremity biomechanics during walking. Therefore, the aim of this research was to compare lower limb and pelvic biomechanics during the stance phase of gait between individuals with lower and higher peak ankle dorsiflexion angles, and to determine the effects of different peak ankle dorsiflexion angles on the kinematics and kinetics of the hip, knee, and ankle joints in different planes of motion during walking and jogging, as well as on pelvic motion. The main hypothesis of this study is that individuals with limited peak ankle dorsiflexion angles have reduced knee and hip motion in the sagittal plane during walking and jogging tests, altered pelvic motion patterns, and reduced ground reaction forces at the moment of peak ankle dorsiflexion in gait.
Participants
This study was approved by the Ethics Committee of Peking University Third Hospital; the study authorization number was M2023360 (June 23, 2023). All participants read and signed an approved informed consent document before data collection. 51 subjects (35 men and 16 women) volunteered for this cross-sectional study. The inclusion criteria were: (1) age 18-40 and BMI (body mass index) in the normal range (18.5-24.9); (2) no neurological disorders; (3) no musculoskeletal disorders within the last 6 months that limited physical activity; and (4) no history of surgery or acute injury to the lower extremities or pelvis. All 51 subjects had sufficient physical strength to perform at least 5 sessions of walking and jogging tests and reported no pain or discomfort during data collection. 13 subjects showed limited dorsiflexion in squatting, defined as the knee joint being unable to fully flex or the heel rising during squatting with the feet shoulder-width apart. The other 38 subjects completed the squat test successfully without limited dorsiflexion.
Previous research has mostly measured ankle dorsiflexion range of motion by passive flexion of the ankle joint under non-weight-bearing conditions (3, 15, 20) or using a weight-bearing lunge position (27, 28). In contrast, the present study innovatively selected the peak ankle dorsiflexion angle during the stance phase of the walking test as the criterion for determining whether subjects had limited ankle dorsiflexion. This method can more accurately confirm whether an individual has an appropriate ankle range of motion during walking or other functional movements.
Fifty-one subjects were divided into groups based on the peak ankle dorsiflexion angle during the stance phase of the walking test. Subjects with a peak ankle dorsiflexion angle of less than 10° on either side during the walking test were included in the ankle dorsiflexion-restricted group (n = 30, hereinafter the RADF group), while the remaining subjects were included in the ankle dorsiflexion-unrestricted group (ankle dorsiflexion angle greater than 10°, n = 21, hereinafter the un-RADF group). The sample size was calculated using G*Power software, with an α level of 0.05, statistical power of 80%, and an estimated effect size of 1.0, based on the between-group difference in the main outcome measure (peak knee external rotation) obtained in a pilot study of ten individuals. A minimum of 20 participants per group was needed to detect between-subject differences. The 10 subjects in the pilot study were recruited from the Outpatient Department of Sports Medicine, Peking University Third Hospital; they all received an ankle physical examination and a questionnaire survey, and 5 of them were limited in squatting.
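The reported power analysis can be reproduced, for instance, with statsmodels (a sketch under the stated parameters; G*Power itself was used in the study, and its exact settings may differ from the defaults assumed here):

```python
from statsmodels.stats.power import TTestIndPower

# Two-sample t-test power analysis: alpha = 0.05, power = 0.80, d = 1.0
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=1.0, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"required n per group: {n_per_group:.1f}")
# The study reports a minimum of 20 participants per group.
```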
Data collection
The subjects' static and dynamic 3D motion information was collected with an 8-camera infrared high-speed motion capture system (Vicon, T40) at 100 Hz. Kinetic parameters were collected with two 3D force platforms (AMTI, BP400600) at 1000 Hz. Kinematic and kinetic data were synchronized by a synchronization box (AMTI, GEN5). Reflective markers were placed on bony landmarks, and the model was optimized using the standard plug-in-gait model. Subjects wore exercise shorts to fully expose the body from the waist down to below mid-thigh. After the reflective markers were fixed, subjects followed the test procedure to familiarize themselves with the collection requirements and process. The subjects stood in the center of the capture volume with their feet shoulder-width apart and both upper extremities placed naturally at the sides of the body, maintaining a neutral position of the talocrural joint, for three static tests used to define the coordinate systems of the skeletal segments. Subsequently, the subjects performed the walking and jogging tests at a self-selected speed, with rest intervals between the two tests such that the subjects did not feel exerted. Five valid trials were collected for each movement, and the average of the 5 trials was used for analysis. All tests were carried out in a space 10 m long, 8 m wide, and 3 m high, with a tracking area about 6 m long.
Data processing
The lower extremity kinematic data from the subjects' walking and jogging tests were processed, and the subjects were divided into the RADF group (<10°, n = 30; 22 men) and the un-RADF group (>10°, n = 21; 8 men) according to the peak ankle dorsiflexion angle during the stance phase of the walking test. The rigid-body biomechanical model was built from a static test with the talocrural joint in a neutral position. Heel-strike and toe-off events were identified from the ground reaction forces measured by the force platforms to define the stance phase of the gait cycle. The coordinate data were filtered using a low-pass Butterworth filter at 12 Hz, and the ground reaction force data were filtered using a low-pass Butterworth filter at 100 Hz. Time-series data for the kinematic and kinetic variables in the coronal, sagittal, and horizontal planes of the pelvis, hip, knee, and ankle joints were calculated using Visual 3D software (C-Motion, Germantown, MD, version v6.00.18).
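A sketch of the filtering step with SciPy; the fourth-order, zero-phase design is an assumption, as the paper does not state the filter order:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(data, cutoff_hz, fs_hz, order=4):
    """Zero-phase low-pass Butterworth filter applied along the time axis."""
    b, a = butter(order, cutoff_hz / (fs_hz / 2.0), btype='low')
    return filtfilt(b, a, data, axis=0)

# Stand-in signals: marker trajectories at 100 Hz, force-plate data at 1000 Hz
markers = np.random.rand(500, 3)   # 5 s of one marker's x/y/z coordinates
grf = np.random.rand(5000, 3)      # 5 s of 3D ground reaction force

markers_filt = lowpass(markers, cutoff_hz=12.0, fs_hz=100.0)
grf_filt = lowpass(grf, cutoff_hz=100.0, fs_hz=1000.0)
```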
Data analysis
All statistical analyses were completed using SPSS 26.0 (IBM, New York, USA). Quantitative data were first tested for normality; if they conformed to a normal distribution, they were expressed as mean ± standard deviation and compared with a two-sample t-test; if not, they were expressed as median and quartiles and compared with a two-sample rank-sum test. The significance level was set at a type I error probability of no greater than 0.05.
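The stated decision rule translates directly into code; below is a sketch with SciPy, using the Shapiro-Wilk test for normality (an assumption, since the paper does not name the normality test) and the Mann-Whitney U test as the rank-sum test:

```python
import numpy as np
from scipy import stats

def compare_groups(a, b, alpha=0.05):
    """Two-sample comparison gated on normality: t-test if both groups pass
    the Shapiro-Wilk test, otherwise a Mann-Whitney (rank-sum) test."""
    normal = stats.shapiro(a).pvalue > alpha and stats.shapiro(b).pvalue > alpha
    if normal:
        stat, p = stats.ttest_ind(a, b)
        summary = ((np.mean(a), np.std(a, ddof=1)),
                   (np.mean(b), np.std(b, ddof=1)))       # mean +/- SD
    else:
        stat, p = stats.mannwhitneyu(a, b, alternative='two-sided')
        summary = ((np.median(a), np.percentile(a, [25, 75])),
                   (np.median(b), np.percentile(b, [25, 75])))  # median, IQR
    return p < alpha, p, summary

# Illustrative call mirroring the group sizes (n = 30 vs n = 21)
rng = np.random.default_rng(0)
sig, p, summary = compare_groups(rng.normal(6.2, 2.6, 30),
                                 rng.normal(13.5, 2.0, 21))
```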
Participant information
A total of 51 subjects participated in the study, including 35 men and 16 women: the RADF group (<10°, n = 30; 22 men) and the un-RADF group (>10°, n = 21; 8 men). 17 of the 38 subjects who were not limited in squatting were classified in the RADF group based on the results of the walking test. The 30 subjects in the RADF group included 13 with passive limited dorsiflexion and 17 without passive limited dorsiflexion during the squat test. There were no significant differences in age, height, or weight between the RADF and un-RADF groups (p > 0.05; see Table 1).
Walking
Figure 1 shows the variations of joint motion angles in the coronal, sagittal, and horizontal planes of the pelvis and lower extremities during the stance phase of walking in the two groups of subjects. Figure 2 shows the variations of moments in the coronal, sagittal, and horizontal planes of each joint of the lower extremity during walking in the two groups of subjects. Table 2 shows the results of comparing the lower limb biomechanical parameters at the moment of peak ankle dorsiflexion angle during the stance phase of gait. The parameters that were significantly smaller in the RADF group than in the un-RADF group were: peak ankle dorsiflexion angle (RADF group: 6.20 ± 2.59°, un-RADF group: 13.52 ± 1.96°, p < 0.01); ankle plantarflexion moment corresponding to this peak moment (RADF group: 0.75 ± 0.15 BW*BH, un-RADF group: 0.84 ± 0.05 BW*BH, p < 0.05); hip extension angle (RADF group: 5.73 ± 6.72°, un-RADF group: 9.93 ± 6.21°, p < 0.05); internal ground reaction force (RADF group: 0.05 ± 0.02 BW, un-RADF group: 0.06 ± 0.02 BW, p < 0.05); anterior ground reaction force (RADF group: 0.10 ± 0.04 BW, un-RADF group: 0.14 ± 0.02 BW, p < 0.01); and pelvic ipsilateral tilt angle (RADF group: 0.82 ± 1.53°, un-RADF group: 1.81 ± 1.66°, p < 0.05). In contrast, the external knee rotation angle was significantly greater in the RADF group than in the un-RADF group (RADF group: 3.34 ± 2.84°, un-RADF group: 1.04 ± 4.46°, p < 0.05). No significant differences were found in the other biomechanical parameters of the lower extremities and pelvis.
Jogging
Figure 3 shows the variations in the joint motion angles of the pelvis and lower limbs in the coronal, sagittal, and horizontal planes during the stance phase of jogging in both groups of subjects. Figure 4 shows the variations of moments in the coronal, sagittal, and horizontal planes of each joint of the lower extremity during jogging in the two groups of subjects. Table 3 shows the results of comparing the lower limb biomechanical parameters at the moment of peak ankle dorsiflexion angle during the stance phase. The parameters that were significantly smaller in the RADF group than in the un-RADF group were: peak ankle dorsiflexion angle (RADF group: 17.22 ± 3.43°, un-RADF group: 22.79 ± 2.98°, p < 0.01); the anterior ground reaction force corresponding to this peak moment (RADF group: 0.02 ± 0.03 BW, un-RADF group: 0.06 ± 0.04 BW, p < 0.01); and the angle of pelvic ipsilateral rotation (RADF group: 0.65 ± 2.89°, un-RADF group: 2.56 ± 3.77°, p < 0.05). No significant differences were found in the other biomechanical parameters of the lower extremities and pelvis.
Discussion
The objective of this research was to investigate the biomechanical characteristics of the lower extremity in individuals with limited ankle dorsiflexion during walking and jogging. Based on the peak ankle dorsiflexion angle during the stance phase measured in the walking test, the subjects were grouped, and the differences in pelvic kinematics and lower extremity biomechanics during walking and jogging were investigated between individuals with different peak ankle dorsiflexion angles. The results showed that during walking, the pelvis, hip, knee, and ankle joint angles, as well as the foot kinetics and ground reaction forces, differed significantly between the RADF and un-RADF groups. During jogging, the pelvis and foot angles were significantly reduced in the RADF group. In the walking test, there was a significant difference in pelvic kinematics between the two groups: the angle of pelvic tilt to the ipsilateral side was significantly smaller in the RADF group than in the un-RADF group. This suggests that important motor changes in the pelvis can exist in individuals with reduced ankle mobility. In gait, the pelvis rotates in all three planes, helping to decrease the movement of the center of mass in the vertical and horizontal directions and thus being energetically economical (29). Pelvic tilt is one of the determinants of the mediolateral displacement of the center of mass (COM) and also helps to reduce the vertical displacement of the center of gravity (30). Therefore, the reduction in the angle of ipsilateral pelvic tilt in the group with limited ankle dorsiflexion affects the movement of the center of gravity in gait, which in turn has an impact on walking. Previous literature has reported that horizontal plane motion of the pelvis is reduced during walking in those with limited ankle dorsiflexion compared to those without (15), whereas frontal plane motion of the pelvis has rarely been addressed. This study therefore extends the investigation of limited ankle dorsiflexion to the frontal plane of the pelvis: the angle of ipsilateral pelvic tilt during walking was significantly smaller in the group with limited ankle dorsiflexion than in the non-limited group. During jogging, the angle of pelvic rotation to the ipsilateral side was significantly smaller in the group with restricted ankle dorsiflexion than in the unrestricted group (p < 0.05). These results suggest that individuals with smaller ankle dorsiflexion angles have less movement in the horizontal plane of the pelvis during exercise, and a previous study (31) has shown that the smaller the pelvic rotation relative to the supporting foot during the stance phase of gait, the greater the torsional stress on the lower extremity, which correlates more strongly with lower extremity injury (32).
The RADF group had a significantly lower hip extension angle in the walking test, indicating that limitation of ankle dorsiflexion was significantly associated with limitation of hip extension during walking. Peak ankle dorsiflexion occurs at the moment of heel lift at the end of the stance phase of gait, when the hip is in extension (33). Ankle push-off contributes to leg swing and propels the body over the supporting limb (24), while a decrease in peak ankle dorsiflexion may decrease ankle push-off strength and hip extension. Meanwhile, hip extension more appropriately loads the ankle in dorsiflexion, creating better muscular and mechanical energy, which is essential for the stance-to-swing transition and thus forward propulsion (34). Therefore, the results of this study suggest that a reduction in peak ankle dorsiflexion affects sagittal plane motion of the hip joint, which in turn adversely affects the transition from the stance to the swing phase in gait.
Differences in knee motion during walking were observed between the two groups of subjects, with the RADF group having a significantly greater angle of external knee rotation. The external rotation of the knee that occurs at the end of the stance phase can be explained by the "screw-home mechanism" (35), whereby the final extension of the knee during the gait cycle is normally accompanied by external rotation of the tibia relative to the femur. The RADF group showed greater external knee rotation at the moment of peak ankle dorsiflexion. When the knee joint is extended, the anterior cruciate ligament (ACL) becomes twisted and tightened if the tibia is rotated externally with respect to the femur (screw-home movement) (36), which may increase the risk of ACL injury. This is because the ACL not only prevents knee hyperextension but also stabilizes the knee against tibial rotation (37). Many researchers have reported that knee rotation is significantly associated with ACL injury (38-41), and external knee rotation combined with knee abduction may cause the ACL to impinge on the femoral condyle, which in turn increases the load on the ACL (42). Therefore, greater external knee rotation angles in individuals with limited ankle dorsiflexion may increase the risk of knee injury. However, no changes in knee biomechanical parameters other than the external knee rotation angle were found in this study, which is not consistent with the hypothesis of this study or with the results in the literature (15, 43) and may be related to the different grouping methods and inter-subject differences.
In the present study, the RADF group had a smaller ankle plantarflexion moment during the walking test, as well as smaller anterior ground reaction forces in both the walking and jogging tests. In gait, the body is propelled forward mainly by the plantarflexion push-off against the ground (32). The plantarflexion push-off moment of the ankle joint is generated by the triceps surae (soleus, medial and lateral gastrocnemius) and other extrinsic foot muscle-tendon units, and the peak ankle push-off force is partially derived from the release of elastic energy stored in the Achilles tendon during ankle dorsiflexion (44). The results of the study showed that a restricted ankle dorsiflexion angle reduces the ankle plantarflexion moment, which suggests that individuals with restricted ankle dorsiflexion have less ability to swing their lower limbs forward during walking. This may also account for the smaller anterior ground reaction force in the RADF group during walking and jogging. From the results, it was observed that the anterior ground reaction force during walking was greater than during jogging, which may be caused by changes in gait parameters accompanying the transition from walking to running, such as the duration of the stance phase and the change in stride frequency; the chosen walking and jogging speeds equally affect the magnitude of the ground reaction force, which is consistent with previous results in the literature (45). Since the medial-lateral forces have particularly high coefficients of variation (46-48), they are the least reliable of the ground reaction force components and were therefore not analyzed in this study.
Strengths and limitations
In this study, a three-dimensional motion capture system was used to determine whether subjects had a sufficient ankle dorsiflexion angle to complete functional movements such as walking and jogging. The study also systematically analyzed the biomechanical effects of different ankle dorsiflexion angles on the hip, knee, ankle, and pelvis during walking and jogging.
The present study has several limitations. Based on the maximum dorsiflexion angle in the walking test, this study proposed a novel method of diagnosing functional limited ankle dorsiflexion from the maximum ankle dorsiflexion during the stance phase of walking. However, this method was not further compared with other methods such as the weight-bearing lunge test, which may affect its validity. This study focused on lower extremity biomechanics and pelvic motion during walking and jogging in individuals with ankle dorsiflexion restrictions, but did not further compare the differences between walking and jogging within these individuals. Jogging requires a greater ankle dorsiflexion angle to propel the body forward, but the transition from walking to running shortens the duration of the stance phase of gait (38), which can affect the biomechanics of the lower extremity during the gait cycle and needs to be explored further in future studies. In addition, lower extremity muscle activity and muscle strength were not assessed as variables in this study and need to be addressed in future work. Finally, due to the lack of an upper limb model, the COM could not be determined, and the changes of the COM in individuals with different ankle dorsiflexion angles during walking and jogging should be studied further.
Conclusion
The present study demonstrated that during walking, smaller peak ankle dorsiflexion angles in gait result in reduced pelvic frontal plane motion, reduced hip extension at the moment of peak ankle dorsiflexion, an increased knee external rotation angle, and reduced ankle plantarflexion moment and anterior ground reaction force. During jogging, ipsilateral pelvic rotation and anterior ground reaction forces were reduced in those with limited ankle dorsiflexion. Thus, limited ankle dorsiflexion alters the movement pattern of the lower extremity during walking and jogging, diminishing the body's ability to propel itself forward, which may lead to higher injury risks.

FIGURE 3
The variations of joint motion angles during jogging in the two groups of subjects. x-axis, the percentage of the stance phase of gait; y-axis, joint angles (°); Red line, RADF group; Green line, un-RADF group; Blue horizontal line, significant effect; DF, dorsiflexion; PF, plantarflexion; Ext Rot, external rotation; Int Rot, internal rotation. First vertical dotted line, contralateral toe off; second vertical dotted line, contralateral heel off.
FIGURE 1
The variations of joint motion angles during walking in the two groups of subjects. x-axis, the percentage of the stance phase of gait; y-axis, joint angles (°); Red line, RADF group; Green line, un-RADF group; Blue horizontal line, significant effect; DF, dorsiflexion; PF, plantarflexion; Ext Rot, external rotation; Int Rot, internal rotation. First vertical dotted line, contralateral toe off; second vertical dotted line, contralateral heel off.
FIGURE 2
The variations of moments and ground reaction force during walking in the two groups of subjects. x-axis, the percentage of the stance phase of gait; y-axis, moment of force/ground reaction force; Red line, RADF group; Green line, un-RADF group; Blue horizontal line, significant effect; BW, body weight; BW*BH, body weight multiplied by body height; Ext M, extension moment; Fle M, flexion moment; ExtR M, external rotation moment; IntR M, internal rotation moment. First vertical dotted line, contralateral toe off; second vertical dotted line, contralateral heel off.
TABLE 1
Participant information.
TABLE 2
Biomechanical parameters during the stance phase during walking.
TABLE 3
Biomechanical parameters during the stance phase during jogging. SD, standard deviation; CI, confidence interval; *, significant effect; BW, body weight; BW*BH, body weight multiplied by body height; GRF, ground reaction force; Medial, medial ground reaction force; Posterior, posterior ground reaction force; Vertical, vertical ground reaction force.

Funding
The author(s) declare financial support was received for the research, authorship, and/or publication of this article. This work is partially supported by the Beijing Nova Program (20230484412), the Beijing Natural Science Foundation (L222138), the Innovation and Transformation Fund Project of Peking University Third Hospital (BYSYZHKC2022119), and Capital Health Research and Development of Special (SF2022-2-4175).
Gallbladder neuroendocrine carcinoma diagnosis, treatment and prognosis based on the SEER database: A literature review
BACKGROUND Gallbladder neuroendocrine carcinoma (GB-NEC) has a low incidence rate; therefore, its clinical characteristics, diagnosis, treatment and prognosis are not well explored. AIM To review recent research and analyze corresponding data in the Surveillance Epidemiology and End Results (SEER) database. METHODS Data of GB-NEC (n = 287) and gallbladder adenocarcinoma (GB-ADC) (n = 19 484) patients from 1975 to 2016 were extracted from the SEER database. Survival analysis was performed using Kaplan–Meier and Cox proportional hazards regression. P < 0.05 was considered statistically significant. We also reviewed 108 studies retrieved from PubMed and Reference Citation Analysis (https://www.referencecitationanalysis.com/). The keywords used for the search were: "(Carcinoma, Neuroendocrine) AND (Gallbladder Neoplasms)". RESULTS The GB-NEC incidence rate was 1.6% (of all gallbladder carcinomas), male to female ratio was 1:2 and the median survival time was 7 mo. The 1-, 2-, 3- and 5-year overall survival (OS) was 36.6%, 17.8%, 13.2% and 7.3% respectively. Serum chromogranin A levels may be a specific tumor marker for the diagnosis of GB-NEC. Elevated carcinoembryonic antigen, carbohydrate antigen (CA)-19-9 and CA-125 levels were associated with poor prognosis. Age [hazard ratio (HR) = 1.027, 95% confidence interval (CI): 1.006–1.047, P = 0.01] and liver metastasis (HR = 3.055, 95% CI: 1.839–5.075, P < 0.001) are independent prognostic risk factors for OS. Patients with advanced GB-NEC treated with surgical resection combined with radiotherapy and/or chemotherapy may have a better prognosis than those treated with surgical resection alone. There was no significant difference in OS between GB-NEC and GB-ADC. CONCLUSION The clinical manifestations and prognosis of GB-NEC are similar to GB-ADC, but the treatment is completely different. Early diagnosis and treatment are the top priorities.
INTRODUCTION
Neuroendocrine neoplasms (NENs) have been reported in nearly every tissue. According to the International Agency for Research on Cancer -World Health Organization, neuroendocrine tumors (NETs) are composed of cells with distinctive phenotype characterized by the expression of general and specific neuroendocrine biomarkers. NETs account for about 0.5% of all newly diagnosed malignancies [1]. Gallbladder neuroendocrine carcinoma (GB-NEC) is extremely rare. Yao et al [2] reported that GB-NEC accounted for only 0.5% of all NENs and for 2.1% of all gallbladder malignancies. Since GB-NEC has a low incidence rate, many clinical questions related to it are yet to be fully explored in literature. After reviewing the relevant literature, we found the following problems: (1) The epidemiological characteristics, clinical features, treatment and prognosis of GB-NEC are still unclear; (2) Most studies compared the prognosis of GB-NEC to that of adenocarcinoma; however, the results reported are still contradictory. In most of the studies, the sample sizes were small and as such, the results may not be objective; and (3) Most of the studies only focused on the clinical manifestations and prognosis of GB-NEC. Few articles explored the pathogenesis and mechanism of GB-NEC. In this study, the authors attempt to address the three problems stated above.
Patients and literature
The Surveillance Epidemiology and End Results (SEER) database was searched and screened according to the following criteria: (1) Site and morphology: diagnostic confirmation = positive histology; (2) Type of reporting source = autopsy only; death certificate only; (3) Site and morphology site recode ICD-O-3/WHO 2008 = gallbladder; and (4) Cause of death, follow-up and survival months: complete dates available. Finally, 19 842 patients with pathologically confirmed gallbladder malignancy from 1975 to 2016 were obtained. Among them, there were 19 484 cases of gallbladder adenocarcinoma and 287 cases of GB-NEC. Among the patients with GB-NEC, there were 29 cases of large cell NEC and 109 cases of small cell NEC. In addition, we searched PubMed and Reference Citation Analysis (https://www.referencecitationanalysis.com/) with the keywords "(carcinoma, neuroendocrine) AND (gallbladder neoplasms)" and obtained 217 articles describing GB-NEC. Articles describing (1) mixed GB-NEC, (2) other biliary NEC, (3) metastatic tumors, or (4) NENs other than carcinoma were excluded, giving a final total of 108 articles for review (Figure 1).
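As a rough illustration of this screening step (not the authors' actual workflow, which used SEER and SPSS), the following Python sketch filters a hypothetical case-listing export with pandas; every column name and the histology code list shown are assumptions for illustration and would need to be mapped to the real SEER export fields.

import pandas as pd

# Hypothetical SEER case-listing export; all column names are illustrative placeholders.
cases = pd.read_csv("seer_gallbladder_export.csv")

screened = cases[
    (cases["site_recode"] == "Gallbladder")
    & (cases["diagnostic_confirmation"] == "Positive histology")
    # Cases reported by autopsy or death certificate only are excluded here
    # (assumed interpretation of screening criterion 2).
    & ~cases["reporting_source"].isin(["Autopsy only", "Death certificate only"])
    & (cases["survival_months_flag"] == "Complete dates available")
]

# Split into adenocarcinoma and neuroendocrine carcinoma by ICD-O-3 histology code;
# the code list below is an assumed example, not the study's exact definition.
nec_codes = {"8013", "8041", "8246"}   # large cell NEC, small cell carcinoma, NEC NOS
is_nec = screened["histology_icdo3"].astype(str).str[:4].isin(nec_codes)
gb_nec, gb_adc = screened[is_nec], screened[~is_nec]
print(len(gb_nec), "GB-NEC cases;", len(gb_adc), "GB-ADC cases")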
Variables and outcome
Patients' variables and follow-up data were obtained from the SEER database, including gender, age, race, pathological differentiation degree of the tumor, pathological classification, and tumor metastasis. All patients had complete follow-up data on postoperative survival status, and the primary outcome of this study was overall survival (OS). Chi-square (χ2) and independent-sample t tests and univariate ANOVA were used to compare baseline data between GB-ADC and GB-NEC patients. Univariate χ2 tests and multivariate Cox regression analysis were used to investigate the independent risk factors influencing the prognosis of GB-NEC patients. Kaplan-Meier curves and the log-rank test were used for survival analysis between different groups of patients. All analyses were performed using SPSS Statistics version 24.0 (IBM Corp., Armonk, NY, USA). P < 0.05 was considered statistically significant.
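For readers who prefer a scriptable workflow, a minimal sketch of the same survival analysis is given below using Python and the lifelines package rather than SPSS; the file name and column names (survival_months, death, age, liver_metastasis, group) are hypothetical stand-ins for the SEER variables.

import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("gbnec_cohort.csv")  # hypothetical analysis file

# Kaplan-Meier curves and a log-rank test between two illustrative treatment groups
g1 = df[df["group"] == "surgery_only"]
g2 = df[df["group"] == "surgery_plus_adjuvant"]

kmf = KaplanMeierFitter()
kmf.fit(g1["survival_months"], g1["death"], label="surgery only")
ax = kmf.plot_survival_function()
kmf.fit(g2["survival_months"], g2["death"], label="surgery + adjuvant therapy")
kmf.plot_survival_function(ax=ax)

lr = logrank_test(g1["survival_months"], g2["survival_months"],
                  event_observed_A=g1["death"], event_observed_B=g2["death"])
print("log-rank P =", lr.p_value)

# Multivariate Cox proportional hazards regression for overall survival
cph = CoxPHFitter()
cph.fit(df[["survival_months", "death", "age", "liver_metastasis"]],
        duration_col="survival_months", event_col="death")
cph.print_summary()  # hazard ratios with 95% confidence intervals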
Epidemiology and classification of GB-NEC
GB-NEC account for 2%-2.5% of all gallbladder tumors and the male to female ratio ranges between 1:4 and 1:2 [2][3][4]. In our study, a total of 19 771 patients diagnosed with gallbladder malignancy from 1975 to 2016 were selected from the SEER database and analyzed. In this cohort, GB-NEC accounted for 1.4% of all gallbladder malignancies. The male to female ratio was 1:2, the average age was 68 years and the median survival time was 7 mo. GB-NEC had a significantly lower degree of tumor differentiation compared to GB-ADC. The proportion of poorly differentiated and undifferentiated tumors was 57.8% vs 33% (P < 0.001) ( Table 1). In order to avoid ambiguity in clinical practice, the WHO 2019 classification is currently used. The WHO criteria classifies NETs into three levels instead of discretely classifying NEC. However, NEC are still classified into small and large cell types. The final classification of NEC is not based on the degree of tumor differentiation, but rather on the mitotic rate and tumor genetic characteristics [4]. In most mixed NENs, both neuroendocrine and non-neuroendocrine components are poorly differentiated. The neuroendocrine component has proliferation indices in the same range as other NECs; however, this conceptual category allows for the possibility of one or both components being well differentiated. When feasible, each component should therefore be graded separately [5]. As such, most previous studies on NEC have reported no clear distinction between NET and NEC. The clinicopathological characteristics of NEC and NET remain ambiguous. In this paper, we focus on GB-NEC ( Table 2).
Origin of GB-NEC
NETs of the gastrointestinal tract usually originate from hormone-producing cells known as amine precursor uptake and decarboxylation (APUD) cells [6]. However, normal gallbladder mucosa does not contain APUD cells; therefore, several hypotheses exist to explain the origin of GB-NECs. The first is the metaplasia hypothesis, in which chronic stimulation of the gallbladder mucosa leads to metaplasia of its normal epithelial cells. Cells with endocrine function, including goblet cells and enterochromaffin cells, gradually replace the normal cells. If this hypothesis holds, gallbladder stones and cholecystitis should, in principle, be highly correlated with GB-NECs. Unfortunately, due to the rarity of GB-NEC, no large sample size analysis of the hypothesis exists [7].
Pluripotent cells hypothesis
This hypothesis is based on a demonstration of shared immunoreactivity patterns between tumor components and common characteristics (featuring both neuroendocrine and glandular differentiation) observed in electron micrographs [8][9][10][11][12].
Adenocarcinoma transformation theory
In addition to the aforementioned hypotheses, some scholars have proposed that GB-NEC is derived from the transformation of adenocarcinoma. The rationale is that endocrine carcinoma and adenocarcinoma sometimes coexist. However, currently evidence to support the hypothesis is insufficient [13][14][15][16].
Clinical manifestations and diagnosis of GB-NEC (immunohistochemistry, biomarkers and imaging)
About half of GB-NEC patients present with upper right quadrant discomfort or pain at the initial doctor's visit, accompanied by atypical manifestations such as weight loss, anorexia, jaundice, fever, nausea and vomiting. At the time of diagnosis, patients often have distant metastases (often liver metastasis) with lymph node involvement, thus disqualifying them from surgical resection. Most studies have not found any specific tumor markers for GB-NECs. There have been sporadic reports of carbohydrate antigen (CA)-125, CA-19-9, carcinoembryonic antigen (CEA) and serum chromogranin A (CgA) being elevated in GB-NEC. GB-NECs can be divided into functional and nonfunctional types. Functional NETs may secrete histamine, vasodilator factors or substances contributing to carcinoid syndrome. Although the syndrome is rarely reported in GB-NEC, it makes the diagnosis of GB-NEC difficult. Lin et al [17] reported a patient with GB-NEC complicated by Cushing's syndrome. The disease is predominantly diagnosed by postoperative pathology and immunohistochemistry. It is worth noting that some reports have described a relationship between elevated tumor markers (such as CEA) and prognosis, as well as liver invasion [18]. Patient clinicopathological characteristics are summarized in Table 3. Imaging has limited diagnostic value for GB-NEC. On ultrasound, a solid, nonuniform, hypoechoic lesion is detected. On plain computed tomography (CT), the lesions may appear hypodense. With contrast-enhanced CT, uneven enhancement, cystic degeneration and necrosis may be observed. The gallbladder regional lymph nodes as well as those of the hepatic hilum may be enlarged. The scan may also show annular enhancement. On plain magnetic resonance imaging (MRI), lesions show a low signal on T1-weighted imaging (T1WI) and a high signal on T2-weighted imaging (T2WI). With enhanced MRI, uneven enhancement is observed. GB-NEC has no particularly distinguishing features on imaging. It mostly has a wide-basal shape with a clear boundary. Cystic degeneration and necrosis are common. Both CT and MRI are necessary to assess involvement of adjacent organs. Lymph node involvement and metastasis are useful for preoperative staging and selection of treatment options (Figures 2 and 3).
Treatment of GB-NEC
Surgery: Surgical resection of GB-NEC follows the surgical options available for gallbladder cancer. Basic cholecystectomy is limited to patients classified as stage T1a [19]. Some surgeons have reported cholecystectomy combined with wedge resection (negative margins) to be sufficient for T1b malignant gallbladder tumors [20,21]. However, Liu et al [22] in their case report considered basic cholecystectomy with bed cautery to be sufficient for T1bN0M0 GB-NEC. Further research is required given that theirs was a case report. For GB-NEC classified as T2-4 without lymph node involvement, surgical resection may improve prognosis. When patients have lymph node metastasis, lymph node dissection may improve prognosis; however, the scope of lymph node resection (D1/D2) remains controversial [23]. For advanced gallbladder cancer, most clinical guidelines recommend systemic comprehensive treatment such as radiotherapy and chemotherapy [24] (Table 4).
Radiotherapy and chemotherapy:
Although surgery remains the only curative approach, most patients experience recurrence and resection is not an option for some [23]. As such, the National Comprehensive Cancer Network guidelines recommend adjuvant chemotherapy, concurrent chemoradiotherapy or observation for resected gallbladder carcinoma staged T2 or higher [24]. Generally speaking, neuroendocrine carcinoma histology is similar to that of small cell lung cancer; therefore, platinum-etoposide chemotherapy is recommended as a more effective regimen for extrapulmonary NETs [25,26]. To date, no uniform radiotherapy and chemotherapy protocol exists for GB-NEC. We reviewed and summarized reported effective regimens for GB-NEC (Table 5).
Disease outcome, prognosis, risk factors and comparison with GB-ADC
Prognosis and associated risk factors of GB-NEC are unknown due to the low incidence rate of GB-NEC. Some researchers have compared GB-NEC and GB-ADC prognosis. Some suggest that GB-ADC has a better prognosis [27] while others think no significant difference exists [28]. Consequently, we analyzed and summarized data from the SEER database to determine independent prognostic factors for GB-NEC; compare GB-NEC prognosis to that of GB-ADC; and determine the effect of postoperative adjuvant therapy on patient survival. The primary outcome was patient survival (death). Variables of interest included race, sex, pathology, tumor grade, liver metastasis and age. Potential and independent prognostic factors for OS were determined via univariate and multivariate analysis, respectively. We found that age [hazard ratio (HR) = 1.027, 95% confidence interval (CI): 1.006-1.047, P = 0.01] and liver metastasis (HR = 3.055, 95% CI: 1.839-5.075, P < 0.001) were independent prognostic factors for GB-NEC. However, race and gender influenced only incidence, not OS (Table 6).
We screened six patients who underwent only surgical resection and 16 who underwent resection coupled with adjuvant therapy (radiotherapy and/or chemotherapy) to analyze and compare survival. Due to the limitations of the database and Health Insurance Portability and Accountability Act compliance, the specific chemotherapy regimens and detailed clinical data of patients could not be ascertained. TNM staging for all of these patients was Stage III or above. Based on Kaplan-Meier analysis, postoperative adjuvant radiotherapy and/or chemotherapy may prolong patient survival (Figure 3A). We also compared prognosis between the different pathological subtypes of GB-NEC. No significant difference in survival was found between small cell GB-NEC (n = 29), large cell GB-NEC (n = 109) and the remaining GB-NEC cases (n = 149).
DISCUSSION
Currently, GB-NECs are not well understood by clinicians because of their low incidence rate. To address this challenge, we reviewed the literature (case reports and reviews) and analyzed data in the SEER database so as to provide more insight into GB-NEC diagnosis, pathology, treatment and prognosis. We also wanted to compare GB-NEC to GB-ADC. In the course of our analysis, we used the SEER database to analyze GB-NEC and GB-ADC data with larger sample sizes than previous studies. The observed GB-NEC incidence was lower than we anticipated, < 2%. The male to female ratio was 1:2 and the average age of onset was 68 years (incidence is higher in older women). GB-NEC had a median OS of 7 mo. GB-NEC has a lower degree of tumor differentiation compared to GB-ADC. The proportion of poorly differentiated and undifferentiated tumors was 57.8% versus 33% (P < 0.001) in GB-NEC and GB-ADC, respectively. GB-NEC was highly malignant with an aggressive progression profile. Systemic metastasis was common, even in early stages. Most patients were diagnosed at an advanced stage [4,[29][30][31][32], and 19.7% had already developed liver metastasis at the time of diagnosis. One explanation is that the gallbladder lacks a peritoneal layer on its hepatic adjacent side. Instead, the boundary between the gallbladder and the liver is the cystic plate, which is a continuation of Glisson's capsule [26]. For this reason, gallbladder cancers that invade the muscularis (T1b-T2) have a propensity to invade the liver, and the correlation between metastatic foci and the Glisson system needs verification.
Clinical manifestations are not specific and about half of the patients present with right upper quadrant abdominal pain and discomfort. Presentation with carcinoid syndrome may be somewhat specific; however, its incidence in GB-NEC is low. Serum CgA may be a sensitive biomarker for GB-NEC. CA-125, CA-19-9, CEA, soluble IL-2 receptor and neuron-specific enolase are elevated in some patients, but none of them is specific. Some studies have suggested that CA-125 is associated with liver metastasis and poor prognosis. However, we could not verify these findings due to database-related limitations. Imaging has limited diagnostic value in GB-NEC; however, it is useful for treatment planning. Diagnosis of GB-NEC is mostly based on pathology and immunohistochemistry. The neoplasm must originate from the gallbladder rather than represent invasion of NEC from the liver or other organs [7].
Radical resection is the only curative approach. Selection of surgical resection is based on the recommended surgical methods for gallbladder cancer. Patients with Stage III disease can be considered for surgery and postoperative adjuvant therapy. Except for T1aN0M0, the specific surgical procedures are controversial. Patients with T2N0M0 may only require basic cholecystectomy and gallbladder bed cautery. Given the nearly 20% incidence rate of liver metastasis, performing a wedge resection of the liver would be preferable, since for hepatobiliary surgeons the difficulty of a wedge resection is not significantly greater than that of gallbladder bed cautery.
CONCLUSION
GB-NEC has a low incidence rate, high degree of malignancy and poor prognosis. The incidence is significantly higher in older women. GB-NEC is difficult to diagnose and most patients have advanced disease at the time of diagnosis. Therefore, the focus should be placed on investigating the pathogenesis and treatment rather than the atypical clinical manifestations of GB-NEC.
Research background
Neuroendocrine neoplasms (NENs) have been reported in nearly every tissue. According to the International Agency for Research on Cancer -World Health Organization, neuroendocrine tumors (NETs) are composed of cells with distinctive phenotype characterized by the expression of general and specific neuroendocrine biomarkers. NETs account for about 0.5% of all newly diagnosed malignancies. Gallbladder neuroendocrine carcinoma (GB-NEC) is extremely rare; thus, many clinical questions related to it are yet to be fully explored.
Research motivation
To investigate GB-NEC, we reviewed recent research and analyzed corresponding data in the Surveillance Epidemiology and End Results (SEER) database.
Research objectives
We found the following problems. (1) The epidemiological characteristics, clinical features, treatment and prognosis of GB-NEC are still unclear; (2) Most studies compared the prognosis of GB-NEC to that of adenocarcinoma; however, the results reported are still contradictory. In most of the studies, the sample sizes were small and as such, the results may not be objective; and (3) Most studies only focused on the clinical manifestations and prognosis of GB-NEC. Few articles explored the pathogenesis and mechanism of GB-NEC. So in this study, we attempted to address the three problems stated above.
Research methods
Data of GB-NEC (n = 287) and gallbladder adenocarcinoma (GB-ADC) (n = 19 484) patients from 1975 to 2016 were extracted from the SEER database. Survival analysis was performed using Kaplan-Meier and Cox proportional hazards regression. P < 0.05 was considered statistically significant. We also reviewed 108 studies retrieved from PubMed and Reference Citation Analysis (https://www.referencecitationanalysis.com/). The keywords used for the search were: "(carcinoma, neuroendocrine) AND (gallbladder neoplasms)".
Research results
The GB-NEC incidence rate was 1.6% (of all gallbladder carcinomas), the male to female ratio was 1:2 and the median survival time was 7 mo. The 1-, 2-, 3- and 5-year overall survival (OS) was 36.6%, 17.8%, 13.2% and 7.3%, respectively. Serum chromogranin A levels may be a specific tumor marker for the diagnosis of GB-NEC. Elevated carcinoembryonic antigen, carbohydrate antigen (CA)-19-9 and CA-125 levels were associated with poor prognosis. Age and liver metastasis were independent prognostic risk factors for OS. Patients with advanced GB-NEC treated with surgical resection combined with radiotherapy and/or chemotherapy may have a better prognosis than those treated with surgical resection alone. There was no significant difference in OS between GB-NEC and GB-ADC.
Spinal Burkitt's Lymphoma Mimicking Dumbbell Shape Neurogenic Tumor: A Case Report and Review of the Literature
Non-Hodgkin's lymphoma (NHL), a disease which may involve the spine, is frequently associated with advanced disease. Radiculopathy caused by spinal root compression as the initial presentation in patients with NHL is very rare and thought to occur in less than 5% of cases. A 69-year-old woman complained of a history of low back pain with right sciatica for 1 month prior to admission. Computed tomography and magnetic resonance imaging of the lumbar spine showed a dumbbell-shape epidural mass lesion extending from L2 to L3, which was suggestive of a neurogenic tumor. After paraspinal approach and L2 lower half partial hemilaminectomy, total excision of the tumor was achieved, followed by rapid improvement of back pain and radiating pain. The lesion was confirmed to be Burkitt's lymphoma by histopathological examination. We then checked whole-body PET-CT, which showed multifocal malignant lesions in the intestine, liver, bone and left supraclavicular lymph node. Although a rare situation, Burkitt's lymphoma should be considered in the differential diagnosis for patients presenting with back and lumbar radicular pain without a prior history of malignancy. Burkitt's lymphoma could be the cause of dumbbell-shape spinal tumor.
INTRODUCTION
The spinal epidural space is an uncommon presenting site in non-Hodgkin's lymphoma (NHL), and accounts for 9% of spinal epidural tumors and 0.1-3.3% of all lymphomas 8,10) . Burkitt's lymphoma is a small noncleaved B-cell lymphoma with highly aggressive clinical features. It is characterized by rapid progression, early hematogenous dissemination, and a propensity to spread to the bone marrow and the central nervous system (CNS) 1) . Three clinical variants of Burkitt's lymphoma are described in the World Health Organization classification: endemic, sporadic, and immunodeficiency-associated types. Endemic Burkitt's lymphoma refers to those cases occurring in African children, usually between 4 and 7 years of age. Sporadic Burkitt's lymphoma occurs worldwide, accounting for 1-2% of lymphomas in adults and up to 40% of lymphomas in children in the USA and Western Europe. Immunodeficiency-associated Burkitt's lymphoma occurs mainly in patients infected with human immunodeficiency virus 7) . CNS disease, found in less than 15% of sporadic cases at diagnosis, can include involvement of the meninges, infiltration of cranial nerves, intraparenchymal brain disease, or a paraspinal mass 9,13) . When Burkitt's lymphoma develops in the epidural space and presents with neurologic deficits from compression of the spinal cord or spinal nerve root, it is frequently associated with advanced disease 3,20) . Thoracic segments are the predominantly affected regions, but any spinal region can be affected. In patients with NHL, spinal cord compression as the primary presentation is rare and thought to occur in less than 5% of cases 4,6,12,16) . We describe a patient who initially presented with spinal nerve root compression, which was later revealed to be Stage 4E Burkitt's lymphoma.
CASE REPORT
A 69-year-old woman was admitted with a 1-month history of low back pain radiating down to the right leg. The patient had been healthy before admission and her medical history was unremarkable except for a several-year history of chronic renal disease, which had been treated at a local clinic. Neurological examination showed hypoesthesia in the right L2-3 sensory dermatome, diminished motor power of right hip flexion and a limited gait. The right straight leg raising test was positive at 30 degrees of elevation. She showed normal rectal tone and no bladder or urinary dysfunction. In the initial blood investigations, complete blood cell counts, protein and electrolytes were within normal ranges. There was no other evidence of a hematologic disorder on laboratory examination. Plain X-ray films showed no abnormality. Magnetic resonance imaging (MRI) of the lumbar spine showed a well-demarcated, posterolateral extradural mass lesion between L2 and L3, with extension through the spinal foramen (Fig. 1). This mass lesion was isointense relative to the spinal cord on T1- and T2-weighted images, and heterogeneous enhancement was appreciated after administration of gadolinium. Computed tomography (CT) showed a poorly defined, slightly enhancing lesion involving the central canal and the right paraspinal area at the level of the L2-L3 vertebral bodies. Intervertebral foraminal widening or a bony destructive/sclerotic lesion was not demonstrated. Due to the dumbbell shape, we initially assumed the tumor was a benign neurogenic tumor affecting the L2 and L3 roots on the right, and we chose a paraspinal approach because of the large extraforaminal portion of the tumor. With the patient in the prone position, a midline hockey-stick incision and dissection of the paraspinal muscles were performed. After exposure of the L1-2-3-4 intertransverse space, the L2-3 intertransverse ligament was removed. The L2 and L3 transverse processes were also cut for visualization of the tumor. Careful dissection and removal of the tumor were performed without injury to the L2 and L3 roots. Subsequently, a lower-half hemilaminectomy of L2 was needed for removal of the intraspinal portion of the tumor. The epidural mass occupied the right L3 root with extension to the foramen. It was a white-brown, soft and fragile, unencapsulated mass, which made it hard to dissect, but gross total removal was possible. The post-operative course was uneventful and the patient's symptoms improved soon afterwards.
Histopathological examinations disclosed a lymphoid lesion, which was diagnosed as Burkitt's lymphoma. The tumor consisted of a single population of medium sized cells with abundant basophilic cytoplasm and multiple small nucleoli producing a starry-sky pattern with frequent mitotic figures (Fig. 2). Immunohistochemical studies demonstrated that the tumor cells were positive for CD20, CD79a, BCL-6, CD10, and EBV, but not for BCL-2 (Fig. 3). The proliferation marker Ki-67 was expressed in almost all the tumor cells, thus confirming Burkitt's lymphoma.
Clinical postoperative reevaluation with whole-body positron emission tomography (PET) discovered multifocal malignant lesions in the intestine, liver, bone and left supraclavicular lymph node (LN) (Fig. 4). Finally, the staging of the disease revealed Stage 4E of Burkitt's lymphoma. The patient was transferred to the oncology department and recommended for receiving chemotherapy. But she refused chemotherapy for her underlying diseases such as chronic kidney disease, asthma and cardiac problem.
DISCUSSION
Burkitt's lymphoma is a rare and aggressive B cell tumor that typically involves extranodal sites.
Adult patients with Burkitt's lymphoma present with abdominal masses, B symptoms, tumor lysis, bone marrow involvement (70%) and leptomeningeal involvement (up to 40%). Diagnostic workup in the acute phase should include an MRI and/or CT scan of the spine and tissue sampling during surgery. MRI is the initial procedure of choice for evaluation of acute spinal cord or root compression. It provides good anatomical detail, offers more information about bone and soft tissue involvement, and potentially characterizes the tissue of the tumor mass itself. Following decompressive surgery, chemotherapy would be the initial treatment of choice in most patients with intermediate and high grade NHL, followed by radiotherapy in localized presentations 5) . Complete remission of Burkitt's lymphoma has sometimes been reported after treatment with dose-intensive, multi-agent chemotherapy regimens that incorporate CNS prophylaxis. There are two highly effective regimens, CODOX-M (cyclophosphamide, vincristine, doxorubicin, high-dose methotrexate) and IVAC (ifosfamide, etoposide and high-dose cytarabine) 15) . For CNS prophylaxis, intrathecal chemotherapy using cytarabine or methotrexate can be added. With 4 cycles of the CODOX-M/IVAC protocol in pediatric patients, the 1-year event-free survival (EFS) rate was reported to be 85% 19) . However, data on the overall survival rate of older patients are insufficient.
Patients with radiculopathy or myelopathy caused by spine root compression due to an unknown lesion require surgical decompression for the diagnosis and treatment. Although spinal chemotherapy for secondary spinal epidural Burkitt's lymphoma seems to be an effective treatment protocol, the aggressive nature of the tumor necessitates immediate intervention to minimize neurologic dysfunction 14,18,19) . Surgery provides the most rapid decompression of nerve tissue compared to chemotherapy and radiotherapy, both of which may take several days for decompression to occur. The role of surgery in this case was to achieve immediate neural decompression and to obtain an adequate specimen for a definitive pathological diagnosis.
Spinal tumors that extend into the vertebral canal and paraspinal spaces through the intervertebral foramen are so-called dumbbell-shaped spinal tumors. Basically, any mass occurring in the vertebral canal space, intervertebral foramen, or paraspinal space can be dumbbell shaped. The most common causes of spinal dumbbell lesions are benign neurogenic tumors, such as schwannomas or neurofibromas. The percentage of neurogenic tumors among dumbbell-shaped spinal tumors has been reported to be in the range from 68.8 to 81.3%. Dumbbell-shaped spinal tumors are usually thought to be of neurogenic origin, but this is not always the case. Various neoplastic and non-neoplastic causes, originating in an intradural and/or extradural compartment, may also lead to dumbbell-shaped spinal tumors with or without intervertebral foraminal widening. There are many kinds of dumbbell-shaped spinal tumors other than neurogenic ones, and several cases of dumbbell-shaped spinal lymphoma have been reported 11,17) . Most benign tumors have a tendency to grow slowly and extend into the intervertebral foramen, so they usually cause intervertebral foraminal widening. On the other hand, malignant tumors grow rapidly and easily extend into the vertebral canal. In the present case, a benign neurogenic tumor was suspected due to the typical dumbbell-shaped appearance. However, the possibility of other tumors was also raised by the preoperative imaging findings. Many neurogenic tumors display some typical features such as regular margins, an enlarged intervertebral foramen, and cystic/hemorrhagic changes. Lack of bony involvement on plain films or CT scan provides an important clue to the diagnosis. Extradural compression of the cord in the presence of normal radiographs may suggest a lymphoma 2) . Finally, dumbbell-shaped Burkitt's lymphoma was diagnosed in a healthy immunocompetent woman whose initial manifestation was a radicular symptom due to spinal root compression.
CONCLUSION
Spinal cord or root compression as the initial presenting feature of lymphoma is rare. Although rare, Burkitt's lymphoma should be considered in the differential diagnosis when a patient without a prior history of malignancy presents with back pain followed by spinal cord or root compression by a tumor. Burkitt's lymphoma can also be the cause of a dumbbell-shaped spinal tumor.
Association between systolic blood pressure and first ischemic stroke in the Chinese older hypertensive population
Objective This study aimed to evaluate the association between systolic blood pressure (SBP) and first ischemic stroke in older people with hypertension in the community. Methods This retrospective cohort study included 3315 residents who were hypertensive and older than 60 years in Guangdong, China. Results A total of 1475 men and 1840 women aged 71.41±7.20 years were included. All subjects had a median follow-up duration for 5.5 years and 206 subjects reached the endpoint. The prevalence of first ischemic stroke increased with a higher SBP. SBP expressed as a continuous variable (hazard ratio [HR], 1.01; 95% confidence interval [CI], 1.00–1.02) and categorical variable (HRs, 1.00, 1.06, 1.17, 1.39, and 1.60 for increasing blood pressure from < 120–≥150 mmHg), was significantly associated with a higher risk of first ischemic stroke. Moreover, a fully adjusted model indicated an obvious increased risk in the SBP ≥150 mmHg group (HR, 1.60; 95% CI, 1.15–2.71) and the SBP 140–149 mmHg group (HR, 1.39; 95% CI, 1.01–2.39). Conclusions High SBP was independently associated with the risk of first ischemic stroke in hypertensive residents in the community aged older than 60 years. SBP ≥140 mmHg increases the risk of first ischemic stroke.
Introduction
Arterial hypertension has a high prevalence in the older population. According to the 2017 American College of Cardiology and the American Heart Association (ACC/AHA), the prevalence of hypertension based on systolic blood pressure (SBP)/diastolic blood pressure (DBP) ≥140/90 mmHg or self-reported antihypertensive medication is 64% and 63% in 65- to 74-year-old men and women, respectively. 1 The prevalence of hypertension is >70% in individuals aged 75 years or older. 2,3 Current studies have shown that hypertension is an independent risk factor of ischemic stroke. [4][5][6][7] Among older individuals, hypertension is a major risk factor for cardiovascular disease for 77% of hypertensive patients with incident stroke. 8 According to the Guideline for the Primary Prevention of Stroke, aging and hypertension are risk factors of stroke. 9 Increasing blood pressure is strongly, independently, predictively, and etiologically correlated with the risk of stroke. 9 Blood pressure, especially SBP, rises as age increases in adults, and this may also progressively increase the risk of ischemic stroke. 2,10 Nevertheless, the target for blood pressure control remains uncertain in the older population with hypertension. 11 Current guidelines recommend different targets for controlling blood pressure for preventing stroke and other cardiovascular events. 1,9,12,13 In the current study, we examined the clinical data of older patients in the community with hypertension. We aimed to estimate the correlation between SBP and first ischemic stroke, and to investigate an appropriate blood pressure target for older hypertensive patients to decrease the incidence of stroke.
Study population and design
We performed a retrospective cohort study and recruited 3500 people from a rural residential population from 1 January 2010 to 31 December 2011 at Liaobu in Guangdong, China. Subjects were older hypertensive patients who met the following inclusion criteria: age ≥60 years, and SBP ≥140 mmHg and/or DBP ≥90 mmHg, or receiving antihypertensive medications within 2 weeks. 14 We excluded patients with a previous stroke history (n = 132), missing blood pressure data (n = 37), and missing other physical examination data (n = 16). The study was undertaken on the basic principles of the Helsinki Declaration and was approved by the institutional medical ethical committee of Guangdong Provincial People's Hospital, Guangzhou, China (No. 2012143H). All of the participants provided written informed consent.
Data collection
Blood pressure measurements were conducted according to the 2010 Chinese guidelines for management of hypertension. 14 Measurements were taken by trained nurses or physicians. Participants were asked to avoid exercise, smoking, and caffeine for at least 30 minutes and have a rest for longer than 5 minutes before measurement. The measured arm was positioned at the level of the heart and circled with cuffs of an appropriate size. Blood pressure was measured simultaneously by an automated device (OMRONHBP1100u; Omron Corp., Tokyo, Japan). The arm with the highest blood pressure value was used for all subsequent measurements and data analysis.
Demographic and medical data, including age, sex, and a history of smoking and alcoholism, were obtained from interviews with patients or medical records. The past medical history, including cardiovascular diseases, cerebrovascular diseases, and type 2 diabetes mellitus, was collected from medical records and self-reports. The body mass index (BMI) was calculated as the ratio of weight in kilograms to the square of height in meters (kg/m²). The estimated glomerular filtration rate (eGFR) was calculated by the simplified Modification of Diet in Renal Disease equation. 15 Antihypertensive medications were classified as angiotensin-converting enzyme inhibitors (ACEIs), angiotensin II receptor blockers (ARBs), beta-blockers, and calcium channel blockers (CCBs).
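For reference, the two derived variables can be computed as sketched below; the simplified (four-variable) MDRD formula shown is its commonly cited form, and since the paper does not state whether any population-specific coefficient was applied, none is included here.

def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by the square of height (m)."""
    return weight_kg / height_m ** 2

def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float, female: bool) -> float:
    """Simplified (four-variable) MDRD estimate of GFR in mL/min/1.73 m^2.
    Commonly cited form: 186 x Scr^-1.154 x age^-0.203, multiplied by 0.742 if female.
    Whether the authors applied an additional coefficient is not stated in the text.
    """
    gfr = 186.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    return gfr * 0.742 if female else gfr

print(round(bmi(65.0, 1.60), 1))                  # 25.4 kg/m^2
print(round(egfr_mdrd(1.0, 71, female=True), 1))  # illustrative input values only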
Clinical outcome
The endpoints were obtained by reviewing medical records that included the last hospitalization and personal physical records.
Based on previous studies, the primary endpoint was defined as first ischemic stroke, including cerebral infarction and transient ischemic attack. The diagnosis of ischemic stroke was based on a cranial computed tomography (CT) or contract vascular CT scan, magnetic resonance imaging of the brain, or cerebrovascular angiography. All stroke cases were ascertained from the local medical insurance system of the medical insurance bureau, and patients without medical records were followed up by phone call or face-to-face interview in the community. The duration of follow-up began at the time of the first visit and ended on 31 December 2016.
Statistical analysis
Continuous variables are expressed as mean ± standard deviation, and categorical variables are presented as absolute values and percentages. SBP was divided into the following five groups: (1) < 120 mmHg, (2) 120 to 129 mmHg, (3) 130 to 139 mmHg, (4) 140 to 149 mmHg, and (5) ≥150 mmHg. The differences between groups were evaluated by ANOVA (normal distribution) or the Kruskal-Wallis H test (skewed distribution) for continuous variables and the chi-square test or Fisher's exact test for categorical variables. The multivariate Cox regression model was used to evaluate the hazard ratios (HRs) between SBP and ischemic stroke. Adjustments were made for age, sex, BMI, eGFR, antihypertensive medications, total cholesterol, triacylglycerol, low-density lipoprotein cholesterol, high-density lipoprotein cholesterol, fasting blood glucose, type 2 diabetes mellitus, smoking, and drinking. SBP was also handled as a categorical variable according to SBP groups and the P for trend was estimated in each model. Subgroup analyses were performed by the multivariate Cox regression model. The interactions of subgroups for each variable were adjusted according to full adjustment. Survival analysis was performed using Kaplan-Meier curves, and the log-rank test was performed to examine between-group differences.
The collected data were double entered into EpiData software 3.1 (EpiData Associations, Odense, Denmark). Private identity information of all participants could not be ascertained by any approach in this study. All of the analyses were performed by SPSS version 22.0 (IBM Corp., Armonk, NY, USA) and R version 3.3.2 (R Foundation for Statistical Computing, Vienna, Austria). The threshold of statistical significance was defined as P < 0.05 (two-sided).
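As an illustration only (the authors used SPSS and R), the categorical SBP analysis described above could be reproduced along the following lines in Python with pandas and lifelines; the variable names are hypothetical and the covariates are abbreviated to a subset of the adjustments listed.

import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # hypothetical analysis file; sex assumed coded 0/1

# Categorise SBP into the five study groups; "<120" serves as the reference group.
bins = [0, 120, 130, 140, 150, float("inf")]
labels = ["<120", "120-129", "130-139", "140-149", ">=150"]
df["sbp_group"] = pd.cut(df["sbp"], bins=bins, labels=labels, right=False)

# Dummy-code the SBP groups against the reference and assemble the model matrix.
dummies = pd.get_dummies(df["sbp_group"], prefix="sbp", drop_first=True).astype(float)
covariates = ["age", "sex", "bmi", "egfr", "fasting_glucose"]  # subset of adjustments
model_df = pd.concat([df[["follow_up_years", "stroke"] + covariates], dummies], axis=1)

# Multivariate Cox model: hazard ratios for each SBP group versus <120 mmHg.
cph = CoxPHFitter()
cph.fit(model_df, duration_col="follow_up_years", event_col="stroke")
cph.print_summary()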
Demographic characteristics
A total of 3315 subjects (1475 men and 1840 women, mean age: 71.41 ± 7.20 years) were included in this analysis. Demographic and clinical characteristics of the subjects, grouped by different SBP levels, are shown in Table 1. All of the participants were hypertensive. The comorbidities in this cohort included diabetes mellitus in 15.23% of the subjects and coronary atherosclerotic heart disease in 1.03%; 28.14% were smokers, and 10.20% had a history of drinking.
All patients in the cohort were followed for 5 to 7 years (median follow-up duration: 5.5 years). During the follow-up, 206 (6.21%) subjects reached the endpoint of first ischemic stroke. A total of 26 (4.03%), 44 (5.33%), 54 (6.06%), 35 (7.74%), and 47 (9.36%) patients had new-onset ischemic stroke in the < 120, 120 to 129, 130 to 139, 140 to 149, and ≥150 mmHg groups, respectively (Table 1). The morbidity of first ischemic stroke tended to be higher in the higher SBP groups. Kaplan-Meier curves showed that participants with a higher SBP were associated with a higher chance of first ischemic stroke among the SBP groups (log rank, P < 0.001) (Figure 1).
Relationship between SBP and first ischemic stroke
The association of SBP and first ischemic stroke was analyzed by the multivariate Cox regression model (Table 2). We found that a high SBP, expressed as a categorical variable and a continuous variable, was significantly associated with first ischemic stroke. When SBP was expressed as a continuous variable, high SBP was slightly associated with a higher risk of first ischemic stroke after adjustment for covariates (HR, 1.01; 95% confidence interval [CI], 1.00-1.02; P < 0.0135). Moreover, the effect size obviously changed when SBP was divided into different categories. In the fully adjusted model, the effect size sequentially increased in the higher SBP groups (HRs, 1.00, 1.06, 1.17, 1.39, 1.60; P for trend = 0.0381) (model 3). Therefore, with a higher SBP category, the trend of a higher risk of first ischemic stroke significantly increased. Similar results were found in non-adjusted (HRs, 1.00, 1.34, 1.54, 2.00, 2.46; P for trend < 0.0001) and minimally adjusted models (HRs, 1.00, 1.24, 1.40, 1.78, 2.20; P for trend < 0.0001) (models 1 and 2). Additionally, the risk of first ischemic stroke was significantly increased in the non-adjusted (HR, 2.46; 95% CI, 1.50-4.03; P = 0.0004), minimally adjusted (HR, 2.20; 95% CI, 1.34-3.62; P = 0.002), and fully adjusted (HR, 1.60; 95% CI, 1.15-2.71; P = 0.0062) models in the SBP ≥150 mmHg group. Similarly, in the SBP 140-149 mmHg group, the risk of first ischemic stroke was also increased compared with the reference group (HR, 1.39; 95% CI, 1.01-2.39) in the fully adjusted model. The results of the subgroup analyses are shown in Table 3. The HRs showed a slight risk of ischemic stroke in most subgroups, but the sample size was limited. There were no significant differences between prespecified and exploratory subgroups of all variables according to the P for interaction.
Discussion
This study showed that SBP was significantly associated with the risk of first ischemic stroke in the older population with hypertension in the community, independently of conventional cardiovascular risk factors. SBP ≥140 mmHg was a significant risk factor for first ischemic stroke. Therefore, we suggest controlling blood pressure to < 140 mmHg to prevent ischemic stroke in older hypertensive patients. Hypertension remains a vital risk factor of ischemic stroke in the older population, and antihypertensive treatment is still the first strategy to prevent stroke. 2,[16][17][18] The Guideline for the Primary Prevention of Stroke regards aging as a non-modifiable risk factor of ischemic stroke and intracerebral hemorrhage for increasing cardiovascular risk in older individuals. 9,19 The seventh report of the Joint National Committee suggested that SBP is a more important cardiovascular risk factor than DBP in those aged older than 50 years. 20 A meta-analysis that pooled 23 randomized trials estimated that the risk of ischemic stroke was decreased by 32% in any antihypertensive drug group compared with the no-treatment group (risk ratio, 0.68; 95% CI, 0.61-0.76; P = 0.004). 21 Isolated systolic hypertension (SBP ≥140 mmHg and DBP < 90 mmHg) should be accounted for when controlling blood pressure. [22][23][24] The Systolic Hypertension in the Elderly Program (SHEP) included 4736 older patients aged ≥60 years with isolated systolic hypertension (SBP, 160-219 mmHg; DBP < 90 mmHg). The 5-year incidence of total stroke was 5.2 versus 8.2 per 100 participants in active treatment versus placebo (risk ratio, 0.64). 25 Eighty-five participants suffered from ischemic stroke in the active treatment group during follow-up and 132 suffered from ischemic stroke in the placebo group. 26 The Systolic Hypertension in China (Syst-China) study focused on Chinese people older than 60 years with isolated systolic hypertension. 22 This previous study compared the incidence of stroke and other cardiovascular complications between active and placebo treatment. This incidence in the active treatment group was reduced by 38% compared with the placebo group (13.0 versus 20.8, P = 0.01). 22 Our study examined the association between SBP and the risk of ischemic stroke, which may be predictive and significant in older patients with isolated systolic hypertension in future subgroup analysis.
The Systolic Blood Pressure Intervention Trial (SPRINT) randomized 9361 hypertensive patients (aged ≥50 years, SBP: 130-180 mmHg) to an SBP target of < 120 mmHg (intensive treatment group) or < 140 mmHg (standard treatment group). 27 This previous study especially excluded patients with previous stroke or diabetes mellitus. The annual stroke rate was 0.41% versus 0.47% in the intensive and standard treatment groups, respectively (HR, 0.89; 95% CI, 0.63-1.25). 27 Subgroup analysis of the SPRINT study included 2636 participants aged ≥75 years. With a median follow-up of 3.14 years, the rate of stroke was 0.67% in the intensive group versus 0.85% in the standard treatment group (HR, 0.72; 95% CI, 0.34-1.21). 28 This finding indicates the advantage of intensive blood pressure lowering on preventing stroke. The Hypertension in the Very Elderly Trial (HYVET) showed effectiveness of antihypertensive therapy for reducing the risk of cardiovascular and total mortality, regardless of the frailty status of older individuals aged ≥80 years. 29 Similar to the SPRINT study, the HYVET showed no significant difference in estimation of stroke between the active drug and placebo groups. Evidence on blood pressure lowering to reduce the risk of stroke is still limited in older patients with hypertension. These results may be related to special age categories and limited follow-up periods. Intensive treatment of hypertension may decrease the risk of ischemic stroke in the general population, but not lead to significant results in older age groups.
Moreover, the current guidelines have different cut-off values of age for older individuals and controversial blood pressure targets for older hypertensive patients. 16,30 The Guideline for the Primary Prevention of Stroke recommends that hypertensive patients should be treated with hypertensive drugs to a target blood pressure of < 140/ 90 mmHg. 9 In 2017, the ACC/AHA recommended a target SBP < 130 mmHg for community-dwelling older patients with hypertension. 1 Additionally, the European Society of Cardiology and European Society of Hypertension recommended that older hypertensive patients should reach a target of SBP < 140 mmHg. 12 Although these guidelines recommend different blood pressure control targets, they are consistent in the view that the risk of stroke and adverse effects are decreased by progressively lowering blood pressure. 1,8,9,12,13 In our study, the threshold of controlling blood pressure conformed with most current guidelines and studies, suggesting that SBP < 140 mmHg is an ideal target. Further studies are required to determine the appropriate control target of blood pressure in older hypertensive individuals.
This study showed a significant association between SBP and ischemic stroke in the older hypertensive population and provided an optimal blood pressure target, but did not discuss the relationship between DBP and stroke. However, we adjusted DBP in Cox regression model analysis to avoid confounding. Moreover, there were various antihypertensive therapies used in our study, which may have led to underestimation of the cardiovascular risk because of potential poor blood pressure control and hypotension. Additionally, this study did not examine different types of ischemic stroke separately. This issue should be evaluated in a further study. Finally, because we focused on hypertensive patients in this study, we could not estimate the baseline characteristics and clinical outcome in general subjects. Therefore, future studies are required to investigate the risk factors of ischemic stroke in the general population.
In conclusion, SBP is independently associated with the risk of first ischemic stroke in hypertensive patients older than 60 years in the Chinese community. SBP ≥140 mmHg markedly increases the risk of first ischemic stroke in the older population. Lowering SBP to < 140 mmHg is therefore an effective and moderate approach to decreasing the risk of first ischemic stroke in older hypertensive patients.
Graphene–aramid nanocomposite fibres via superacid co-processing†
The development of graphene–polymer nanocomposite materials has been hindered by issues such as poor colloidal stability of graphene in liquid media, weak interactions between graphene and the host polymers as well as the lack of scalable and economical graphene synthesis routes. Chlorosulfonic acid (CSA) can spontaneously disperse graphene without the need for mechanical agitation, chemical functionalisation or surfactant stabilisation, however is incompatible with most polymers and organic materials. Here, we demonstrate how poly(p-phenylene terephthalamide) (PPTA) – the polymer which constitutes Kevlar – can be co-processed with graphene in CSA and wet-spun into nanocomposite fibres with minimal aggregation of graphene.
A single layer of pristine graphene has an intrinsic strength of ca. 130 GPa and a Young's modulus (stiffness) of ca. 1.0 TPa, greatly exceeding the mechanical properties of any known bulk material. 2 Despite these outstanding properties, attempts to commercialise graphene-based materials have been frustrated by issues such as its poor colloidal stability in most liquid phases, the lack of economical and scalable graphene synthesis routes, as well as poor interactions between graphene and polymer matrixes. 3,4 Nevertheless, the incorporation of even small quantities of graphene into host polymers has been shown to significantly enhance their mechanical, electrical and thermal properties. [4][5][6] Aromatic polyamides (aramids) are a class of synthetic polymers which, when spun from liquid crystalline dopes under carefully controlled conditions, can form paracrystalline fibres with exceptionally high strength, stiffness and toughness. [7][8][9][10] For these reasons, aramid fibres such as Kevlar, Twaron and Nomex are employed for a range of high-performance applications where a high strength-to-weight ratio is needed, such as ballistic armour, car-tyre reinforcements and aerospace composites. 10 The incorporation of graphene into aramid fibres may further enhance their mechanical properties, as well as introduce electrical conductivity, as has been shown for a myriad of other graphene–polymer nanocomposites. 4,5 The relatively linear nature of aramid polymers, coupled with their high degree of aromaticity, could also stabilise graphene sheets through π–π interactions, as well as effectively transfer mechanical stress from the polymer matrix to the reinforcing graphene. [11][12][13][14] Graphene can be produced through a number of different routes, which can be classified as either top-down or bottom-up approaches. 4,5 Of these, the direct exfoliation of graphite by the superacid chlorosulfonic acid (CSA) is considered to be one of the most effective methods for its scalable and economical production. 1,4,15,16 By protonating and spontaneously exfoliating graphite, CSA yields colloidal liquid crystal dispersions of high-purity, single-layer graphene. 1 Moreover, this is achieved without the need for mechanical agitation, chemical functionalisation or surfactant stabilisation - which can compromise graphene's outstanding physical properties. 4,5 A major drawback of this technique is that CSA has poor compatibility with most conventional materials, reacting violently with water, alcohols, metals, and organic materials including many polymers and solvents. 17 Aramids are one of the few organic materials that can be dissolved by, yet resist decomposition from, powerful acids such as CSA - and are typically spun into fibres from anhydrous fuming sulfuric acid solutions at elevated temperatures. 9 In this communication, we show how PPTA-graphene composite fibres can be produced via co-processing with CSA. Wide-angle X-ray diffraction (WAXD) and polarised light microscopy indicated minimal aggregation of 2 μm graphene sheets within the fibres, even at a relatively high loading of 10% w/w. Evolution of gases (e.g., H₂O, HCl vapour) at the point of fibre formation (due to contact of CSA with the water coagulation bath) resulted in porous fibres with large cavities and a specific surface area of ca. 16 m² g⁻¹.
This porosity was likely detrimental to the mechanical and electrical conductivity properties of the fibres, meaning further development of the fibre spinning technique and processing conditions, such as post-spin drawing and heat treatment, would likely be required to attain high-strength, non-porous fibres analogous to commercial aramid fibres. Improved spinning conditions (e.g., employing dry-jet wet spinning) would also be needed to increase the degree of molecular and crystallite orientation to further improve mechanical properties, as seen in commercial aramid fibres. 9,10,18,19 In this work, PPTA was synthesised through the low-temperature anhydrous polycondensation of benzene-1,4-dicarbonyl dichloride and benzene-1,4-diamine (details in ESI †) (Fig. 1). 20 Commercial aramid fibres were not reprocessed for spinning due to the presence of contaminants (e.g., finishing oils) and since their precise composition is not disclosed by manufacturers (e.g., may contain up to 15 mol% non-aromatic linkages). 9,10 Since fibre strength is proportional to the average polymer molecular weight (M_ave), 18 the synthesis was first subjected to several rounds of optimisation to maximise M_ave. Due to the poor solubility of PPTA in most solvents, common techniques such as gel permeation chromatography (GPC) could not be employed to determine M_ave. Instead, capillary viscometry was employed to compare the dynamic viscosity (η) of the batches under analogous conditions, where a higher η implied a higher M_ave (Table S1, ESI †). Multiple dilute solution viscometry measurements were employed to obtain approximate M_ave values for the optimised batch (abbreviated B7-PPTA) as well as commercial Kevlar, by employing both the Huggins and Kraemer methods followed by application of the Mark–Houwink relation between the intrinsic viscosity [η] and M_ave, with the Mark–Houwink parameters K and a taken as 3902.4 and 1.556, respectively. 21 This gave M_ave values of 4.5 kDa (Huggins method) and 4.7 kDa (Kraemer method) for B7-PPTA, and an M_ave of 61.7 kDa (Huggins) and 79.2 kDa (Kraemer) for commercial Kevlar. Further optimisation of the synthesis would be required to attain an M_ave closer to that of commercial Kevlar and hence produce fibres with comparable mechanical properties.
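A worked sketch of this molecular-weight estimate is given below. The concentrations and viscosities are illustrative numbers, not the measured data, and the relation is written here as M_ave = K·[η]^a because that is the form consistent with the K, a and M_ave values quoted above; the units follow the cited source and are assumptions.

import numpy as np

# Mark-Houwink parameters quoted in the text; M_ave = K * [eta]**a is assumed
# (the form consistent with the reported molecular weights).
K, a = 3902.4, 1.556

# Illustrative dilute-solution viscometry data (assumed units of g/dL for c).
c = np.array([0.2, 0.4, 0.6, 0.8])            # concentration
eta_rel = np.array([1.25, 1.55, 1.90, 2.30])  # relative viscosity (illustrative values)
eta_sp = eta_rel - 1.0

# Huggins: eta_sp/c = [eta] + k_H*[eta]^2*c; Kraemer: ln(eta_rel)/c = [eta] - k_K*[eta]^2*c.
# In both cases, the intercept of a linear fit against c estimates the intrinsic viscosity.
intrinsic_huggins = np.polyfit(c, eta_sp / c, 1)[1]
intrinsic_kraemer = np.polyfit(c, np.log(eta_rel) / c, 1)[1]

for label, intrinsic in (("Huggins", intrinsic_huggins), ("Kraemer", intrinsic_kraemer)):
    M_ave = K * intrinsic ** a
    print(f"{label}: [eta] ~ {intrinsic:.2f}, M_ave ~ {M_ave:.0f}")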
To produce PPTA-graphene spinning dopes, two grades of graphite nanoplatelets (GNPs) - M25 and C750 - were first dispersed in CSA before addition and dissolution of B7-PPTA (details in ESI †). M25 GNPs have a stated average platelet diameter of ca. 25 μm whilst C750 GNPs have an average platelet diameter of ca. 2 μm. 22 The resultant PPTA spinning dopes (12% w/w) had graphene contents between 0.1-10% by mass relative to the PPTA (Fig. S1, ESI †). The solutions were then subject to wet spinning into a water coagulation bath using a custom-made spinning rig (Fig. S2, ESI †). After washing and drying, the B7-PPTA fibres without graphene had a pale yellow/brown colouration, which darkened through green into black as the graphene content increased (Fig. 2a). Fourier transform infrared spectroscopy (FTIR) was performed on all samples, showing an almost identical signature to commercial Kevlar (Fig. S3, ESI †). Scanning electron microscopy (SEM) images of the fibres revealed a relatively rough surface compared to commercial Kevlar (Fig. 2b). The thickness of the fibres varied between ca. 45-120 μm in diameter, significantly larger than commercial Kevlar which are ca. 11 μm in diameter. Smaller diameter fibres with greater uniformity could likely be obtained with a commercial wet-spinning rig. Polarised light microscopy revealed the B7-PPTA fibres to be paracrystalline, as observed by the birefringence effect (i.e. rotation of polarised light by crystalline domains) (Fig. 2c). This effect was exploited to visualise the dispersion of graphene throughout the fibres: as graphene content increased, the intensity of light passing through the fibres decreased - highlighting any regions of aggregated graphene as substantially darker areas. It can be seen from Fig. 2c that the B7-PPTA fibres containing M25 graphene have these darker areas, suggesting graphene aggregation, whereas the fibres with C750 darken relatively uniformly, suggesting good dispersion.
WAXD was employed to further probe the dispersion of the graphene as well as the crystallinity of the fibres (Fig. 3). Without graphene, the diffraction peaks of the B7-PPTA fibres matched those of commercial Kevlar, 8 although they were notably broader, suggesting smaller crystallites and hence a less ordered paracrystalline structure. 10 Commercial aramid fibres also have highly anisotropic crystallites which are aligned along the longitudinal axis of the fibre; this is achieved through specialised spinning techniques (dry-jet wet spinning) and the use of liquid crystalline spinning dopes. 8,10,23 Aggregated graphene, or graphite, has a distinctive WAXD diffraction peak at 26.6° 2θ, corresponding to the 002 Bragg reflection of stacked graphite sheets; 24 this peak is not seen in fully exfoliated graphene. 4,25,26 A 002 peak at 26° can be seen in the B7-PPTA-M25 fibres with a graphene loading of 2.5% or greater (Fig. 3a); however, it is not observed in any of the B7-PPTA-C750 fibres. This suggests that C750 graphene is relatively well-dispersed within the fibres and has not re-aggregated into GNPs during the spinning process, even at the relatively high loading of 10% graphene relative to PPTA. 5 Raman spectroscopy was employed to further analyse the graphene dispersion within the fibres. Raman spectroscopy is a common technique for the characterisation of graphene since it can differentiate between single and multi-layer sheets through characteristic changes to peaks around 2700 cm⁻¹. 27,28 These peaks were not observed, however, which was attributed to the vibrational modes being diminished and attenuated through interactions with the PPTA matrix, meaning differentiation between single- and multi-layer graphene through Raman spectroscopy could not be performed in this instance (Fig. S4, ESI†).
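The interlayer spacing probed by the 002 reflection follows from the peak position via Bragg's law. The short check below assumes Cu Kα radiation (λ = 1.5406 Å), which the text does not state, so it is a sketch rather than the authors' analysis:

```python
import math

lam_angstrom = 1.5406  # Cu K-alpha wavelength; the source does not state the anode
two_theta_deg = 26.6   # 002 reflection of stacked graphite sheets

theta = math.radians(two_theta_deg / 2)
d = lam_angstrom / (2 * math.sin(theta))  # Bragg's law, n = 1
print(f"d(002) = {d:.2f} A")  # ~3.35 A, the graphite interlayer spacing;
# its absence in the C750 fibres implies no extended restacking of the sheets.
```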
The outstanding mechanical properties of commercial aramid fibres are largely due to a high degree of non-covalent intermolecular bonding, namely H-bonding between adjacent amide groups and π-π interactions. 7 Restricted rotation around the amide bonds also promotes these interactions and the formation of semi-crystalline domains. 10 It was hypothesised that the highly linear nature of PPTA polymers, coupled with their high degree of aromaticity, could effectively stabilise graphene sheets through significant π-π interactions. These interactions could also be effective in transferring stresses from the polymer matrix to the reinforcing graphene. 11-14 Furthermore, alignment of the PPTA crystallites along the axis of the fibre (achieved in commercial aramid fibres through dry-jet wet spinning of liquid crystalline dopes) could also orientate the graphene sheets, which may further improve mechanical properties and electrical conductivity. 10,29 A basic computational model, based on the crystal structure of PPTA 8 and a monolayer of graphene, was constructed, which confirmed the possibility of π-π interactions between overlapping aromatic groups (Fig. S5, ESI†).
The mechanical properties of the fibres were assessed through uniaxial tensile testing (Fig. 4); however, brittleness of the fibres under compression made loading samples into the testing instrument (i.e., securely clamping the fibres), and therefore obtaining reliable measurements, difficult. The cross-sectional area of the fibres, required to calculate the ultimate tensile strength, was determined through a combination of cross-sectional SEM imaging and visible light microscopy (i.e., measurement of fibre diameter to calculate area). The cross-sectional SEM images revealed large cavities (ca. 90% cavity volume) within the fibres, which likely compromised their mechanical integrity (Fig. S6, ESI†). These cavities were attributed to the release of gases (e.g., H₂O, HCl vapour) as CSA contacted the water coagulation bath, resulting in a degree of foaming at the point of fibre formation. N₂ gas sorption and subsequent BET surface area analysis indicated a specific surface area of 16 m² g⁻¹ (Fig. S7, ESI†), suggesting a degree of non-visible micro-/mesoporosity in addition to the larger cavities. It should be noted that voids within commercial Kevlar fibres (i.e., PPTA spun from anhydrous sulfuric acid) have also been observed, 10,30 and have been attributed to differences in the coagulation rates between the skin and core of the fibres, 31 as well as cavities caused by the presence of Na₂SO₄ (from the neutralisation of residual H₂SO₄). 32 Other polymeric fibres such as poly(acrylonitrile) wet-spun from standard solvents (e.g. dimethyl sulfoxide, dimethylformamide) into water also have significant voids arising from phase separation, which can be removed by optimising the coagulation bath composition/temperature, as well as by post-spinning fibre treatment including drawing and heating under tension. 33 It is therefore feasible that a more advanced fibre spinning line could overcome the issue of cavities within the fibres and hence improve their mechanical properties. The conductivity of the fibres was measured using a 4-probe technique; however, there was no detectable electrical conductivity within the sensitivity range of the instrument. The lack of conductivity was attributed to the pores and cavities within the fibres impeding electron percolation.
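A minimal sketch of the strength calculation described above, converting a break force and an optically measured diameter into an ultimate tensile strength. The break force is hypothetical, and the circular, fully dense cross-section is an idealisation that the observed ~90% cavity volume would violate:

```python
import math

def tensile_strength_mpa(break_force_n, diameter_um):
    """Ultimate tensile strength (MPa) from break force and optically
    measured fibre diameter, assuming a circular, fully dense cross-section."""
    radius_m = diameter_um * 1e-6 / 2
    area_m2 = math.pi * radius_m ** 2
    return break_force_n / area_m2 / 1e6  # Pa -> MPa

# Hypothetical 0.5 N break force; diameters span the reported 45-120 um range.
# With ~90% cavity volume, stress on the load-bearing material would be ~10x higher.
for d_um in (45, 80, 120):
    print(f"{d_um} um -> {tensile_strength_mpa(0.5, d_um):.0f} MPa")
```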
Fig. 4 Ultimate tensile strength of B7-PPTA with increasing amounts of M25 and C750 graphene. Note: fibres with a 10% loading of M25 were too brittle to measure and therefore no data could be obtained.

The fact that CSA can effectively dissolve PPTA as well as produce stable colloidal dispersions of single-layer graphene 1 was exploited to produce PPTA-graphene composite fibres. Graphene sheets derived from C750 GNPs (average platelet diameter of 2 μm) were effectively dispersed throughout the PPTA fibres up to a loading of 10% by mass without any significant aggregation, as determined by WAXD and cross-polarised light microscopy. Aggregation was, however, observed for graphene sheets derived from M25 GNPs (average platelet diameter of 25 μm) at a loading of 2.5% and greater. Significant π-π interactions between PPTA chains and graphene sheets may facilitate homogeneous dispersion throughout the polymer matrix and effectively distribute mechanical stresses. Gases released at the point of fibre formation resulted in porous fibres with large cavities, compromising the mechanical and electrical conductivity properties. Further optimisation of the PPTA synthesis to increase M_ave, development of the spinning rig, and post-spinning fibre treatment could overcome these issues to produce non-porous aramid fibres with a high loading of well-dispersed graphene. This superacid co-processing approach could also be applied to other carbon nanomaterials (e.g., single- or multi-walled carbon nanotubes, fullerenes, activated carbon particles) or other acid-resistant polymers.
This work was funded by the Defence Science and Technology Laboratory and the Engineering and Physical Sciences Research Council (EPSRC; grant EP/N025504/1). The work is a contribution from the EPSRC/BBSRC Future Biomanufacturing Research Hub (EP/S01778X/1).
Conflicts of interest
There are no conflicts to declare.
Enhanced NRF2 expression mitigates the decline in neural stem cell function during aging
Abstract Although it is known that aging affects neural stem/progenitor cell (NSPC) biology in fundamental ways, the underlying dynamics of this process are not fully understood. Our previous work identified a specific critical period (CP) of decline in NSPC activity and function during middle age (13–15 months), and revealed the reduced expression of the redox-sensitive transcription factor NRF2 as a key mediator of this process. Here, we investigated whether augmenting NRF2 expression could potentially mitigate the NSPC decline across the identified CP. NRF2 expression in subventricular zone (SVZ) NSPCs was upregulated via GFP-tagged recombinant adeno-associated viral vectors (AAV-NRF2-eGFP), and its cellular and behavioral effects were compared to those in animals that received control vectors (AAV-eGFP). The vectors were administered into the SVZs of aging rats, at time points either before or after the CP. Results indicate that animals that had received AAV-NRF2-eGFP prior to the CP (11 months of age) exhibited substantially improved behavioral function (fine olfactory discrimination and motor tasks) in comparison to those receiving control viruses. Further analysis revealed that NSPC proliferation, self-renewal, neurogenesis, and migration to the olfactory bulb had significantly increased upon NRF2 upregulation. On the other hand, increasing NRF2 after the CP (at 20 months of age) produced no notable changes in NSPC activity at either the cellular or behavioral level. These results, for the first time, indicate NRF2 pathway modulation as a means to support NSPC function with age and highlight a critical time-dependency for activating NRF2 to enhance NSPC function.
Given the pivotal role of stem cells in tissues with lifelong regenerative capacity such as the brain, understanding stem cell aging will be important if we are to understand aging at the organ level. More broadly, comprehending stem cell aging will also support the development of interventions that could improve both health and lifespan.
In this context, our previous studies, conducted in naturally aging rodents, identified a specific temporal pattern of change in NSPC dynamics during aging. In particular, the studies highlighted a critical time during middle age (13-15 months) when the regenerative function of NSPCs showed a striking decline (Corenblum et al., 2016; Ray et al., 2018; Schmidlin et al., 2019). The studies also determined the reduced expression of nuclear factor (erythroid-derived 2)-like 2 (or NRF2) as a key mechanism mediating this phenomenon. As such, this work provided the first evidence of an important regulatory role for NRF2 in NSPC aging.
NRF2 is a redox-sensitive transcription factor known to be essential to the cell's homeostatic mechanisms (Bryan et al., 2013; Itoh et al., 2010; Suzuki & Yamamoto, 2017). NRF2 is ubiquitously expressed in most eukaryotic cells and functions to induce a broad range of cellular defenses against exogenous and endogenous stresses, including oxidants, xenobiotics, inflammatory agents, and excessive nutrient/metabolite supply. In particular, NRF2 can up-regulate a range of classical ARE (antioxidant response element)-driven genes, encoding major antioxidants and other detoxification enzymes. In addition to its classical function in regulating the stress response, NRF2 has been linked to cell growth, proliferation, mitochondrial and trophic functions, protein quality control, and increased lifespan (Holmstrom et al., 2013; Malhotra et al., 2010; Sykiotis & Bohmann, 2008; Tullet et al., 2008; Wakabayashi et al., 2010; Wiesner et al., 2013; Zhu et al., 2013). Our recent work adds a unique and important new facet to NRF2 actions in the cell, namely the age-relevant regulation of NSPCs (Corenblum et al., 2016; Madhavan, 2015; Ray et al., 2018; Schmidlin et al., 2019).
Given that NRF2 loss accentuates NSPC aging, in this study we investigated whether increasing NRF2 levels could boost NSPC function with age. In particular, we studied whether inducing high intrinsic NRF2 expression can potentially mitigate the decline in NSPC regeneration during the critical middle-age period between 13 and 15 months (mos) identified in our previous work. NRF2 was delivered to rat subventricular zone (SVZ) NSPCs through recombinant adeno-associated viral (AAV) vectors injected either before (at 11 mos of age) or well after the critical aging period (at 20 mos of age). We find that the administration of AAV-NRF2-eGFP vectors before the initiation of the critical period (CP) substantially improved SVZ NSPC regeneration and associated behavioral function, as compared to controls (AAV-eGFP delivery). On the other hand, application of AAV-NRF2-eGFP after the conclusion of the CP failed to significantly promote NSPC activity and function.
These data establish a major governing role for NRF2 in NSPCs and support targeting the NRF2 pathway as a potential approach to advantageously modulate NSPC function with age.
Viral expression of NRF2 in aging SVZ NSPCs improves behavioral function during the critical period
In order to address whether augmenting NRF2 expression can promote NSPC function during aging, recombinant adeno-associated viral vectors tagged with a GFP reporter, carrying either NRF2 (AAV-NRF2-eGFP) or eGFP alone (AAV-eGFP) as a mock control, were stereotactically delivered into the SVZs of aging rats. To specifically determine the effects of rescuing NRF2 expression in the context of the critical middle-age period (13-15 mos) identified in our previous studies, the vectors were injected into the SVZ either before (11 mos of age) or well after (20 mos of age) the CP. Subsequently, behavioral function (at 2 and 4 mos post-viral injection) and cellular changes (4 mos post-injection) were assessed (Figure 1a).
First, we confirmed the efficiency of viral transduction. It was found that AAV2/1 administration into two sites along the rostrocaudal extent of the lateral SVZ robustly and specifically transduces NSPCs (stereotaxic locations shown in Figure S1A,C and described in the Methods section). Strong GFP expression was noted in the rat SVZ by immunofluorescence microscopy (Figure 1b; broader views of the transduced areas are in Figure S1B,D). This high GFP expression was seen as early as 2 weeks post-injection, with peak viral transduction reached at 1.5 mos. As shown, co-labeling with antibodies targeting the NSPC-specific antigen Musashi1 (expressed by a large population of SVZ stem and progenitor cells) indicated that AAV2/1 proficiently infected SVZ NSPCs (confocal micrographs in Figure 1A-C). Moreover, significantly increased NRF2 expression was seen in the SVZ cells of animals that received AAV-NRF2-eGFP, as compared to GFP controls (Figure 1D-I). To ensure that NRF2 overexpression further activates downstream target genes, levels of the well-established NRF2 target gene glutamate-cysteine ligase modifier subunit (GCLM) in the SVZ were also assessed. As shown, GCLM expression was increased in the same SVZ cells that showed high NRF2 expression, thus confirming NRF2 pathway activation (Figure 1J-Q). This level of NRF2 expression and activation appeared comparable to what was observed in 9- to 11-month-old animals, as characterized previously (Corenblum et al., 2016).
Next, the behavioral consequences of increased NRF2 expression were analyzed. The fine olfactory discrimination task is a known measure of SVZ NSPC function that tests the animal's ability to discriminate between different ratios of a [+]/good-tasting coconut (COC) and a [−]/bad-tasting mixture of almond and denatonium benzoate (ALM) (Corenblum et al., 2016; Enwere et al., 2004; Schmidlin et al., 2019). As expected, the baseline olfactory function (i.e., prior to AAV injection) of the older 20 mos rats was significantly worse (reflected by lower scores on the Y-axis) than that of the 11-month-old animals (Figure 2A,D). Intriguingly, as compared to the AAV-eGFP control-injected rats, the 11-month-old animals that received AAV-NRF2-eGFP exhibited an increased capacity to discriminate between very similar ratios of COC and ALM (56:44) starting at 2 mos after injection [Figure 2B; p = 0.011, F(3,33) = 217.124 (concentration), two-way RM-ANOVA], which became even more significant by 4 mos post-injection (Figure 2C).

We also assessed motor function via a challenging beam task to investigate potential striatal effects of increased SVZ NRF2 expression. We generated a composite score that represents the ability of an animal to cross an increasingly narrow set of beams without foot slip errors, scooting across, or failing to cross the beam (Figure 2G-R). While there was no significant difference in the composite scores between animals at 2 mos post-AAV injection, the 11-month-old animals injected with AAV-NRF2-eGFP were able to successfully traverse both the 20 mm and 15 mm beams more often than their AAV-eGFP-injected counterparts at the 4 mos post-injection time point (Figure 2I; 20 mm beam: p = 0.0395, unpaired t test, t = 2.20, df = 20; 15 mm beam: p = 0.028, unpaired t test, t = 2.29, df = 31). Interestingly, AAV-NRF2-eGFP-injected rats also traversed the beams at a quicker pace than controls (Figure 2J-L). No comparable benefits were observed when NRF2 activation is delayed to an older age after the completion of the critical period (Figure 2M-R).

FIGURE 1 NRF2 expression and activation in the SVZ NSPCs. (a) Depicts the experimental design and timeline. (b) Shows GFP expression in SVZ cells 1 mos after AAV-eGFP injection (white arrows). (A-C) Depict confocal images of Musashi+ NSPCs showing high GFP expression after AAV-eGFP transduction (arrows indicate example positive cells). GFP-expressing NSPCs in the dorsolateral SVZ showed increased NRF2 expression in AAV-NRF2-eGFP-injected rats (G-I, arrows) compared to rats that had received AAV-eGFP (D-F). The NRF2 target gene, GCLM, was highly expressed in the same NRF2 overexpressing cells of AAV-NRF2-eGFP rats (N-Q) compared to control rats (J-M). Inset in O shows a higher magnification view of NRF2/GCLM co-labeling. Scale bar of 20 μm, applicable to images in A-Q, is drawn in Q.
SVZ NSPC proliferation and neurogenesis are enhanced following NRF2 overexpression during the critical period
Based on our findings that increased NRF2 expression can promote SVZ-associated behavioral function, we next investigated the underlying changes in NSPC proliferation and neurogenesis in the SVZ.
Increased NRF2 supports SVZ NSPC regeneration during the critical period
Given that viral NRF2 expression during the critical period increased SVZ NSPC proliferation and neurogenesis, we interrogated how NRF2 affects various NSPC subtypes in the SVZ by examining the expression of markers that delineate different SVZ stem and progenitor cells. SVZ NSPCs show a hierarchy of division: glial-like type B cells divide relatively infrequently to give rise to rapidly dividing type C transit-amplifying cells (also referred to as intermediate progenitor cells), which expand the progenitor pool. These type C transit-amplifying cells then generate immature type A neuroblasts that mature into fully differentiated neurons. To assess these different NSPC subtypes, we first stained for Musashi1 (Mus), which is highly expressed in type B and type C NSPCs. It was observed that Mus immunolabeling (red) in 11-month-old AAV-NRF2-eGFP-injected rats was notably greater than in AAV-eGFP controls (Figure 4A-F).
Confocal quantification confirmed higher numbers of Mus+ cells in the dorsolateral SVZ of AAV-NRF2-eGFP-injected rats compared to controls (Figure 4A; p = 0.003, unpaired t test, t = 4.72, df = 6). Next, we examined GFAP/Nestin (denotes type B NSPCs), Sox2 (marker of type C and some type B NSPCs), and Nestin (seen in type B and C NSPCs) expression (Figure 4). These data, in concert with the data in Figure 3, suggest that NRF2 activation improves the regeneration of all major SVZ cell types, namely type B, type C and type A cells.

FIGURE 2 Heightened olfactory discrimination and motor abilities in NRF2 overexpressing rats. Results from baseline testing of fine olfactory discrimination (upper schematic on the left) on naïve 11-month and 20-month rats (aging stages before and after the CP) are shown in (A) and (D). 11-month-old rats showed significantly improved abilities to discriminate between similar concentrations of odorants 2 mos (B) and 4 mos (C) after AAV-NRF2-eGFP administration, compared to controls. Analysis of rats which received AAV-NRF2-eGFP at 20 mos of age (after the CP) showed no positive effect on fine olfactory discrimination capacities (E,F). The "Ratio of odor components" label below graphs (D-F) applies to all six olfactory graphs above. [*p < 0.05, **p < 0.01, two-way repeated measures ANOVA with Tukey's post hoc test]. The lower schematic on the left shows the challenging beam apparatus. Younger 11-month-old rats showed similar composite motor scores at baseline (G) and 2 mos (H) after AAV administration. However, rats receiving AAV-NRF2-eGFP displayed significantly higher composite scores at 4 months post-viral injection when traversing the 20 mm and 15 mm beams (I). (J-L) Show beam traversal times for AAV-NRF2-eGFP rats and AAV-eGFP rats. AAV-NRF2-eGFP administration in older 20-month-old rats (after CP completion) had no effect on their composite motor scores when traversing the 30 mm beam at baseline (M), 2 months (N) or 4 months post-viral injection (O). Animals were unable to cross the 20 mm or 15 mm width beams (P,Q) at this age (indicated by "x"). Beam traversal time for the 20-month-old AAV-NRF2-eGFP-injected rats across the 30 mm beam is shown in (R). The "Beam widths" label below graphs (P-R) applies to all nine beam graphs above. [*p < 0.05, **p < 0.01, ***p < 0.001, unpaired t test with Welch's correction]
Increased NRF2 promotes NSPC migration via the RMS during the critical period
Newly generated neuroblasts (type A cells) leave the SVZ from the anterior part (base of the anterior horn of the lateral ventricle) and migrate along the rostral migratory stream (RMS) to the olfactory bulb (OB). AAV-NRF2-eGFP-injected animals showed higher numbers of newborn BrdU+/Dcx+ cells along this route than controls (Figure 5; rRMS: p = 0.001, unpaired t test, t = 9.82, df = 6). These results suggested that newly generated NSPCs overexpressing NRF2 not only proliferate and regenerate at the level of the SVZ but also migrate more effectively to the OB than control cells. When NSPC migration was assessed in the older 20-month-old animals in the aSVZ and rRMS, it was found that the number of BrdU+/Dcx+ expressing cells was higher on average in rats treated with AAV-NRF2-eGFP (Figure S2A-R). However, these increases were not statistically significant compared to control rats.
Increased NRF2 expression supports NSPC differentiation and neuronal maturation
Having confirmed that an amplification of NRF2 expression improves NSPC proliferation, regeneration, and migration during the critical period, we studied whether these newly generated migratory cells differentiate and mature into neurons. These effects were evident in the animals treated before the critical period but not in the older animals, suggesting that other cell-intrinsic and/or extrinsic factors may be interfering with NRF2-mediated effects in the older animals.
Increased NRF2 expression promotes striatal neurogenesis
Given the significant improvements in motor learning seen at 4 mos after viral NRF2 transduction in the 11-month-old rats, we also assessed striatal neurogenesis in the animals. Immunostaining with BrdU showed no apparent streams of potentially migrating cells from the subventricular zone in the AAV-NRF2-eGFP-injected animals, although occasional BrdU cells disjointed from the SVZ, as well as pockets of BrdU cells in the striatum, were noted (arrows in Figure 7A,B). The BrdU+ cells were most often found as single isolated cells distributed predominantly in the dorsomedial, and some in the dorsolateral, striatum (the schema in Figure 7C depicts this distribution of BrdU cells). NeuN immunostaining and high-resolution confocal imaging showed that multiple BrdU cells were co-expressing the neuronal marker (Figure 7D-K). Quantification determined that there were higher numbers of BrdU+/NeuN+ double-stained cells in AAV-NRF2-eGFP-injected animals than in the animals that had received AAV-eGFP only (Figure 7L). We also examined the differentiation of the newborn neurons into DARPP32+ cells, a marker of medium spiny neurons, which constitute a large proportion of striatal neurons. However, we did not detect any BrdU-labeled cells co-expressing DARPP32 (Figure S4A-H). These data indicated that increased NRF2 expression before the CP had induced striatal neurogenesis, but not further neuronal subtype specification.

FIGURE 5 NSPCs travel more successfully through the rostral migratory stream to the olfactory bulb upon NRF2 upregulation. Immunohistochemical analysis of newborn BrdU+/Dcx+ cells in the aSVZ, mRMS, and rRMS at 4 mos after viral injections in 15-month-old AAV-eGFP and AAV-NRF2-eGFP rats was conducted (see schematic at the top). AAV-NRF2-eGFP-injected animals showed higher BrdU/Dcx co-labeling in the aSVZ (A-H), mRMS (I-P), and rRMS (Q-X), compared to AAV-eGFP controls (arrows point to example BrdU/Dcx double-positive cells). Associated data from confocal quantification are shown in (a-c). (d) Conveys the average number of cells present across all three regions. [*p < 0.05, **p < 0.01, ***p < 0.001, unpaired t tests]. Scale bars: 10 μm. The scale bar for A-H is in H; for I-P in P; for Q-X in X.

DISCUSSION

Our results demonstrate that increasing NRF2 expression before a certain critical period of vulnerability during aging can enhance NSPC regeneration and function. These novel data are the first to reveal the ability of a cell-intrinsic factor, namely NRF2, to control NSPC aging and impact lifelong neural plasticity. Secondly, our data show that the observed activation of NSPC regeneration upon NRF2 upregulation correlated with significantly better performance on fine olfactory discrimination and motor learning tasks, thus connecting molecular enhancements to a behaviorally relevant readout. It was noted that 11-month-old rats that had received control AAV viruses displayed an expected decline in olfactory discrimination function by 15 mos of age. However, rats that were administered AAVs encoding NRF2 exhibited superior olfactory discrimination abilities at 2 mos and 4 mos post-viral delivery (at the 13 and 15 mos aging stages). This suggests that NRF2 upregulation can not only induce increased NSPC proliferation, self-renewal, differentiation, and migration to the OB, but also affect the olfactory circuitry, leading to functional effects. In terms of motor function, AAV-NRF2-eGFP-injected rats showed higher composite motor scores and faster traverse times on a beam walking task. Specifically, NRF2 overexpressing animals were able to cross the narrower 20 and 15 mm beams without foot slips or falls, compared to control animals. The AAV-NRF2-eGFP animals also traveled across the 15 mm beam at a quicker pace than controls. These data suggest that NRF2-based activation of NSPCs can also support striatum-based motor function. In this regard, it is known that a recruitment of newborn neurons into the striatum can contribute to motor function (Benraiss et al., 2012; Kobayashi et al., 2006; Madhavan et al., 2012; Yamashita et al., 2006). Our data show that this was indeed the case in the NRF2 overexpressing rats, thus providing a basis for the improvement in motor learning in the 11-month-old animals.
A third important finding is that the supportive effects of NRF2 on NSPC activity and function were largely muted when viral NRF2 delivery was delayed until an older age of 20 mos (after the end of the CP). More specifically, although Musashi-expressing cells were increased, and an almost significant increase in BrdU cell numbers (p = 0.052) was observed upon NRF2 upregulation in the older rats, other major NSPC populations (Nestin/GFAP, Nestin/Sox2, Dcx, and their behavior) were largely not affected. These results identify a certain age- and time-dependency of NRF2 effects, which is intriguing. More broadly, our data suggest that specific downstream molecular events may already have taken root, irrevocably compromising NSPC function by the end of the critical middle-age period, thus contributing to a resistance to NRF2-based rejuvenation after this time. Dysfunction of proteasome-dependent proteolysis is also heavily implicated in aging and cell senescence (Leeman et al., 2018; Morimoto & Cuervo, 2009). NRF2 regulates proteasome expression, and its activation has been shown to impede cellular senescence, while inactivation of the pathway recapitulates aging phenotypes (Gabriel et al., 2015; Kubben et al., 2016; Schmidlin et al., 2019). Such mechanisms may be important in determining the age-dependent NRF2 effects seen in our study. Interestingly, other studies have indicated that old mouse NSPCs can be activated through specific intrinsic manipulations (Leeman et al., 2018; Seib et al., 2013). In the context of the current work, since NRF2 has a multitude of targets besides GCLM, it is possible that age-associated differences exist in the ability of NRF2 to activate certain downstream genes versus others, due to cell-intrinsic or other extrinsic influences coming from the older niche. For instance, age-related epigenetic alterations in the NRF2 pathway may be involved (Guo et al., 2015). Such processes will need to be further investigated. Nevertheless, our work provides important information regarding specific time periods during which NSPCs may be more amenable, or resistant, to change, a fundamental subject that needs to be understood but one on which not much is known.
In the larger context of the presented work, we note that although NRF2 is known as a major transcription factor, essential to the cell's survival and homeostatic mechanisms, its specific contribution and importance in stem cells is only recently emerging (Bryan et al., 2013; Schmidlin et al., 2019; Suzuki & Yamamoto, 2017). Stem cells, including pluripotent/embryonic stem cells and adult tissue stem cells, possess unique metabolic programs and reduction-oxidation (or redox) states to sustain proliferation while maintaining pluripotency, multipotency, and/or specified differentiation (Dai et al., 2020). In this vein, it has been shown that NRF2 may govern stem cell function through the modulation of redox and metabolic pathways involving mitochondria and the proteasome (Holmstrom et al., 2016; Jang et al., 2014). In particular, NRF2, through its ability to control cellular reactive oxygen species (ROS) levels, would promote an optimal intracellular redox environment, increasingly recognized as critical to stem cell function (Hochmuth et al., 2011; Madhavan, 2015; Noble et al., 2003; Rafalski & Brunet, 2011). Studies by Khacho et al. (2016) suggest that changes in mitochondrial dynamics during neural stem cell development regulate cell fate decisions through a ROS-dependent, NRF2-mediated transcriptional process. The metabolic reprogramming from oxidative phosphorylation to glycolytic energy production seen during the induction of pluripotent stem cell differentiation is also dependent on ROS-mediated NRF2 activation (Hawkins et al., 2016; Zhou et al., 2016).
Besides these metabolic and redox effects, NRF2 is also known to directly regulate cell division and phenotypic fate by interacting with other transcription factors and cell cycle regulators involved in maintaining cellular self-renewal, multipotency/pluripotency, and differentiation (Wakabayashi et al., 2015; Zhu et al., 2013). Our work aligns with these studies and highlights NRF2 as a crucial player in aging stem cells.
In conclusion, our study provides evidence that enriching NRF2 expression during a critical time of aging can meaningfully support NSPC activity and function. These data implicate NRF2 as a powerful age-relevant regulator of NSPCs. Understanding the molecular basis of NRF2's effects will reveal fundamental aspects of NSPC biology that underlie its ability to sustain enduring plasticity and lifelong resilience. Moreover, optimizing NRF2 pathway regulation of downstream targets will likely expand the opportunities for clinical translation of NSPCs.
Animals
Adult male Fisher 344 rats aged 11 mos and 20 mos were obtained from the National Institutes of Health (NIH-NIA). The rats were
Intraperitoneal bromodeoxyuridine (BrdU) injections were given at about 4 mos post-AAV to label proliferating and migrating NSPCs (Corenblum et al., 2015; Madhavan et al., 2015). BrdU was delivered at a dose of 50 mg/kg every 12 h for 3 days, and the animals were sacrificed 4 days afterward.
Fine olfactory discrimination behavior
Rats were subjected to behavioral testing via a fine olfactory discrimination task, which is an established measure of SVZ NSPC function and neurogenesis in vivo (Corenblum et al., 2016; Enwere et al., 2004). As described previously, the task includes initial training and subsequent testing stages for discrete and fine odor discrimination, with the same 2-min time limit and 30-s inter-trial interval. This was repeated so that five total trials were conducted.
Challenging beam test
In order to assess striatum-based motor function, rats were tested using a modified beam walking task (Drucker-Colin & Garcia-Hernandez, 1991). Briefly, rats were trained to traverse a set of wooden beams of three different widths (15 mm, 20 mm, and 30 mm), consisting of a start platform at 100 cm above floor height and an end platform (with the rat's home cage) also at 100 cm above floor height. The rats were then evaluated through different measures to analyze motor strength and coordination. Training and testing were performed in the dark during the animals' awake cycle. Training consisted of 3 days, where each animal was placed on the beam and allowed to traverse the length in each of three trials, with a 30-s rest time between runs. Testing consisted of the rat being placed on each of the three beams consecutively and allowed to traverse the length; two trials were completed for each beam width. If the animal did not cross the beam in 120 s, the trial was considered unsuccessful. Evaluations were based on successful beam crossings, total time to traverse the beam, and foot slip errors. Specifically, total traverse time was recorded in seconds and included only trials in which an animal successfully crossed the entire beam and did not fall. For an overall task assessment, a composite score was also generated, giving each animal a starting score of 3. One point each was subtracted if the animal made a foot slip error or scooted across the beam instead of using its limbs to cross. If the animal fell or did not cross the beam, it was given a score of zero.
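The composite scoring rule just described can be summarised in a few lines. This sketch treats foot slips and scooting as single per-trial deductions, which is one reading of the rule:

```python
def composite_beam_score(crossed, fell, foot_slip, scooted):
    """Per-trial composite motor score as described above: start at 3, subtract
    1 for a foot-slip error and 1 for scooting; a fall or a failure to cross
    within 120 s scores 0."""
    if fell or not crossed:
        return 0
    return 3 - int(foot_slip) - int(scooted)

print(composite_beam_score(crossed=True, fell=False, foot_slip=True, scooted=False))  # 2
```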
Immunohistochemistry
Sections were blocked [10% normal goat serum, 0.5% Triton X-100 in Tris-buffered saline (TBS, pH 7.4)] and incubated in primary antibody overnight at room temperature (RT). Primary antibodies were detected in a 2-h incubation at RT with secondary antibodies coupled to the fluorochromes Alexa 488, 594 or 647 (Life Technologies-Molecular Probes) and counterstained with 4′,6′-diamidino-2-phenylindole, dihydrochloride (DAPI, Life Technologies). Alternatively, a chromogenic method was used in which primaries were exposed to biotinylated secondary antibodies (Vector Laboratories) followed by treatment with ABC reagent (Vector Laboratories) and 3,3′-diaminobenzidine [antibody details in Table S1].

Stereology and cell counting

Stereological quantification was conducted as described previously (Madhavan et al., 2012). Using the optical fractionator probe, BrdU cell counts were conducted through the dorsolateral SVZ in sections at 480 μm intervals across the rostrocaudal axis of the structure. In terms of the dorsoventral extent of the SVZ counted, it covered the SVZ area to a point midway between the genu of the corpus callosum and the anterior commissure crossing.
In all cases, after section thickness was determined, guard zones were set at 2 μm each at the top and bottom of the section. All contours were drawn around the region of interest at 2.5x magnification. Clear, uniformly labeled nuclei were counted under a 63x oil immersion objective using a grid size of 40 × 40 μm and a counting frame size of 60 × 60 μm. The counting frame was lowered at 1-2 μm intervals and each cell in focus was marked. The Gundersen method for calculating the coefficient of error was used to estimate the accuracy of the optical fractionator results. Coefficients obtained were generally less than 0.10.
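For readers unfamiliar with the optical fractionator, the estimator it implies is N = ΣQ⁻ × (1/ssf) × (1/asf) × (1/tsf), where ΣQ⁻ is the raw count and the three fractions are the section, area and thickness sampling fractions. The sketch below uses the grid and frame sizes given above; the section period, mounted thickness and dissector height are assumptions made only for illustration:

```python
def optical_fractionator(counted, section_period, grid_um, frame_um,
                         dissector_h_um, mounted_thickness_um):
    """Standard optical fractionator estimate: N = sum(Q-) / (ssf * asf * tsf)."""
    ssf = 1.0 / section_period                   # section sampling fraction
    asf = frame_um ** 2 / grid_um ** 2           # area sampling fraction
    tsf = dissector_h_um / mounted_thickness_um  # thickness sampling fraction
    return counted / (ssf * asf * tsf)

# The 40 um grid and 60 um frame are from the text; every 12th section, 20 um
# mounted thickness and a 16 um dissector (after 2 um guard zones) are assumed.
print(round(optical_fractionator(150, 12, 40, 60, 16, 20)))  # -> 1000
```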
Data were expressed as mean ± SEM of the total number of cells obtained across rostral to caudal sections counted in each experimental group.
Microscopy
Fluorescence analysis was performed using a Zeiss LSM880 confocal microscope (Zeiss). Z-sectioning was performed at 1-2 μm intervals in order to verify the co-localization of markers. Image extraction and analysis were conducted via the Zen Blue software (v2.5; Zeiss).
A Zeiss M2 Imager microscope connected to an AxioCam MRc digital camera was used for brightfield microscopy. A Leica DMI 6000 inverted microscope (Leica Microsystems) equipped with Leica Application Suite-Advanced Fluorescence 3.0 and a Hamamatsu Flash 4.0 sCMOS greyscale camera was used to image entire sections using a 5X dry objective. These pictures were captured using the Leica LAS-X version 3.7 software (Leica Microsystems) and used to generate stitched images that showed broader views of the AAV-eGFP transduction.
Statistical analyses
SigmaPlot 11 and GraphPad Prism 8 software were used for statistical analyses. For comparing two groups, t tests were used. For comparisons between three or more groups, one-way analysis of variance (ANOVA) followed by Tukey's or Bonferroni's post hoc test for multiple comparisons between treatment groups was conducted. A two-way repeated measures ANOVA with Tukey's post hoc test was applied to the olfactory behavioral data. Differences were accepted as significant at p < 0.05. Statistical details pertaining to each experiment are provided within the relevant results and legend sections.
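A minimal sketch of the two-group comparison used throughout (the unpaired t test with Welch's correction); the scores below are invented, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical per-animal composite scores; not the study's data.
nrf2 = np.array([2.8, 3.0, 2.6, 2.9, 2.7, 3.0])
ctrl = np.array([2.1, 2.4, 2.0, 2.3, 2.2, 1.9])

# Unpaired t test with Welch's correction (unequal variances), as in the text.
t, p = stats.ttest_ind(nrf2, ctrl, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
# The two-way repeated-measures ANOVA used for the olfactory data would need a
# dedicated routine, e.g. statsmodels' AnovaRM, rather than scipy alone.
```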
CONFLICT OF INTEREST
The authors declare no competing financial interests.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request. As such, we will follow guidance provided by the journal for sharing the data.
Infant formulas with synthetic oligosaccharides and respective marketing practices: Position Statement of the German Society for Child and Adolescent Medicine e.V. (DGKJ), Commission for Nutrition
Human milk contains more than 150 different oligosaccharides, which together are among the quantitatively predominant solid components of breast milk. The oligosaccharide content and composition of human milk show large inter-individual differences. Oligosaccharide content is influenced mostly by genetic variants underlying the mother's secretor status. Oligosaccharides in human milk are utilized by infants' intestinal bacteria, affecting bacterial composition and metabolic activity. Maternal secretor status, and the respective differing fucosylated oligosaccharide content, has been associated both with reduced and with increased risk of infection in different populations of breastfed infants, possibly due to environmental conditions and the infant's genotype. There are no safety concerns regarding the addition of previously approved oligosaccharides to infant formula; however, no firm conclusions can be drawn about clinically relevant benefits either. Therefore, infant formulas with synthetic oligosaccharide additives are currently not preferentially recommended over infant formulas without such additives. We consider the use of terms such as "human milk oligosaccharides" and corresponding abbreviations such as "HMO" in any advertising of infant formula to be an inappropriate idealization of infant formula. Manufacturers should stop this practice, and such marketing practices should be prevented by responsible supervisory authorities. Pediatricians should inform families that infant formulas supplemented with synthetic oligosaccharides do not resemble the complex oligosaccharide composition of human milk. Supplementary Information The online version contains supplementary material available at 10.1186/s40348-022-00146-y.
Background
Human milk contains lactose as a digestible carbohydrate and various oligosaccharides as indigestible carbohydrates. In mature human milk, the total content of oligosaccharides is 5-15 g/L. Together with lactose, fat, and protein, they are one of the major solid components of human milk [1,2]. Oligosaccharides in human milk (known as "breast milk oligosaccharides," "human milk oligosaccharides," or "HMOs") are made up of five building blocks, namely galactose, glucose, fucose, N-acetylglucosamine, and N-acetylneuraminic acid [3]. Beginning with lactose, the complexity of the diverse structures increases through one or multiple extensions with lacto-N-biose or lactosamine and additional modifications with fucose and/or sialic acid. A variety of short- and long-chain components of mammalian milk has been characterized [4]. Of these, about two-thirds are neutral and one-third are acidic (sialic acid-containing) oligosaccharides. There are 15 predominant oligosaccharides that account for 80-90% of the total content of oligosaccharides in human milk.
Individual variations and genetic predisposition
The oligosaccharide patterns in human milk show very large inter-individual differences, which are partly genetically determined. In humans, certain clusters can be distinguished by the presence or absence of certain glycosyltransferases, such as the fucosyltransferases FUT2 and FUT3 [5,6]. FUT2 mediates the synthesis of neutral oligosaccharides such as 2′-fucosyllactose (2′-FL) and lacto-N-fucopentaose-I (LNFP-I). FUT3 is crucial for the formation of lacto-N-fucopentaose-II (LNFP-II). The biological significance of the differences in human milk composition between secretors and nonsecretors (of 2′-FL) is a matter of debate. Lack of FUT2 activity has been associated with relative resistance to rotavirus and norovirus infections [9-11] but an increased colonization rate with group B streptococci [12]. Divergent effects have been reported in different populations. Studies from North America showed a lower incidence of diarrhea in breastfed children of secretors than in breastfed children of nonsecretors [13,14], while breastfed children of secretors in the UK, Bangladesh, Peru, and Tanzania showed increased diarrhea incidence [15,16]. An association of the level of 2′-FL in milk with excessive weight gain in infants has also been reported [17]. The effects of secretor status may differ depending on environmental conditions and pathogen exposure. In addition to the composition of human milk, the infant's secretor status also seems to be important. Infant FUT2 and FUT3 positivity was associated with a marked risk reduction of almost 30% for all-cause diarrhea [15]. However, further data from clinical studies are required to potentially support conclusive inferences.
Biological functions of oligosaccharides in human milk
Oligosaccharides pass undigested through the small intestine but are metabolized by gut bacteria. They can affect the metabolic activity and proliferation of the intestinal microbiota, similar to the effects of undigested lactose and fiber. With regard to the structural variety and the sometimes very high content of certain oligosaccharides in human milk, structure-specific effects have also been ascribed to them [1,2,18,19]. An ever-increasing number of ex vivo and animal studies indicates potential gastrointestinal and systemic effects. Effects on the composition of the intestinal microbiome have been the most studied so far. Oligosaccharides conveyed through human milk seem to be preferentially metabolized by certain commensal bacteria, in particular Bifidobacteria and Bacteroides species. Cross-feeding between bacteria that utilize certain oligosaccharides and receptor-analogous effects of oligosaccharides may influence intestinal colonization and the composition of the microbiota (through, for example, the formation of short-chain fatty acids). An infant's immune system could be influenced directly or indirectly via the composition of the microbiota. Furthermore, certain oligosaccharides interfere with the lectin-mediated binding of certain pathogenic bacteria or viruses to the intestinal mucosa [12,20]. Influences on intestinal permeability and intestinal cell maturation are also debated [1,21].
The total amount of oligosaccharides in milk does not differ between mothers of preterm infants with and without necrotizing enterocolitis (NEC) [22,23]. However, human milk fed to preterm infants who developed NEC had less disialyllacto-N-tetraose (DSLNT) than milk fed to control infants in studies conducted in South Africa [24], North America [22], and the UK [25], whereas NEC was associated with less milk lacto-N-difucohexaose I and lower diversity of oligosaccharides in a Swedish cohort [23]. In randomized controlled trials, pasteurized human milk has been shown to reduce the risk of NEC in preterm infants [26]. It is conceivable that human milk oligosaccharides which are not affected by pasteurization might contribute to the observed risk reduction for NEC.
Since small amounts of oligosaccharides can be taken up systemically, leukocyte-endothelium interactions detected in vitro, or effects on lymphocytes with subsequent production of specific cytokines, are also conceivable in vivo [27]. There is also some evidence to suggest that oligosaccharides may influence the gut-brain axis. In rodents and pigs, the use of oligosaccharides had a positive effect on the development of brain functions [28,29]. However, it is currently unclear whether these experimental animal data reflect the situation in human infants.
Oligosaccharides in cow's milk and goat's milk
In cow's milk, which serves as the basis for the production of infant formula, there are only a few mainly acidic oligosaccharides, present in very low concentrations. The total content in mature cow's milk is about 0.03-0.06 g/L. In goat's milk, which is also used for producing infant formula [30], concentrations of 0.06-0.35 g/L are slightly higher than in cow's milk [31].
Addition of synthetic oligosaccharides to infant formula
Oligosaccharides have been added to some infant formulas. Galactooligosaccharides (GOS) are galactose oligomers synthesized from lactose. GOS, including 3′-galactosyllactose (3′-GL), are found in human milk only in small amounts [32-34]. Fructooligosaccharides (FOS), also called oligofructose, are fructose polymers which have a sweetening effect. They are absent in human milk. In clinical studies, the addition of short-chain GOS and long-chain FOS in a ratio of 9:1 [30,35], which is approved in Europe, at a concentration of 0.8 g/100 mL led to softer stool consistency and an increase in the proportion of bifidobacteria in infants' stool [36]. No conclusive data are available for any other effect [36]. The European Food Safety Authority (EFSA) did not find evidence for any cause-effect relationship between the intake of GOS or FOS and reductions in gastrointestinal discomfort or potentially pathogenic microorganisms [37,38]. Advances in the production of oligosaccharides, including the use of genetically modified microorganisms, have made it possible to produce some of the oligosaccharides found in human milk on an industrial scale [21,39,40]. However, only simple, short-chain oligosaccharides are currently used, mostly because of financial costs. EFSA and the US Food and Drug Administration (FDA) have evaluated several synthetic oligosaccharides also found in human milk as novel food ingredients (2′-fucosyllactose, 2′-FL; lacto-N-neotetraose, LNnT; lacto-N-tetraose, LNT; 2′-FL + difucosyllactose, DFL; 3′-sialyllactose, 3′-SL; and 6′-sialyllactose, 6′-SL) [41-46]. Table 1 shows the maximum levels of synthetic oligosaccharides or combinations of oligosaccharides permitted for addition to infant formulas.
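As a quick check of the quantities implied by the approved 9:1 scGOS/lcFOS blend at 0.8 g/100 mL mentioned above, the per-litre amounts work out as follows:

```python
total_g_per_100ml = 0.8      # approved scGOS/lcFOS level cited above
ratio_gos, ratio_fos = 9, 1  # 9:1 short-chain GOS : long-chain FOS

total_g_per_l = total_g_per_100ml * 10
gos = total_g_per_l * ratio_gos / (ratio_gos + ratio_fos)
fos = total_g_per_l * ratio_fos / (ratio_gos + ratio_fos)
print(f"per litre: {gos:.1f} g GOS + {fos:.1f} g FOS = {total_g_per_l:.1f} g total")
```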
Recent clinical studies on infant formula supplemented with synthetic oligosaccharides
At present, there are only a few clinical studies in which the supplementation of infant formula with 2′-FL alone or in combination with LNnT or other nondairy oligosaccharides (GOS) has been investigated [27,47-49].
Marriage et al. reported in 2015 that the supplementation of infant formula with 2′-FL (control: 2.4 g/L GOS; experimental formula 1: 2.2 g/L GOS + 0.2 g/L 2′-FL; experimental formula 2: 1.4 g/L GOS + 1.0 g/L 2′-FL) did not lead to significant differences in the head circumference, height or weight of the infants in the experimental groups, compared to breastfed infants, over the first 4 months. In addition, the authors state that the supplemented formula was well tolerated, and that the amount of 2′-FL detected in blood was comparable to that in breastfed infants [47].
Two randomized studies with infant formulas to which 2′-FL [50] or 2′-FL and LNnT [48] had been added showed no adverse effects on infant growth or tolerance to the formula. As a secondary endpoint, fewer respiratory infections and less use of antipyretics and antibiotics in the first year of life were reported when using infant formula enriched with 2′-FL and LNnT compared to non-supplemented formula [48]. These findings require further verification. In a further clinical study on an infant formula supplemented with different concentrations of GOS, with or without the addition of 2′-FL, the authors describe a lower inflammatory cytokine profile in the first 4 months of life that is comparable to that of exclusively breastfed children [27]. The addition of 2′-FL + LNnT to infant formula has also been reported to affect bacterial populations in infants' stool [49].
In summary, no disadvantages in terms of infant growth have been observed in infants fed infant formulas supplemented with individual oligosaccharides previously approved by EFSA. Reported effects on the infant's gut microbiota and the defense against infections require confirmation in further studies. As reported above, some oligosaccharides such as 2′-FL are absent from human milk in 20-30% of mothers in Europe. Both advantages and disadvantages with regard to risk of infections in breastfed infants of nonsecretory mothers have been described in different studies. It is unknown whether the addition of fucosylated oligosaccharides to infant formula could analogously induce both potential benefits and risks. However, the existence of individual oligosaccharides in human milk alone is not a sufficient justification for an assumed additional benefit of structurally identical synthetic oligosaccharides in infant formula. The oligosaccharide fraction in human milk is highly complex and has an individualized composition. Whether these differences affect the health of the infant cannot be assessed at this time. Moreover, the complexity of the oligosaccharides in human milk currently cannot be emulated in infant formula [51]. Overall, existing data on supplementation of infant formula with synthetic oligosaccharides are considered too limited to make general recommendations for its use.
Marketing of infant formulas fortified with synthetic oligosaccharides
In their marketing to consumers, manufacturers of infant formulas and follow-on formulas enriched with synthetic oligosaccharides suggest a similarity with breastfeeding. They do this by using terms such as "breast milk oligosaccharides" or "human milk oligosaccharides" ("HMO") on product packaging, on their websites, through sponsored blogs, and in magazine articles. The use of these terms suggests to consumers that the oligosaccharide composition in infant formula is similar to that of human milk. This is not correct and can lead to consumer deception, because the addition of simple, short-chain oligosaccharides does not lead to a similarity with the complex composition of hundreds of short- and long-chain oligosaccharides in human milk. The Committee on Nutrition regards this kind of marketing as a violation of applicable European and German law. The European Union directive on infant formula and follow-on formula states that communication on infant formula "should not undermine the promotion of breastfeeding." Furthermore, "use of the terms 'humanised,' 'maternalised,' 'adapted,' or similar terms is prohibited" [52]. The German regulation of dietetic foods prohibits "idealized wording" in the labelling of infant formula. Accordingly, when labelling infant formula and follow-on formula, the use of the terms "humanized," "maternalised," "adapted," or similar terms is prohibited [53]. The Commission for Nutrition considers terms such as "breast milk oligosaccharides" or "human milk oligosaccharides" and respective abbreviations such as "HMO" in relation to infant formula to be misleading. Idealization of infant formula with the term "humanized" and similar terms is considered to be equally unlawful and to undermine the promotion of breastfeeding.
Additional information
The German version of this consensus article can be found as an additional file attached to this article.
Conclusions
• The Commission for Nutrition of the German Society for Child and Adolescent Medicine does not see any safety concerns when supplementing infant formulas with the synthetic oligosaccharides previously approved in Europe in the specified maximum amounts.
• The few studies on infants available to date do not allow any reliable conclusions to be drawn about clinically relevant advantages of synthetic oligosaccharide additives.
• Preferential use of infant formulas with synthetic oligosaccharide additives is therefore not recommended on the basis of currently available data.
• The use of terms such as "human milk oligosaccharides" and abbreviations such as "HMO" in promoting infant and follow-on formula represents an unacceptable idealization, which suggests a nonexistent similarity with human milk and can thus undermine the priority of breastfeeding promotion.
Additional file 1. The German version of the article.
Effect of UV-C Irradiation on the Shelf Life of Fresh-Cut Potato and Its Sensory Properties after Cooking
SUMMARY

Research background

Potato tissue is damaged during fresh-cut production, which makes fresh-cut potato susceptible to quality loss and microbiological spoilage. At the same time, such products are desirable due to their convenience; however, they are extremely sensitive and have a short shelf life. The main challenge of the fresh-cut potato industry is to find possibilities to overcome these drawbacks. UV-C treatment, known for its antibacterial activity, is a promising technique and shows potential to improve the shelf life of fresh-cut potato products.

Experimental approach

The influence of UV-C treatment on the safety and quality, as well as the sensory traits, of fresh-cut potato (Solanum tuberosum L. cv. Birgit) during storage was examined. For this purpose, 0-, 3-, 5- and 10-min UV-C irradiation was applied to vacuum-packed potato slices pretreated with sodium ascorbate solution. During 23 days of storage at (6±1) °C, microbiological, physicochemical and sensory properties of raw samples were monitored, along with sensory properties of boiled and fried fresh-cut potatoes.

Results and conclusions

The 5- and 10-min UV-C treatments significantly reduced microbial growth, increased total solids and lightness (L*), and positively affected the odour and firmness of raw potatoes. Cooked UV-C-treated samples were described as having a more pronounced characteristic potato odour and taste. Overall, UV-C-treated fresh-cut potato retained its good quality and sensory traits for up to 15 days at (6±1) °C.

Novelty and scientific contribution

To the best of our knowledge, this is the first scientific article dealing with the effect of UV-C light on the durability (safety, quality and sensory traits) of fresh-cut potato cv. Birgit and its suitability for boiling and frying. In general, UV-C treatment is a known antimicrobial technique, but its application to fresh-cut potato is poorly explored. Results confirmed that vacuum-packed fresh-cut potato treated only with UV-C and sodium ascorbate as an anti-browning agent, without the addition of chemical preservatives, had a twofold longer shelf life at (6±1) °C than fresh-cut potato not treated with UV-C. Fresh-cut potato treated with UV-C retained good overall quality and sensory properties whether raw, boiled or fried. Results of this study could also be useful for producers in terms of potential UV-C application as a strategy for prolonging the shelf life of fresh-cut potato.
INTRODUCTION
The popularity and commercial importance of fresh-cut products are growing due to their convenience for home meal preparation, the catering industry and many other food services. The processing of fresh-cut fruits and vegetables includes only washing, trimming, peeling and/or cutting and packing to maintain their freshness and high nutritional value (1). During that process they are susceptible to microbial growth, water loss, off-odour, tissue softening, browning and general loss of quality, which makes them very perishable and limits their shelf life (2). During processing of fresh-cut products, enzymes and their substrates are delocalized due to cell integrity damage, which results in higher enzymatic activity responsible for oxidative reactions. These reactions lead to the formation of brown melanoid pigments (3).
Fresh-cut potato is a potentially interesting potato product (4) and many studies are focused on finding solutions to preserve the quality and safety of fresh-cut potato and to extend its shelf life. For this purpose, appropriate cultivar, antimicrobial and antibrowning agents, packaging materials and conditions, as well as storage conditions, have been investigated (5,6). According to our latest published study, fresh-cut potato cv. Birgit pretreated with sodium ascorbate solution and vacuum-packed showed promising results during 8 days of storage at 10 °C (5). Besides the above-mentioned approach, non-thermal UV-C technology has been investigated, especially in terms of prolonging shelf life by preventing microbial growth and enzyme activity (7). The antimicrobial effect of UV-C is greatest at 254 nm, and its effectiveness is based on structural changes in the DNA of microorganisms, caused by cross-linking between pyrimidine bases, which prevents transcription and replication in the cells (8). However, the irradiated plant tissue can be damaged by high UV-C doses (9). Besides, the effectiveness of UV-C irradiation against enzyme activity depends on the applied dose and the sensitivity of enzymatic proteins, which is highly correlated with their nature (10,11). By exposing enzymes to irradiation, their spatial structure can change, enabling better exposure of active sites, which leads to an initial increase in enzyme activity (12). Thus, to extend the shelf life of fresh-cut products, it is necessary to evaluate the optimal doses of UV-C irradiation considering plant properties and the already mentioned antibrowning agents, packaging materials and packaging conditions, as well as storage conditions. According to Teoh et al. (6), the optimal UV-C dose was 684 mJ/cm² for potato slices dipped in ascorbic acid and calcium chloride solution, closed in permeable plastic boxes and stored for 10 days at 4 °C. This dose decreased the activity of polyphenol oxidase, phenylalanine ammonia lyase and peroxidase. Moreover, a significant decrease in browning and enzyme activity as well as an increase in firmness were observed in the study of Xie et al. (13), where potato slices were treated with sodium acid sulphate, irradiated with UV-C for 3 min and stored in polyethylene bags for 25 days at 4 °C.
Selection of the packaging material is also very important, particularly if slices are packed and then UV-C treated. The permeability of materials to UV-C irradiation depends on the type of polymers used as well as on their thickness. It was found that 40 µm thick polyamide/polyethylene laminate transmitted 80 % of UV-C irradiation (11).
Furthermore, UV-C treatment showed a positive effect on soft rot prevention in potato seed tubers (14). Also, irradiation of tubers reduced the accumulation of fructose and glucose during cold storage, which consequently reduced the formation of toxic acrylamide during frying (15) and increased brightness of the fries (16).
However, although a number of studies have dealt with the quality properties of cooked potatoes without UV-C treatment (17,18) or with UV-C pretreatment of tubers (16,19-21), reports regarding the effect of UV-C light on the quality and sensory attributes of raw and cooked fresh-cut potatoes are scarce.
Therefore, the aim of this study is to investigate the effect of different UV-C irradiation doses and 23 days of storage at (6±1) °C on microbial growth, quality and sensory properties of fresh-cut potato cv. Birgit, pretreated with sodium ascorbate solution and vacuum packed, as well as on the sensory properties of fresh-cut potatoes after boiling and frying.
Plant material
Potato (Solanum tuberosum L.) tubers of cv. Birgit were harvested in Slavonia region, Croatia (45°40'N, 17°1'E) during 2019, treated with anti-sprouting agent (Gro Stop Basis and Gro Stop Fog, Certis Europe, Great Abington, UK) and stored for one month in the dark (8 °C and relative humidity approx. 100 %) before analysis.
UV-C treatment
The potato slices were treated in a UV-C chamber (UVpro EKB 100; Orca GmbH, Kürten, Germany) equipped with 4 UV-C lamps (4×HNSL 24 W, maximal emission at 253.7 nm; UVpro). The samples were irradiated for 0 (control), 3 (3-UV-C), 5 (5-UV-C) and 10 min (10-UV-C) to obtain doses of 0, 162, 270 and 540 mJ/cm² outside and 0, 108, 180 and 360 mJ/cm² inside the vacuum bags (UVC pro radiometer; Orca GmbH). Afterwards, the untreated and UV-C-treated samples were stored at (6±1) °C and analysed at the beginning of the storage (day 0), on the 8th day, because in our previous study (5) we found that 8-day stability can be achieved using vacuum packing and sodium ascorbate treatment (under the same conditions as described in the paragraph Sample preparation), and on the 11th, 15th and 23rd days of storage. The experiment was done in duplicate.
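As a quick consistency check on these settings, the reported doses imply a roughly constant irradiance at the sample; the sketch below assumes dose = irradiance × exposure time (a relationship not stated explicitly above) and uses only the exposure times and doses quoted in this section.

```python
# Hypothetical consistency check: dose (mJ/cm^2) = irradiance (mW/cm^2) x exposure time (s).
times_s = {"3-UV-C": 180, "5-UV-C": 300, "10-UV-C": 600}
dose_outside = {"3-UV-C": 162, "5-UV-C": 270, "10-UV-C": 540}   # mJ/cm^2, outside the bag
dose_inside = {"3-UV-C": 108, "5-UV-C": 180, "10-UV-C": 360}    # mJ/cm^2, inside the bag

for name, t in times_s.items():
    print(name,
          f"outside ~{dose_outside[name] / t:.2f} mW/cm^2",
          f"inside ~{dose_inside[name] / t:.2f} mW/cm^2")
# Each treatment corresponds to ~0.9 mW/cm^2 outside and ~0.6 mW/cm^2 inside the vacuum bag,
# i.e. the bag transmits roughly two thirds of the incident UV-C.
```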
Determination of oxygen permeability of packaging
Oxygen permeability (cm³/(m²·day·kPa)) of the packaging was determined using the manometric method on a permeability tester (GDP-C; Brugger Feinmechanik GmbH, Munich, Germany). The increase in pressure during the test period was evaluated and displayed by an external computer. Data were recorded and permeability was calculated automatically. The sample temperature of (23±1) °C was maintained using an external thermostat (Haake F3 K circulating water bath chiller/heater; Haake GmbH, Karlsruhe, Germany). All measurements were carried out in duplicate.
Microbiological analysis
Total aerobic mesophilic bacteria count (TAMBC) was determined at 30 °C according to HRN EN ISO 4833-1:2013 method (22). Dilutions were made with peptone water (0.1 %, m/V) and surface plated (1 mL) in duplicate on a plate count agar (Biolife, Milan, Italy). The plates were incubated at (30±1) °C for (72±3) h in dry heat oven (FN-500; Nüve, Ankara, Turkey). Analyses were performed on raw samples and the results were expressed as mean value of log CFU/g.
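For orientation, the conversion from plate counts to the reported log CFU/g values can be sketched as follows; the colony counts in the example are invented and the function assumes duplicate surface-plated 1 mL aliquots as described above.

```python
import math

def log_cfu_per_g(colony_counts, dilution_factor, plated_volume_ml=1.0):
    """Mean of replicate plate counts converted to log10 CFU per gram.

    colony_counts   -- colonies counted on replicate plates at one dilution
    dilution_factor -- e.g. 1e-4 for a 10^-4 dilution of the homogenate
    """
    mean_count = sum(colony_counts) / len(colony_counts)
    cfu_per_g = mean_count / (dilution_factor * plated_volume_ml)
    return math.log10(cfu_per_g)

# Hypothetical example: 20 and 24 colonies on duplicate plates of a 10^-1 dilution
print(round(log_cfu_per_g([20, 24], 1e-1), 2))   # ~2.34 log CFU/g
```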
Determination of total solids, soluble solids and pH
The raw potato slices were homogenized (MSM89160 blender; Robert Bosch GmbH, Gerlingen-Schillerhöhe, Germany) and used for determination of total solids, soluble solids and acidity. Total solids were calculated as a percentage of the mass ratio before and after drying potato samples at (105±1) °C (FN-500; Nüve) to a constant mass, while soluble solids were determined by a digital refractometer (DR201-95; A. Krüss Optronic GmbH, Hamburg, Germany) at 20 °C and expressed as °Brix (g/100 g). The pH was measured by a pH meter (WTW Lab pH meter inoLab® pH 7110; Xylem Analytics Germany GmbH, Weilheim, Germany). All measurements were carried out in duplicates and results were expressed as mean value±standard error (S.E.).
Firmness analysis
The firmness of raw fresh-cut potato samples was determined using a texture analyser (Fruit Texture Analyzer, Agrosta, Serqueux, France) with 5 kg load cell and 2 mm punch probe. High and low speeds were set to 1 mm/s and stroke after contact to 2 mm. Firmness was determined by measuring the maximum force (N) required to puncture the slices. The measurements were performed on two slices of each sample with 2 punctures on each slice and the results were expressed as mean value±S.E.
Colour analysis
The colour of raw fresh-cut potato slices was measured by a colorimeter (CR-5; Konica Minolta, Tokyo, Japan), equipped with D65 light source and 2° standard observers using CIELAB colour parameters: L* (lightness), a* (red/green) and b* (yellow/blue). Measurements were performed on two slices of each sample and results were expressed as the mean value±S.E.
Cooking treatments
Immediately after the treatment and on the 8th, 11th, 15th and 23rd day of storage, raw samples were cooked according to Dite Hunjek et al. (18). Samples were boiled in distilled water Φ(water, sample)=5:1 at 100 °C for 15 min. Other samples were fried in sunflower oil (m(sample)/V(oil))=120 g/L at initial temperature of 180 °C for 5 min. The surface moisture and oil of cooked potatoes were removed with paper towel.
Sensory monitoring
Quantitative descriptive analysis (QDA) of raw, boiled and fried potato samples was conducted in a sensory laboratory equipped according to the ISO 8589:2007 (23) guidelines at ambient temperature (20 °C) by a panel of six trained people from the faculty and according to the ISO procedures 6658:2017 and 8586:2012 (24,25). Panellists had 3-day training before the evaluation in order to get acquainted with the product sensory descriptors and its evaluation. The panellists judged the quality and ranked each sample served at ambient temperature on coded plastic plates using a standard five-point scale from 1 (the lowest grade) to 5 (the highest grade) as described by Dite Hunjek et al. (5,18). Briefly, colour, as the browning intensity, was scored as follows: 1=no browning (white or cream), 2=no browning (yellow), 3=light browning, 4=average browning and 5=complete browning. Intensity of odour and off-odour was described as follows: 1= absent to 5=very pronounced, moistness from 1=very dry to 5=very moist and firmness from 1=very soft to 5=very firm. Additional sensory attributes of boiled and fried potatoes were evaluated: potato-, sweet, sour, salty, bitter and off-taste from 1=absent to 5=very pronounced. Creaminess of boiled potato was scored from 1=absence of creamy texture to 5=melting in the mouth, while oiliness and crispness, as fried potato attributes, were graded with 1=absent to 5=very pronounced. All tested attributes are given in the tables as mean value±S.E. (N=6).
Statistical analysis
The statistical analysis by parametric statistical tests was carried out to observe the effect of the UV-C treatment and storage time on the quality properties of raw, boiled and fried potato. The TAMBC, soluble solids, total solids, pH, firmness, colour parameters and sensory attributes were dependent measurable variables, while UV-C treatment and storage time were independent variables. Dependent variables were analysed by multivariate analysis of variance (MANOVA), while differences between specific group means (equal sample sizes) were determined by applying Tukey's HSD test. The analysis was performed using Statistica v. 8.0 software (26). In order to examine possible grouping of the samples, principal component analysis (PCA) was performed on the correlation matrix using XLSTAT v. 5.1 software (27), wherein principal components (PC) with eigenvalue >1 and variables with communalities ≥0.5 were considered. The significance level for all tests was α≤0.05.
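The analyses described above were run in Statistica and XLSTAT; the sketch below shows an equivalent open-source workflow in Python (statsmodels and scikit-learn). The data file and column names are hypothetical, chosen only to illustrate the MANOVA, Tukey's HSD and PCA steps.

```python
# Hedged sketch of the statistical workflow, not the software actually used in the study.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA
from statsmodels.stats.multicomp import pairwise_tukeyhsd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("freshcut_potato.csv")          # hypothetical file: one row per measured sample

# MANOVA: dependent variables against UV-C treatment and storage time
fit = MANOVA.from_formula(
    "firmness + pH + total_solids + soluble_solids ~ treatment + day", data=df)
print(fit.mv_test())

# Tukey's HSD on a single dependent variable, grouped by treatment
print(pairwise_tukeyhsd(df["firmness"], df["treatment"], alpha=0.05))

# PCA on the correlation matrix (i.e. on standardised variables), keeping PCs with eigenvalue > 1
X = StandardScaler().fit_transform(df[["firmness", "pH", "total_solids", "soluble_solids"]])
pca = PCA().fit(X)
keep = pca.explained_variance_ > 1
print("PCs kept:", keep.sum(), "explained variance ratio:", pca.explained_variance_ratio_[keep])
```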
Influence of UV-C treatment on permeability of packing material
Although a slight increase of permeability of the packing material (1200 and 1300 cm³/(m²·day·kPa)) was noticed for the samples 5-UV-C and 10-UV-C, respectively, this was not significantly different from the control (900 cm³/(m²·day·kPa); data not shown). Sample 3-UV-C had an identical value to the control.
Tarek et al. (28) also concluded that the applied UV-C doses of 46.7-746 mJ/cm² (for 0.5 to 8 min at 23 °C) did not affect surface properties of polyethylene (PE) film used for cucumber packing. It was also found that UV-C transmittance through polymeric films depends on their characteristics (such as thickness, composition, level of crystallinity and number of layers in the film). Thus, for example, PE film (24.7 μm) shows a transmittance of 75.5 % and multilayer films composed of six or more layers exhibit 0 % transmission (29), while PA/PE laminate is 80 % permeable to UV-C (11), similar to polypropylene film (30). Although the effect of UV-C treatment on polymeric films has been investigated by several authors (30,31), it seems that this treatment does not affect barrier properties (28), while different observations were reported for the mechanical properties and surface morphology of the polymers (29,31).
Aerobic mesophilic bacterial count affected by UV-C treatment and storage time
The TAMBC in untreated and UV-C-treated raw fresh-cut potato during storage is presented in Fig. 1. Statistical results showed significant differences (p<0.01) in TAMBC among fresh-cut potato samples. The initial microbial load of the control sample was 2.30 log CFU/g. At the beginning of the storage, the lowest TAMBC was noticed in 10-UV-C samples (2.18 log CFU/g). When comparing all UV-C treatments with the control throughout the storage period, log CFU/g values decreased significantly in 5- and 10-UV-C samples, especially until the 15th day. On that day, measured values for 5- and 10-UV-C samples were 8.36 and 8.17 log CFU/g, respectively. These results indicated that UV-C treatment longer than 5 min did not significantly improve the decontamination effect. Similar results were reported in a study of Manzocco et al. (32) on fresh-cut melon cubes. The possible reason could be low UV-C light transmittance through the tissue as well as the rough surface of the fresh-cut product, which can partially overshadow the microorganisms and thus reduce the effect of radiation (32,33). At the end of storage, all applied UV-C treatments were equally effective at reducing TAMBC in fresh-cut potatoes compared to the control. However, it should be mentioned that for this type of foodstuff (fresh-cut potato intended for further cooking) there is no information provided by the EC regulations (34,35) related to microbiological criteria regarding TAMBC. Similarly, the Croatian Agency for Agriculture and Food (36) issued a borderline level of TAMBC only for ready-to-eat vacuum-packed and refrigerated vegetables, and it is ≥10⁸ CFU/g.

Total solids, soluble solids and pH of fresh-cut potatoes influenced by UV-C treatment and storage time

As shown in Table 1, total solid content was affected by UV-C treatment (p=0.046), while storage time did not have a significant influence (p=0.054). The mean value of total solids in the control sample was 21.7 %. The highest values were obtained in 10-UV-C samples (23.2 %) and generally on the 11th day of storage (23.1 %). With regard to the UV-C treatment, all treated samples had higher total solid content, which increased with the increase of UV-C dose. The grand mean value of total solids was 22.24 %, which was quite similar to that already reported (20.72 %) by Dite Hunjek et al. (5) for cv. Birgit potatoes (harvested in 2018). The slight differences could be a result of different treatment, as well as crop year or growing conditions (37). The total solid content obtained in the present study represents an acceptable value in terms of frying, considering that a potato dry matter content of 20-24 % is appropriate for chips (38). Higher potato dry matter will result in a harder crust and a drier inside texture (39).
However, the UV-C treatment and storage time had a significant influence (p≤0.01) on total soluble solid content, which varied from 4.18 to 4.80 g/100 g (°Bx) ( Table 1). In comparison with control (4.59 g/100 g), the total soluble solid content decreased with the increase of UV-C dosage, where the lowest value was measured in 10-UV-C sample (4.30 g/100 g). These results are in accordance with the results of Islam et al. (40), who treated tomatoes with UV-C. This could be related to the impact of UV-C on conjugated structural bonds of some soluble solids, which leads to their degradation or alteration (41). In this study, a significant decrease in soluble solid content was observed after 8 days of storage (4.40 g/100 g), after which it remained stable until the end of storage, when it significantly increased. Kasim and Kasim (42) also reported oscillations of total soluble solids during storage depending on the applied dose of UV-C on fresh-cut melon cubes.
UV-C treatment and storage time significantly affected the pH of raw fresh-cut potatoes (p<0.01), which ranged from 5.42 to 5.99. When compared to the control (5.64), the lowest pH value was observed in 10-UV-C samples (5.57) and after 15 days of storage (5.42) ( Table 1). The pH decreased with the increase of UV-C dosage, similarly to the results of Islam et al. (40), who also reported an increase of the titratable acidity of treated tomatoes with the increase of UV-C doses. Moreover, pH also decreased during storage probably due to the respiration rate increase and CO 2 production, which is in accordance with the results of Dite Hunjek et al. (5) and Rocha et al. (43). Lower pH can contribute to lower enzyme activity and consequently to the reduced intensity of browning (44).
Firmness of fresh-cut potatoes influenced by UV-C treatment and storage time
Firmness was significantly affected by the UV-C treatment (p<0.01) without significant effect of storage duration (p=0.14) ( Table 1). The firmness grand mean value was 7.37 N, which is in accordance with the results for cv. Birgit (7.42 N) (18). Control sample was described with the highest firmness value (7.77 N) as well as the samples on the 8th day of storage (7.5 N). The firmness of fresh-cut potatoes was lower in the UV-C-treated samples than of the control. However, increase of the UV-C dose caused firmness increase, which could be linked to the possible reduction of activity of plant cell wall degrading enzymes (45). A similar observation was also previously reported when fresh-cut pineapples were treated with UV-C (46).
Colour of fresh-cut potatoes influenced by UV-C treatment and storage time
The effect of UV-C treatment and storage time on the colour parameters of raw fresh-cut potatoes is shown in Table 1. The lightness (L*) was considerably higher in 5-UV-C and 10-UV-C samples and lower in 3-UV-C samples than in the control. A similar trend was also noticed in UV-C-treated watermelon, where L* values increased with the increase of the applied UV-C dose (33). This occurrence could be associated with the effect of UV-C light on the inactivation of enzymes such as polyphenol oxidase or with reduced carotenoid content (47). In this study the parameter b*, whose positive values describe yellow colour and usually reflect the presence of carotenoids in the potato (48), was not significantly reduced. The obtained colour parameters for fresh-cut potatoes are consistent with the European Cultivated Potato Database (49) data, where the colour of tuber flesh of cv. Birgit is listed as yellow and also very resistant to enzymatic browning.
Sensory attributes of raw, boiled and fried fresh-cut potato affected by UV-C treatment and storage time
Raw fresh-cut potato samples

All sensory attributes of raw fresh-cut potatoes were significantly affected by UV-C treatment and storage time (p<0.05), except moistness (p>0.05) (Table 2). The colour was rated from 1.58 in 3-UV-C to 1.98 in control, indicating negligible occurrence of browning. Furthermore, all UV-C-treated samples showed a significant discoloration and were graded as brighter than control, which is in accordance with previously discussed L* values (Table 1). Similar results were also reported by Manzocco et al. (32). The 10-UV-C samples showed more pronounced odour and less pronounced off-odour than other samples. All UV-C-treated samples were less firm than control, but the most pronounced reduction in firmness was observed for 3-UV-C samples, which is in accordance with the results measured by the instrument (Table 1). During storage, potato colour was scored from 1.50 to 1.94, indicating no degradation in terms of browning. The odour was stable until the 15th day of storage, but at the end of storage the development of off-odour was notable. The lowest firmness was observed at the beginning of storage, but by the end of storage the fresh-cut potatoes maintained uniform firmness. Generally, the results of this study showed that UV-C treatment preserved the sensory attributes of colour, odour, moistness and firmness of raw fresh-cut potatoes during 15 days of storage. However, due to the off-odour development, the samples were not sensorially acceptable at the end of storage.
Boiled fresh-cut potato samples

Table 3 shows that the majority of the evaluated sensory attributes of boiled potatoes were significantly affected by UV-C treatment and storage duration (p<0.05). Sour, bitter and off-taste were not influenced by storage time, nor off-odour and moistness by UV-C treatment (p>0.05). As observed for raw samples, all UV-C-treated samples had brighter colour. The 5- and 10-UV-C samples had more intense boiled potato odour and sweet, salty and potato taste. The desirable boiled potato flavour is a result of many naturally present characteristic compounds (glutamic and other amino acids) and the ones produced during cooking (e.g. guanosine-5'-monophosphate and other 5'-ribonucleotides). Many other components such as methional, aliphatic alcohols and aldehydes also contribute to potato flavour. Besides, the desirable flavour of boiled potato derives from 2-isopropyl-3-methoxypyrazine, a compound with an extremely low threshold present in raw and boiled potato (50-52). Evidently, UV-C treatment did not have a negative impact on flavour compounds and may even have stimulated their formation or better expression. All UV-C-treated samples had lower firmness and more pronounced creaminess than the control, where a higher decrease in firmness and increase in creaminess were noticed when the UV-C dose was increased. The increased UV-C dose could probably induce some structural changes in the potato tissue, which can consequently be observed as a softer texture of the boiled potatoes. The softening degree of the boiled potatoes during cooking is influenced by starch characteristics such as the amylose to amylopectin ratio, cell separation and cell wall softening (37). Some functional properties of starch can be changed as a result of prolonged UV-C treatment, such as the capability of absorbing and holding water during gelatinization, reduction in amylose content, appearance of fractures and exocorrosion on the surface of the starch granule or a drop in crystallinity (53).
After the 15th day of storage, browning was slightly more pronounced. Throughout the storage the odour was highly rated, while the off-odour was more pronounced only at the end of storage, and it received lower scores in boiled than in the raw samples. This could be explained by the volatility of compounds responsible for the off-odour of raw potato. This was also observed previously by Dite Hunjek et al. (5). The firmness and creaminess showed variations in scores during storage; however, at the beginning of storage firmness was rated with the lowest scores and creaminess achieved the highest scores. On the 11th day, sweet taste was the most evident compared to other days, while a salty taste was the most prominent on the 8th day. Generally, boiled 5- and 10-UV-C samples were characterised by desirable odour, creaminess and taste, as well as appropriate colour and acceptable firmness. These favourable sensory attributes were preserved for 23 days of storage.

Fried fresh-cut potato samples

Most of the analysed sensory attributes of fried potato were significantly affected by the UV-C treatment and storage time (p<0.01), with the exception of off-odour, crispiness, sour, bitter and off-taste (Table 4). Oiliness was significantly influenced only by storage time (p<0.01), but numerical differences were very slight (in a range from 1.00 to 1.13). All UV-C-treated samples had slightly brighter colour (2.11 to 2.15) after frying than fried control samples (2.33), and browning was not observed. According to the results of Sobol et al. (16), UV-C irradiation applied on potato tubers increased the brightness of the fried potatoes, which is in line with the present results. Lin et al. (15) reported a lower content of fructose and glucose in irradiated tubers during storage. During processing at high temperatures, reducing sugars and amino acids participate in Maillard reactions, which are responsible for colour and volatile compound formation in fried products (54). Presumably, the increased brightness could be linked to the lowering of reducing sugars caused by UV-C treatment. Firmness of 10-UV-C samples significantly decreased, as was observed in boiled ones (Table 3). Odour and potato, sweet and salty taste significantly increased in fried 10-UV-C potatoes. Potato taste intensity increased with the applied UV-C dose, while potato off-taste was not pronounced; a similar observation was made for boiled fresh-cut potatoes.
Even though storage time showed a significant effect (p<0.01) on more than half of the evaluated properties, numerical differences were very slight. The browning scores were in the range of 2.00-2.52, and they were more pronounced on the 15th and 23rd days, like in boiled samples. Moreover, the sweet and salty taste of fried potatoes decreased and oiliness increased with storage time. The potato taste and odour were highly scored, and off-odour was not noticed regardless of the fresh-cut potato storage duration. These results indicate that the observed changes in off-odour of stored raw fresh-cut potatoes do not have an influence on the odour of fried potatoes, which is in accordance with observations for boiled potatoes. Generally, UV-C treatments positively affected the taste, odour and colour formation in fried potatoes regardless of storage time.
Results of PCA analysis of the applied UV-C treatment and storage time
PCA was used to visualize relations among the analysed parameters and to determine possible grouping of raw, boiled and fried fresh-cut potato samples in relation to the applied UV-C treatment and storage time (Fig. 2, Fig. 3 and Fig. 4, respectively).
Considering UV-C treatment duration, almost all 5- and 10-UV-C-treated raw samples were placed among negative PC2 values since they received higher scores for colour parameters and odour. Moreover, 3-UV-C samples were not perceived as a separate group, while 0-UV-C samples were distributed mainly among positive PC1 and PC2 values. Furthermore, grouping was observed in relation to storage time, where all samples from the 23rd day of storage were also placed in the upper right quadrant, characterized by higher values of TAMBC, browning and off-odour. The 0-UV-C samples taken on the 11th and 15th days of storage were also positioned in this part of the factorial plane.
Considering the boiled fresh-cut potato samples, browning intensity, odour, moistness, sensorial firmness, creaminess, potato taste and bitterness were selected as PCA-active variables, and PC1 and PC2 explained 63.92 % of the total data variance (Fig. 3). PC1 showed a strong correlation with browning intensity (r=-0.750), creaminess (r=0.699), potato taste (r=0.770), sweetness (r=0.626), saltiness (r=0.611) and bitterness (r=-0.746), as well as a moderate correlation with off-odour (r=-0.501), while a strong/moderate correlation was present between PC2 and off-odour (r=0.602), creaminess (r=0.430), saltiness (r=0.402) and bitterness (r=0.493). Clear separation of the samples can be noticed with regard to UV-C treatment. The major distinction was observed for 10-UV-C-treated samples, which were distributed among the positive values of PC1 and PC2 and were characterized by positive sensory attributes such as creaminess and saltiness. On the other hand, almost all control samples were placed among the negative values of PC1 and PC2. Also, 3- and 5-UV-C samples were situated around the centre of the factorial plane. Besides, boiled fresh-cut potatoes on the 23rd storage day were again separated by negative PC1 values and were correlated with scores for browning intensity, bitterness and off-odour, which were especially high in the control sample.
As for fried potato samples, browning intensity, off-odour, oiliness, sensorial firmness, potato taste, sweetness, saltiness, bitterness and off-taste were considered and the first two PCs described 81.03 % of the total data variance (Fig. 4). A very strong/strong correlation was present between browning intensity (r=0.918), off-odour (r=0.918), bitterness (r=0.962), off-taste (r=0.969), oiliness (r=0.783) and PC1, while PC2 correlated strongly/very strongly with sensorial firmness (r= -0.838), sweetness (r=0.895), saltiness (r=0.901) and potato taste (r=0.654). The grouping of the fried potato samples in terms of UV-C treatment is rather poor, where only 10-UV-C samples, which received the highest scores for sweetness, saltiness and potato taste, were slightly distanced from the rest of the samples, especially from the samples fried at the beginning of the storage (1st and 8th day). Again, control sample from the 23rd day of storage was separated from the rest of the samples with the highest scores for undesirable sensory attributes, i.e. browning, oiliness, off-taste, bitterness and off-odour.
CONCLUSIONS

UV-C technology is promising and has potential practical application in the fresh-cut industry, especially since it has already been approved for use in the food industry, specifically for liquid systems and surface disinfection. Furthermore, it is considered environmentally friendly, with low costs of energy, equipment and maintenance.
The results of this study could contribute to UV-C application in fresh-cut industry since UV-C treatment in combination with sodium ascorbate and vacuum packaging showed high efficiency in the reduction of microbial count in raw fresh-cut potato cv. Birgit during storage at (6±1) °C and in extension of its shelf life. UV-C treatments for 5 and 10 min were particularly effective. Generally, good quality and sensory attributes of fresh-cut potato were retained for up to 15 days of storage. The treatment also contributed to the reduction of browning and affected the odour of raw fresh-cut potatoes positively, and acceptable firmness was retained as well. Furthermore, UV-C-treated fresh-cut potatoes after boiling and frying were also sensorially desirable as they were characterized with more pronounced characteristic potato odour and taste than untreated samples.
On a potential large-scale production of fresh-cut potatoes UV-C treatment could present relatively short additional operation for ensuring safety and extended shelf life. Namely, it could be the final operation after potato slicing, treatment by antibrowning agents (e.g. sodium ascorbate solution) and after vacuum packaging. However, further investigation is needed in order to determine all parameters necessary to confirm the use of UV-C technology on a real scale in the fresh-cut potato industry.
FUNDING
This work was funded by the Croatian Science Foundation (Project title 'Innovative techniques in potato (Solanum tuberosum) minimal processing and its safety after preparation'; grant number IP-2016-06-5343).
Single-molecule optical microscopy of protein dynamics and computational analysis of images to determine cell structure development in differentiating Bacillus subtilis
Introduction
Spore formation in B. subtilis offers a model system for studying development, differentiation, morphogenesis, gene expression and intercellular signalling in complex organisms [1,2]. In nutrient rich conditions, rod-shaped cells grow and multiply by symmetric midcell division to generate identical daughters (Fig. 1A). However, when starved, B. subtilis ceases growth and is able to embark on a pathway of differentiation to form a dormant cell called a spore. Spore formation begins with an asymmetric division producing a smaller forespore cell next to a larger mother cell. Each compartment inherits an identical chromosome, but the patterns of gene expression, orchestrated by compartment-specific RNA polymerase sigma factors, differ resulting in alternative cell fates. The mother cell engulfs the forespore in a phagocytosis-like process creating a cell-within-a-cell (Fig. 1A), and a nurturing environment in which a robust multi-layered coat is assembled around the maturing spore [3]. In the final stages, the mother cell undergoes programmed cell death releasing the spore, which is resistant to multiple environmental stresses and can lie dormant until favourable growth conditions are restored.
At sporulation onset, ring-like structures of the tubulin homologue FtsZ form at mid-cell and migrate on diverging spiral trajectories towards the cell poles [4], colocalizing with the membrane integrated phosphatase PP2C SpoIIE [5]. One polar ring matures into the sporulation septum while the other disassembles [6]. Asymmetric division otherwise involves the same proteins as vegetative cell division, though the resulting sporulation septum is thinner [7,8]. SpoIIE is the only sporulation-specific protein whose mutation causes ultrastructural changes in the asymmetric septum; null mutants of spoIIE are defective in sporulation and at lower frequency give rise to thicker asymmetric septa resembling the vegetative septum [7].
Changes in cell morphology during sporulation are coupled to a programme of gene expression, involving intercellular signalling, and the sequential activation of RNA polymerase sigma factors, σF and σG in the forespore and σE and σK in the mother cell [9].

[Fig. 1 legend fragments recovered from extraction residue: (A) sporulation in B. subtilis, with asymmetric division producing a smaller forespore and a larger mother cell, sequential activation of compartment- and stage-specific sigma factors, engulfment of the forespore and release of the resistant spore on mother-cell lysis; (B) SpoIIE is the most upstream-acting of three proteins regulating the first compartment-specific sigma factor, σF, with dephosphorylated SpoIIAA (AA) displacing σF from its complex with the anti-sigma factor SpoIIAB (AB) to enable forespore-specific gene expression; (G) categorization of stages from Slimfield, showing detected forespore/septum features, cell boundary segmentation based on fitting a sausage shape to the fluorescence image [33,44], and the interfaces between forespore, septum and mother cell; proportions were statistically identical between epifluorescence and Slimfield data at p = 0.05 (p = 0.40, 0.08, 0.29, 0.72, 0.72, 0.49 for each stage/indeterminate, respectively).]

Forespore-specific activation of σF on completion of the asymmetric septum is the defining step in differentiation. In pre-divisional and mother cells, σF resides in a complex with the anti-sigma factor SpoIIAB while a third protein, SpoIIAA, is phosphorylated. After septation, SpoIIAA~P is dephosphorylated by the manganese-dependent protein phosphatase SpoIIE. The resulting SpoIIAA displaces σF from the σF:SpoIIAB complex, allowing RNA polymerase binding and transcription of forespore-specific genes (Fig. 1B) [10,11]. This in turn triggers activation of σE in the mother cell and establishes alternate programmes of gene expression which determine different cell fates (Fig. 1A). SpoIIE has multiple roles at different sporulation stages (Fig. 1A, B). Assembly of SpoIIE to form polar rings ("E-rings"), dependent on interaction with FtsZ [12], occurs during stage I, defined by the formation of an axial filament spanning the cell length and comprising two copies of the chromosome, each tethered through its origin region to opposing cell poles. Formation of the asymmetric septum is defined as stage IIi, during which SpoIIE interacts with the divisome components RodZ [13] and DivIVA [14]. After closure of the sporulation septum, the FtsZ ring disassembles.
SpoIIE-mediated activation of σF correlates with release of SpoIIE from the sporulation septum, marking stage IIii [13]. During stage IIiii, SpoIIE interacts with SpoIIQ [15], the forespore component of an intercellular channel [16-18], crucial for later activation of σG.
Stage III is characterized by mother cell engulfment of the forespore; SpoIIE localizes around the forespore, but there are no data to suggest a specific role of SpoIIE in this or later stages [15].
An increased concentration of SpoIIE in the forespore relative to that in the mother cell has been proposed to account for the selective activation of σF in the emerging forespore. This may occur through equipartitioning of SpoIIE into the mother cell and forespore septal membranes, leading to a higher SpoIIE effective concentration in the forespore as a result of its ~6-fold smaller volume [19]. It has also been shown that there is selective proteolysis of SpoIIE in the mother cell through the action of the membrane-bound ATP-dependent protease, FtsH [20]. Here, it is proposed that selective oligomerization at the forespore pole protects SpoIIE from proteolysis in this compartment and further increases the concentration difference between the cell compartments.
To explore the complex function of SpoIIE further, we sought to determine its dynamic molecular architecture in differentiating cells. We employ a rapid single-molecule optical proteomics technique [65], Slimfield imaging [21-23], capable of tracking single fluorescently-labelled SpoIIE molecules with millisecond sampling in live B. subtilis cells to super-resolved spatial precision. By using step-wise photobleaching of the fluorescent protein tags [24] we determine the stoichiometry of each tracked SpoIIE complex and quantify the precise number of SpoIIE molecules in the mother cell and forespore in each individual cell. Also, by analysing the mobility of SpoIIE foci via their mean square displacement with respect to time, we calculate the microscopic diffusion coefficient D, model this to determine the effective diameter of SpoIIE complexes and correlate these data with measured SpoIIE content. Importantly, our copy number estimates indicate that there are similar numbers of SpoIIE molecules in both the mother cell and the forespore compartments when the asymmetric septum forms: since the volume of the forespore is significantly smaller than that of the mother cell, this finding reveals an order of magnitude higher SpoIIE concentration in the forespore, correlated with the increased activity of σF. We find that the stoichiometry and diffusion of tracked SpoIIE are dependent on its interaction partners and morphological changes, suggesting its roles in sporulation are influenced by oligomeric composition and mobility. Interestingly, we detect higher-order mobile, oligomeric SpoIIE towards the cell pole at the stage of sporulation when σF becomes selectively activated in the forespore, as previously proposed [20].
B. subtilis liquid cultures were grown in DSM [25] supplemented with chloramphenicol (5 µg ml⁻¹), erythromycin (1 µg ml⁻¹) and lincomycin (25 µg ml⁻¹) as required. Samples for microscopy on 1% agarose slides were taken 2 h after sporulation onset (from our measurements this would ensure that the majority of cells after the onset of sporulation would have reached the start of stage II). For membrane visualization, FM 4-64 (Molecular Probes) was used (0.2-1 µg ml⁻¹). When necessary, cells were concentrated by centrifugation (3 min, 2,300 × g) and resuspended in a small volume of supernatant. Images and analysis were obtained with an Olympus BX63 microscope (Hamamatsu Orca-R2 camera) and Olympus CellP or Olympus Image-Pro Plus 6.0 software. Imaging was performed at room temperature.
As the N-terminal, cytosolic tail of SpoIIE (residues 11 to 37) is responsible for its proteolysis by FtsH [20], it is not possible to determine by western blot whether the protein was degraded because of the fluorescent tag (Fig. S1C). It is also impossible in western analysis to select only early-stage sporulating cells corresponding to our microscopy data. However, since the fusion protein construct SpoIIE-mYPet localizes to the membrane and the cells sporulate at the level of the wild-type cells, we believe the fusion protein is expressed, is functional and is degraded like the untagged version. Also, we did not detect any cytoplasmic fluorescence consistent with cleaved fluorescent protein alone (background fluorescence was consistent with out-of-plane foci; see later section). Epifluorescence images showed integration into the membrane (Fig. S1A) and simulated images of membrane-integrated SpoIIE were qualitatively the same as our Slimfield images (Fig. S1B).
Single-molecule optical proteomics
A dual-color bespoke single-molecule microscope was used as described previously [23,27], which utilized narrow epifluorescence excitation [67,76] of 10 µm full width at half maximum in the sample plane from a 514 nm, 20 mW laser (Obis LS, Coherent). The laser was propagated through a ~3× Keplerian beam de-expander. Illumination was directed onto an xyz nanostage (Mad City Labs, Dane County, Wisconsin, USA), and emissions were directed through a color splitter utilizing a dichroic mirror centered on 560 nm wavelength and emission 25 nm bandwidth filters centered at 542/594 nm (Chroma Technology Corp., Rockingham, Vermont, USA) onto an Andor iXon 128 emCCD camera, 80 nm/pixel. Brightfield imaging was performed with no gain (100 ms/frame), single-molecule imaging at maximum gain (5 ms/frame).
Foci were automatically detected using MATLAB (Mathworks) software enabling a spatial localization precision of 40 nm using iterative Gaussian masking, and automated D and stoichiometry calculation [49,77]. The copy number in the mother cell or forespore was determined by summing pixel intensities within the compartment, correcting for low background autofluorescence measured from FM4-64 labeled wild type B. subtilis, then dividing by the characteristic SpoIIE-mYPet intensity [27]. The intensity of each focus was defined as the summed intensity inside a 5 pixel radius circle corrected for the local background, defined as the mean intensity in a 17 pixel square outside the circle [24]. If the signal-to-noise ratio of a focus, defined as the mean intensity divided by the standard deviation of the local background, was >0.4, it was linked into an existing track if within 5 pixels [31,66], approximately matching the diffraction-limited point spread function width. Only tracks with 4 or more points were analyzed, a commonly used criterion by us and others in the single-particle tracking field [28,29]. The characteristic SpoIIE-mYPet intensity was calculated from foci intensities found towards the end of the photobleach, confirmed to be single molecules from detection of single step-wise photobleach events in individual over-tracked (i.e. tracked beyond photobleaching), Chung-Kennedy [30,68,69] filtered (an edge-preserving smoothing algorithm) SpoIIE-mYPet tracks (Fig. S2). The stoichiometry of tracked foci was determined by fitting the first 4 intensity values of each track with an exponential, I(t) = I0·exp(−t/tb), where I(t) = intensity, I0 = initial intensity, t = time since the laser illuminated the cell, and tb = bleach time (determined by an exponential fit to all population foci intensities to be ~100 ms). I0 was divided by the mYPet characteristic intensity to give the stoichiometry. Although sub-optimal for low stoichiometry foci, e.g. <6 molecules per focus, this exponential method is effective over a broad range of stoichiometries [31]. The 2D mean square displacement (MSD) was calculated from the fitted foci centroid (x(t), y(t)), assuming a track of N consecutive frames and a time interval τ = nΔt, where n is a positive integer and Δt the frame integration time [32]: MSD(τ) = (1/(N−n)) Σi [(x(iΔt+τ) − x(iΔt))² + (y(iΔt+τ) − y(iΔt))²] + 4σ², where the sum runs over i = 1 to N−n and the localization precision from tracking is given by σ, which we measure as 40 nm. D is estimated from a linear fit to the first three data points in the MSD vs. τ relation (i.e. 1 ≤ n ≤ 3) for each accepted track, with the fit constrained to pass through a point 4σ² on the vertical axis corresponding to τ = 0, allowing σ to vary in the range 20-60 nm in line with the experimental range.
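The two fits described above, an exponential fit of the first few intensity values to recover I0 and a constrained linear fit to the first MSD points to recover D, can be summarised in a short sketch. This is an illustrative Python reimplementation under the stated assumptions (5 ms frame time, 40 nm localisation precision, a known bleach time and single-mYPet intensity), not the authors' MATLAB code.

```python
import numpy as np

FRAME_TIME = 0.005          # s, 5 ms per frame (from the text)
SIGMA_LOC = 0.040           # um, 40 nm localisation precision (from the text)

def stoichiometry(track_intensity, t_bleach, i_single):
    """Molecules per focus: fit I(t) = I0*exp(-t/t_bleach) to the first 4 intensity values."""
    t = np.arange(4) * FRAME_TIME
    i = np.asarray(track_intensity[:4], dtype=float)
    # linearised fit with t_bleach known: ln(I0) = mean of ln(I) + t/t_bleach
    i0 = np.exp(np.mean(np.log(i) + t / t_bleach))
    return i0 / i_single

def diffusion_coefficient(x, y):
    """D (um^2/s) from the first three MSD points, fitting MSD = 4*D*tau + 4*sigma^2.

    x, y -- focus centroid coordinates in um, one value per frame (track length >= 4).
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    taus, msds = [], []
    for n in (1, 2, 3):
        dx, dy = x[n:] - x[:-n], y[n:] - y[:-n]
        taus.append(n * FRAME_TIME)
        msds.append(np.mean(dx**2 + dy**2))
    taus, msds = np.array(taus), np.array(msds)
    # least-squares slope with the intercept fixed at 4*sigma^2 (tau = 0)
    slope = np.sum(taus * (msds - 4 * SIGMA_LOC**2)) / np.sum(taus**2)
    return slope / 4.0
```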
FRAP
FRAP was carried out on a Zeiss LSM 510 Meta confocal system with Axiovert inverted microscope, fitted with Plan Apochromat 100x /1.4NA oil objective and temperature-controlled stage. A 488 nm wavelength laser excited GFP, emissions collected via a 498-564 nm bandpass filter. The strength of photobleaching in the region of interest was set to 10-20 iterations of 100 ms each to ensure maximal photobleaching of GFP inside and minimum photobleaching beyond.
Categorization of cell cycle stage
To determine the cell cycle stage during the sporulation process, the following algorithm was used:
1. Cell images were initially coarsely over-segmented by thresholding the brightfield image and then using an initial ellipse shape approximation to define the cell length [27]. We then manually optimised the cell width of a sausage function (a rectangle capped with two hemicircles) that enclosed the mYPet fluorescence intensity in each cell above the level of background noise.
2. Cells were then cropped out of the original image using a bounding rectangle around the segmentation and automatically rotated parallel to the horizontal axis.
3. A more precise segmentation stage then followed, consisting of a double-threshold Otsu's method applied to a 5-frame average of the mYPet fluorescence image. Pixels whose intensity values were above the 2nd threshold, multiplied by the segmentation, contain the spore feature, either the whole forespore or septa.
4. These pixel areas were split into distinct connected components, or candidate spore features, and their centroids and areas calculated automatically using standard MATLAB functions.
5. A region was accepted as the mYPet spore feature mask if: (5.1) its centroid was within 40% of either end of the cell; (5.2) its centroid was within ±40% of the middle of the cell width; (5.3) its area was >10 pixels (there was no upper threshold); and (5.4) it had the highest summed pixel intensity of all the regions.
6. If nothing was accepted, steps 5.1-5.4 were repeated once with the previously found regions excluded.
7. If nothing was still found, the cell was classed as 'pre-sporulation/stage I'.
8. The FM4-64 frame average was similarly segmented, but the mask was multiplied by the forespore mask to give the FM spore feature.
9. The major/minor axis, area and orientation of both FM and mYPet spore features were calculated by fitting the shape to an ellipse function.
10. Both were then assigned into two shape categories based on the aspect ratio: >1.2, 'septa'; otherwise, 'filled'. These correspond to fluorescence only at the linearly extended septum or distributed about the forespore in a rounder shape.
11. If the FM segmentation was 'septa', the segmentation was morphologically 'thinned' and its linear curvature calculated.
12. Stages were then assigned as follows (a minimal code sketch of these decision rules is given below): stage I/pre-sporulation, no mYPet spore feature detected; stage IIi, 'septa' FM and mYPet spore features with curvature < 1; stage IIii, 'septa' FM and 'filled' mYPet spore features; stage IIiii, 'septa' FM and mYPet spore features with curvature > 1; stage III, 'filled' FM and mYPet spore features.

To confirm the spore categorization algorithm we tested it on a series of simulated images (Fig. S1C). These were generated by integrating a model point spread function (PSF) over a 3D model for the cell and forespore shape and subsequently noising the image with Poisson noise based on real noise characteristics of our microscope [33]. The cell membrane was modelled as a hollow cylinder, capped with hemispherical shells at either end with 1-pixel-thick walls. Stage IIi septa were modelled as cell-width disks while stage IIiii septa were modelled as hemispherical shells. Released SpoIIE in stage IIii was modelled as a hemispherical shell capped by a disk, while in stage III it was modelled as a spherical shell. The relevant features for 100 cells in each stage were simulated in the 'mYPet' and 'FM4-64' channels and run through the categorization algorithm as if they were real data with no noise, average noise and the most extreme noise observed in the data. Without noise, 100% of cells were correctly identified, dropping at worst to 79% in stage IIii with average noise and, in the extreme case, as low as 42%.
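The stage-assignment rules in step 12 reduce to a small decision function. The sketch below is an illustrative Python rendering of those rules; the aspect-ratio threshold and curvature cut-offs are the values quoted above, while the function and argument names are invented.

```python
def classify_shape(aspect_ratio, threshold=1.2):
    """Step 10: an aspect ratio above the threshold means a linear 'septa' feature."""
    return "septa" if aspect_ratio > threshold else "filled"

def assign_stage(mypet_feature_found, fm_shape, mypet_shape, fm_curvature):
    """Step 12: map the detected FM4-64 and SpoIIE-mYPet spore features to a sporulation stage.

    mypet_feature_found -- False if no mYPet spore feature was detected
    fm_shape, mypet_shape -- 'septa' or 'filled' (from classify_shape)
    fm_curvature -- linear curvature of the thinned FM septum mask (step 11)
    """
    if not mypet_feature_found:
        return "stage I / pre-sporulation"
    if fm_shape == "septa" and mypet_shape == "septa":
        return "stage IIi" if fm_curvature < 1 else "stage IIiii"
    if fm_shape == "septa" and mypet_shape == "filled":
        return "stage IIii"
    if fm_shape == "filled" and mypet_shape == "filled":
        return "stage III"
    return "indeterminate"

# Hypothetical example: straight FM4-64 septum with SpoIIE-mYPet still localized to the septum
print(assign_stage(True, "septa", "septa", fm_curvature=0.4))   # -> stage IIi
```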
We attempted further confirmation using Principal Component Analysis (PCA), an approach typically used to identify specific conformations or orientations in cryo-electron microscopy data. Data, images in this case, can be broken down into a basis set of eigenvectors or eigenimages which when summed in proportion to their eigenvalues, recreate the original dataset. Its use in live cell fluorescence data is challenging due to the high heterogeneity in size, shape and intensity of the images. Thus spore images were all cropped to 16x16 pixels, rotated and aligned and their intensity normalised (Fig. S1H) before a basis set of eigenvectors were calculated by Hotelling's deflation [34]. The distribution of eigenvalues was strongly biased towards the 1st eigenvector (Fig. S1I) however 3D scatter plots of the first 3 eigenvalues did show separation of the data, further confirming our categorization algorithm but not allowing us to categorise spores based on PCA alone.
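The eigenimage decomposition described above can be illustrated with a compact sketch; it uses a standard eigendecomposition of the pixel covariance matrix rather than the Hotelling's deflation applied in the paper, and the array shapes, variable names and synthetic input are assumptions for illustration.

```python
import numpy as np

def eigenimages(crops, n_components=3):
    """PCA on a stack of aligned spore crops.

    crops -- array of shape (n_images, 16, 16), already rotated, aligned and
             intensity-normalised as described in the text.
    Returns (eigenvalues, eigenimages, per-image scores) for the leading components.
    """
    n, h, w = crops.shape
    X = crops.reshape(n, h * w).astype(float)
    X -= X.mean(axis=0)                      # centre each pixel across the dataset
    cov = (X.T @ X) / (n - 1)                # pixel-pixel covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    eigvals = vals[order]
    eigimgs = vecs[:, order].T.reshape(n_components, h, w)
    scores = X @ vecs[:, order]              # coordinates for 3D scatter plots of the first PCs
    return eigvals, eigimgs, scores

# Usage with synthetic data standing in for the cropped spore images
rng = np.random.default_rng(0)
fake_crops = rng.random((200, 16, 16))
vals, imgs, scores = eigenimages(fake_crops)
print(vals, scores.shape)                    # leading eigenvalues, (200, 3) scores
```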
Determining the contribution from out-of-focus SpoIIE-mYPet foci
To quantify the contribution from out-of-focus SpoIIE-mYPet foci (i.e. those not detected during tracking) to the membrane 'pool' (i.e. spatially extended membranous regions of fluorescence intensity not detected as distinct foci), we assumed that the number and stoichiometry of detected foci from within the depth of field were the same as those outside it and were uniformly distributed. Assuming a depth of field of ~350 nm, on the basis of expectations from the numerical aperture of the objective lens and peak emission wavelength, a mean cell width of ~0.9 µm [61] and that the focal plane is exactly on the cell midplane, we estimate that ~1/4 of the cell membrane lies in the depth of field of the microscope. Thus, to generate indicative estimates for copy number values per cell, we extrapolated the total number of summed SpoIIE-mYPet in foci by a factor of 4×. For the stage II mother cell (Table S2), the mean total number of molecules in foci per cell is ~32 (mean foci stoichiometry multiplied by the mean number of foci per cell), which multiplied by 4 agrees with the mean copy number of 82 ± 42 to within experimental error. Using the same method on other stages either agrees or over- or underestimates, implying that there is no measurable 'pool' of SpoIIE, i.e. all of the SpoIIE-mYPet fluorescence can be accounted for by foci.
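The extrapolation above is simple arithmetic; a minimal sketch using the stage II mother-cell numbers quoted in the text is given below.

```python
# Depth of field ~350 nm on a ~0.9 um wide cell imaged at its mid-plane:
# roughly 1/4 of the membrane lies in focus, so foci-based totals are scaled up by 4.
in_focus_fraction = 0.25
molecules_in_detected_foci = 32      # mean foci stoichiometry x mean foci per cell (stage II mother cell)

extrapolated_copy_number = molecules_in_detected_foci / in_focus_fraction
print(extrapolated_copy_number)      # 128.0, compared with the measured copy number of 82 +/- 42
```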
Simulating the effects of different oligomeric states for SpoIIE on the predicted stoichiometry distribution from Slimfield analysis
To simulate the effects of different oligomeric states of SpoIIE-mYPet on the observed stoichiometry distribution from Slimfield image data, we calculated the probability of foci overlap [35] in each individual cell using the number of detected foci and the area of the spore feature in that particular cell. This probability was used to generate the distribution of overlaps using a Poisson distribution, based on a stage-specific frequency of overlap, λ. The predicted apparent stoichiometry distribution was then generated by convolving the overlap distribution with the intensity distribution of model stoichiometry, S (i.e. S = 2, dimers, S = 4, tetramers etc.). This intensity distribution was generated from the mYPet characteristic intensity distribution (Fig. S2C), re-centred on 2S, with width scaled to S^(1/2)·σ, where σ = 0.675 is the sigma width of the single-molecule mYPet intensity distribution. This model is a summation of multiple Gaussian distributions which are separated by a fixed number of molecules (for example 4 molecules in the case of the tetramer model), whose amplitude scales with a Poisson distribution, as expected from the nearest neighbour model. Here k is the number of overlapping foci; we sum up to a maximum of k = 5 overlapping foci since this ensured in all cases that the expectation value of foci occurrence at higher values of k was <1 (i.e. less than 1 focus expected). Finally, each of these modelled cell stoichiometry distributions was averaged over the sporulation stage population to generate the model distribution and convolved with the same 0.7 molecule width kernel as the kernel density estimates (KDEs) [70] in the real (i.e. experimental) data. The Pearson's chi-squared statistic χ² was calculated as χ² = Σi (Oi − Ci)²/Ci, where the observed value Oi was taken from the experimental KDE (scaled on the probability density axis such that the total area underneath the KDE sums exactly to 1) at single-molecule bin intervals up to a total of typically n = 30 bins, i.e. the stoichiometry range tested from the full distribution is 0-30 molecules, assuming the data contained at least one recorded focus in any respective bin (if not it was discarded in the chi-squared summation). The calculated data value Ci was taken from the normalized model fit. The degrees of freedom were equal to the number of bins used in the χ² calculation minus the 4 free model parameters (overlap frequency λ, maximum number of overlaps k, intensity distribution width σ and model stoichiometry S). The value of the measured χ² was then used with the inbuilt inverse chi-squared MATLAB function chi2cdf.m at this equivalent number of degrees of freedom to calculate the equivalent p value, which corresponds to the null hypothesis that the measured variation between the observed values and the model fit is random. We found that the tetramer model was the only model to produce a goodness of fit corresponding to acceptable p values at approximately 0.05 or less in all stages (Fig. 3 and S3).
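A compact numerical sketch of this forward model is given below: overlaps of S-mer foci are drawn from a Poisson distribution, each overlap of k foci contributes a Gaussian centred on k·S molecules with width scaling as the square root of the number of molecules, and the result is compared with a measured KDE using Pearson's chi-squared. The parameter names and the exact centring and bin-handling conventions are simplifications of the description above, not the authors' MATLAB code.

```python
import numpy as np
from scipy.stats import poisson, chi2

def model_stoichiometry_pdf(bins, S, lam, sigma1=0.675, k_max=5):
    """Apparent stoichiometry distribution for S-mers with Poisson-distributed foci overlap.

    bins   -- stoichiometry values (molecules) at which to evaluate the model
    S      -- model oligomer size (2 = dimer, 4 = tetramer, ...)
    lam    -- stage-specific mean number of overlapping foci
    sigma1 -- sigma width of the single-mYPet intensity distribution (in molecules)
    """
    bins = np.asarray(bins, dtype=float)
    pdf = np.zeros_like(bins)
    for k in range(1, k_max + 1):                       # k overlapping S-mers
        centre = k * S
        width = np.sqrt(k * S) * sigma1
        gauss = np.exp(-0.5 * ((bins - centre) / width) ** 2) / (width * np.sqrt(2 * np.pi))
        pdf += poisson.pmf(k, lam) * gauss
    return pdf / np.trapz(pdf, bins)                    # normalise to unit area

def pearson_chi2_p(observed_kde, model_pdf, n_free_params=4):
    """Chi-squared goodness of fit between a measured KDE and the model, both unit-area."""
    observed_kde, model_pdf = np.asarray(observed_kde), np.asarray(model_pdf)
    mask = model_pdf > 0
    chi2_stat = np.sum((observed_kde[mask] - model_pdf[mask]) ** 2 / model_pdf[mask])
    dof = mask.sum() - n_free_params
    return chi2_stat, 1.0 - chi2.cdf(chi2_stat, dof)
```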
Modelling the frictional drag on SpoIIE foci
We modelled the frictional drag coefficient in the cell membrane of SpoIIE foci as that due to a cylinder whose height h matches the width of the phospholipid bilayer (~3 nm), with a radius given by parameter a, using a generalized method established previously to characterize the lateral diffusion of transmembrane proteins [36,37]. In brief, the diffusion coefficient D is estimated from the Stokes-Einstein relation D = kBT/γ, where kB is the Boltzmann constant and T the absolute temperature, and the lateral viscous drag γ depends on η1 and η2, the dynamic viscosity values either side of the membrane, which we assume here are approximately the same as ηc, the cytoplasmic viscosity. The drag also depends on a function C of ε = 2aηc/(hηm), where ηm is the dynamic viscosity in the membrane itself. Since ηm is typically 2-3 orders of magnitude larger than ηc [38], ε is sufficiently small to use a small-ε approximation for C [36,37]. We used these formulations to generate a look-up table between D and a for the vegetative cell membrane in the mother cell, assuming ηm ≈ 600 cP, and the emerging forespore cell membrane, assuming ηm ≈ 1,000 cP, with ηc ≈ 1 cP throughout (Fig. 5C) [39]. We estimated a consensus value for D in the mother cell from the population of unweighted mean D values determined from all cell stages I-III (Table S2) of 1.05 ± 0.06 µm²/s (±SEM, number of stages n = 5). We similarly estimated a consensus D value for the low-mobility sporulation stages IIi and IIiii of 0.47 ± 0.04 µm²/s (number of stages n = 2) and a consensus D value for the high-mobility sporulation stages IIii and III of 0.76 ± 0.05 µm²/s (number of stages n = 2). We then extrapolated these consensus values and SEM error estimates using the vegetative and forespore cell membrane look-up tables to determine corresponding mean values and ±SEM ranges for a.
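The look-up between D and the cylinder radius a can be illustrated as below. Because the exact form of C(ε) used in [36,37] is not reproduced in the extracted text, this sketch falls back on the classical Saffman-Delbrück small-ε approximation, D ≈ kBT/(4π·ηm·h)·[ln(2/ε) − γE]; this is an assumption for illustration rather than necessarily the authors' exact expression.

```python
import numpy as np

KB = 1.380649e-23       # J/K, Boltzmann constant
T = 298.0               # K, assumed room temperature
H = 3e-9                # m, bilayer thickness (~3 nm, from the text)
EULER_GAMMA = 0.5772

def diffusion_coefficient(a, eta_m, eta_c=1e-3):
    """Saffman-Delbrueck-style estimate of D (m^2/s) for a membrane cylinder of radius a (m).

    eta_m -- membrane dynamic viscosity in Pa*s (600 cP = 0.6 Pa*s, 1000 cP = 1.0 Pa*s)
    eta_c -- viscosity of the fluid either side of the membrane (~1 cP = 1e-3 Pa*s)
    """
    eps = 2 * a * eta_c / (H * eta_m)                 # small parameter defined in the text
    return KB * T / (4 * np.pi * eta_m * H) * (np.log(2.0 / eps) - EULER_GAMMA)

# Build look-up tables of D versus a and invert them for a measured consensus D
radii = np.linspace(1e-9, 50e-9, 500)                 # 1-50 nm cylinder radii
D_mother = diffusion_coefficient(radii, eta_m=0.6)    # vegetative membrane, ~600 cP
D_forespore = diffusion_coefficient(radii, eta_m=1.0) # forespore membrane, ~1000 cP

def radius_for_D(D_target, D_table, radii=radii):
    """Pick the radius whose predicted D is closest to the measured value (m^2/s)."""
    return radii[np.argmin(np.abs(D_table - D_target))]

print(radius_for_D(1.05e-12, D_mother) * 1e9, "nm")   # consensus mother-cell D of 1.05 um^2/s
```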
Stoichiometry vs. localization
To compare foci stoichiometry as a function of location in the forespore, a simplified, normalised 1D coordinate was used. This was based on the generous forespore segmentation which extends from the mother cell side of the septa through to the outer edge of the cell pole. There was also significant variation in the size of this segmentation between cells. Thus a normalised coordinate was used, 0-1 from the two most extreme points of the forespore. This implied that on average both the septa and cell poles lie within the most extreme points of the predicted cell outline segmentation.
Statistics and goodness of fit
Where means are presented and compared, Student's t-tests were run and p-values are presented. For data-driven models, such as the stoichiometry modelling, χ² and p values are presented.
For physical models, such as the FRAP and Stokes fitting, the 95% confidence intervals on the fit parameters are presented as the goodness of fit.
Sporulation stage can be categorized using an accurate, high-throughput automated algorithm
We generated a chromosomally encoded fusion of SpoIIE to the monomeric yellow fluorescent protein mYPet (a bright fluorescent protein with a very short maturation time, <10 min [41], compared to the >2 h sporulation time, and whose long excitation wavelength results in minimal contamination from cellular autofluorescence [42]) to report on SpoIIE localization (Table S1). We prepared cells for sporulation using nutrient-depleted media, incubating with the red lipophilic dye FM4-64 to visualize B. subtilis membrane structures [43]. This allowed us to observe steady-state patterns of SpoIIE-mYPet and FM4-64 localization for sporulation stages I, II (with associated sub-stages) and III with single-molecule detection sensitivity [71] via Slimfield (Fig. 1C), as well as by standard epifluorescence microscopy (Fig. 1D,E and S1). We developed an automated high-throughput analysis framework using morphological transformations [33,44] on the SpoIIE-mYPet and FM4-64 data, enabling us to categorize each cell into one of five different sporulation stages (I/pre-divisional; forespore formation stages II i, II ii and II iii; and III after engulfment), validated by simulation and principal component analysis (Fig. 1F,G, Fig. S1). Our algorithm segments the SpoIIE and FM4-64 images to identify septa and forespore features and categorises them into appropriate stages, but does not distinguish between E-ring structures in stage I [45,46] and SpoIIE localization in the septum in stage II i. The measured proportions of cells in each stage (Fig. 1G) were qualitatively similar to those reported using manual, low-throughput methods [47]. Imaging a SpoIIE-mYPet strain carrying a ΔspoIIQ deletion (Table S1), defective in spore formation and unable to progress beyond stage II ii, yielded similar relative proportions of cells in stages I, II i and II ii (Fig. S1). Although imperfect, resulting in some mis-characterisation (Fig. S1), our software is objective, enables the study of cells which are not easily categorised by eye, and avoids biasing our study towards previously accepted morphological features of sporulation.
SpoIIE is concentrated in the forespore, probably through equipartitioning
Slimfield images revealed distinct foci, as well as a more diffuse pool of fluorescent SpoIIE localized close to the cell membrane, as expected (Movies S1 to S3). Slimfield employs a high numerical aperture objective lens with a high depth of field, so a significant amount of fluorescence, and even foci, was detected in the middle of the cell. To check that this signal really originated from membrane-bound SpoIIE, we simulated images of membrane-bound fluorophores in model Bacillus-shaped cells and found similar patterns of localization (Fig. S1B). We used bespoke single particle localization [48] on the Slimfield data to track foci whose width was consistent with the measured point spread function (PSF) of our microscope, ~250 nm. Foci could be tracked over consecutive images for up to ~0.3 s using rapid 5 ms per frame sampling, to a spatial precision of 40 nm [27]. Tracking of distinct foci was coupled to molecular stoichiometry analysis by estimating the initial focus brightness and dividing this by the brightness of a single mYPet molecule [22,24,31,49,50] (Fig. S2A-C). We also observed a more diffuse pool of mYPet fluorescence, not detected as foci and not caused by cell autofluorescence, which was negligible. Slimfield images were taken at the approximate cell mid-body, so foci at the top or bottom of the cell membrane are outside the depth of field, generating the more diffuse fluorescence observed. By using integrated pixel intensities [27], we determined the total SpoIIE copy number for each cell. Utilizing our stage categorization algorithm, we assigned each cell to one of the sporulation stages I-III, and also sub-divided each into three sub-regions: a septum contributed by both mother cell and forespore, a mother cell region excluding the septum, and a forespore region excluding the septum. We then quantified the number of SpoIIE molecules specifically associated with each of these sub-regions for each cell imaged (Fig. 2A, S2D).
These analyses (Table S2) show that the total SpoIIE copy number starts at a few tens of molecules per cell in stage I, increasing as sporulation progresses to ~200 SpoIIE in stage II i, then rising to 700-800 molecules per cell in stages II ii and II iii, before dropping down to ~580 molecules per cell in stage III after spore engulfment. The mother cell sub-region excluding the septum reflects this trend, increasing SpoIIE copy number from 20 to 80 molecules between stages I-II i, peaking at ~300 molecules in stages II ii-II iii, then tailing off to ~190 molecules in stage III. Copy number in the septum and forespore also increases throughout sporulation, starting at ~60 copies of SpoIIE at stage II i and increasing to ~400 copies by stage III.
A key question is whether the SpoIIE concentration is higher in the forespore than the mother cell, providing an explanation for cell-specific σF activation [19]. Our results support this hypothesis, although they are complicated by ambiguity in septal fluorescence, which has potential contributions from both the mother cell and the forespore, since the standard optical resolution limit is greater than the pixel-level precision of image segmentation algorithms. Even excluding the septal region, the forespore concentration of SpoIIE is an order of magnitude higher than that in the mother cell (Fig. 2B and C). This increased concentration would arise from equipartitioning of SpoIIE between the mother cell and forespore combined with the ~6 times smaller volume of the forespore. We use volume as the simplest model here, but similar results are obtained using the ~3 times smaller surface area to also account for SpoIIE being membrane bound. Intriguingly, arbitrarily attributing the septal fluorescence equally to the mother cell and forespore, the simplest model considering the ambiguity over which side it is on, results in approximately equal copy numbers in the two. It has been shown with an in vitro reconstituted system that a ~10-fold increase in the phosphatase activity of SpoIIE towards SpoIIAA~P is sufficient to release 90% of σF from its inhibitory complex [51]. However, this imbalance in SpoIIE concentration cannot be immediately decisive in vivo, as σF activation is delayed until stage II ii. This suggests that following septation either SpoIIE is not immediately active as a phosphatase, or that following its dephosphorylation by SpoIIE, SpoIIAA is delayed in its capacity to displace σF from its inhibitory complex with the anti-sigma factor.
SpoIIE is a tetramer whose quaternary organization depends on spatial and temporal localization
Next, we sought to characterize the molecular architecture of functional SpoIIE by measuring the stoichiometry of fluorescent foci. In the mother cell, the apparent stoichiometry of tracked foci ranged from as few as two up to several tens of molecules, but with a clear peak at 4 ± 2 SpoIIE molecules, conserved throughout stages I-III (Fig. 3). Using a randomized Poisson model for nearest-neighbour foci distances, whose key parameters comprise SpoIIE copy number and foci density, we calculated the probability of foci being separated by less than the optical resolution limit (and thus detected as single foci of higher apparent stoichiometry) to be 20-40% in the mother cell. Overlap models which used the raw SpoIIE-mYPet intensity distribution (Fig. S2C) in monomers, dimers, hexamers or octamers do not account for the observed stoichiometry distribution (Fig. S3). By contrast, we find that a tetramer overlap model generates reasonable agreement within experimental error for stages I-III in the mother cell (Fig. 3, dashed lines), with a corresponding mean confidence value of p = 0.05. Thus, we believe the most likely model among those trialled is that SpoIIE in the mother cell comprises predominantly tetramers.
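The nearest-neighbour overlap probability itself can be approximated numerically. The short Monte Carlo sketch below estimates the chance that at least two foci, dropped uniformly over a cell section, sit closer than the optical resolution limit and are therefore detected as one focus of higher apparent stoichiometry; the foci number, section area and resolution here are illustrative stand-ins for the paper's randomized Poisson calculation, not its exact parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def overlap_probability(n_foci, area_um2, res_um=0.25, trials=10000):
    """Monte Carlo estimate of P(at least one pair of n_foci points,
    uniform in a square of the given area, lies within res_um)."""
    side = np.sqrt(area_um2)
    hits = 0
    for _ in range(trials):
        pts = rng.uniform(0.0, side, size=(n_foci, 2))
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)   # ignore self-distances
        hits += d.min() < res_um
    return hits / trials

print(overlap_probability(n_foci=8, area_um2=4.0))  # illustrative cell section
```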
For SpoIIE foci in the forespore or the septum, we find the same tetramer peak in the measured stoichiometry distribution but with a longer tail of higher stoichiometry clusters extending up to hundreds of SpoIIE molecules per focus (Fig. 4A, B). We adapted the overlap model to account for the different sizes and shapes of sporulation features at each stage, resulting in differences in the density of SpoIIE foci (Fig. 4B). The overlapping tetramer model accounts only for low apparent stoichiometries near the tetramer peak, and only in stages II iii and III. More generally, accounting for the apparent stoichiometry in the forespore requires populations of higher order oligomeric SpoIIE clusters in the model fit, in addition to tetramers. Excluding free tetrameric foci, we observe 1-3 clusters per cell (Fig. 4D), with the mean cluster stoichiometry peaking in stage II ii at >100 molecules per focus (Fig. 4C) before decreasing as the proportion of free tetramers increases again in stage III (Fig. S5A, B). We find that for foci present in the forespore the measured stoichiometry in all stages was periodic, with a characteristic interval spacing of ~4 molecules (Fig. S4), suggesting that higher order clusters are composed of associating SpoIIE tetramers.
Aspects of these in vivo observations are consistent with previous in vitro experiments. Analytical ultracentrifugation experiments using a soluble fragment of SpoIIE, in which the N-terminal 319 residues, which include the 10 putative transmembrane segments, were truncated, suggested that SpoIIE(319-872) formed hexamers and larger assemblies composed of multiples of hexamers [20]. A more recent study of a similarly truncated SpoIIE(325-872) fragment fused to maltose binding protein demonstrated reversible manganese-dependent oligomerisation, as evidenced by changes in sedimentation behaviour and the observation of extended structures (50 nm × 10 nm) using electron microscopy [52], although those authors did not speculate on the oligomeric state of the species involved. Fragments of SpoIIE are challenging to express and purify (see also Lucet et al., 2000 [12]) and their behaviour is sensitive to the size of the truncation. It is therefore not surprising that the full length protein present in the membranes of living cells assembles in a different manner. Whether SpoIIE forms oligomers in vivo in the absence of manganese would be an interesting topic for further study.
We also observed that the stoichiometry of foci in the forespore was influenced by their distance from the septum. We normalized the distance parallel to the long axis of each cell, from the mother cell side of the asymmetric septum through to the distal outer edge of the smaller forespore compartment, for all tracked foci and plotted this distance against foci stoichiometry (Fig. 4E and S5C). For stage II i, foci are localized to the septum, within ~300 nm; however, other stages contain foci which are delocalized over the full extent of the emerging forespore (Fig. 4E). We find that the mean SpoIIE stoichiometry for these foci increases from ~12 to 150 molecules per focus (a factor of ~12) for stage II ii. This observation supports the recently proposed mechanism for σF activation regulation [20] through clustering of SpoIIE in the direction of the pole at stage II ii.

SpoIIE foci mobility suggests that large multi-protein assemblies are present in stages II i and II iii

We sought to determine the composition and function of clusters by analyzing their mobility in live cells. We find that SpoIIE fluorescent foci mobility in general was consistent with Brownian (i.e. normal) diffusion over short timescales irrespective of cell compartment or stage (Fig. S6, S7). In the mother cell, the mean value of the microscopic diffusion coefficient D was 0.9-1.2 μm²/s, while that in the forespore was lower by a factor of ~2 (Fig. 5A, Fig. S7 and Table S2). At the onset of sporulation in stage II i, foci mobility in the forespore is at its lowest, with a mean D of 0.43 ± 0.08 μm²/s, which increases during stage II ii to 0.67 ± 0.19 μm²/s, then decreases in stage II iii to 0.50 ± 0.09 μm²/s before increasing again in stage III to 0.76 ± 0.05 μm²/s, although the change is only statistically significant in stage III. For stages I-III, D shows a dependence on stoichiometry S, indicating a trend of decreasing D with increasing SpoIIE content (Fig. 5B). Modelling this dependence as D ~ S^(−a) indicates a power-law exponent a of 0.48 ± 0.18, with no measurable difference within error between stages (Fig. S7).
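As a small illustration of how such a power-law exponent can be extracted, the sketch below fits D ~ S^(−a) by ordinary least squares in log-log space; the synthetic data are placeholders rather than measurements from this study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder (synthetic) per-focus measurements standing in for (S, D) data.
S = rng.integers(4, 200, size=100).astype(float)            # stoichiometries
D = 1.0 * S ** -0.48 * rng.lognormal(0.0, 0.2, size=100)    # um^2/s, noisy

# Fit log D = log D0 - a * log S.
slope, intercept = np.polyfit(np.log(S), np.log(D), 1)
a, D0 = -slope, np.exp(intercept)
print(f"a = {a:.2f}, D0 = {D0:.2f} um^2/s")
```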
Calculations of frictional drag on SpoIIE foci, using a consensus value for D from stages I-III for the mother cell, indicate an average Stokes radius (the radius of the equivalent cylinder in the membrane) in the range 3-8 nm (Fig. 5C, red dashed line). The N-terminal 330 residues of SpoIIE are predicted to form a membrane binding domain with 10 transmembrane α-helices [53]. A close packed circular arrangement of these helices, each with a diameter of 1.2 nm, would produce a SpoIIE tetramer comprising 40 transmembrane helices with a ~4 nm radius (40 helices occupy a total cross-section of roughly 40 × π × (0.6 nm)² ≈ 45 nm², equivalent to a disc of radius ≈ 4 nm once close packing is allowed for), consistent with our experimentally derived estimate. By contrast, a 'mean' ~50-mer SpoIIE cluster has a Stokes radius of ~13 nm. Thus the Stokes radius provides an estimate of the real size of the diffusing SpoIIE complex, including any other protein partners diffusing along with it.
For the forespore, the mean value of D for the higher SpoIIE mobility stages II ii and III indicates a range of Stokes radius consistent with clusters composed solely of SpoIIE tetramers (Fig. 5C, magenta dashed line). However, the low SpoIIE mobility stages II i and II iii indicate a Stokes radius approximately an order of magnitude higher, at ~40 nm (Fig. 5C, blue dashed line), far more than expected for a cluster of only 100 SpoIIE molecules. This observation supports a model in which SpoIIE interacts with other proteins or complexes, with these other unlabeled proteins here forming ~5x the SpoIIE foci surface area in the membrane, increasing the apparent Stokes radius. In stage II i these interactions would be with components of the divisome [12-14], while in stage II iii they would be with SpoIIQ, the forespore component of an intercellular channel formed with proteins encoded on the spoIIIA operon expressed in the mother cell [15]. In stage II ii we find that clusters of SpoIIE are likely not associated with a protein partner, as the Stokes radius is consistent only with the SpoIIE present. This finding is also consistent with σF activation regulation [20].
Forespore SpoIIE turnover depends on sporulation stage
Using confocal microscopy of a similar cell strain, but with monomeric GFP-labelled SpoIIE (i.e. SpoIIE-mGFP), we performed fluorescence recovery after photobleaching (FRAP) experiments to photobleach the asymmetric septum at different stages and monitor any subsequent fluorescence recovery (Fig. 5D). During stages II i and II ii there is a relatively slow recovery, with mean exponential recovery times t r of 10 ± 3.6 s and 16.0 ± 15.3 s respectively (Fig. 5E, S7). Our finding that t r is not directly correlated with D in each stage suggests that turnover here is reaction- as opposed to diffusion-limited; it may be limited by an effective off-rate, as observed for other complex bacterial structures such as components of the flagellar motor or replisome [24,54]. In subsequent stages II iii and III, no recovery is detectable within error, though lower levels of fluorescence and numbers of cells in stage III result in higher measurement noise, which limits the sensitivity for detecting low levels of putative recovery.
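For readers wanting to reproduce this kind of analysis, a minimal sketch of a single-exponential FRAP recovery fit is given below. The model form I(t) = I0 + A(1 − exp(−t/t_r)) and the synthetic post-bleach trace are illustrative assumptions, not the authors' exact fitting pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def frap_model(t, I0, A, t_r):
    """Single-exponential recovery of post-bleach intensity with time."""
    return I0 + A * (1.0 - np.exp(-t / t_r))

rng = np.random.default_rng(2)
t = np.linspace(0.0, 60.0, 120)                                     # seconds
I = frap_model(t, 0.3, 0.5, 10.0) + rng.normal(0.0, 0.02, t.size)   # toy trace

popt, pcov = curve_fit(frap_model, t, I, p0=(0.2, 0.4, 5.0))
t_r, t_r_err = popt[2], np.sqrt(pcov[2, 2])
print(f"t_r = {t_r:.1f} s, 95% CI half-width ~ {1.96 * t_r_err:.1f} s")
```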
Divisome components such as FtsZ have been shown to turn over in similar FRAP studies [55], consistent with our stage II i findings, when SpoIIE associates with the divisome. Turnover is also expected at stage II ii, when SpoIIE is released. At stage II iii SpoIIE interacts with the SpoIIQ-SpoIIIAH channel, which may account for the lack of turnover. A similar absence is unexpected at stage III, when SpoIIE is released and has no known function. This suggests that at stage III SpoIIE is released quickly and then anchored into the spore, or that the viscosity in the spore itself has changed, as has been shown to occur during sporulation [39].
Discussion
SpoIIE performs multiple important functions. For example, it is essential both for forming a proper sporulation septum and for activating σF. Without SpoIIE no spore can be formed, and many point mutations have been characterized in spoIIE which cause complete arrest of cell differentiation. However, how SpoIIE switches roles at different stages has been unclear. It is not known how SpoIIE localizes to the polar septum, how it causes FtsZ to relocalize from mid-cell to one of the cell poles, what role it plays in septal thinning, or how its SpoIIAA-P phosphatase activity is controlled so that σF activation is delayed until the asymmetric septum is completed [7,11]. How SpoIIE brings about forespore-specific activation of σF is a subject of particular interest [56]. Plausible suggested mechanisms include preferential SpoIIE localization on the forespore face of the septum [57], transient gene asymmetry leading to accumulation of a SpoIIE inhibitor in the mother cell [56], and the volume difference between compartments leading to higher specific activity of equipartitioned SpoIIE [58,59]. Most recently, it was shown that mother cell restricted intracellular proteolysis of SpoIIE by the membrane bound protease FtsH is important for compartment-specific activation of σF [20]. Our findings indicate that SpoIIE operates as an oligomer whose stoichiometry and mobility switch in the forespore according to the specific sporulation stage, driving morphological changes, as opposed to changes being primarily dependent on the differential effective concentration of SpoIIE in either mother cell or forespore. In particular, complexes comprising four SpoIIE molecules predominate in the mother cell and at multiple stages in the forespore. Crucially, we observe reversible assembly of these tetrameric SpoIIE entities into higher order multimers during stage II ii, when the protein localizes towards the pole and its latent protein phosphatase activity is manifested.
Unlike previous microscopy of YFP-labelled SpoIIE, which suggested a pattern of localization almost exclusively in the forespore following asymmetric septation [20], our higher sensitivity shows that SpoIIE content is at most 10-30% greater in the early forespore and septum compared to the mother cell, even if all of the SpoIIE in the septum is assigned to the forespore. An equipartition of septal SpoIIE results in approximately equal copy numbers in the mother cell and forespore. However, the >6 times smaller forespore volume [19] results in a higher SpoIIE concentration by a factor of ~6-8, depending on the partitioning of septal SpoIIE. It was shown previously that a 10-fold difference in SpoIIE phosphatase activity towards its substrate SpoIIAA~P could account for all-or-nothing compartmental regulation of σF activity [60]. The bias towards higher copy number values in the forespore aligns with the recent suggestion that SpoIIE captured at the forespore pole is protected against proteolysis [20]. In this model, SpoIIE sequestered in the polar divisome is handed off to the adjacent forespore pole following cytokinesis. This forespore polar SpoIIE is protected from FtsH-mediated proteolysis by oligomerisation, which is clearly described by our observations. Compartment specificity results from the proximity of the forespore pole to the site of asymmetric division.
Crystallographic and biophysical studies reveal that SpoIIE(590-827), comprising the phosphatase domain, is a monomer, while SpoIIE(457-827), comprising the phosphatase plus part of the upstream regulatory domain, is dimeric (Fig. 6A) [61,62]. Comparison of these structures and mapping of mutational data onto them led to the proposal that the PP2C domains in the SpoIIE(590-827) and SpoIIE(457-827) crystals represent inactive and active states respectively. Activation is accompanied by a 45° rigid-body rotation of two 'switch' helices [62]. This switch is set by a long α-helix in the regulatory domain which mediates dimerization (Fig. 6A). Movement of the switch helices upon dimerization translates a conserved glycine (Gly629 in SpoIIE) into the active site, where it can participate in cooperative binding to two catalytic manganese ions. These ions are conserved in PP2C phosphatases and here would be expected to activate a water molecule for nucleophilic attack at the phosphorus of the phosphorylated serine 57 residue in SpoIIAA-P. The increase in SpoIIE stoichiometry observed upon activation in vivo is consistent with these structural findings, although clearly larger assemblies are implied. We can speculate, on the basis of the data presented here, that these larger assemblies arise from further homomeric quaternary interactions mediated by the substantial membrane binding domain and/or the component of the regulatory domain which has yet to be fully characterized. The results are consistent with the hand-off model [20], in which release from the divisome allows SpoIIE tetramers to diffuse away from the septum and self-associate to form high stoichiometry clusters in a spontaneous process with similarities to that observed for the plasmalemmal protein syntaxin [63]. We speculate that the free energy of reassembly is used to flip the helical switch, allowing manganese acquisition and activation of phosphatase activity.
Changes in oligomeric state and quaternary organization of proteins are widespread mechanisms for regulating biological activity. These can be induced variously by binding of allosteric ligands, covalent modification, proteolytic processing and reversible interactions with protein agonists or antagonists. SpoIIE, which transitions between unusually large complexes, is transiently active as a phosphatase after its release from an inhibitory complex with the divisome. Regulation of phosphatase activity through sequestration is also seen in adaptation to drought in plants: the phosphatase HAB1 dephosphorylates the kinase SnRK2, inhibiting transcription of drought tolerance genes, until the complex of the hormone abscisic acid and its receptor PYR binds to and inhibits HAB1 [64].
Our findings show more generally that we can combine robust cell categorization with single-molecule microscopy and quantitative copy number and stoichiometry analysis to follow complex morphologies during differentiation (Fig. 6B). Importantly, these tools provide new insight into the role of SpoIIE by monitoring its molecular composition and spatiotemporal dynamics, linking together different stages of cell development. Our findings show that the function of a key regulatory protein can be altered depending upon its state of multimerization and mobility, enabling different roles at different cell stages. Future applications of these methods may involve multicolor observations of SpoIIE with other interaction partners at different sporulation stages. Optimising these advanced imaging tools in the model Gram-positive bacterium B. subtilis may ultimately enable real time observations of more complex cellular development, paving the way for future studies of tissue morphogenesis in more challenging multicellular organisms. More generally, our findings demonstrate that the application of super-resolved single-molecule optical proteomics biotechnology can enable new mechanistic insight into complex, cell stage dependent processes in single living cells which are technically too challenging to study using traditional methods [72-75]. Such findings are made possible by a range of innovative computational tools to categorise cell cycle stage and to quantify single-particle tracks; they enable not only a new understanding of the dynamic patterns of spatial localization of a key protein used in triggering cell development, but also pose questions about its structural properties at different cell cycle stages.
Data availability
Data included in full in main text and supplementary files. Raw data available from authors.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
YOUTUBE USAGE PATTERN AMONG COMMUNICATION SCIENCE STUDENTS
Abstract-This study aims to determine the motives and patterns of YouTube use among Communication Science students. The research was conducted using quantitative methods. Data were collected online using a questionnaire created in Google Forms, and the questionnaire link was distributed by crowdsourcing via WhatsApp. The population of this study was students majoring in Communication Science at the Faculty of Social and Political Sciences (FISIP), Makassar Islamic University. First, we describe both the cognitive and the affective motives. Second, we calculate the value of each motive and find that the value of the cognitive motive is slightly higher than that of the affective motive, indicating that students' cognitive and affective motives for using YouTube are fairly balanced.
I. INTRODUCTION
Information and communication technology has developed rapidly in recent years, bringing changes to many aspects of people's lives, from finding information to education, finance, and so on. For example, the internet, which originated in 1969 through an ARPA project called ARPANET, succeeded in converging various earlier media so that they can now be accessed from a single multitasking device. Currently, the development of smartphone technology and Android-based applications has led to many types of applications, one of which is social media.
In the current era of Industrial Revolution 4.0, one social media platform is especially popular among millennials: YouTube, a site that allows users to share videos containing information, education, entertainment, and more. One study states that the use of YouTube as a means of communication for the Makassar Vidgram community is quite helpful. That study also identified the characteristics of YouTube based on the utilization perceived by its users. The informants' responses regarding the advantages and disadvantages of YouTube as a means of communication leaned towards its advantages, making YouTube a very effective and efficient means of communication for this community [1].
Other studies have even discussed the income that can be obtained from video monetization on a YouTube account. In that study, several advertising platforms were found to be ready to collaborate with YouTubers, such as Google AdSense and Multi Channel Networks (MCNs) [2].
Video content on YouTube is also used to support skills training. One study states that the Gram-stain learning method using online videos, and a combination of methods, gives the same results as the direct demonstration method in terms of mastery of skills and the quality of the staining results [3].
Many other studies discuss patterns or motives for using YouTube [4], [5], [6], [7]. William J. McGuire, a motivational psychologist, identified 16 motives for using media, which can be summarized into two main motives: cognitive motives and affective motives. Cognitive motives emphasize the human need for information. Affective motives focus on aspects of feeling and the need to reach a certain emotional level [8].
Research on the use of social media, especially YouTube, for various positive purposes is very interesting. On this basis, we conducted research on the patterns of YouTube use among Communication Science students. Communication studies frequently touch on media issues, so students in the Communication Science study program are considered more responsive to the presence of new media. Therefore, the first aim of this research is to find out the motives of Communication Science students in accessing the YouTube application; the second is to find out the intensity with which Communication Science students access YouTube videos.
II. METHOD
This research was conducted using quantitative methods. Data were collected online using a questionnaire created in Google Forms. The questionnaire link was distributed by crowdsourcing via WhatsApp. The population of this study was students majoring in Communication Science, Faculty of Social and Political Sciences (FISIP), Makassar Islamic University. Several students were selected as the sample, with a minimum sample size of 30 people, a number that has been used for various types of research (Roscoe, 1975), as cited by [9].
The research questionnaire was structured as simply as possible to find out the motives and patterns of YouTube use among Communication Science students. The questionnaire was structured based on the conceptual framework shown in Figure 1. The research questions are divided into two parts:
Motive
As mentioned earlier, media use is summarized into two main motives, namely cognitive motives and affective motives. Cognitive motives focus on the human need for information and on reaching a certain ideational level. Affective motives focus on aspects of feeling and the need to reach a certain emotional level. The research questions are directed at several cognitive motives and several affective motives.
Intensity
The YouTube usage pattern is also referred to as the intensity of accessing YouTube. Intensity means a state, level, or measure of being intense towards something, while intense itself means great or very strong (in strength or effect), high, passionate, full of enthusiasm, fiery (of feelings), or very emotional (of people). In other words, it can be interpreted as working on something seriously and continuously to obtain optimal results. Intensity can also be interpreted as a strength that supports an opinion or attitude [10]. Meanwhile, according to Arthur S. Reber and Emily S. Reber, intensity is the power of the emitted behavior, an understanding common in behaviorist studies of learning and conditioning [11]. Andarwati and Sankarto suggest that the aspects of the intensity of accessing the internet are duration and frequency [12]. Duration describes how long individuals access the internet for various purposes and is expressed in units of time, such as minutes or hours. Frequency describes how often individuals access the internet for various purposes and is expressed per unit of time period, for example per day, per week, or per month.
III. RESULT AND DISCUSSION
We distributed questionnaires and received 68 responses from Communication Science students of FISIP UIM. Respondents consisted of 66% women and 34% men. The majority of respondents (51.5%) were students in semester 7, followed by students in semester 3 (23.5%) and semester 5 (16.2%); the rest were students from other semesters.
Seven of the 68 respondents were not YouTube users: they never access YouTube for any motive. Thus, the number of YouTube-using respondents analyzed was 61. We asked a number of questions about whether they access YouTube with cognitive or affective motives. We also asked about their habits of accessing YouTube: how many times, and for how long, they use YouTube each time they access it. Pattern here also refers to the intensity of accessing YouTube. Respondents' answers to each question are described below.
A. Cognitive motives
We asked about five cognitive motives, specifically whether respondents use YouTube for each of the following purposes: 1) searching for information; 2) increasing knowledge; 3) improving skills; 4) helping with college assignments; and 5) as a learning medium in the classroom.
Searching for/Updating Information
One of the purposes for which people access the internet is to find particular or up-to-date information. Likewise with YouTube: there is a lot of information that we can get from YouTube, regardless of whether that information is important, useful, up-to-date, or true. Figure 2 shows the respondents' answers to the statement that they use YouTube to find information. From this chart, we know that almost 40% of students strongly agree, 22% agree, and 29% quite agree. No student disagreed. This means that all students have used YouTube to find information.
Increase knowledge
YouTube also has many channels that discuss scientific issues, a field chosen by many YouTube content providers. Many kinds of knowledge are available on YouTube, from general knowledge and natural science to social and religious knowledge. We also asked about the statement that they use YouTube to increase their knowledge. Figure 3 shows the respondents' answers to this statement: 39.7% of students strongly agree and 30.9% agree, while only 27.9% quite agree. No student chose 'less agree' or 'disagree'. This means that all students have used YouTube to increase their knowledge.
Improve skills
YouTube can also be used to improve our skills. There are many channels on YouTube containing skills content, such as English language skills, craft-making skills, coloring skills and many other video tutorials. In general, all students have also used YouTube to improve their skills: not a single student chose 'less agree' or 'disagree' with the statement that they access YouTube to improve their skills. Respondents' answers regarding this statement are shown in detail in Figure 4.
Help with college assignments
As students, we sometimes receive various assignments from our lecturers. In the past, many students visited the library to find the materials needed to complete such assignments. The existence of the internet today has reduced the habit of students doing assignments in the library, as finding material for assignments on the internet can be done from anywhere. Sometimes the materials needed can be obtained from videos on YouTube. In contrast to the motives for using YouTube discussed previously, a small proportion of students did not agree with the motive of using YouTube to help with college assignments. It is possible that they have never used YouTube as a source of material for their college assignments. Figure 5 shows the respondents' answers to this motive in detail.
As a learning medium in the classroom
Recently, YouTube has been widely used by teachers and lecturers as a medium for additional learning in class [13], [14], [15]. Since the Covid-19 pandemic, YouTube has even been used as an online learning medium [16]. This study therefore also asked whether students access YouTube as a learning medium related to their courses. 16.2% of students chose 'less agree' with this statement; however, most students agreed. Possibly they use YouTube as a learning medium of their own accord, without waiting for instructions from their teacher or lecturer: they can simply look for the same learning material being taught on their campus, even from other sources.
B. Affective Motives
This study also asked about five examples of YouTube-accessing activities that fall into the category of affective motives. As with the cognitive-motive activities, respondents' answers are divided across a five-point Likert scale to calculate the level of motivation. Respondents' answers related to affective-motive activities are described below.
Filling free time
YouTube is essentially a social media platform containing video content created by people whose hobby is making videos; such video makers on YouTube are called YouTubers or content creators. One of the most famous YouTubers in Indonesia is Raditya Dika. A study investigating the motives of YouTube users for watching Raditya Dika's videos found that the highest motives were the entertainment and relaxation indicators [17]. According to that study, one of the reasons users enjoy Raditya Dika's YouTube channel is to spend their spare time, and Raditya Dika's subscribers fall into the heavy internet user category. This study also asked whether students use YouTube to spend their spare time. The survey results (Fig. 7) show that this is a motive shared by all students: approximately 41% of students agreed, 18% strongly agreed and the rest quite agreed. None of the students disagreed with the statement.
Play music videos
One way of finding entertainment on YouTube is watching music videos. All UIM Communication students have accessed music video clips on YouTube. As shown in Figure 8, the survey results indicate that 29.7% of students agree, 28.9% strongly agree and 22% of students quite agree with this statement.
Watching films
Another form of entertainment that can be accessed on YouTube is films, and many YouTube users access YouTube to watch movies. The results showed that all UIM Communication Science students had watched films on YouTube. The survey results shown in Figure 9 indicate that 41% of respondents agree with this statement.
Watching video streaming
YouTube also provides many live video broadcasts, such as live football or other sports events, live music events, seminars and others. The results showed that not all students agreed with this statement: based on the survey results shown in Figure 10, 22% of respondents chose 'less agree', although none chose 'disagree'.
Watching replays of favorite shows
The final entertainment-seeking activity asked about in this study is watching replays of favorite shows. The results showed that not all students have a favorite program whose replay they want to watch. As shown in Figure 11, 7.4% of respondents disagree; in other words, they have never watched a replay of their favorite shows on YouTube.
C. Frequency
A previous study identified three categories of internet users based on intensity of use: heavy users (spending more than 40 hours/month, or about 1.5 hours/day, accessing the internet), medium users (between 10 and 40 hours/month, or about 1 hour/day), and light users (less than 10 hours/month, or less than 1 hour/day) [18]. Based on the crosstab in Table 1 between YouTube access per week and YouTube access per day, it can be seen that respondents who visit the site every day spend 1-2 hours per day on YouTube. Thus, UIM Communication Science students access YouTube every day with a duration of 1-2 hours. It can be concluded that UIM Communication Science students are mostly heavy YouTube users: 11 people access YouTube every day for 1-2 hours, 12 people for 2-3 hours, 6 people for 3-4 hours and 9 people for more than 4 hours.
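To make this categorisation reproducible, the short sketch below classifies a respondent's usage intensity from self-reported days per week and hours per day; the thresholds follow [18], while the function and argument names are hypothetical.

```python
def usage_category(days_per_week: float, hours_per_day: float) -> str:
    """Classify internet-use intensity using the thresholds in [18]:
    heavy > 40 h/month, medium 10-40 h/month, light < 10 h/month."""
    hours_per_month = days_per_week * hours_per_day * (30 / 7)  # approx. month
    if hours_per_month > 40:
        return "heavy"
    if hours_per_month >= 10:
        return "medium"
    return "light"

# A respondent who uses YouTube every day for about 1.5 hours:
print(usage_category(7, 1.5))  # -> 'heavy' (~45 h/month)
```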
D. Calculation
We calculated the level of motives for using YouTube, for both cognitive and affective motives, based on the Likert scale. In the respondents' answers about the purpose of accessing YouTube, 'strongly agree' is worth 4, 'agree' is worth 3, 'quite agree' is worth 2, 'less agree' is worth 1 and 'disagree' is worth 0. The motive value is the average of all values for each activity. If the average value is 3.5-4, the motive level is Very High; if 2.5-3.4, High; if 1.5-2.4, Medium; and if less than 1.5, Low. Figure 12 is a graph of the number of students at each cognitive motive level (blue line) and affective motive level (red line). The graph shows that students have high levels of both cognitive and affective motives. The number of students with a high cognitive level is greater than the number with a high affective level; likewise, more students have very high cognitive motives than very high affective motives. This means that more students have cognitive motives that are higher than their affective motives. Table II shows that the cognitive motive value of UIM Communication Science students in accessing YouTube (2.86) is slightly higher than the affective motive value (2.68).
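A minimal sketch of this Likert scoring scheme is shown below; the sample responses are invented for illustration and are not the survey data.

```python
import numpy as np

SCORES = {"strongly agree": 4, "agree": 3, "quite agree": 2,
          "less agree": 1, "disagree": 0}

def motive_level(responses):
    """Average Likert score for one activity, mapped to the level bands
    used above (Very High / High / Medium / Low)."""
    mean = np.mean([SCORES[r] for r in responses])
    if mean >= 3.5:
        return mean, "Very High"
    if mean >= 2.5:
        return mean, "High"
    if mean >= 1.5:
        return mean, "Medium"
    return mean, "Low"

# Invented answers for one questionnaire item (61 respondents):
sample = ["strongly agree"] * 24 + ["agree"] * 19 + ["quite agree"] * 18
print(motive_level(sample))  # approx. (3.1, 'High')
```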
IV. CONCLUSION
We conducted a survey and have shown the motives of Communication Science students at the Islamic University of Makassar in using YouTube. We described both the cognitive motive and the affective motive, and calculated the value of each, finding that the value of the cognitive motive is slightly higher than that of the affective motive. This means that students' cognitive and affective motives for using YouTube are fairly balanced.
V. ACKNOWLEDGMENT
The two authors are the main contributors, having made equal contributions to this paper. The authors would like to thank all respondents who filled out the research questionnaire, and also thank BBPSDMP Kominfo Makassar and the Islamic University of Makassar for facilitating this research.
When Does a Light Sphere Break Ice Plate Most by Using Its Net Buoyance?
Introduction
The Arctic climate warming [1,2] has increased the frequency of human activities in this area. In order to expand the scope of activities in the Arctic and realize year-round Arctic navigation [3], icebreaking is a crucial technology [4]. Since the first true modern sea-going icebreaker, Yermak [5], human beings have been using icebreakers for icebreaking operations in polar regions for more than one hundred years, and researchers are still working on them [6-9]. In addition to using icebreakers, some researchers are also trying to find new methods of breaking ice, including moving loads by virtue of flexural-gravity waves [10-12], high-pressure bubbles [13,14], high-speed water jets [15], etc. On the other hand, ice can also be broken by vertical impact, and many scholars have studied icebreaking in the vertical direction. Ye et al. [16] simulated a submarine surfacing through the ice at a given constant speed using the peridynamics method. However, for most underwater bodies, constant speed is difficult to control, and in many cases the ice needs to be broken by net buoyancy. For example, a bowhead whale needs to break the ice sheet above it for ventilation, and it can break an ice sheet up to 60 cm thick by buoyancy [17]. Therefore, this paper explores when a light sphere breaks an ice plate the most by using its net buoyancy, i.e., the optimal relative density of the sphere for breaking the ice plate most severely.
Before studying this problem, several aspects need to be introduced: first, the damage criteria of ice plates under vertical loads, including static/quasi-static loads and dynamic loads; second, methods to measure or judge the damage degree of impact between bodies; third, relevant work on the collision of bodies with ice sheets along the vertical direction. We then expand the literature review along these three aspects.
First, the damage criteria of ice plates under vertical loads are reviewed. Static/quasi-static loads can be divided into concentrated and distributed loads, according to the ratio of the contact area relative to the ice thickness [18]. Various criteria have been proposed for predicting the breakup of an ice sheet. The most common one is the stress criterion, which can be obtained from elastic analysis for the instantaneous loading case; this method was developed significantly by Kerr [19]. However, when the load at the moment of impact is difficult to obtain or measure, some scholars, e.g., Shapiro [20], proposed that breakthrough occurs when a critical deflection is reached, regardless of creep before failure, which is also named the deflection and strain criterion. The critical strain-energy per unit volume concept is also used extensively in the strength-of-materials literature [21]. The advantage of the strain energy criterion relies on the fact that it takes the loading and strain history into account. On these bases, Assur [22] and Frankenstein [23] studied the sequence of failure of an ice cover under static concentrated and distributed loads acting vertically downwards. There are usually three stages: the first is a radial crack on the bottom of the ice plate as a result of significant bending moments; the second is a circumferential crack on the top of the ice plate at some distance away from the load; the third is the breakup of the ice plate along the radial and innermost circumferential cracks. These fracture stages are classical for ice plates subject to vertical loading.
In addition to static or quasi-static loads, impact load is another type of icebreaking load. Different from static or quasi-static loads, various shockwaves are observed in the ice at the instant the load impacts the ice. For example, when a high-speed water jet impacts an ice plate [15], the shock wave propagates in the form of volume waves (including longitudinal waves and transverse waves) in the plate and in the form of Rayleigh surface waves on the surface of the plate [24]. Longitudinal waves are compression waves before being reflected by the lower surface of the plate and become expansion waves due to the different acoustic impedances of the media on both sides of the interface. Transverse waves are shear waves. Rayleigh surface waves have vertical and horizontal components that correspondingly induce tensile and shear stresses. The propagation and interaction of shockwaves may be a fundamental cause of crack formation and propagation [25].
Second, methods to measure or judge the damage degree of impact between bodies are always challenging in experiments. On the one hand, researchers have tried to explore contact measurements to study impact. Bouzid et al. [26] studied a glass plate subject to impact at different loading rates using two experiments: a compression split Hopkinson pressure bar and the normalized drop ball test. The pulse was obtained from gauges attached to the back of the glass sample. To study the tensile strength of ice subjected to dynamic loading, Zhang et al. [27] investigated the dynamic tensile behaviors of distilled-water and river-water ice using a modified split Hopkinson pressure bar system.
However, in most cases it is very hard to obtain data through contact measurement, so non-contact measurements and analysis have also been adopted. Woodward et al. [28] studied the damage of brittle materials with different rigidities caused by projectiles of different diameters, in which the kinetic energies of the projectiles and the debris were recorded and analyzed. Dooge et al. [29] used ice particles with different masses and velocities to impact an aluminum plate in experiments and made qualitative and quantitative analyses of the damage. They found that the incident kinetic energy of the ice ball had a good correlation with the expected damage of the aluminum plate. Kim et al. [30] investigated the damage resistance of thin-walled composite structures to ice impact experimentally, with one of the experiments performed on a dynamic force measurement device. The experimental results showed a linear relationship between the measured peak force and the kinetic energy of the projectile, regardless of the projectile size. Some authors [31-33] have also studied the impact of an ice ball on a rigid plate both experimentally and numerically. One of the main conclusions was that the microstructure of ice did not play an important role under these conditions and that there was a correlation between impact force and kinetic energy. By contrast, some authors have put forward different opinions in the study of impact. Xue et al. [34] studied the response of glass under dynamic impact load using the drop ball test and proposed that an energy threshold should not be specified as a prediction index, because it does not take the contact time during the impact event into account. Therefore, a metric for impact testing based on a momentum change threshold was established, and it was found that the momentum change had a linear relationship with the maximum deformation of the glass. They concluded that the momentum change was more suitable for predicting the maximum deformation. There are also other problems concerning impact, such as dynamic compaction. Knut et al. [35] conducted field and laboratory experiments, respectively, to explore the influence of momentum and energy on the performance of dynamic compaction technologies. It was found that kinetic energy had no obvious effect on the crater depth. For an inelastic compaction process, they concluded that momentum rather than energy determined the depth of the crater.
It can be seen from the above that there has always been controversy over whether kinetic energy or momentum dominates the damage degree of impact between bodies [28-35]. For the collision between an ice plate and a buoyant sphere, the question becomes more complex, as it is still difficult to define the damage degree of the ice plate.
Third, there is relevant work on the collision of bodies with ice sheets along the vertical direction. Kozlov [36] studied the collision between a rigid sphere with a high initial speed and very thick ice on water. He found that after the first collision with the ice plate, the maximum intrusion of the sphere at a certain point in time depended on its own kinetic energy before impact. Orlov and Bogomolov [37] quantitatively described the process of large impactors penetrating ice in the initial velocity range below the speed of sound in air. It was found that the increase in crater depth was directly proportional to the impact velocity, while the volume of ice destroyed was insignificant. Ren and Zhao [38] studied the process of a sphere falling from above an ice plate and breaking the ice before entering the water, paying attention to the numerical modelling of the interaction of ice, water and sphere. Wang et al. [39] simulated the process of an underwater cylinder breaking ice vertically before exiting the water. The collision direction in that work was also from bottom to top, but the motion of the cylinder was prescribed, rather than free rising.
It can be seen that most previous work on the collision of a body with ice along the vertical direction concerned either the intrusion of a sphere into thick ice, or the damage of ice under a body with a prescribed velocity. To the best of our knowledge, no research on icebreaking by a free-rising buoyant sphere has been published. Many interesting problems are involved in the icebreaking process under such a scenario. When does a light sphere break the ice plate most by using its net buoyancy? How can we define the damage degree in experiments? Does kinetic energy or momentum dominate the damage degree of the ice plate? All these problems formed the main motivation and innovation of this paper. The paper studies the icebreaking process of a free-rising light sphere with variable weight in experiments. The whole process of the motion of the sphere and the response of the ice plate was recorded and analyzed. On the basis of the experimental results, some simplified theoretical approaches were also adopted to find an explicit expression for the optimal relative density of the sphere. The results may provide potential applications in guiding an underwater vehicle that navigates under the ice sheet and needs to break the ice by net buoyancy in case of a mission or an emergency.
Theory
Driven by the buoyant force, a light sphere starts to accelerate from a resting position under the ice plate until it collides with the ice plate. A sketch of the problem with variables is presented in Figure 1. The definition of variables is as follows: D is the diameter of the sphere; L0 is the initial submergence depth, which is the distance between the center of the sphere and the bottom surface of the ice plate while the sphere is stationary; h is the thickness of the ice plate; ρs is the density of the sphere; ρw is the density of water; g is the acceleration of gravity; U is the velocity of the sphere.
The ascension of a rising buoyant sphere is modeled with a simple theoretical force balance, illustrated in Figure 2 (free body diagram of a rising buoyant sphere), and demonstrated as follows [40]:

(m + ma) dU/dt = Fb − Fg − (1/2) Cd ρw A U²,  (1)

where m = ρs V is the mass of the sphere, V = πD³/6 is the volume of the sphere; ma = Cm ρw V is the added mass of the sphere and Cm is the added-mass coefficient; Fb = ρw g V is the buoyant force; Fg = ρs g V is the gravity force; A = πD²/4 is the cross-sectional area and Cd is the drag coefficient. It is remarkable that Cm depends on the distance between the boundary and the sphere [37-39].

To facilitate unified expression, Equation (1) is rewritten as follows:

(ρ̄ + Cm) dU/dt = (1 − ρ̄) g − (3Cd/4D) U²,  (2a)
U = dx/dt.  (2b)

In Equation (2a,b), x is the displacement of the rising buoyant sphere; ρ̄ = ρs/ρw is the dimensionless density of the sphere. When the sphere moves in an unbounded fluid, Cm is equal to 0.5. However, when the sphere approaches a wall vertically, the expression for Cm changes with the distance to the wall. Many researchers have studied the variation law of Cm near a wall [41,42], and Kharlamov et al. [43] provided a fitting formula for Cm of a sphere approaching a rigid wall vertically, with a maximum deviation from the computed data of 4 × 10⁻³:

Cm = 0.5 + H1 exp[t1 (2l − 1)] + H2 exp[t2 (2l − 1)],  (3)

where l = (L0 − x)/D is the dimensionless distance between the center of the sphere and the wall, and the constants are H1 = 0.2182, t1 = −3.21; H2 = 0.081, t2 = −19.

If one assumes Cm to be 0.5 and Cd to be a constant during the movement, Equation (2) can be rewritten as follows:

(ρ̄ + 1/2) U dU/dx = (1 − ρ̄) g − (3Cd/4D) U².  (4)

The general solution of Equation (4) can be written as [44]:

U² = 4(1 − ρ̄) g D/(3Cd) + R exp[−3Cd x/(2D(ρ̄ + 1/2))],  (5)

where R is a constant confirmed by the initial conditions, i.e., U = 0 when x = 0 in the case considered in this paper. Therefore, R can be written as R = −4(1 − ρ̄)gD/(3Cd). Since we are concerned with the velocity just before the sphere impacts the ice plate, the upper limit of the integral for the displacement x should be L0 − D/2, right before contacting the ice plate. As a result, when the sphere contacts the ice plate, its velocity U is

U = {4(1 − ρ̄) g D/(3Cd) · [1 − exp(−3Cd (L̄0 − 1/2)/(2(ρ̄ + 1/2)))]}^(1/2),  (6)

where L̄0 = L0/D is the dimensionless initial submergence depth. If one further assumes that the viscosity of the fluid can be ignored, i.e., Cd = 0, Equation (4) can be simplified further as:

(ρ̄ + 1/2) U dU/dx = (1 − ρ̄) g.  (7)

One can then easily obtain the velocity U of the sphere when it contacts the ice plate as

U = [2(1 − ρ̄) g D (L̄0 − 1/2)/(ρ̄ + 1/2)]^(1/2).  (8)

In the nondimensionalized system, the following three parameters are chosen as the characteristic quantities: the diameter of the sphere D, the water density ρw and the gravitational acceleration g.
Ice Specimen Preparation
From the perspective of ice mechanics, ice can be regarded as one of the most complex materials in nature [45-47]. Ice in nature contains many defects, including preexisting cracks, inclusions, pores, grain boundaries, etc. [48,49], which further aggravates the uncertainty of the experiment. Therefore, some researchers [50-52] proposed using freshwater as the material to prepare experimental ice in the laboratory. Figure 3 shows a schematic diagram of the freshwater ice plate preparation method. The ice plates in this study were made from freshwater in a cryostat at −20 °C. The freshwater was boiled in order to maximize the removal of dissolved air in the water and to avoid the presence of bubbles. The boiled fresh water was placed in a cylindrical container without a top cover. The container was made of expanded polystyrene (EPS), whose good adiabatic properties ensured that the heat transfer direction was from top to bottom, in the same way as the growth direction of the ice crystal in reality. When the required thickness of the ice plates was reached, the ice plates were removed from the container and moved into a cryostat at −5 °C for 10 h, in order to prevent the ice plates from breaking due to an excessive temperature difference between the ice plate and the water [15,53]. Figure 4 shows a picture of the ice plate samples. The diameter of the ice plates was 345 mm, and the thicknesses were 6 mm, 8 mm, and 10 mm, respectively. The mechanical properties of the ice plates were obtained by testing at −5 °C. The average value of Young's modulus of the ice was 6.2 GPa, and the average compressive and flexural strengths of the ice were 9.4 MPa and 2.4 MPa, respectively. The rest of the properties can be found in Ni et al. [53].
Experimental Setup
Figure 5 shows the experimental setup. The experimental setup can be divided into four systems: (1) the sphere location and releasing system; (2) the fixing system; (3) the supporting system; and (4) the camera system.

The sphere location and releasing system included a lift platform (in blue) and a releasing device (in yellow), as shown in Figure 5a,b. The former was placed in the center of the bottom of the tank and was used to manipulate the initial submergence depth of the sphere below the ice plate. The latter was placed above the lift platform and adopted an electromagnet to control the release of the sphere. Finally, the light sphere with an iron button was placed on the releasing device. The sphere was made of poly lactic acid (PLA) using 3D printing and painted with black nitrocellulose lacquer. Its diameter was 112.5 mm. The size of the sphere was determined by the size of the water tank. The weight of the sphere was varied by placing different ballasts inside it, so that the relative density of the sphere could be adjusted easily in the experiment.
The fixing system was adopted to restrict the motion of the ice plate on the free surface. We tried to simulate the collision of a sphere with a very large ice sheet, rather than with a free-floating ice floe. However, due to the limitations of the ice-making technology and the experimental equipment, the size of the ice plate could not be very large. Considering that the displacement and rotation angle of the ice sheet tend to zero at a very large distance, we designed a fixing system to rigidly fix the edge of the ice plate. The main body of the fixing system was a supporter made of polymethyl methacrylate (PMMA). The supporter had a groove with diameters of 345 mm and 325 mm, as shown in Figure 5c. During the experiment, the ice plate was first put into the groove of the supporter, then the fixed ring was placed over the ice plate, as shown in the enlarged view of Figure 5c, and finally the fixed ring was fastened to the supporter by four clamps (Clamps 2). Under the joint constraint of the supporter and the fixed ring, the boundary condition of the ice plate was completely fixed. The ice-fixing system was fixed to the water tank by eight clamps (Clamps 1).
The supporting system consisted of a square water tank and an outside shell frame.The water tank was made of transparent glass, and its principal dimension was 0.6 m in length.
The camera system included two high-speed cameras and four LED lamps. One camera was a PHANTOM VEO-640S (Phantom/AMETEK, USA), placed for a horizontal view, with a resolution of 1024 × 1024, which captured photos at 10,000 frames per second. The other was a PHOTRON Fastcam Mini A1300 (Photron, Japan), placed for a vertical view, with a resolution of 768 × 528, which captured photos at 1000 frames per second. Camera 1 was in charge of capturing the motion trajectory of the floating sphere. The velocity and the corresponding kinetic energy of the sphere were obtained by image recognition technology. Meanwhile, the destruction of the ice plate was captured by Camera 2. By sending a pulse signal from Camera 1 and receiving it with the other camera, synchronous triggering and shooting of the two cameras were achieved. Four flicker-free LED lamps were installed on the transparent water tank's side and bottom to ensure a bright shooting environment.
Results and Discussion
In this section, we chose a case study to analyze the icebreaking process by the buoyant sphere before discussing the influence of several parameters, including the dimensionless initial submergence depth L̄_0, the dimensionless density ρ and the dimensionless ice thickness h̄ = h/D.
Case Study
A case study was chosen with the following parameters: the dimensionless initial submergence depth L̄_0 was 2.31, the dimensionless density ρ was 0.4 and the dimensionless ice thickness h̄ was 0.089. The movement process of the floating sphere and the interactions between the sphere and the ice plate were recorded and analyzed.
Figure 6 shows curves of the velocity of the sphere along with the displacement before colliding with the ice plate. It contains two experimental curves, (1) and (2), and three theoretical predictions, curves (3)-(5): curves (3) and (4) adopt C_d = 0.44 with and without the wall correction of Equation (3) for C_m, while curve (5) adopts C_d = 0. According to the experimental curves (1) and (2), the sphere with zero initial velocity accelerated under the effect of the net buoyant force after being released, but the acceleration amplitude gradually decreased. When the displacement of the sphere was about 0.16 m (x/D = 1.42), that is, when the distance from the center of the sphere to the ice plate was about 0.1 m (l = 0.89), the velocity of the sphere was almost uniform, which indicated that the forces on the sphere were almost balanced. On the one hand, the viscous resistance increased with the velocity of the sphere. On the other hand, the added-mass force increased with the decrease of the spacing, according to Equation (3). Both contributed to the balance of the net buoyant force. Therefore, in a theoretical prediction, the selection of the drag and added-mass coefficients C_d and C_m needs to be discussed.
First, we considered the influence of the added-mass coefficient C_m, which was less complicated than the choice of C_d. Curves (3) and (4) show the velocities for the two added-mass coefficients, with and without the influence of the wall surface. When the displacement of the sphere was less than 0.16 m (x/D = 1.42), curves (3) and (4) essentially overlapped. Beyond that, there was a deviation between the two results, which indicated that the influence of the wall surface should not be ignored when the sphere is very close to the wall. Once again, the comparison of curves (3) and (4) validated that the increase of the added-mass coefficient contributed to the force balance on the sphere.
Second, the choice of C_d was particularly worth discussing. For unsteady motion, the drag coefficient C_du differs from its counterpart C_d at steady state. Many researchers [54-57] have carried out experimental and theoretical studies on this. For convenience, we temporarily assumed C_du = C_d. As is well known, C_d of a sphere is closely related to the Re number. When 1.0 × 10³ < Re < 2.0 × 10⁵, C_d is around 0.44 [58-61]. The Re number of the sphere in this case was mostly distributed in this interval, so the drag coefficient was taken as 0.44 in curves (3) and (4). By comparison, C_d = 0 was adopted in curve (5). However, it can be seen from the comparison between curve (4) and curve (5) that the drag coefficient of the sphere in actual motion was less than 0.44, which coincided with the finding of [62,63] that C_du is smaller than C_d. For this reason, we decreased the drag coefficient C_d. By trying a series of drag coefficients, we found that C_d = 0.12 (as shown in Figure 7) could predict the motion of the sphere well in our cases, especially when considering the influence of the wall surface on C_m.

Through the acceleration process in water, as shown in Figure 6, the buoyant sphere obtained a certain velocity and started to collide with the ice plate. Figure 8 shows the typical characteristics and the corresponding times during the process of the sphere impacting the ice plate until it broke. The moment when the sphere just contacted the bottom surface of the ice plate was defined as the initial time (t = 0 ms), as shown in Figure 8a. At the time of t = 0.2 ms, the first radial cracks (RCs) appeared clearly on the ice plate under the collision of the sphere, as shown in Figure 8b. The patterns of the cracks were quite similar to those produced during icebreaking under distributed loads by Ashton [49]. Under the continuous load from the buoyant sphere, the radial cracks extended to the edge of the ice plate at the time of t = 0.6 ms in Figure 8c, as marked by the red line. Figure 8d shows the formation of circumferential cracks (CCs), which were generated on the basis of the RCs [26]. After the CCs were generated, cone cracks could be observed in the vicinity of the contact point (highlighted by the green dotted line and partially enlarged in Figure 8e). After that, the ice plate began to break, and air entered under the ice plate through the cracks, which appeared as bubbles along the cracks in Figure 8f (highlighted by the blue dotted line). Then, the cone cracks penetrated the ice plate and debris was splashed by the impact of the sphere (shown by the green circle in Figure 8g). Finally, under the action of the sphere, the wedge-shaped ice pieces in the center of the ice plate failed, and the sphere broke through the ice plate and pushed the polygonal ice fragments aside, as shown in Figure 8h. The damage process of the ice plate after impact can be summarized as follows:
first, the ice plate produced RCs (RCs pattern); second, CCs were generated on the basis of the RCs (RCs⊕CCs pattern); third, the ice debris splashed (debris-splashing pattern); finally, the ice plate broke up (ice-plate breakup pattern).

In order to intuitively explain the cause of the destruction of the ice plate after the impact, Figure 9 is presented. As shown in Figure 9a, when the sphere collided with the ice plate, a compressive wave ahead of a tensile wave was transmitted from the collision point. When the compressive wave arrived at and was reflected by the upper surface of the ice plate, it became a tensile wave due to the different acoustic impedances of the media on both sides of the interface. Because the tensile strength of the ice (about 2.2 MPa at −5 °C) is much lower than its compressive strength (about 9.4 MPa at −5 °C), ice is more fragile under tensile waves than under compressive waves [27]. In particular, when the reflected tensile waves encountered and interacted with the incident tensile waves, as shown in Figure 9b, the ice plate became very fragile, as shown in Figure 9c. As a result, the ice plate broke up by spalling, leaving sloped fractures, as shown in Figure 9d. In fact, there may be more forms of wave transmission, such as shear or Rayleigh waves, whose effects complicate the destruction of the ice plate [64]. However, due to the limited shooting equipment, it was hard to record the wave propagation in the ice accurately.
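The near-total phase inversion at the free surface can be illustrated with a one-line acoustic-impedance estimate; the material values below are typical literature figures for ice and air, assumed for illustration rather than measured in this study:

```python
# Back-of-the-envelope sketch: the pressure reflection coefficient
# R = (Z2 - Z1) / (Z2 + Z1) at the ice-air interface explains why an
# incident compressive pulse returns as an almost fully inverted tensile pulse.

rho_ice, c_ice = 917.0, 3500.0   # ice density (kg/m^3) and P-wave speed (m/s), assumed
rho_air, c_air = 1.2, 340.0      # air density (kg/m^3) and sound speed (m/s), assumed

Z_ice = rho_ice * c_ice          # acoustic impedance of ice
Z_air = rho_air * c_air          # acoustic impedance of air

R = (Z_air - Z_ice) / (Z_air + Z_ice)
print(f"reflection coefficient at the ice/air interface: {R:.4f}")
# R is close to -1: the reflected wave is phase-inverted, so the upper free
# surface turns compression into tension, which ice resists far less
# (about 2.2 MPa tensile vs 9.4 MPa compressive at -5 degrees C).
```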
The Effect of Dimensionless Initial Submergence Depth L̄_0 on Ice Plate Damage
The effect of the dimensionless initial submergence depth L̄_0 was investigated by changing L̄_0 from 0.9 to 2.31, with the dimensionless ice thickness h̄ = 0.071 and the dimensionless density ρ = 0.6 held constant.
Figure 10 presents the ice damage at four initial submergence depths, all at t = 0.1 s. In Figure 10a, it can be observed that there were slight and inconspicuous RCs extending from the center of the ice plate to the edge, i.e., the RCs pattern. With the increase of the initial submergence depth of the sphere, in Figure 10b, the ice was damaged more severely, with a greater number of RCs and several slight CCs at a distance from the center, i.e., the RCs⊕CCs pattern. In Figure 10a,b, the ice plates did not break up, or rather the cracks did not penetrate the ice plate; these were also named 'part-through' cracks [65]. In terms of the damage pattern of the ice plate, the phenomenon shown in Figure 10c did not change significantly from that shown in Figure 10b. However, one can observe that air bubbles were captured under the ice plate, as denoted by the blue circles. This is because the cracks had penetrated the ice plate and air entered through the cracks and edges. When the initial submergence depth increased to 2.31, as shown in Figure 10d, in addition to the RCs and CCs on the ice plate, ice debris can be clearly seen splashing from the center of the ice plate, i.e., the debris-splashing pattern. As mentioned in the previous section, owing to the nature of the ice plate, we could not obtain the strain of the ice plate during the collision directly by using contact measurement methods, such as strain gauges attached to the surface of the ice plate. For this reason, we adopted the indirect method above to describe the damage degree of the ice plate. For the condition in this section, it is natural to expect, before the experiment, that within a certain range (to be discussed in Section 4.4) the greater the initial submergence depth of the sphere is, the more severely the ice plate is damaged. The experimental results validated this expectation. Therefore, we can conclude that the damage degree of the ice plate becomes more severe from "RCs" to "RCs⊕CCs" and then to "splashing".
The Effect of Dimensionless Ice Thickness h̄ on Ice Plate Damage
On the basis of Figure 10 in Section 4.2, the effect of the ice thickness was further investigated by changing h̄ from 0.053 to 0.089, with L̄_0 = 2.31 and ρ = 0.6 held constant.
In the case of h̄ = 0.089, both radial and circumferential cracks (namely, the RCs⊕CCs pattern) can be observed (Figure 11a). When the dimensionless ice thickness decreased to 0.071, not only were RCs and CCs observed in the ice plate, but debris splashing was found, as shown in Figure 11b, i.e., a debris-splashing damage pattern. With the decrease of the dimensionless thickness to 0.053, the damage to the ice plate became quite serious. The sphere broke up the ice plate into multiple triangular and quadrilateral pieces, and the amount of ice debris became larger, as shown by the green circle in Figure 11c, presenting an ice-plate breakup pattern.

Figure 12 displays the final equilibrium positions of the sphere after coming to rest. By comparison, it can be found that although the sphere did not break through the ice plate in Figure 12a,b, the final submergence depth was different. l_1 was slightly larger than l_2, while l_1 and l_2 were both distinctly larger than l_3. This can be expected: the thinner the ice plate was, the more severely it was damaged. The damage to the ice plate, including the debris and the hole, provided space for the sphere to rise. Similar to Section 4.2, it is natural to expect, before the experiment, that the thinner the ice plate is, the more severely it is damaged by a sphere with the same submergence depth and relative density. The experimental results validated this expectation. Therefore, we can show that the damage degree of the ice plate becomes more severe from "RCs⊕CCs" to "splashing" and then to "breakup". Together with Section 4.2, we can ascertain that the pattern worsens from "RCs" to "breakup", which lays a foundation for judging the damage degree of the ice plate hereinafter.
The Effect of Dimensionless Density ρ on Ice Plate Damage
This section explores the effect of the dimensionless density ρ on the ice plate damage. Spheres with different dimensionless densities were used to break the ice plate, with h̄ = 0.089 and L̄_0 = 2.31 held constant.
Figures 13-15 show the dimensionless velocity, dimensionless kinetic energy, and dimensionless momentum of spheres with different relative densities at the moment of contact with the ice plate. There are three curves in each figure, representing the experimental values and the theoretical predictions with C_m from Equation (3) and C_d = 0.12, and with C_m = 0.5 and C_d = 0.12, respectively. As in Figures 6 and 7, the results with C_m from Equation (3) were better than those with C_m = 0.5. The trend of the three curves in Figure 13 was the same, that is, the velocity of the sphere impacting the ice plate decreased monotonically with the relative density. This accords with intuition. In Figures 14 and 15, with the increase of the relative density, the kinetic energy and the momentum of the sphere at the moment of impact both rose before they fell, and the trend was little affected by the choice of C_m. As a result, there were two different relative densities that maximized the kinetic energy and the momentum of the sphere, respectively. Because we sought the optimal dimensionless density ρ_op that damages the ice plate the most, we needed to sort out the failure states of the ice plate impacted by spheres with different relative densities.

Figure 16 shows typical pictures of the damage on the ice plate caused by spheres of different densities, at t = 0.1 s, from a bird's-eye view and a horizontal view, respectively. From Figure 16a-c, it can be seen that with the increase of the relative density of the sphere, the damage state of the ice plate changes from the "debris splashing" to the "ice plate breakup" pattern; while from Figure 16c-e, the damage state of the ice plate changes from "ice plate breakup" to "debris splashing" and then to the "RCs" pattern.

To avoid randomness in the results, at least 10 repeated experiments were done for each density case. The failure modes of the ice plate caused by spheres with different densities are shown in Figure 17. Because it was difficult to ensure that the properties of each ice plate were exactly the same, due to the limits of the ice-making technology, different failure modes could appear at the same relative density. However, it was still reasonable to classify the damage degree of the ice plates by the statistical data of the different failure modes.

As discussed in Sections 4.2 and 4.3, the "ice plate breakup" pattern is the most severe of all the patterns. We took the probability of this pattern as a criterion and tried to link the damage degree of the ice plate with E_kt and M_t of the sphere in Figures 14 and 15. As shown in Figure 18, the dimensionless kinetic energy of the sphere reaches its maximum at ρ = 0.4, while the dimensionless momentum of the sphere reaches its maximum at ρ = 0.6. Compared with the probability of the ice-breakup pattern, when the kinetic energy of the sphere is the largest, the probability of the ice plate breaking up peaks (91.7%). From this point, it can be concluded that the kinetic energy of the sphere, rather than the momentum, at the moment of collision dominates the damage degree of the ice plate. As a result, we adopt the kinetic energy of the sphere at the moment of collision as the criterion for the icebreaking ability of a floating light sphere driven by the net buoyant force hereinafter. This conclusion can also be well explained from the perspective of energy. When the sphere collided with the ice plate, the sphere converted its kinetic energy into the kinetic and potential energies of the ice plate, including cracks (or fracture energy), debris and fragments, the kinetic and potential energies of the fluid, the potential energy of the sphere, as well as thermal energy [29]. The more kinetic energy the sphere gained before the impact, the more severely the ice plate was damaged (manifesting in the generation of cracks, the area of the hole, the motion of the debris and fragments, etc.).

We further plot the curve of E_kt versus ρ to find ρ_op for the case of h̄ = 0.089 and L̄_0 = 2.31 in Figure 19. As mentioned above, considering that the added-mass coefficient has little influence on the optimal density, we chose C_m = 0.5 for convenience and still chose C_d = 0.12 as before. It is clear that the theoretical optimal relative density ρ_op = 0.390 was very close to the experimental result in Figure 18. Therefore, when the initial depth L̄_0 and the ice thickness h̄ are constant, one can predict the optimal relative density ρ_op in theory.

On the other hand, we further considered the relationship of ρ_op with L̄_0 and C_d. First, we studied a simplified model with a non-viscous assumption, i.e., C_d = 0. Under this assumption, the dimensionless kinetic energy E_kt of the sphere can be obtained on the basis of Equation (8):

E_kt = (π/6)·ρ·(1 − ρ)·(L̄_0 − 1/2)/(ρ + 1/2), (10)

and ρ_op can be obtained by setting dE_kt/dρ = 0:

ρ² + ρ − 1/2 = 0, i.e., ρ_op = (√3 − 1)/2. (11)

Furthermore, for the viscous assumption (C_d ≠ 0), the dimensionless kinetic energy and its derivative with respect to ρ are

E_kt = [π·ρ·(1 − ρ)/(9C_d)]·(1 − e^(−β)), with β = 3C_d·(L̄_0 − 1/2)/[2·(ρ + 1/2)], (12)

dE_kt/dρ = [π/(9C_d)]·[(1 − 2ρ)·(1 − e^(−β)) − ρ·(1 − ρ)·β·e^(−β)/(ρ + 1/2)] = 0. (13)

By solving Equation (13), one can obtain ρ_op for given L̄_0 and C_d. As Equation (13) is complex and it is hard to obtain an explicit ρ_op, we solved Equation (13) numerically. The relationship between ρ_op and L̄_0 at different C_d is demonstrated in Figure 20.

According to Figure 20, one can find that no matter what L̄_0 is, ρ_op is never larger than 0.5. When the viscosity of the water is considered (C_d ≠ 0), the optimal relative density ρ_op of the sphere gradually approaches 0.5 with the increase of the initial depth L̄_0. In fact, according to Equation (12), we can obtain the dimensionless kinetic energy of the sphere when L̄_0 approaches infinity:

E_kt = π·ρ·(1 − ρ)/(9C_d), (14)

and the ρ_op of this state can be obtained as

ρ_op = 1/2, (15)

which coincides well with the trend in Figure 20. By observing the curves with different C_d in Figure 20, one can find that ρ_op declines along with the decrease of C_d for a given L̄_0. The smaller C_d is, the larger the initial depth L̄_0 required for ρ_op to approach 0.5. When C_d is small enough but non-zero, a very large initial depth is needed for ρ_op to approach 0.5. In other words, as long as the sphere moves in a viscous fluid (C_d is a non-zero constant), ρ_op increases with the initial depth L̄_0 until it is infinitely close to 0.5. On the contrary, when the sphere moves in a non-viscous fluid, ρ_op is not affected by the initial depth L̄_0 and is constantly equal to (√3 − 1)/2. The jump of ρ_op at a very large L̄_0 between the viscous and non-viscous cases can be attributed to the properties of the viscous force. As the viscous force is proportional to the square of the velocity, the acceleration of the sphere decreases as the velocity increases. The acceleration infinitely approaches zero and the velocity of the sphere approaches a stable value. By contrast, a sphere which moves in a non-viscous environment will always accelerate under the action of the combined force. Therefore, the state of motion of the sphere differs at a very large L̄_0 between the viscous and non-viscous cases. From another point of view, the effect of the fluid viscosity on the motion of the sphere needs time and, as long as the time is large enough, it changes the motion of the sphere as well as ρ_op, no matter how small C_d is, compared to the non-viscous case. This can also be validated by another phenomenon in Figure 20. When the initial depth L̄_0 tends to 0.5, i.e., L̄_0 − 1/2 → 0, the optimal relative density ρ_op of the sphere in all the viscous cases tends to the counterpart of the non-viscous case, that is, ρ_op → (√3 − 1)/2. This is because the displacement of the sphere is too small for the fluid viscosity to exert influence on the motion of the sphere.
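As a numerical cross-check on Figures 14, 15 and 20 (our own sketch under the stated assumptions C_m = 0.5 and C_d = 0.12, with quantities nondimensionalized by D, ρ_w and g), the impact state from Equation (6) reproduces the two different optima for kinetic energy and momentum, and a simple grid search over Equation (12) reproduces the trend of ρ_op with L̄_0 and C_d without solving Equation (13) analytically:

```python
import numpy as np

# Dimensionless impact velocity squared from Equation (6) with C_m = 0.5.
def impact_U2(rho, L0_bar, Cd):
    beta = 3 * Cd * (L0_bar - 0.5) / (2 * (rho + 0.5))
    return 4 * (1 - rho) / (3 * Cd) * (1 - np.exp(-beta))

rho = np.linspace(1e-3, 1 - 1e-3, 20001)

# Figures 14 and 15: kinetic energy and momentum peak at different densities.
U2 = impact_U2(rho, 2.31, 0.12)
E_kt = (np.pi / 12) * rho * U2          # 0.5 * m * U^2, nondimensional
M_t = (np.pi / 6) * rho * np.sqrt(U2)   # m * U, nondimensional
print(f"E_kt peaks at rho = {rho[np.argmax(E_kt)]:.3f}")   # close to 0.39
print(f"M_t peaks at rho = {rho[np.argmax(M_t)]:.3f}")     # close to 0.60

# Figure 20: optimal density versus initial depth for several drag coefficients
# (constant prefactors are dropped; they do not move the maximum).
def rho_op(L0_bar, Cd):
    if Cd == 0.0:  # non-viscous limit, Equation (10)
        E = rho * (1 - rho) * (L0_bar - 0.5) / (rho + 0.5)
    else:          # viscous case, Equation (12)
        E = rho * impact_U2(rho, L0_bar, Cd)
    return rho[np.argmax(E)]

for Cd in (0.0, 0.05, 0.12, 0.44):
    print(f"Cd = {Cd:4.2f}:",
          [round(rho_op(L0, Cd), 3) for L0 in (1.0, 2.31, 10.0, 100.0)])
# Expected: Cd = 0 stays at (sqrt(3) - 1)/2 ~ 0.366 for every L0, while the
# viscous cases climb toward 0.5 as L0 grows, faster for larger Cd.
```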
Conclusions
Icebreaking by a free-rising sphere driven by its buoyancy was studied in this paper. The main concern was determining when the light sphere breaks the ice plate most severely, i.e., the optimal relative density of the sphere. A set of indoor experimental devices was designed, and high-speed photography was adopted to record the whole process, including the free rise of the sphere, the collision between the sphere and the ice plate, crack initiation and propagation, as well as the breakup of the ice plate. The failure modes of the ice plate caused by the impact and the influence of different parameters on the icebreaking ability of the sphere were explored. The conclusions are drawn as below: (1) Impacted by the free-rising buoyant sphere, the ice plate was broken. In some cases, a conical crevasse was formed under the reflected tensile wave at the top surface of the ice plate. A typical damage sequence of the ice plate under this impact was 'radial cracks, circumferential cracks, debris splashing and ice plate breakup'. As a result, four damage patterns were identified, namely the "RCs", "RCs⊕CCs", "debris splashing" and "ice-plate breakup" patterns, with the damage degree of the ice plate rising in that order; (2) Since it was impossible to directly measure the strain of the ice plate at the moment of impact of the sphere, we took the probability of the breakup of the ice plate as a criterion and tried to link it with the kinetic energy and the momentum of the sphere, which are two debated parameters in determining the damage degree of ice. For the working conditions described in this paper, we found that when the kinetic energy of the sphere peaks at ρ = 0.4, the probability of the ice plate breakup is the highest, namely 91.7%. It was found that the kinetic energy of the sphere, rather than the momentum at the moment of collision, dominates the damage degree of the ice plate. The greater the kinetic energy of the sphere, the more severely the ice plate was damaged. It was considered that part of the kinetic energy of the sphere was transformed into the fracture energy of the ice plate as well as the kinetic and potential energies of the ice debris and fragments; (3) The sphere with the optimal density ρ_op damaged the ice plate most severely. ρ_op can be estimated by a theoretical analysis of the kinetic energy of the sphere. It was found that ρ_op depends on the viscous effect of the fluid to a great extent. If the viscous effect is neglected, i.e., in the non-viscous case, ρ_op is identically equal to (√3 − 1)/2 (or 0.366). Otherwise, ρ_op declines along with the decrease of C_d for a given L̄_0, and rises along with the increase of L̄_0 for a given C_d, approaching 0.5 for a very large L̄_0 in the end.
In the future, research on numerical modelling of the interaction between the ice, the water and a buoyant sphere will be carried out. Furthermore, the effect of the boundary conditions of the ice plate will be studied, including free-floating boundary conditions, etc.
Figure 1. Sketch of the problem.

Figure 2. Free body diagram of a rising buoyant sphere.

Figure 3. Schematic diagram of ice-making in the cryostat.

Figure 5. Experimental setup: (a) supporting system and camera system; (b) ice-fixing system and sphere location and releasing system; (c) cross-section diagram of the ice-fixing supporter and the fixation of the ice plate; and (d) releasing device and the sphere.

Figure 6. Variations of the velocity of the sphere along with the displacement before colliding with the ice plate for the case ρ = 0.4, L̄_0 = 2.31 and h̄ = 0.089.

Figure 8. The breaking process of the ice plate under the impact of the buoyant sphere at different moments, as denoted in the subfigures, for the case ρ = 0.4, L̄_0 = 2.31 and h̄ = 0.089.

Figure 9. Schematic sequence of events in the impact, adapted from [29]: (a) immediately after impact, stress waves are generated; (b) relief waves propagate from the upper surface and fine debris splits away; (c) interacting relief waves cause fragmentation and spalling; and (d) the broken ice plate is reassembled, and the center of the ice plate forms some sloped fractures due to the impact of the sphere.

Figure 10. Typical damage patterns of ice plates with different L̄_0, as denoted in the subfigures, with the initial condition of ρ = 0.6 and h̄ = 0.071 at t = 0.1 s.

Figure 11. Typical damage patterns of ice plates with different dimensionless thickness h̄ with the initial condition of ρ = 0.6 and L̄_0 = 2.31 at t = 0.1 s (the upper photos are captured from a bird's-eye view by Camera 2 and the lower ones from a horizontal perspective by Camera 1).

Figure 12. Equilibrium positions of the sphere after colliding with the ice plate at different h̄, as denoted in the subfigures, with ρ = 0.6 and L̄_0 = 2.31.

Figure 13. The dimensionless velocity of the sphere just before contacting the ice plate versus the relative density, obtained by three methods, with h̄ = 0.089 and L̄_0 = 2.31 constant.

Figure 14. Dimensionless kinetic energy E_kt of the sphere just before contacting the ice plate versus the relative density, obtained by three methods, with h̄ = 0.089 and L̄_0 = 2.31 constant.

Figure 15. Dimensionless momentum M_t of the sphere just before contacting the ice plate versus the relative density, obtained by three methods, with h̄ = 0.089 and L̄_0 = 2.31 constant.

Figure 16. Destructiveness of ice plates caused by spheres of different relative densities ρ, as denoted in the subfigures, from different views.

Figure 17. Probability of the failure modes of the ice plate impacted by spheres with different densities.

Figure 18. The relationship between E_kt, M_t and the probability of the ice breakup with the relative density, obtained in experiments.

Figure 19. Dimensionless kinetic energy of the sphere versus the dimensionless density of the sphere with h̄ = 0.089 and L̄_0 = 2.31.

Figure 20. The relationship between ρ_op and L̄_0 with different C_d.
Alternative Ultrasound-Assisted Method for the Extraction of the Bioactive Compounds Present in Myrtle (Myrtus communis L.)
The bioactive compounds in myrtle berries, such as phenolic compounds and anthocyanins, have shown a potentially positive effect on human health. Efficient extraction methods are to be used to obtain maximum amounts of such beneficial compounds from myrtle. For that reason, this study evaluates the effectiveness of a rapid ultrasound-assisted method (UAE) to extract anthocyanins and phenolic compounds from myrtle berries. The influence of solvent composition, as well as pH, temperature, ultrasound amplitude, cycle and solvent-sample ratio on the total phenolic compounds and anthocyanins content in the extracts obtained were evaluated. The response variables were optimized by means of a Box-Behnken design. It was found that the double interaction of the methanol composition and the cycle, the interaction between methanol composition and temperature, and the interaction between the cycle and solvent-sample ratio were the most influential variables on the extraction of total phenolic compounds (92.8% methanol in water, 0.2 s of cycle, 60 °C and 10:0.5 mL:g). The methanol composition and the interaction between methanol composition and pH were the most influential variables on the extraction of anthocyanins (74.1% methanol in water at pH 7). The methods that have been developed presented high repeatability and intermediate precision (RSD < 5%) and the bioactive compounds show a high recovery with short extraction times. Both methods were used to analyze the composition of the bioactive compounds in myrtle berries collected from different locations in the province of Cadiz (Spain). The results obtained by UAE were compared to those achieved in a previous study where microwave-assisted extraction (MAE) methods were employed. Similar extraction yields were obtained for phenolic compounds and anthocyanins by MAE and UAE under optimal conditions. However, UAE presents the advantage of using milder conditions for the extraction of anthocyanins from myrtle, which makes of this a more suitable method for the extraction of these degradable compounds.
Table 1. Influential variables with their corresponding values studied in this work.

Analysis of variance (ANOVA) was applied to the set of results in order to evaluate the effect of the different factors on the response and the possible interactions between them. Tables 2 and 3 show the results obtained from this analysis. This information was supplemented with Pareto charts (Figure 1). Pareto charts show each effect and combination of effects as a bar, in decreasing order of significance, which makes it possible to visualize the influential variables and their degree of influence at a glance.

In the case of total phenolic compounds (Figure 1a), cycle (X_4) was the only linear term which had a significant effect, with a p-value lower than 0.01. Its effect on the response variable was negative (b_4 = −4.17). Numerous studies show that the cycle is an influential variable, since the use of ultrasound cycles (pulses) improves the extraction of certain compounds of interest, such as the phenolic compounds in natural matrices [31-33]. The negative effect means that a decrease in the cycle increases the extraction of phenolic compounds. This may be due to the negative chemical and physical effects of cavitation [34]. The negative effect is often due to the reactions of the free radicals formed during sonication with molecules in the medium [35], which accelerates the degradation of the phenolic compounds. In addition, the solvent composition had a significant quadratic influence (X_1²) on the response variable (p-value < 0.01). The solvent composition is an important variable, since phenolic compounds need to be extracted with solvents of similar polarity [36]. Specifically, X_1² showed a negative effect (b_11 = −8.06). With regard to interactions between factors, minor interactions between methanol and temperature (X_1X_2) (p-value < 0.01) and between cycle and solvent-sample ratio (X_4X_6) (p-value < 0.05) were observed. Both interactions showed positive coefficients (b_12 = 6.75 and b_46 = 6.70).
In the case of anthocyanins (Figure 1b), the solvent composition (X_1) was the only linear term that had a significant effect, with a p-value lower than 0.01. Its effect on the response variable was positive (b_1 = 4.80), which indicates that an increase in the methanol percentage in the solvent favored the anthocyanins content in the extract. Many studies in the literature show that hydroalcoholic mixtures are more efficient than pure solvents for the extraction of moderately polar molecules, such as phenolic compounds [36]. Phenolic compounds have a moderate polarity, so they are not extracted adequately when pure water (high polarity) is used. The use of methanol increases the solubility of the phenolic compounds, while the presence of a lower percentage of water helps the desorption of the solute from the sample [37]. With regard to the interactions between factors, a minor interaction between methanol and pH (X_1X_5) (p-value < 0.05) was observed, with a positive coefficient (b_15 = 6.51). These results agree with the bibliographic data [38], where the concentration of the organic solvent used and the pH are influential variables on the extraction and stability of anthocyanins from vegetable matrices. With regard to quadratic effects, no significant terms were obtained (p-value > 0.05).
The polynomial Equations (1) and (2) for the anthocyanins and the total phenolic compounds were obtained from the coefficients of the effects and interactions (Tables 2 and 3). Therefore, two second-order mathematical models were obtained to predict the Y_TA and Y_TP response values as a function of the independent variables. The lack-of-fit test showed p-values greater than 0.05 both for the phenolic compounds and for the anthocyanins, which means that both models fit well.
Both mathematical models can be reduced by omitting the non-significant terms (p-value > 0.05). Equations (3) and (4) of the two reduced models were expressed as follows:

Y TA (mg·g −1 ) = 23.3422 + 4.80026·X 1 − 6.60698·X 1 X 5 (3)

Y TP (mg·g −1 ) = 45.8533 − 4.17338·X 4 − 8.06168·X 1 2 + 6.75029·X 1 X 2 + 6.70505·X 4 X 6 (4)

The trends outlined above were recorded in three-dimensional (3D) surface plots using the fitted models in order to improve the understanding of both the main and the interaction effects of the most influential parameters. The combined effects of cycle-methanol, methanol-temperature and cycle-ratio on the total phenolic compounds recovery are represented in Figure 2a-c. The combined effect of solvent composition and pH on the total anthocyanins recovery is represented in Figure 2b.
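As a quick illustration of how the reduced models can be used, the short Python sketch below evaluates Equations (3) and (4) for coded factor levels. The function names and the coded-level convention (−1, 0, +1 for low, centre, high) are our own illustrative choices; the factor indices follow the usage of this Results section (X1 solvent composition, X2 temperature, X4 cycle, X5 pH, X6 solvent-sample ratio).

```python
# Minimal sketch: evaluating the reduced second-order models of Equations (3) and (4).

def predicted_total_anthocyanins(x1, x5):
    """Y_TA (mg/g) from Equation (3), with coded factors x1 and x5."""
    return 23.3422 + 4.80026 * x1 - 6.60698 * x1 * x5

def predicted_total_phenolics(x1, x2, x4, x6):
    """Y_TP (mg/g) from Equation (4), with coded factors x1, x2, x4 and x6."""
    return 45.8533 - 4.17338 * x4 - 8.06168 * x1**2 + 6.75029 * x1 * x2 + 6.70505 * x4 * x6

# Example: responses predicted at the centre point of the design (all coded factors = 0)
print(predicted_total_anthocyanins(0.0, 0.0))          # ~23.3 mg/g
print(predicted_total_phenolics(0.0, 0.0, 0.0, 0.0))   # ~45.9 mg/g
```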
Optimal Conditions
According to the experimental design, the ideal UAE conditions to extract the phenolic compounds were as follows: 92.8% methanol in water as a solvent, 60 • C extraction temperature, 65.48% ultrasound amplitude, 0.2 s cycles, pH 6.8, and 10:0.5 mL:g solvent-sample ratio. With regard to the temperature, no higher temperatures were verified, since they might imply a greater degradation of the compounds of interest and a high loss of methanol that would affect the solvent-sample ratio [39]. With respect to pH, an almost neutral value was determined as optimal, since different research shows that acidified solvents may enhance the formation of free radicals in aqueous solutions because of their higher concentration of H + or thermal treatment [32], which would hinder the recovery of the phenolic compounds [35].
With regards to anthocyanins, optimum UAE conditions were as follows: 74.1% methanol in water solvent, 10 • C extraction temperature, 30% ultrasound amplitude, 0.3 s cycles, pH 7, and 18:0.5 mL:g as the solvent-sample ratio. With respect to temperature, the lowest end of the range studied (10 • C) was determined as the optimum value. Although anthocyanins are also phenolic compounds, they are more thermally sensitive than other phenolic compounds. High temperatures can diminish the recovery of anthocyanins due mainly to oxidation, cleavage of covalent bonds or an increase in oxidation reactions as a result of the thermal treatment [40]. With respect to the solvent pH, neutral pH was found to be optimum for the extraction of anthocyanins. Although pH between 1 and 3 usually generates stable conformation for anthocyanins, there are many articles in the literature where the highest extraction yields take place with a higher pH (3-7) [34]. This behavior might be the effect of different factors on anthocyanins stability (light, temperature, extraction time, etc.), which may turn them into other compounds [41]. Specifically, some authors affirm that ultrasound can promote the degradation of the anthocyanins because of the radical hydroxyl (OH • ) and hydrogen peroxide (H 2 O 2 ) produced inside the cavitation bubbles when subjected to conditions, such as high ultrasonic power, high amplitude, low temperature and long treatment time [42]. No higher pH was checked for anthocyanins since this may cause unstable structures as a result of basic hydrolysis [43].
In conclusion, for both phenolic compounds and anthocyanins, maximum extractions were obtained when the solvent had a high percentage of methanol and a neutral pH. Specifically, the extraction of phenolic compounds required a higher methanol percentage.
Extraction Time
Once the effects of the variables on the extraction methods and the optimal values were known, the kinetics of the extractions was studied. Several extractions were carried out under the optimal ultrasound conditions while the extraction time was varied between 2, 5, 10, 15, 20, and 25 min. The average results obtained (n = 3) for phenolic compounds and for anthocyanins are represented in Figure 3.
It can be seen that large recoveries are achieved for both types of bioactive compounds and that long extraction times are not required. The phenolic compounds present their maximum extraction at 5 min. However, longer extraction times led to lower recoveries, probably due to degradation of the phenolic compounds [25]. With respect to the anthocyanins, 2 min was determined as their optimum extraction time, since it exhibited the same yields as longer times, while saving both time and costs.
Repeatability and Intermediate Precision of UAE Methods
The precision of the extraction methods was evaluated in terms of repeatability (intra-day) and intermediate precision (inter-day). Repeatability was evaluated by performing 10 extractions under the same conditions on the same day. Intermediate precision was evaluated by performing 10 additional extractions on each of the following two days. Altogether, 30 extractions were carried out under optimal extraction conditions to evaluate the precision of the extraction methods for phenolic compounds and for anthocyanins. This approach is employed in numerous studies [25,44]. The results were expressed as the coefficient of variation (CV) of the means. The repeatability results obtained were 2.95% for phenolic compounds and 2.23% for anthocyanins. The intermediate precision results were 4.66% for phenolic compounds and 4.15% for anthocyanins. As can be seen, all the results are within acceptable limits (±10%) according to AOAC [45] and supported the accuracy of the extraction methods for total anthocyanins and total phenolic compounds, with deviations lower than 5.0%.
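A minimal sketch of these precision calculations is given below, assuming 10 extractions per day over three days; the normally distributed values are placeholders, not the measured yields.

```python
# Sketch: repeatability (intra-day) and intermediate precision (inter-day) as CVs.
import numpy as np

def coefficient_of_variation(values):
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

rng = np.random.default_rng(0)
day_results = [rng.normal(45.9, 1.2, 10) for _ in range(3)]   # placeholder yields (mg/g), per day

repeatability = coefficient_of_variation(day_results[0])                       # intra-day (day 1)
intermediate_precision = coefficient_of_variation(np.concatenate(day_results)) # all 30 runs

print(f"Repeatability CV: {repeatability:.2f}%")
print(f"Intermediate precision CV: {intermediate_precision:.2f}%")
```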
Application of the Developed Methods to Ecotypes from Two Locations
In order to determine the applicability of the developed methods, once they had been optimized, they were applied to a new set of samples. Specifically, 14 myrtle ecotypes were evaluated: 8 ecotypes from the Puerto Real region (My-1, My-2, My-3, My-4, My-5, My-6, My-7, and My-8) and 6 from the San José del Valle region (My-9, My-10, My-11, My-12, My-13, and My-14). The phenolic compounds were extracted from the 14 ecotypes in duplicate by applying the UAE method according to the optimum conditions previously determined, which should ensure the greatest possible yields. The quantification of the total phenolic compounds content in the extracts was carried out with the Folin-Ciocalteau reagent.
The anthocyanins were also extracted from the 14 samples according to the optimal conditions determined for the developed UAE method. The anthocyanins content in the extracts was quantified by UHPLC-UV-vis. The total anthocyanins content is the result of adding up each individual anthocyanin content. The average extraction and quantification results are shown in Table 4.
Cluster Analysis
As a consequence of the chemical results obtained, it can be observed that there are differences between the average values obtained from the different myrtle ecotypes. To objectively study whether these visual differences are related to the origin of the ecotypes, a comparative chemometric study was carried out using all the average values. Specifically, the data matrix D 13×14 (variables × ecotypes) (Table 4) was evaluated using an exploratory tool, i.e., Hierarchical Cluster Analysis (HCA). Ward's method and the squared Euclidean distance were employed, and the variables for the differentiation were: the total phenolic compounds (mg·g −1 ) from each experiment; each individual anthocyanin content, 11 anthocyanins (mg·g −1 ); and total anthocyanins (mg·g −1 ). Myrtle is a shrub that grows better in warm and humid areas, and requires rich, humid soils. These characteristics match those of Puerto Real, which is near the sea with humid and sandy soils. San José del Valle has drier climate conditions and its clay soil is not so fertile [46,47]. These differences may lead to differences between the maturation processes of the autochthonous ecotypes in Puerto Real and San José del Valle, and consequently, to variations in their bioactive composition. The results of the analysis are graphically represented as a dendrogram in Figure 4. An obvious differentiation of the samples into two groups can be observed: Cluster A includes only the ecotypes from Puerto Real, and Cluster B only includes the ecotypes from San José del Valle. Therefore, based on their tendency to fall into a particular group in accordance with their origin, it can be said that the phenolic compounds and anthocyanins contents in each ecotype are related to the berries' geographical area of origin. Specifically, the ecotypes in Cluster A, from Puerto Real, present total phenolic compounds and anthocyanins contents greater than the ecotypes in Cluster B, from San José del Valle. The differences can be attributed to the different climatic and soil conditions mentioned above.
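A minimal sketch of this hierarchical cluster analysis is shown below, assuming a 14 × 13 data matrix of ecotypes by chemical variables as in Table 4; the random matrix is only a placeholder for the real measurements.

```python
# Sketch: hierarchical cluster analysis with Ward's method on the ecotype data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, dendrogram

rng = np.random.default_rng(0)
X = rng.normal(size=(14, 13))        # placeholder for the ecotype x variable matrix

# Ward's method agglomerates clusters by minimizing the increase in within-cluster
# variance, i.e. it is based on squared Euclidean distances, matching the settings
# reported for the Statgraphics analysis.
Z = linkage(X, method="ward")

labels = fcluster(Z, t=2, criterion="maxclust")   # cut the tree into two clusters (A, B)
print(labels)
# dendrogram(Z)   # with matplotlib, this reproduces a tree like the one in Figure 4
```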
Comparison Study: UAE vs. MAE
As above mentioned, the greatest phenolic compounds and anthocyanins yields using UAE were obtained under the following optimal conditions: methanol:water solvent ratio 92.8% v/v at 60 • C for phenolic compounds and methanol:water solvent ratio 74.1% v/v at 10 • C for anthocyanins.
In comparison with traditionally used methods for the extraction of the phenolic compounds in myrtle berries, UAE achieves a greater recovery of the compounds of interest, while using less solvent and in a shorter time, with the consequent cost reduction [11,17]. This increased effectiveness with greater extraction yields of both, total phenolic compounds and total anthocyanins, could be based on the phenomenon of cavitation, which breaks cell walls and releases the compounds of interest from the myrtle berries' matrices.
To round off this study, the results obtained using UAE were compared to those achieved by MAE in previous work. For that purpose, the same number of samples were run under optimum conditions and subsequently analyzed [22]. The total phenolic compounds and total anthocyanins contents extracted from myrtle berries at different times using UAE and MAE are shown in Figure 5.
With respect to the phenolic compounds (Figure 5a), a similar trend is observed in both extraction methods. The phenolic compounds yield increases until the maximum extraction value is reached at 5 min for UAE and at 15 min for MAE. From then on, the quantity of the extract begins to decrease. The optimum time for UAE, 5 min, indicates that UAE degrades phenolic compounds faster than MAE and the recovery is also lower. When compared to UAE, as recently reported by Ghafoor et al. [48,49], MAE obtains extracts with a substantially greater content of phenolic compounds than the one obtained by UAE.
With respect to the anthocyanins (Figure 5b), their content levels are very similar in MAE and UAE extracts. In addition, the extraction time required to achieve good yields (2 min) is low for both methods, which would considerably reduce costs when operating at an industrial scale. At the optimal time of two minutes, the anthocyanins extraction is slightly higher when MAE is used. However, at longer times, MAE extracts have lower anthocyanins content [50]. The optimal MAE operating temperature (50 °C) causes the thermally labile anthocyanins to begin degrading earlier.
Additionally, both techniques were applied to different myrtle ecotypes (Figure 6). As noted above, MAE stands out as a more efficient method for the extraction of the phenolic compounds in myrtle berries. With respect to anthocyanins, although some particular ecotypes produced greater yields by MAE, most of them produced greater yields when UAE was employed. Anthocyanins are extremely susceptible to degradation, and the combination of high pressure and temperature that is employed for MAE would enhance such degradation and negatively affect their recovery. Therefore, UAE should be seriously considered as the preferred method for the extraction of the anthocyanins in myrtle berries.
Plant Materials
The biological materials used for this study were different myrtle berries (14 ecotypes) collected by the authors from two different areas in Cadiz province during their optimum ripeness stages in 2016. The first collection area was Puerto Real (eight ecotypes). This area is characterized by its humid climate due to its proximity to the sea. The second collection area was San José del Valle (six ecotypes), also within Cadiz province, but located inland, 50 km from the coast. This location has a drier climate and its soils have a lower water content. The guidelines described by M. Mulas and M.R. Cani [47] were applied to characterize the morphology of both leaves and berries, to confirm that the samples had been collected from different ecotypes. The samples were subjected to a pretreatment to improve the contact surface with the solvent [51]. First, the seeds were separated from the pulp. Secondly, the pulp was lyophilized in a Virtis Benchtop K freeze dryer (SP Scientific, New York, United States) and crushed by means of a regular spice grinder. Finally, the samples were stored in a freezer at −20 °C prior to analysis.
Chemicals and Solvents
The solvents used for the extraction were mixtures of methanol and water at different concentration levels and with different pH values. The methanol (Fisher Scientific, Loughborough, United Kingdom) was HPLC grade. Ultra-pure water was obtained from a Milli-Q water purification system (EMD Millipore).
Ultrasound-Assisted Extraction Procedure
To extract the total phenolic compounds and the total anthocyanins from the myrtle berries, UAE was used. A UP200S probe (Hielscher Ultrasound Technology, Berlin, Germany) was employed, coupled to a processor that allows the amplitude and the cycle to be adjusted. For the adjustment of the temperature, a thermostatic bath (Frigiterm-10, Selecta, Barcelona, Spain) was employed. The temperature, the cycle, and the amplitude were selected for each extraction according to the experiment. About 0.5 g of the lyophilized and homogenized sample was weighed into a Falcon tube and the corresponding volume of solvent was added depending on the experiment. The Falcon tube was placed in a double vessel through which the water from the thermostatic bath circulated. The initial extraction time set was 10 min, followed by a sample cooling time. After that time, the extracts were centrifuged (7500 rpm, 5 min) and the supernatants were placed in a 25 mL volumetric flask. The precipitates from the extraction were redissolved in 5 mL of the same extraction solvent and centrifuged again under the same conditions. The new supernatants were placed in the volumetric flask, which was then made up to volume with the same solvent. The final extracts were stored at −20 °C for their correct conservation until further analysis. The UAE conditions set for the extractions were: solvent composition (50-100% methanol in water for phenolic compounds and 25-75% for anthocyanins), temperature (10-60 °C), amplitude (30-70%), cycle (0.2-0.7 s), pH (2-7), and solvent-sample ratio (10:0.5-20:0.5 mL:g).
Determining the Content of Total Phenolic Compounds by Folin-Ciocalteau Assay
The total phenolic compounds content in myrtle berries was determined by an adapted/modified Folin-Ciocalteau (FC) method [52]. This method has been previously used by many researchers to determine the total phenolic compounds content [53,54]. It is based on a redox reaction in a basic medium that gives rise to a blue-colored complex with a broad absorption up to 765 nm. The extracts were filtered using a 0.45 µm nylon syringe filter (Membrane Solutions, Dallas, United States). The protocol of the method is the following: 250 µL of the previously filtered extract was transferred to a 25 mL volumetric flask. After this, 12.5 mL of water, 1.25 mL of Folin-Ciocalteau reagent, and 5 mL of a 20% aqueous sodium carbonate solution were added. Finally, the flask was made up to volume with water, and after 30 min the absorbance was measured at the maximum. All the extracts were analyzed in duplicate. The range of absorbance obtained for the studied samples was 0.4-1.4. The equipment used to measure the absorbance was a Helios Gamma (γ) Unicam UV-vis spectrophotometer (Thermo Fisher Scientific, Waltham, MA, United States). The calibration curve for the quantification was constructed based on a gallic acid reference standard treated under the same conditions as the extracts [55]. The computer application used to process the data was Microsoft Office Excel 2013. The following regression equation, y = 0.0010x + 0.0059, and correlation coefficient, R 2 = 0.9999, were obtained. The linear working range was 100-2600 mg·L −1 . The results are expressed in milligrams of gallic acid equivalent per gram of lyophilized weight.
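The short sketch below illustrates how such absorbance readings can be converted to mg of gallic acid equivalents per gram of sample using the reported calibration. The volume/mass bookkeeping is our assumption from the described protocol (extracts prepared in 25 mL from 0.5 g of sample), and the calibration is assumed to refer to the undiluted extract, since the standards were treated in the same way as the extracts.

```python
# Sketch: Folin-Ciocalteau quantification from an absorbance reading at 765 nm.

SLOPE, INTERCEPT = 0.0010, 0.0059   # absorbance = SLOPE * c(mg/L gallic acid) + INTERCEPT
EXTRACT_VOLUME_L = 0.025            # 25 mL final extract volume (assumption from the protocol)
SAMPLE_MASS_G = 0.5                 # lyophilized sample weighed per extraction

def gallic_acid_equivalents(absorbance_765nm):
    """Return mg of gallic acid equivalents per g of lyophilized sample."""
    c_extract = (absorbance_765nm - INTERCEPT) / SLOPE   # mg GAE per L of extract
    return c_extract * EXTRACT_VOLUME_L / SAMPLE_MASS_G

# An absorbance of 0.9 (middle of the reported 0.4-1.4 range) gives roughly 45 mg/g,
# the same order of magnitude as the Y_TP values discussed in the Results.
print(gallic_acid_equivalents(0.9))
```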
Identification of Anthocyanins by UHPLC-QToF-MS
Ultra-high performance liquid chromatography (UHPLC) coupled to quadrupole-time-of-flight mass spectrometry (QToF-MS) (Xevo G2 QToF, Waters Corp., Milford, MA, United States) was used to identify the anthocyanins in the UAE extracts. The column employed was a reverse-phase C18 analytical column with 1.7 µm particle size, 2.1 mm × 100 mm (ACQUITY UPLC CSH C18, Waters). The mobile phase was 2% formic acid-water solution (phase A) and methanol solution (phase B). The studied bioactive compounds were determined by employing the UHPLC-QToF-MS method described in a previously research [22]. The individual anthocyanins were identified based on their retention time and molecular weight. The following eleven anthocyanins were identified in the samples:
Determination of Anthocyanins by UHPLC-UV-Vis System
For the separation and quantification of the anthocyanins present in UAE extracts from myrtle berries, an Elite UHPLC LaChrom Ultra System (Hitachi, Tokyo, Japan) was used. The UHPLC system consists of an L-2420U UV-Vis detector, an L-2200U autosampler, an L-2300 column oven set at 50 °C, and two L-2160U pumps. The column used was a "Fused Core" C18 with 2.6 µm particle size, 2.1 mm × 100 mm (Phenomenex Kinetex, Torrance, CA, United States). The mobile phase consisted of a 5% formic acid-water solution (phase A) and a methanol solution (phase B). The studied bioactive compounds were determined by employing the UHPLC-UV-Vis method described in previous research [22]. Before their analysis, all the UAE extracts were filtered through a 0.20 µm nylon syringe filter (Membrane Solutions, Dallas, TX, United States) and diluted in Milli-Q water. The individual anthocyanins present in myrtle extracts were quantified in cyanidin equivalents by means of a regression curve of the anthocyanidin standard cyanidin chloride (y = 252640.4136x − 28462.4337; R 2 = 0.9999). The standards with known concentrations were prepared between 0.06 and 35 mg·L −1 . The limit of detection (LOD) (0.196 mg·L −1 ) and the limit of quantification (LOQ) (0.653 mg·L −1 ) were calculated as three and ten times, respectively, the standard deviation of the blank divided by the slope of the calibration curve. Assuming that the 11 anthocyanins have similar absorbance, and taking into account the molecular weight of each anthocyanin, a calibration curve was derived for each anthocyanin present in myrtle, which allowed the compounds of interest to be quantified. All the analyses were carried out in duplicate. Figure 8 shows the HPLC chromatogram with the eleven anthocyanins detected in the analyses.
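The sketch below illustrates this quantification and the LOD/LOQ definition (3× and 10× the blank standard deviation over the slope). The calibration is the cyanidin chloride curve reported above; the blank signals and the example peak area are illustrative placeholders only.

```python
# Sketch: anthocyanin quantification from a UHPLC-UV peak area, plus LOD/LOQ.
import numpy as np

SLOPE, INTERCEPT = 252640.4136, -28462.4337   # peak area = SLOPE * c(mg/L) + INTERCEPT

def concentration_mg_per_L(peak_area):
    return (peak_area - INTERCEPT) / SLOPE

def lod_loq(blank_signals):
    sd_blank = np.std(blank_signals, ddof=1)
    return 3 * sd_blank / SLOPE, 10 * sd_blank / SLOPE

blank = np.array([1.2e5, 1.4e5, 1.1e5, 1.5e5, 1.3e5])   # placeholder blank readings
print(lod_loq(blank))                 # roughly (0.19, 0.63) mg/L, near the reported limits
print(concentration_mg_per_L(2.5e6))  # example peak area -> about 10 mg/L cyanidin equivalents
```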
Application of Box-Behnken Design (BBD) to the Optimization of the Extraction Methods
In order to optimize the extraction variables, a response surface methodology (RSM) experiment known as a Box-Behnken design (BBD) was carried out [56]. The Box-Behnken design is an independent, rotatable quadratic design with no embedded factorial or fractional factorial points; the variable combinations lie at the midpoints of the edges and at the center of the design space [57]. It is useful because it allows one to avoid carrying out experiments under extreme conditions and, therefore, the possibility of misleading results [58]. When this statistical experimental design is employed in conjunction with response surface methodology, the effects of six independent factors on each response can be studied. The independent factors studied were: solvent composition (% methanol in water) (X 1 ), solvent pH (X 2 ), extraction temperature (X 3 ), ultrasound amplitude (X 4 ), cycle (X 5 ), and solvent-sample ratio (X 6 ). For each variable, there are three levels, coded as −1 (low), 0 (central point or middle), and +1 (high). Specifically, the studied ranges were as follows: solvent composition: 50, 75, 100% for phenolic compounds and 25, 50, 75% for anthocyanins; temperature: 10, 35, 60 °C; amplitude: 30, 50, 70%; cycle: 0.2, 0.45, 0.7 s; pH: 2, 4.5, 7; and solvent-sample ratio: 10:0.5, 15:0.5, 20:0.5 mL:g. The ranges for the study were selected taking into account previous experience of the research team. The response variables studied were the experimental results for total phenolic compounds (Y TP , mg·g −1 ) and the experimental results for total anthocyanins (Y TA , mg·g −1 ). The design consisted of 54 treatments with six repetitions at the center point. All the trials were performed in random order. The whole experimental design matrix used can be seen in Table 5. My-9 from San José del Valle was the myrtle sample used for the optimization procedure. The total phenolic compounds and total anthocyanins responses obtained in each of the experiments were fitted to a second-order polynomial equation in order to correlate the relationship between the independent variables and the responses (Equation (5)):

Y = β 0 + Σ β i X i + Σ β ii X i 2 + Σ β ij X i X j + r (5)

where Y is the predicted response (Y TP and Y TA ); β 0 is the model constant; X i and X j are the independent variables; β i are the linear coefficients; β ij are the coefficients corresponding to the interactions; β ii are the quadratic coefficients; and r is the pure error sum of squares. Design Expert software 11 (Trial Version, Stat-Ease Inc., Minneapolis, MN, USA) was the software employed for the experimental design, the data analysis, and the model building. The statistical significance of the model, the lack of fit, and the regression terms were evaluated based on the analysis of variance (ANOVA).
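A minimal sketch of fitting this second-order model by ordinary least squares is shown below. The 54-run design matrix and the responses are random placeholders standing in for Table 5; in practice the coded design levels and the measured Y TP or Y TA values would be used, and dedicated software such as Design Expert would also report the ANOVA and lack-of-fit statistics.

```python
# Sketch: least-squares fit of the second-order response-surface model of Equation (5).
import itertools
import numpy as np

def quadratic_design_matrix(X):
    """Columns: intercept, linear terms, quadratic terms, and two-factor interactions."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]                                             # beta_i
    cols += [X[:, i] ** 2 for i in range(k)]                                        # beta_ii
    cols += [X[:, i] * X[:, j] for i, j in itertools.combinations(range(k), 2)]     # beta_ij
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.choice([-1.0, 0.0, 1.0], size=(54, 6))   # placeholder coded factor levels (54-run BBD)
y = rng.normal(45.0, 5.0, size=54)               # placeholder responses (e.g. Y_TP in mg/g)

A = quadratic_design_matrix(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)     # least-squares estimates of the coefficients
print(beta[:7])                                  # intercept and the six linear effects
```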
The results of applying the extraction methods to the different myrtle ecotypes were studied using a multivariate analysis, hierarchical cluster analysis (HCA), in which Ward's method and the squared Euclidean distance were employed. Statgraphics Centurion XVII (Statgraphics Technologies, Inc., The Plains, VA, United States) was the software used.
Conclusions
This work has successfully developed quick and effective methods to extract bioactive compounds, such as anthocyanins and total phenolics from myrtle (Myrtus communis L.) pulp. A thorough search in the relevant literature showed that this is the first study in which UAE has been optimized for the extraction of phenolic compounds and anthocyanins from myrtle berries. The following optimal UAE conditions have been determined to extract the phenolic compounds: 92.8% methanol in water, 6.8 pH, 60 • C temperature, 65.48% ultrasound amplitude, 0.2 s cycle, and 10:0.5 as the optimum solvent-sample ratio. With regards to anthocyanins, optimal UAE conditions were: 74.1% methanol in water, 7 pH, 10 • C temperature, 30% ultrasound amplitude, 0.3 s cycle, and 18:0.5 as the optimum solvent-sample ratio. The optimum extraction times were only 5 and 2 min for phenolic compounds and anthocyanins, respectively. Both extraction methods presented satisfactory intra-day repeatability and inter-day repeatability (CV < 5%). The methods were applied to 14 different myrtle ecotypes.
The hierarchical cluster analysis (HCA), showed a correlation between the bioactive composition (total phenolic compounds and total and individual anthocyanins contents in the extracts) and the ecotypes' geographical area of origin. In conclusion, the results have indicated that UAE is a feasible alternative to conventional methods for the extraction of valuable components from myrtle berries. These results would mean a substantial improvement at the industrial level, since they would allow the manufacturers to quickly determine the quality of the raw materials and save costs. Furthermore, UAE results were compared to those achieved by MAE. The proposed UAE method proved to be an effective procedure to extract the bioactive compounds in myrtle berries, and a particularly efficient alternative for the extraction of anthocyanins.
Humans plan for the near future to walk economically on uneven terrain

Humans experience small fluctuations in their gait when walking on uneven terrain. The fluctuations deviate from the steady, energy-minimizing pattern for level walking, and have no obvious organization. But humans often look ahead when they walk, and could potentially plan anticipatory fluctuations for the terrain. Such planning is only sensible if it serves some objective purpose, such as maintaining constant speed or reducing energy expenditure, that is also attainable within finite planning capacity. Here we show that humans do plan and perform optimal control strategies on uneven terrain. Rather than maintain constant speed, they make purposeful, anticipatory speed adjustments that are consistent with minimizing energy expenditure. A simple optimal control model predicts economical speed fluctuations that agree well with experiments with humans (N = 12) walking on seven different terrain profiles (correlation with model ρ = 0.55 ± 0.11, P < 0.05 for all terrains). Participants made repeatable speed fluctuations starting about six to eight steps ahead of each terrain feature (up to ±7.5 cm height difference each step, up to 16 consecutive features). Nearer features matter more, because energy is dissipated with each succeeding step collision with the ground, preventing momentum from persisting indefinitely. A finite horizon of continuous look-ahead and motor working space thus suffices to practically optimize for any length of terrain. Humans reason about walking in the near future to plan complex optimal control sequences.
Humans experience small fluctuations in their gait when walking on uneven terrain. The fluctuations deviate from the steady, energy-minimizing pattern for level walking, and have no obvious organization. But humans often look ahead when they walk, and could potentially plan anticipatory fluctuations for the terrain. Such planning is only sensible if it serves some an objective purpose, such as maintaining constant speed or reducing energy expenditure, that is also attainable within finite planning capacity. Here we show that humans do plan and perform optimal control strategies on uneven terrain. Rather than maintain constant speed, they make purposeful, anticipatory speed adjustments that are consistent with minimizing energy expenditure. A simple optimal control model predicts economical speed fluctuations that agree well with experiments with humans (N = 12) walking on seven different terrain profiles (correlated with model ro=0.55+-0.11, P<0.05 all terrains). Participants made repeatable speed fluctuations starting about six to eight steps ahead of each terrain feature (up to +-7.5 cm height difference each step, up to 16 consecutive features). Nearer features matter more, because energy is dissipated with each succeeding step collision with ground, preventing momentum from persisting indefinitely. A finite horizon of continuous look ahead and motor working space thus suffice to practically optimize for any length of terrain. Humans reason about walking in the near future to plan complex optimal control sequences.
Introduction
Walking over irregular terrain presents considerable control challenges compared to level ground ( Figure 1A). Whereas level walking is periodic and largely explained by minimizing energy expenditure (1)(2)(3), uneven terrain disrupts periodicity and costs considerably more energy (4,5).
Steps onto ground of uneven height perturb forward walking and cause walking speed to fluctuate ( Figure 1B), in a pattern dictated by a combination of the terrain and the human compensatory control strategy ( Figure 1C). Any number of control strategies are possible, and could be determined by criteria such as to maintain constant speed or minimize energy expenditure. But such control is also potentially complex, because the optimal trajectory could depend on the entire terrain profile, which must therefore be anticipated and planned for at once. It is unknown whether humans anticipate and plan for uneven steps, whether for energy economy or other criteria, and how they manage the complexity. Walking on uneven terrain may thus be indicative of human capacity to anticipate and plan complex control tasks.
Many environmental challenges to walking are anticipated ahead of time. Humans obviously look far ahead to navigate and to plan an overall route. They also look more closely-only a few steps ahead-to step over an obstacle (6) or to select a foothold (7,8). But in addition to kinematics, walking also has significant inertial dynamics (9) and thus momentum. For example, birds appear to use dynamics and leg posture to adjust for a change in ground height while walking (10). Similarly, humans appear to compensate for an upward step such as a sidewalk curb by gradually speeding up a few steps ahead, losing momentum while ascending the curb, and finally gradually regaining speed a few steps after (11). A nearly opposite strategy applies to stepping down a curb, and the compensations of birds and humans seem compatible with energy minimization. It appears that a single uneven step is anticipated ahead of time, and that the compensation takes place over several surrounding steps.
A sequence of multiple uneven steps may require more complex compensation (Figure 1A). The momentum of one step carries forth into succeeding ones, and thus the compensation for one isolated uneven step might be incompatible with its neighbors. It may therefore be important to anticipate and plan a trajectory for many steps at once (Figure 1B), but at the cost of computational complexity that typically increases exponentially with the number of steps, termed a "curse of dimensionality" (12,13). There is thus need for either a means to plan optimal trajectories despite high dimensionality, or a heuristic and practical but suboptimal compensation strategy.
These possibilities depend critically on the mechanics and energetics of walking. One of the key models of walking treats the stance leg like an inverted pendulum, which carries the body center of mass (COM, near the pelvis) in a pendular arc (14). That motion is relatively conservative of mechanical energy, except during transition from one pendulum-like leg to the next, or step-to-step transition (15). There is a dissipative collision between leading leg and ground, which is restored by active work by muscles of the trailing leg, at a proportional cost that accounts for most of the metabolic energy expended for level walking (3). Such a model can predict the optimal forward speed fluctuations for ascending the sidewalk curb mentioned above (16). The dissipation also causes one step's forward momentum to decay within only a few consecutive steps, termed a persistence distance (16). It may thus be unnecessary to plan an unlimited number of neighboring steps at once, but rather only a finite, receding horizon of upcoming steps. Indeed, robotic locomotion has long used a similar paradigm of model predictive control (17), whereby optimization is performed for a finite horizon (18), and re-computed each step within a fast timing loop to act as a form of feedback control. Planning over a finite receding horizon is often practically quite similar to truly optimizing over all steps.
The optimization approach has been applied to a variety of human motions. Upper extremity reaching motions have long been thought to minimize kinematic objectives (19), and more recently energy expenditure (20,21). Similarly, walking routes have been proposed to minimize kinematic objectives such as speed-curvature trade-offs (22), and empirical energy objectives to predict curved walking paths through doorways (23). These results suggest the potential to mechanistically predict the dynamics of walking, but most studies to date have been concerned with steady, periodic gait (24). The challenge of walking on uneven terrain suggests the need to integrate mechanics, energetics, and planning, while somehow avoiding or mitigating high complexity.
The purpose of the present study was to test whether humans can and do optimize their compensations for uneven terrain consisting of a complex pattern of unequal height steps. We used optimization of the pendulum-like walking model to predict speed fluctuation trajectories for a variety of uneven terrain profiles ( Figure 1B). We also considered whether a finite planning horizon can yield similar predictions, but without the complexity of a full horizon. We then performed a human subjects experiment, to test whether the optimal compensations could predict human responses to a similar variety of terrains. If humans do favor energy economy as modeled by stepto-step transition dynamics, and the optimization is feasible and predictive for multiple steps, then the model should reasonably predict human responses. This will reveal whether humans favor energy economy when walking on uneven terrain, and whether and how far they plan into the future.
Model Predictions
The simple walking model demonstrates compensatory control strategies for walking on uneven terrain. The model shows how optimization criteria predict specific trajectories of walking speed, time duration, and energy cost. Here the proposed Minimum energy objective is contrasted with two examples of alternative criteria. These criteria, applied to multiple uneven terrain profiles, result in distinct, terrain-specific predictions, which are to be tested experimentally. Finally, the model predicts how planning horizon affects walking motions, to contrast full vs. finite horizon planning.
Three alternative control strategies are examined: Reactive control, Tight regulation, and the proposed Minimum energy objective. The task is to traverse multiple steps of uneven height in nominal time (Fig. 2A; nominal defined as steady level walking), using a model of pendulum-like leg dynamics (Fig. 2B), where momentum is controlled by modulating active push-off on each uneven step (Fig. 2C, push-off "PO"). The control strategies are as follows. Reactive control does not anticipate the upcoming terrain, and instead compensates for disturbances after they happen, with the minimum work achievable for this case. Tight regulation is a high-gain compensation that looks only one step ahead, and immediately restores speed for that step, regardless of energetic cost. Minimum energy plans an optimal strategy for the entire (full horizon) terrain sequence in advance, to minimize the push-off work for step-to-step transitions. As demonstration, these strategies are applied to a sample uneven terrain profile that gently slopes up and then down over several steps (termed Pyramid, Fig. 2D).
The model predicts that all three strategies can traverse the terrain in nominal overall speed and time (zero cumulative time gain) after traversing the Pyramid terrain, but at different costs. Reactive control and Tight regulation respectively perform 12.1% and 9.2% more push-off work than nominal level gait (0.718 overall work, expressed in units of the gravitational potential energy of the model's center of mass, COM). The Minimum energy strategy is most economical, performing about 6.2% more work (0.044 in the same units) than nominal. The optimal speed profile starts several steps before, and ends several steps after, the uneven terrain. It achieves economy by pushing off harder and speeding up several steps ahead, in anticipation of a loss of speed while ascending the upward steps. The optimal descent is nearly the reverse in time, regaining speed on the downward steps, and then slowing down again after them. The overall strategy gains time in the first half and then loses it in the second (Fig. 2, Time gain). This saves work compared to any other alternative, and requires 80% less work than the gravitational potential energy of the Pyramid itself (0.225 in the same units). It is thus possible to ascend steps with less work than would be needed to lift the body against gravity alone. This demonstrates the advantage of planning strategically for an entire terrain sequence at once.
Using the Minimum-energy strategy, the model also predicts distinct compensation strategies for each of seven experimental terrain profiles (Fig. 3), in comparison to level nominal walking (Control). The terrains consist of sequences of discrete ground-height changes, each up to 0.075 leg lengths. The profiles have between one and sixteen such height changes, and are termed Up-step (U), Down-step (D), Pyramid (P, see also Fig. 2), Up-Down (UD), Down and Up-Down (D&UD), Complex 1 (C1), and Complex 2 (C2). The task is to traverse the terrain and regain nominal speed and time, with compensations starting no earlier than six steps before the first uneven step and ending no later than six steps after. Given a full planning horizon, the optimal strategy for Up-step (Down-step) is a tri-phasic speed fluctuation pattern, as described previously (16). The Pyramid strategy (repeated from Fig. 2) is accompanied by parameter variations (Fig. 3, Parameters inset) for three different speeds and four different step lengths (shorter, nominal, and longer fixed lengths, and lengths increasing with speed according to the human preferred relationship, as detailed in Methods). The speed profiles vary in overall amplitude and time for these cases, but all retain a similar scalable shape in terms of discrete steps. The remaining terrains also yield deterministic, scalable, and distinct optimal profiles, whose fluctuation patterns are to be tested against humans.
The Minimum-energy strategies (Fig. 3) constitute the main testable predictions for humans, who regardless of self-selected speed and step length, are expected to produce speed fluctuation patterns similar to model. The predictions extend beyond the section of uneven terrain alone, and also include anticipatory speed changes before the first uneven step, and recovery beyond the final uneven step. There are, of course, many other models and optimization objectives possible, but each would be expected to produce distinct predictions for various terrain profiles, making the present model falsifiable.
The model also predicts compensation strategies for shorter, finite planning horizons (Fig. 4). In contrast to the previous full horizon, here the optimization plans based only on a finite horizon of upcoming steps at a time. Only the first step of the planned horizon is executed, because the horizon is continually updated and a new compensation strategy planned, based on this finite receding horizon. The planner is aware of the ultimate goal of regaining nominal time, but not of the distant terrain until it enters the horizon. Very short horizons yield speed fluctuations and push-off work trajectories (Fig. 4A and 4B, respectively) very different from full-horizon control. Short horizons are less optimal and expend more work, whereas longer horizons are less costly and asymptotically approach the full-horizon optimum (Fig. 4C). There is at least 90% correlation for horizons of eight or more steps, and thus diminishing advantage to be gained from planning over longer horizons.
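The Python sketch below illustrates the receding-horizon idea with a deliberately simplified toy: each step loses a fixed fraction of kinetic energy to the ground collision, push-off adds energy, and ground-height changes exchange kinetic and potential energy. This is not the authors' pendulum model, and all parameter values are arbitrary; it only shows how planning over a finite horizon and executing the first step can be organized.

```python
# Toy receding-horizon (model-predictive) planner for walking over uneven step heights.
import numpy as np
from scipy.optimize import minimize

G, LOSS, STEP_LEN, V_NOM = 9.81, 0.10, 0.7, 1.25   # toy parameters, per unit body mass

def next_speed(v, u, dh):
    """Speed after one step: collision loss, push-off work u, ground-height change dh."""
    e = (1.0 - LOSS) * (0.5 * v**2 + u) - G * dh
    return np.sqrt(max(2.0 * e, 1e-6))

def rollout(v0, pushoffs, heights):
    v, speeds = v0, []
    for u, dh in zip(pushoffs, heights):
        v = next_speed(v, u, dh)
        speeds.append(v)
    return np.array(speeds)

def plan(v0, heights, horizon):
    """Plan push-off work over the next `horizon` steps; return only the first command."""
    dh = heights[:horizon]
    u_nom = LOSS * 0.5 * V_NOM**2 / (1.0 - LOSS)   # push-off that sustains V_NOM on level ground
    def cost(u):
        speeds = rollout(v0, u, dh)
        time_err = np.sum(STEP_LEN / speeds) - len(dh) * STEP_LEN / V_NOM
        return np.sum(u) + 50.0 * time_err**2       # total work + soft nominal-time constraint
    res = minimize(cost, np.full(len(dh), u_nom), bounds=[(0.0, 2.0)] * len(dh))
    return res.x[0]

terrain = np.array([0.0, 0.0, 0.05, 0.05, -0.05, -0.05, 0.0, 0.0, 0.0, 0.0])  # heights (m)
v, speeds = V_NOM, []
for k in range(len(terrain)):
    lookahead = np.concatenate([terrain[k:], np.zeros(8)])   # pad so the horizon is always full
    u = plan(v, lookahead, horizon=8)
    v = next_speed(v, u, terrain[k])
    speeds.append(v)
print(np.round(speeds, 3))   # anticipatory speed-up appears before the upward steps
```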
Experimental results
Human subjects produced a unique speed fluctuation profile as they walked on each terrain (Fig. 5). Even though each person walked at their own self-selected overall speed, they exhibited similar patterns in speed fluctuations (Fig. 5, compare individual speeds and overall speed fluctuations). These patterns compared reasonably well with model predictions. We first report a basic summary of the control condition of level walking, which serves as a basis of comparison for the uneven terrains (Fig. 6). There was a small range of self-selected control speeds (see Figure 6A): 1.38 ± 0.10 m/s, 1.52 ± 0.13 m/s, and 1.51 ± 0.12 m/s (mean ± s.d. across subjects) in the control conditions for each of the three sets of terrain (see Methods). Each person walked fairly consistently, with average speed varying by about 2.4-4.8% c.v. (coefficient of variation) across their control trials. This consistency occurred despite the fact that subjects received no feedback regarding walking durations or speeds. Speed also varied somewhat within each control trial, with about 0.034 ± 0.006 m/s root-mean-square variability, or 2.2% c.v.
Humans approximately conserved overall walking speed and duration on uneven terrain
Subjects walked at similar overall speeds on all uneven terrains regardless of complexity (Fig. 6A). The overall speed and walking duration differences within each set of terrain conditions, including the corresponding level Control, were typically less than 6% and none were statistically significant: repeated measures ANOVA yielded (experiment set 1) P = 0.63 for speed and P = 0.97 for segment duration; (set 2) P = 0.51 for speed and P = 0.78 for segment duration; (set 3) P = 0.96 for speed and P = 0.52 for segment duration. Each individual's walking speeds were also fairly consistent across trials of a specific terrain, varying by 2-3% (coefficient of variation, c.v.). A time loss or difference would be expected if there were no compensation (Fig. 2). The observed conservation of the overall walking duration and walking speed, regardless of the terrain, indicates that humans compensated for all terrains. The approximate time conservation was in accordance with the experimental instructions, and occurred despite no feedback having been provided regarding speed or duration at any point in the experiment.
Humans produced a repeatable speed fluctuation pattern for each uneven terrain
Each terrain yielded a specific speed fluctuation profile vs. time (Fig. 5). Two basic quantifications are the overall speed variability for a terrain (Fig. 6B, root-mean-square variability) and the maximum speed fluctuations within that terrain (Fig. 6C). The variability was greater than the corresponding control (all P < 0.05; paired t-tests), ranging from about 2 to 5% (c.v.). The maximum fluctuations differed in magnitude and location for different terrains. For example, the largest deviation was +5.68% at step −1 for Up-step terrain, and +4.07% at step 1 and +4.4% at step 2 for Down-step. For the other terrains, the greatest deviations occurred at other locations: step 1 for Up-Down (−4.97%), step 3 for Down-and-Up-Down (−11.08%), step 9 for Pyramid (+7.79%), step 16 for Complex 1 (+7.31%), and step 16 for Complex 2 (+7.14%). These maximum speed fluctuations were all greater (all P < 0.05; paired t-tests) than those in the Control conditions (maximum magnitudes 0.86%, 0.89%, 1.03% for the three sets of terrain, respectively). On average, the maximum speed fluctuations were about 7.6 times greater in magnitude on uneven terrain than Control. The uneven terrain speed fluctuations do not support the Tight regulation hypothesis, for which very small deviations were expected.
The compensations included anticipatory and recovery components. Anticipation may broadly be summarized as a tendency to speed up before a first upward step (U, UD, P, C1, C2 terrains), and to slow down before a first downward step (D and D&UD terrains). For initial upward steps, the preceding step (step −1) appeared to speed up by 3.74-5.70% relative to nominal speed, and the step preceding initial downward steps appeared to slow down by about 2.05%. Moreover, the two preceding steps (average of steps −2 and −1) for uneven terrains tended to exhibit significant anticipatory speed-up or slow-down consistent with prediction (Fig. 6D, all terrains except D&UD P < 0.007; paired t-tests with Bonferroni correction). Similarly, there was also a significant recovery component, summarized by the average speeds of the two steps after uneven terrain (Fig. 6E, all P < 0.007; paired t-tests with Bonferroni correction). The anticipatory adjustments do not support the Reactive compensation and Tight regulation hypotheses, and the recovery adjustments do not support Tight regulation.
The individual speed responses were similar across subjects (Fig. 6F). This is quantified by the correlation between individual speed fluctuations and the (across-subject) average patterns, which were significant and positive for all terrains. The correlations ranged 0.66 ± 0.13 to 0.85 ± 0.11 (for Complex 2 terrain and Up terrain, respectively; mean ± s.d.). Some control trials also had a small amount of correlation (0.46 ± 0.33, 0.32 ± 0.32, 0.29 ± 0.40 respectively for the three sets), although the maximum speed fluctuations were generally about one-sixth the magnitude for uneven terrain (Fig. 6C). Incidentally, compensation strategies that were predicted to be nearly opposite each other were found to be negatively correlated in data. For example, the individual responses for Up-and Down-steps were negatively correlated with average Down-and Up-steps, respectively ( -0.36 ± 0.17 and -0.27 ± 0.29), and similarly for Up-Down and Down-and-Up-Down (0.52 ± 0.083 and -0.53 ± 0.15, respectively). The similarity between individuals across all terrains is indicative of systematically repeatable responses.
Human walking speed fluctuations were consistent with minimum-energy predictions
The compensation strategies agreed reasonably well with model predictions for minimizing energy expenditure over a full horizon (Fig. 7). Visual inspection reveals an overall resemblance in speed profiles between human and Minimum energy model, particularly when plotted against each other (Fig. 8A). The agreement is quantified by the correlation between experimental fluctuations for a terrain and the corresponding model prediction (13-28 steps per terrain for each subject across seven terrains, at least 11 subjects per terrain). Zero correlation would be expected if the model were not predictive or human strategies were random. Instead, the experimental correlation coefficients ranged from 0.35 to 0.67 across the seven terrains (Figs. 7, 8A & B), all of them statistically significant (P-values by terrain: U 1.5e-12, D 1.7e-21, UD 8.9e-19, D&UD 1.3e-22, P 5.7e-21, C1 1.5e-10, C2 6.2e-18). The lowest correlations were for the two Complex terrains (0.35 ± 0.10 for C1 and 0.46 ± 0.09 for C2, mean ± 95% CI), whereas higher correlations were observed for the shorter and simpler terrains. The average correlation across terrains was 0.55 ± 0.11 s.d. Log likelihood ratios for the optimal model compared to random models were (mean ± s.d. across randomly shuffled models) U 33 ± 14, D 55 ± 14, UD 50 ± 15, D&UD 55 ± 15, P 81 ± 16, C1 51 ± 15, C2 72 ± 16. This was equivalent to a range of 2.6-6.1 bits per step of predictive information.
(For reference, a model predicting only the direction of speed change each step would have 1 bit/step of information.) The minimum Bayes factor was 4.9e14, for U terrain; it is the factor by which a prior odds ratio (proposed model to random model) is adjusted to yield a posterior odds ratio. The correlations and highly significant P-values support the hypothesis that the model is predictive of experimental speed fluctuations, and the log likelihood ratios show that the model adds substantial predictive information.
There were some steps and some terrains where human and model did not agree well. By visual inspection of Up- and Down-steps (Fig. 7), humans usually spent an extra step (about i = 2) at greatly deviated speed compared to the model. On Down-and-Up-Down, humans were slower than model on step i = 4. And in the middle of both Complex terrains, human speed fluctuations did not vary as much as model (e.g., i = 8); they were similar to applying a low-pass filter to model responses. Such differences emphasize that the general agreement between human and model did not apply to every step of every terrain.
A finite planning horizon predicts most human compensations
The human speed fluctuations could also be predicted reasonably well with a finite planning horizon. This was quantified by the correlation between human responses and model predictions for a range of n-step horizons (Fig. 8C). For all terrains, short planning horizons had almost no predictive ability, and longer horizons generally did better. The overall correlation, averaged across terrains (Fig. 8C, gray line), resembled a saturating exponential, with a nearly monotonic increase with horizon length n, maximized with the full horizon. The saturation is consistent with model dynamics (Fig. 4), for which the immediate upcoming step is most important, and succeeding steps exponentially less so. This makes it highly advantageous to look at least a few steps ahead, but with diminishing returns for farther look-ahead. The observed advantage of longer planning horizons was terrain-specific, in that several (U, UD, D&UD, C1) had peak correlation with a model planning for as few as 5 to 8 steps, and others (D, P, C2) only for a full horizon. As a basic summary, the average correlation was within 92–93% of the saturation value with horizons of six to eight steps. A finite planning horizon of at least such length could thus be regarded as sufficient to predict most human responses.
Discussion
We had sought to determine whether humans compensate for uneven terrain by planning and adjusting their forward momentum. The human data showed systematic speed fluctuations that were consistent across individuals, and specific to each of the terrain patterns, including complex patterns of consecutive uneven steps. The speed fluctuations included an anticipatory component prior to actually contacting the uneven terrain and showed patterns that were consistent with the minimum-energy optimization model. That optimization becomes increasingly complex for more uneven steps. However, we found it sufficient to consider only a finite horizon of upcoming steps to yield near-optimal economy. We interpret these results to suggest that humans optimally plan and control for uneven terrain, in a manner that is bounded in planning complexity.
The human speed fluctuation patterns were not mere noise. The fluctuations might superficially seem to have little organization (e.g., Fig. 5, Complex terrain C1), but quantification shows that the patterns were repeatable across subjects, unique to each terrain, and predictable by model. Part of the repeatability may be explained by inverted pendulum dynamics; for example, downward steps should gain (and upward steps should lose) forward speed and momentum. But dynamics alone do not explain how walking durations could be conserved across different terrains. Without any compensatory control for terrain variations (i.e., using nominal push-off for all steps), overall walking speeds would have been reduced (16). With compensatory but purely reactive control, it would be possible to regain lost time (Fig. 2, Reactive compensation), but not before losing it. Another possibility would have been to tightly regulate speed to nearly constant with each uneven step (Fig. 2, Tight regulation). But the significant and repeatable speed fluctuations observed (Fig. 6C) do not support these possibilities. Instead, there was clear anticipatory compensation, consistent with the model. There were systematic adjustments before the first uneven step, where subjects sped up significantly before a first upward step and slowed down before a first downward step (Fig. 6D, compare D and D&UD against other terrains). There was also a systematic recovery component beyond the final uneven step (Fig. 6E, compare D and D&UD against other terrains), as predicted by model. Overall speed was conserved by the conclusion of the recovery, suggesting that the entire trajectory was planned. Our interpretation is that humans plan over multiple steps, perhaps extending from several steps before to several steps after a segment of uneven terrain. Our subjects approximately conserved speed and duration (Fig. 6A), in part by adjusting their speed ahead of time (Fig. 6D).
Such planning appears to be performed over a multi-step horizon. A simple indication of look-ahead is that walking speed started deviating from steady state ahead of the first uneven step (Fig. 5, Fig. 6D). A better indicator is the predictability of human responses with the model's horizon length (Fig. 8C). Longer planning horizons generally allowed the model to better predict human responses (Fig. 8C, gray line), with a gradually saturating behavior. The advantage of planning ahead is explained by simple governing principles, namely the step-to-step transitions of pendulum-like walking, which dissipate energy with each ground collision. About 70% of the forward momentum from one step is carried into the next, and less than 1% of the forward kinetic energy persists beyond seven steps (25), also described as a persistence distance (16). A momentary perturbation, whether from the ground or by the person, thus has consequences for succeeding steps, making it generally optimal to plan a sequence of steps at once. The persistence is also limited, so that there is no advantage to planning infinitely into the future. Much of the predictive ability of the finite horizon model (Fig. 8C) comes from the first few steps of look-ahead, and six to eight steps of look-ahead appear sufficient to explain over 90% of the observed human responses.
This does not, however, mean that humans literally optimize for individual step variations. It might be practical to reduce complexity by reasoning about chunks of steps, for example treating Pyramid terrain (Fig. 4) as simply one chunk with a gentle upward followed by downward undulation rather than twelve individual steps (Fig. 8C, P terrain). It may also be more important for the human to attend to overall, low-frequency ups and downs rather than the detailed variations. This may explain why human responses on complex terrains seemed to be low-pass filtered versions of model (see Results). It is quite possible that the long look-ahead observed here is better represented as a smaller number of chunks or low-frequency step combinations.
The proposed criterion for this planning is energy economy. Much of the natural world imposes unsteady and uneven conditions, making it important to economize not only for the steady, level case, but also for when energy costs are highest. We used a mechanistic and quantitative model to predict how economy may be achieved, as governed by step-to-step transitions (16). These dynamics describe how speed is lost with an upward step and gained by a downward one, and how push-off may be modulated to change speed and affect the collision loss.
Step-to-step transitions have previously been tested against humans during steady walking, where parameters such as step length or width (15) are readily manipulated and the associated energy cost tested. Such experiments do not apply to unsteady conditions, and so here we experimentally manipulated only the terrain and computationally predicted the compensatory trajectories. And through testing on seven different terrains, there is low probability that a random model could predict human responses well by chance.
Our results suggest that humans reason about energetics, dynamics, and timing for locomotion. Energetics refers to the ability to judge the upcoming terrain profile with respect to the hypothesized criterion of energy minimization. Although humans are understood to prioritize economy for steady, level walking (26), some form of energetic prediction is at least as important for selecting from the many options available for uneven terrain. Dynamics refers to the translation of that criterion into an appropriate sequence of control actions. Humans seem to reason about the momentum lost or gained by a change in ground height, and how active push-off and other control can influence that change. Just as the ability to catch a ball suggests reasoning about the ball's dynamics in flight, the ability to conserve time and energy on uneven terrain implies reasoning about the body's own dynamics, described here by a model of pendulum dynamics and the step-to-step transition. Timing refers to an ability to form an expectation of the time to traverse a given distance (Fig. 1), and to use that to guide the dynamic control actions with minimum energy expenditure. We treat the overall reasoning as tantamount to a central nervous system internal model (27) of walking that enables planning for economy.
The present study is concerned with a type of control that is intermediary between lower- and higher-level concerns. At the lower level, the central nervous system performs control in real time with relatively fast feedback (within tens of milliseconds) from somatosensory and other inputs, much of it mediated at the level of the spinal cord (28). At a higher level, humans can consciously plan many seconds or minutes into the future, for example the best route to the supermarket, the amount of food to be carried on a trek, or whether to walk at all. Much of that planning requires cognition and need not consider step-to-step momentum. The anticipatory, dynamic planning considered here is intermediate, with an apparent temporal update rate on the order of a walking step, or about half a second. Spatially, it integrates higher-level terrain awareness with lower-level walking control, and appears to work subconsciously, in that subjects exhibit little cognitive awareness of what they are planning or how their momentum varies on uneven terrain. The planning observed here is reminiscent of upper extremity reaching movements, which are thought to be mediated by internal dynamical models represented in the cerebellum and motor cortex (29,30). We speculate that similar neural internal models, with short-term storage or working space for anticipatory adjustments, are employed for dynamic planning of locomotion.
The proposed planning horizon may seem longer than where people usually look. Humans allocate most of their gaze two to four steps ahead on rocky, rough terrain (7,31). Such information is especially important for foot placement, which impacts balance (32)(33)(34) and energy expenditure (35). The nearest steps are considered critical not only for humans (6,36) and robots (37), but also for our simple models (16,32). But on less challenging terrain, humans walk faster, look farther ahead (31), and expend less energy (4,38). The present terrain is in that category, consisting of broad and flat surfaces that placed little demand on foot placement and perhaps visual attention. Our interpretation is that humans may also look ahead a variety of distances, including far ahead to determine and anticipate a path (36,39), and intermediate distances (say, at least six to eight steps ahead) to plan forward momentum. Such look-ahead might require only a brief glance yet still serve a valuable purpose.
The present optimization approach is related to studies of humanoid robot control and simulation. Real-time control for many legged robots is performed with model predictive control (MPC) over a short horizon, computed repeatedly within a timing loop on the order of a millisecond. This acts as a feedback control that achieves stability and near-optimal performance with manageable computations (18,40,41), for robots with many more degrees of freedom and more complex contact conditions than modeled here. We employed a slower, per-step timing loop and a longer multi-step planning horizon, because our focus was on economically managing momentum. However, MPC control is generally applicable over a variety of time scales and horizons. Although we designed the control explicitly, it could also be adapted via reinforcement learning techniques, which can learn quite robust control over uneven terrain, typically also with a finite-horizon terrain profile obtained through vision and mapped directly to control commands (42,43). Thus, humans could potentially embed the optimization within a vision-to-control mapping, rather than continually optimizing for each step. An interesting observation is that robustness may be achieved with simple rewards such as to make forward progress, without need for explicit stability criteria, because successful progress implies stability. Our present model for human momentum planning assumes the existence of an executive goal set at higher levels, and a lower-level control for fast feedback. This planner could potentially achieve both economy and stability by looking several steps ahead to direct the upcoming step-to-step transition. At present, finite horizon optimization or learning is a highly viable approach, for both understanding humans and controlling robots.

This is the first study we are aware of that uses mechanistic principles to predict transient walking on uneven terrain. There is ample evidence that humans direct their gaze (7) and start taking actions (44) only a few steps ahead, but with few operational principles for determining the actions. Our model is mechanistic in that it produces a walking gait governed entirely by physical first principles (e.g., inverted pendulum and the step-to-step transition). There were no free parameters or opportunities to fit model to data. We are unaware of any other models that are similarly principled and can predict compensation for uneven terrain. Perhaps the closest analogy is an empirical energetic cost model (23) that predicts dynamic paths for economically executing turns, such as through doorways (on level ground). We consider our mechanistic model to be largely compatible with such empirical costs, and suspect that a single, first-principles model might therefore explain both uneven and turning walking trajectories.
There are a number of limitations to this work. In terms of experimentation, we found the model to be significantly but only modestly predictive of humans (e.g., correlation 0.66 on Down terrain, Fig. 7). This was due in part to the noisiness of human walking, as individuals did not behave identically to each other (correlation 0.85 on Up terrain, Fig. 6F), let alone to any model. It may be better to apply larger terrain disturbances to exceed the noise floor, but we intentionally examined relatively small disturbances where the inverted pendulum model of walking is most applicable. Another limitation is that our terrain disturbances consisted of discrete and flat steps, whereas actual uneven ground is more continuous and requires additional decisions regarding foot placement and orientation. We also experimentally tested the optimization hypothesis through speed trajectories, but did not test actual mechanical work or metabolic energetics. The task was too long to measure ground reaction forces to estimate work over many steps, and too short to allow for the steady-state measurement of oxygen consumption needed to estimate energetics. These remain avenues for further testing of the model and its alternatives.
Other limitations were from the simplicity of the dynamic walking model. The model applies inverted pendulum dynamics as a fundamental feature of walking. Here we observed some cases where the model's rapid speed fluctuations were not matched by humans (Fig. 7; see central step of Pyramid, several steps of Complex 1 and 2). We interpret this low-pass filter effect as the human stance leg behaving less like an inverted pendulum, allowing the COM to deviate somewhat from a pendular arc. Additional model features, such as a previously hypothesized energetic cost for rapid force production (45), could potentially explain some of the mismatches observed here. But even if humans approximate an inverted pendulum on flat and slightly uneven ground, that is certainly not the case for larger terrain disturbances or for stair steps, where it is necessary to use the knees. When ascending large steps, the leading stance knee flexes and extends substantially and can perform substantial positive work for climbing. And when descending large steps, the trailing leg does not behave like a (non-inverted) pendulum, perhaps to actively dissipate gravitational potential energy. Once in swing, it might also be actively flexed to allow the swing foot to clear the step. Dynamic walking models have been devised to include knee joints with actuation (46,47), legs that telescope (48), and actuators with elasticity (49). Such models have the potential to capture more aspects of human walking, especially larger terrain disturbances than examined here.
There are, of course, many other degrees of freedom present in humans. For example, a feature missing here is actuation of the swing leg, which humans might use to modulate step length, for example to line up with an uneven step or to ascend or descend it more economically. Our model takes fixed step lengths at fixed cost, whereas the control of step length may also exact a cost (24). But transient step adjustments deviate from steady-state step lengths (50,51), at a cost yet to be modeled. Other relevant degrees of freedom that could be included in a dynamic walking model include the ankles (52) and lateral body motions (32,53,54), although with less effect on economy than pendulum-like step-to-step transitions. It is, however, not straightforward to devise appropriate optimization objectives to predict behavior for many degrees of freedom. We adopted a highly simplified model in part to avoid fitting to data, in favor of first principles.
We showed that humans plan and control their gait to economically locomote over uneven terrain. They anticipate upcoming steps and can adjust their speed and momentum to conserve energy and time. These actions resemble a dynamic optimization procedure, which has potential for impractically high dimensionality. Fortunately, walking momentum does not persist for long, and so it appears sufficient to plan dynamics for as little as six to eight steps into the future. Economy has long been established as a governing principle for level, steady walking, and our findings suggest that it similarly applies to unsteady and uneven walking of arbitrary distance.
Materials and Methods
We performed an experiment to test whether humans optimally compensate for uneven terrain. Predictions were obtained from a simple optimal control model of walking, described in detail previously (11,16). Here we briefly summarize the model, followed by a description of the present study's experiment.
Model dynamics
The optimal control model determines a compensation strategy for traversing a sequence of uneven-height steps at minimum energetic cost while conserving travel time (Fig. 2A). The task is to walk down a walkway interrupted by uneven terrain, starting and ending from steady walking on flat ground. The model's only energetic cost is from the positive mechanical work needed to power walking. The key feature determining the optimal strategy is that work is required to redirect the body center of mass (COM) velocity between pendulum-like steps. This work is performed by pushing off with the trailing leg each step, and the sequence of such push-offs comprises the decision variables for optimal control. Conservation of traversal time is a constraint that the total time be equal to that for steady walking on flat ground alone. The overall time, and the walking dynamics governing the momentum of each step, are expressed as constraints in the optimal control problem.
We used a simple dynamic walking model, in which the legs act like rigid simple pendulums (16,24). The stance leg acts like an inverted pendulum supporting a point-mass pelvis of mass M (Fig. 2B), and the swing leg acts like a simple pendulum of infinitesimal mass. The inverted pendulum passively conserves mechanical energy with an exchange between potential and kinetic energy, so that upward (downward) steps come at a loss (gain) of speed and time. Forward speed is sampled at the mid-stance instant of each step, when the inverted pendulum is vertical. The main energetic event is the step-to-step transition, when the COM velocity must be redirected from forward-and-downward at the end of one step to forward-and-upward at the beginning of the next (v^− and v^+, respectively, in Fig. 2B). As a simplification, we ignore the swing leg's rotational dynamics and energetics (11), and are presently concerned only with its collision with ground, modeled as a perfectly inelastic impulse dissipating COM energy and speed. The losses may be restored by pushing off impulsively from the trailing leg just before the collision. In the present study, active mechanical work is performed by push-off alone, passive dissipation only occurs with collision (PO and CO, respectively, in Fig. 2C), and steady walking is a matter of equal magnitudes of push-off and collision. It will be shown that on uneven terrain (step height b in Fig. 2C), it is actually optimal to purposefully modulate push-off with each step, causing walking speed to fluctuate. For brevity, the equations presented here use dimensionless quantities, with body mass M, gravitational acceleration g, and leg length L as base units.
The discrete dynamics between steps are as follows (see 25 for additional detail). Each of N steps is indexed i, with the first uneven step located at i = 0 (Fig. 2A), so that negative i refers to preparatory steps beforehand. The model takes steps that end with pre-transition velocity v_i^− at an inter-leg angle of 2α. Work is performed by a pre-emptive push-off u_i (in units of mass-normalized work), followed immediately by the heel-strike collision along the leading leg, resulting in post-collision velocity v_i^+. From impulse-momentum principles (55),

v_i^+ = v_i^− cos 2α + √(2 u_i) sin 2α.  (1)

Uneven steps are described by heights b_i above nominal, equivalent to an angular disturbance in the landing configuration; for a given step length, the disturbance depends on the difference between successive step heights, b_i − b_{i−1}. Using linearized inverted pendulum dynamics, the dimensionless step time of each step follows in closed form, and these quantities may be converted to mid-stance instances to yield the forward speed v_i at mid-stance and the corresponding mid-stance time. Nominal model parameters were selected to correspond to typical human walking. The nominal gait was for a person with leg length of 1 m walking at 1.5 m/s, with step length of 0.79 m and step time of 0.53 s. Here we constrained the model to take steps of fixed length, and examined alternative step lengths in parameter sensitivity studies. These included shorter and longer steps (0.59 and 0.96 m, respectively), as well as preferred step lengths increasing with speed according to the preferred human relationship, approximately s ∝ v^0.42 (24,56). The nominal parameter values were α = 0.41, push-off 0.0342 MgL, step time 1.665 (L/g)^0.5, pre-collision speed 0.601 (gL)^0.5, and mid-stance speed 0.44 (gL)^0.5. Most step heights were in increments of 0.075 L, equivalent to about 7.5 cm for a human. Previous parameter studies with this model (16,25) have shown similar fluctuation patterns across a wide range of parameter values.
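To make these dynamics concrete, the following is a minimal Python sketch of Eq. (1) together with an explicit energy balance for the pendulum phase. The function names, the pendulum_phase energy-balance form, and the worked example are our own illustrative assumptions in dimensionless units (M = g = L = 1); they are not code from the original study.

```python
import numpy as np

ALPHA = 0.41        # half inter-leg angle (rad), nominal value above
U_NOMINAL = 0.0342  # nominal push-off work (dimensionless MgL units)

def step_to_step_transition(v_minus, u, alpha=ALPHA):
    # Eq. (1): impulsive push-off of magnitude sqrt(2u) along the trailing
    # leg, followed by an inelastic heel-strike collision along the leading
    # leg, redirecting the COM velocity.
    return v_minus * np.cos(2 * alpha) + np.sqrt(2 * u) * np.sin(2 * alpha)

def pendulum_phase(v_plus, dh):
    # Assumed energy balance for the inverted-pendulum phase: kinetic energy
    # is exchanged for the (dimensionless) height change dh of the next step,
    # so upward steps lose speed and downward steps gain it.
    return np.sqrt(max(v_plus ** 2 - 2 * dh, 0.0))

# Example: one nominal step followed by a 0.075 L upward step.
v = 0.601                                  # nominal pre-collision speed
v = step_to_step_transition(v, U_NOMINAL)  # redirect COM at the transition
v = pendulum_phase(v, 0.075)               # slower after stepping up
```

As a sanity check on the nominal parameters, step_to_step_transition(0.601, 0.0342) returns approximately 0.601, consistent with steady level walking in which push-off exactly restores the collision loss.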
The energetics of the present model only include push-off work. This is motivated by the observation that the human COM moves like an inverted pendulum each step, accompanied by an exchange of kinetic with gravitational potential energy (57). Although a pendulum conserves energy, the step-to-step transition requires mechanical work. It predicts how humans perform increasing positive mechanical work per step with variables such as walking speed and step length (15,58), along with an approximately proportional contribution to metabolic cost. These mechanistic dynamics also produce a periodic walking gait, as also demonstrated by walking robots (59). Of course, humans have many more degrees of freedom and many muscles to control them, which could result in energetics very different from model. Even in the simple dynamic walking models, there are other costs such as for moving the legs back and forth (24) or adjusting foot placement (32). Here we focus on the fundamentals of step-to-step transitions, which are hypothesized to explain most of the human energetic cost (3,15). The model does not have enough parameters to allow for arbitrary fitting to data and is to be tested in its ability to predict distinct speed adjustments for multiple terrains.
Optimal control problem formulation
The optimal control problem is to produce a sequence of push-offs to negotiate a series of uneven steps with minimum work, and with no loss of overall speed compared to steady level walking. The sequence consists of N push-offs u_i, where the control u_i is exerted for each step i. The problem starts and ends with nominal walking at speed v*, and takes the same amount of overall time as N steps of nominal walking with nominal step time τ*, despite a middle interval of uneven steps. Also serving as a constraint are the model dynamics, including pendulum-like walking punctuated by the step-to-step transition. The optimization may be formulated as: minimize total push-off work Σ_i u_i, subject to

Nominal speed at beginning and end: v_0 = v*, v_N = v*, and
Nominal total time: Σ_i τ_i = N · τ*, and
Walking dynamics: f(v_{i+1}, v_i, u_i, b_i) = 0.

The walking dynamics describe how the forward speed of the next step v_{i+1} and the intervening step duration τ_i (by combining Eqns. 1–6) are related to the current step's speed and push-off (v_i and u_i). The steps are indexed such that i = 0 is the first uneven step, and the sequence begins and ends with a level, nominal step. Any number of uneven steps occur in the middle, padded on both sides by several level (but not necessarily nominal) steps to allow the model to anticipate and recover from the disturbance. We chose a padding of six steps; for example, the longest uneven interval was sixteen steps long (Complex 1 and 2), padded on each side to yield N = 28. The six-step advance padding is sufficient to gain most of the economic advantage of planning ahead, which keeps increasing but negligibly so for more steps (16). Padding after the last uneven step serves a different purpose, which is to define an objective terminal goal of regaining nominal walking and timing on level terrain.
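This formulation maps naturally onto a generic nonlinear program. The sketch below is a hypothetical rendering using SciPy, not the authors' code; step_dynamics stands in for the combined per-step relations (Eqs. 1–6) and is assumed to return the next mid-stance speed and the step duration.

```python
import numpy as np
from scipy.optimize import minimize

def plan_pushoffs(heights, step_dynamics,
                  v_nom=0.44, tau_nom=1.665, u_nom=0.0342):
    """Full-horizon planner sketch: choose push-offs u[0..N-1] minimizing
    total work, subject to regaining nominal speed at the end and
    conserving total walking time."""
    N = len(heights)
    dh = np.diff(heights, prepend=0.0)  # per-step height changes

    def rollout(u):
        v, times = v_nom, []
        for i in range(N):
            v, tau = step_dynamics(v, u[i], dh[i])
            times.append(tau)
        return v, np.asarray(times)

    res = minimize(
        lambda u: u.sum(),                 # objective: total push-off work
        x0=np.full(N, u_nom),              # start from nominal push-offs
        bounds=[(0.0, None)] * N,          # push-off work cannot be negative
        constraints=[
            # end at nominal speed
            {"type": "eq", "fun": lambda u: rollout(u)[0] - v_nom},
            # conserve total time relative to nominal walking
            {"type": "eq", "fun": lambda u: rollout(u)[1].sum() - N * tau_nom},
        ],
    )
    return res.x
```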
The optimization was performed over horizons of various lengths. A full horizon refers to optimizing all steps until arriving back at nominal walking, yielding a full control trajectory. A finite horizon refers to repeatedly optimizing with knowledge of only the next n upcoming steps, and nothing beyond that horizon. The objective is to regain nominal timing by the n-th step (or sooner, if the N-th step overall occurs first). To keep track of nominal timing, the optimization is informed of the cumulative time gain or loss thus far relative to nominal. The finite horizon yields n commands (which are suboptimal with respect to the full horizon) and executes only the first one. The optimization is then repeated anew each step, starting from the end of the previous step and still intending to meet the original nominal timing, but over a new n-step horizon. This receding horizon is generally suboptimal but can approach the full horizon's optimality with sufficient n.
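The receding-horizon scheme can be sketched as a loop around such a planner. Here plan and step_dynamics are hypothetical helpers assumed to behave as described above; only the structure of the scheme is intended.

```python
def receding_horizon_walk(heights, n, plan, step_dynamics,
                          v_nom=0.44, tau_nom=1.665):
    """Each step, optimize push-offs over only the next n height changes,
    aiming to restore nominal timing by the end of the window given the
    time credit accumulated so far; execute only the first command."""
    v, time_credit, executed = v_nom, 0.0, []
    # per-step height changes, relative to level ground before the terrain
    dh = [heights[0]] + [b - a for a, b in zip(heights, heights[1:])]
    for i in range(len(dh)):
        window = dh[i:i + n]                 # terrain visible within the horizon
        u = plan(window, v, time_credit)[0]  # n-step plan; keep first push-off
        v, tau = step_dynamics(v, u, dh[i])  # take one actual step
        time_credit += tau_nom - tau         # running gain/loss vs nominal timing
        executed.append(u)
    return executed
```

The key design point is that each re-plan is conditioned on the accumulated time credit, so timing errors from the limited look-ahead are corrected on subsequent steps rather than left to drift.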
The primary prediction of interest was the speed fluctuation pattern. This refers to the waveform shape of v_i, as opposed to absolute waveform amplitude. The point-mass model is not expected to accurately predict speed amplitudes, which scale slightly with leg length (or relative terrain amplitude) and self-selected walking speed (16). However, the fluctuation patterns remain quite similar regardless of parameter values such as average speed, body mass, and leg length (11,16). The optimization hypothesis is therefore to be tested through scale-invariant correlation of speed fluctuations (Fig. 3).
Alternative hypotheses: Reactive compensation and Tight regulation
We considered two alternative ways to compensate for uneven terrain while conserving time. One, termed Reactive compensation, refers to an optimal control that does not act until the terrain has been encountered. It then reactively optimizes a control sequence that will regain time and conserve energy. Even though it is optimal, it is not anticipatory, and is thus suboptimal compared to a control that acts ahead of time. The second alternative, Tight regulation, refers to a feedback controller that maintains constant step time for each step, despite disturbances. It adjusts each push-off to ensure constant time, and thus need not plan ahead. This type of control is simple to perform but is suboptimal because it does not take advantage of past or future information. A prediction from Reactive compensation is that there should be no anticipatory speed fluctuations before the uneven terrain. A prediction from Tight regulation is that there should be very small speed fluctuations across uneven terrain.
Experiment
We measured speed fluctuations as healthy adult subjects walked on each of seven different terrain profiles plus level control, for a total of eight conditions. The profiles (Fig. 3) consisted of an integer number (one to sixteen) of evenly spaced steps, each deviating in height by a small integer multiple (between −3 and 3) of 2.54 cm. The eight profiles were labeled with the following names: Control, Up-step (U), Down-step (D), Up-Down (UD), Down-and-Up-Down (D&UD), Pyramid (P), Complex 1 (C1), and Complex 2 (C2). Each was assembled from layers of polystyrene insulation foam providing a flat surface for each footfall. Each profile occurred about halfway down a level walkway, with subjects initially walking with a steady, level gait, and also ending with a similar steady, level gait after the terrain. There were two groups of subjects walking in three sets of trials. The first group (N = 12; 7 males, 5 females, all under 30 years of age) performed trials of Control, U, D, UD, and D&UD. The second group (N = 11; 7 males, 4 females, all under 30 years of age) performed trials of Control and P in one set, and Control, C1, and C2 in another set. A separate Control was collected for each set of terrains (called n1, n2, n3), and consisted of level ground in the same walkway, but to the side of the uneven steps. The entire walkway was about 30–40 m in length, but data were only analyzed for a middle portion encompassing the terrain plus six steps (about 3.5 m) before and after the terrain. The six-step padding was included to allow walking speed to deviate from nominal both before and after the uneven terrain, as has been observed in both model and human (11,16).
In all conditions, subjects walked at a self-selected, comfortable speed. The main instruction was to walk from a start line and past a finish line "in about the same time" throughout the experiment, to avoid large variations in overall speed across conditions. This instruction was intended to provide only broad context, because the hypotheses were not dependent on any particular speed, and so subjects received no feedback about their timing. There was a brief pause between trials, in which subjects turned around and stood briefly before starting the next trial in the opposite direction. For example, the U and D conditions, and C1 and C2, each consisted of the same terrain traversed in opposite directions. Except for this pairing, the conditions were conducted in random order, with four to seven trials per condition in each experimental set (all chosen randomly), interleaved with occasional Control trials inserted at random within each condition. Prior to data collection, subjects walked several times on Control and uneven terrains to gain familiarity with the walkway and terrains. For the first set of terrains (U, D, UD, D&UD, ordered randomly), there was a visual cue on the floor, a paper sticker placed approximately 5 m from the first uneven step. This was intended to help subjects line up their steps, but we observed that subjects paid little attention to the cue, and it was therefore eliminated for the three remaining terrains (P, C1, C2).
Walking speed and timing were measured inertially, using inertial measurement units (IMUs) fixed atop the instep of each foot. Our interest was in the forward speed of the body each stride, defined as the stride length divided by stride duration for each foot, assuming the body travels the same distance as the feet. Stride measures were found by integrating IMU accelerations to yield instantaneous foot displacement. We used a standard algorithm to fuse sensor data and reduce integration drift (60), in which a zero-velocity update is performed when the foot is briefly stationary in the middle of stance. We estimated body speed by interleaving each alternating foot's speed for each stride, assigning it to the preceding mid-stance instance in time. There was generally some drift between the two feet, which was reduced by linearly de-trending their displacements to ensure agreement on overall distance travelled. The resulting body speed served as the basis of comparison against corresponding values calculated from the walking model. Data were analyzed for a central segment of the walkway including the uneven terrain plus six steps before and after, to capture the deviation from steady walking. To compare between trials and conditions, the time t = 0 was assigned to the footfall instant for the first uneven step of any terrain, or the step next to it for Control. A trajectory of discrete walking speeds was thus measured for each trial, and each subject's trials within a condition were averaged at the discrete step numbers. The corresponding step timings were also averaged, to yield a discrete sequence of average speeds and average timings per subject and condition.
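As a rough sketch of this processing chain, the following assumes world-frame, gravity-compensated accelerations and a per-sample boolean mask of stationary (mid-stance) periods; the actual pipeline follows ref. 60 and additionally fuses gyroscope data and de-drifts between updates, so this is an illustrative simplification only.

```python
import numpy as np

def stride_speeds(accel, stationary, fs):
    """Integrate foot acceleration (world frame, gravity removed) to
    velocity, apply zero-velocity updates during stationary mid-stance
    samples, integrate to displacement, and return one body-speed estimate
    per stride (stride length divided by stride duration)."""
    dt = 1.0 / fs
    vel = np.cumsum(accel, axis=0) * dt
    vel[stationary] = 0.0                  # crude ZUPT: clamp stationary samples
    pos = np.cumsum(vel, axis=0) * dt
    # stride boundaries: samples where a stationary period begins
    starts = np.flatnonzero(np.diff(stationary.astype(int)) == 1) + 1
    lengths = np.linalg.norm(np.diff(pos[starts], axis=0), axis=1)
    durations = np.diff(starts) * dt
    return lengths / durations
```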
We compared each subject's average speed trajectory against all subjects and against model, for each terrain condition. To determine whether subjects were consistent with each other, each subject's trajectory was compared against the average trajectory across subjects, using a Pearson correlation coefficient ρ as a measure of similarity, with one correlation per subject and terrain. To test whether human subjects were consistent with the model, their trajectories were also compared against the model's, also using correlation. Here, all experimental trajectories for a terrain (13–28 steps per terrain, a total of 133 steps for each subject) were correlated against the model's, yielding one correlation coefficient per terrain. Such correlations are independent of scale, and thus test the model's predicted fluctuation patterns (16) regardless of an individual's fluctuation amplitudes, which may depend on a subject's absolute walking speed, body mass, or body size. A truly predictive model should be positively correlated with human responses, whereas a random model (null hypothesis) would be expected to have zero correlation. Statistical significance of the human vs. model correlation coefficients was tested with a threshold of P < 0.05. We also performed a few additional tests of hypothesized planning criteria based on speed data. For the No compensation hypothesis, we expected that average speeds would be lower (or walking durations higher) on uneven than level terrain. For the Tight speed regulation hypothesis, we expected that speeds would fluctuate very little, similar to level. And for the Minimum energy hypothesis, we expected that speeds would fluctuate significantly, including before and after the uneven terrain (termed anticipatory and recovery compensations, defined as the two steps preceding or following terrain).
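The similarity measure amounts to a standard Pearson correlation between discrete speed trajectories sampled at the same step numbers; a trivial sketch (variable names ours) follows.

```python
import numpy as np
from scipy import stats

def similarity(traj_a, traj_b):
    """Pearson correlation between two speed trajectories sampled at the
    same discrete steps. The correlation is invariant to offset and scale,
    so it tests the fluctuation *pattern* regardless of amplitude or
    average speed."""
    r, p = stats.pearsonr(np.asarray(traj_a), np.asarray(traj_b))
    return r, p
```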
To test the effect of horizon length, we computed the correlations above for models of varying horizon length n. Optimal predictions were computed for n ranging up to the full horizon, and these were correlated against empirical human responses. If humans plan with a finite horizon, they may exhibit peak correlation with a finite model. If humans plan optimally, they may correlate best with a full-horizon model. We used this same approach to also test for sensitivity to shorter (3-step) padding before and after terrain. We found shorter padding to yield negligible differences in results, with the same correlation patterns and no more than 0.05 difference in correlation coefficients compared to the nominal 6-step padding.
We also computed the log likelihood ratio (Bayes factor) for the model relative to random predictions. To facilitate comparison between different individuals with the dimensionless model, each subject's speed fluctuation trajectories were scaled to the model's amplitude, using linear regression between model and human. The scaled trajectories across individuals represent a distribution of speed fluctuation trajectories that are potentially predicted by model. The log likelihood of each terrain trajectory was calculated from the summed log-likelihoods across steps, assuming a t-distribution centered at the model prediction for each step. Alternative random (null) hypotheses were generated by shuffling the model's trajectories, thus preserving the distribution of predicted speed fluctuations but not the sequence. A log likelihood ratio for each terrain was computed by subtracting the log likelihood of each of 1000 shuffled trajectories from the model's, to yield a distribution of log-likelihood ratios (tending toward a normal distribution), which were divided by the number of steps for each terrain to yield per-step log likelihoods, reported as mean and standard deviation. These were also converted to base 2 and reported as binary bits of added information per step, relative to an average random model.
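A compact sketch of this shuffling test follows. The t-distribution degrees of freedom and the residual scale are illustrative assumptions (the original values are not stated here), and trajectories are assumed to be pre-scaled as described above.

```python
import numpy as np
from scipy import stats

def per_step_llr_bits(human, model, n_shuffles=1000, df=5, seed=0):
    """Per-step log likelihood ratio of the model against shuffled (random)
    models, converted to bits of added predictive information. `human` and
    `model` are speed-fluctuation trajectories on the same discrete steps."""
    rng = np.random.default_rng(seed)
    scale = np.std(human - model)  # assumed residual scale for the t-distribution

    def loglik(pred):
        # summed log-likelihood across steps, t-distribution centered on pred
        return stats.t.logpdf(human, df, loc=pred, scale=scale).sum()

    ll_model = loglik(model)
    ratios = np.array([ll_model - loglik(rng.permutation(model))
                       for _ in range(n_shuffles)])
    bits = ratios / len(human) / np.log(2)  # nats to bits, per step
    return bits.mean(), bits.std()
```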
Figure 1. Anticipation of uneven terrain over a full or finite horizon. (A.) The human can anticipate upcoming terrain, using vision to observe varying ground height for each step, and plan how momentum should be adjusted for economy or other objectives. The planning could be based on a full horizon of an unlimited number of uneven steps, or a finite horizon of n steps. (B.) Terrain of varying height is a sequence of perturbations to walking (heights b_i for steps numbered i). For a given terrain, the human produces a trajectory of walking speed (v_i) varying with each step. Speed can fluctuate about the nominal level speed (v*), and is influenced by the dynamics of human walking, the terrain, and compensatory control performed by the human. A few possible trajectories are shown, dependent on terrain and control strategy. (C.) The human control strategy may include anticipatory planning to determine a dynamic, compensatory plan. The planning criteria could include energy economy or other objectives, subject to a time expectation constraining how much time to take. The plan may be represented as a trajectory of reference speeds ("target momentum") for upcoming steps. We assume that planning is informed by vision of the upcoming terrain sequence, and that the planned trajectory may drive a lower-level walking controller that can produce motor commands based on the target and local feedback of body state and speed. This study tests whether humans produce anticipatory compensations to conserve energy and time, and whether a finite horizon of future steps is sufficient to predict the compensation strategy.

Figure 2. (a.) Steps are numbered consecutively with index i, where i = 0 corresponds to the first step deviating from level ground. (b.) Simulations are performed with a simple walking model with pendulum-like legs. Each step, whether even or (c.) uneven, is punctuated by a step-to-step transition, where the trailing leg performs active, impulsive push-off (PO) just prior to the leading leg's dissipative and impulsive collision (CO) with ground. (d.) Model predictions are illustrated for a sample terrain (termed Pyramid) of several steps up and then several down, for three hypothetical strategies: Reactive control, which optimally adjusts the walking pattern only after encountering uneven terrain (i.e., no anticipation); Tight time regulation, where each step's timing is kept as constant as possible to reject disturbances; and Minimum energy control, which looks ahead to a full horizon of uneven steps and plans an optimal control including anticipatory adjustments. Strategies are described in terms of (top row) speed fluctuations vs. time, (middle) cumulative time gain relative to nominal walking, and (bottom) push-off work vs. time. All strategies avoid an overall loss of time, but Minimum energy required the least work. Data are sampled discretely (dot symbols) for each step, with speed sampled at the mid-stance instant (i = 0 denoted by vertical dashed line). In the corresponding experiment, human subjects walked over similar Pyramid terrain in a walkway (30 m long), with step height increments of 7.5 cm. Model parameters: α = 0.41, nominal mid-stance speed 0.44.

Figure 4. Correlation between speed fluctuations for each finite horizon and a full horizon (defined as 21 steps here). For all plots, the longer the horizon, the greater the resemblance to planning over a full horizon. As few as seven to eight steps are sufficient to gain most of the economy and performance of a full horizon. Optimization minimizes push-off work to traverse terrain while conserving overall speed. The conditions are equivalent to human walking at 1.5 m/s with 7.5 cm increments in step height on Pyramid terrain.

Figure 6. (A.) Overall walking speed for three sets of uneven terrain, each preceded by its own level control condition (nominal n1, n2, n3). There were no significant differences in speeds within each set (all P > 0.05). Overall speed was defined as walking distance divided by elapsed time to traverse the terrain, starting and ending at level gait. (B.) Speed variability for each terrain, defined as root-mean-square (RMS) variability of speed as it fluctuated within each trial. (C.) Maximum speed fluctuation for each terrain, defined as the largest observed deviation of speed from nominal. These occurred at terrain-specific step numbers i = −4, −1, 2, 1, 3, 17, 9, 11, 16, 16, respectively, for n1, U, D, UD, D&UD, n2, P, n3, C1, C2. (D.) Anticipatory and (E.) Recovery speed fluctuations for each terrain, defined as average deviation of speed (from steady) for the two steps immediately preceding or following (respectively) uneven terrain. (F.) Inter-subject correlation for each terrain, between each subject's speed fluctuations and the average pattern across all subjects. Bar graphs show means across subjects; error bars denote s.d. Asterisks denote statistically significant (P < 0.05) differences: * from zero, ** from respective control.

Figure 7. Model and Human (N ≥ 11) trajectories, along with the correlation coefficient between the two (± 95% CI; asterisk * indicates statistical significance, P < 0.05). Model predictions are from the full-horizon, minimum-energy model (Fig. 3), rendered here in terms of discrete body speed at each step, for comparison with the Human average trajectory of body speed (shaded area represents ±1 s.d., from Fig. 5). Each data point corresponds to body speed at a footfall, defined as stride length divided by stride time ending at that footfall. The first uneven step is indicated by a vertical dashed line, and overall walking speed is denoted by a horizontal solid line. Model trajectories are plotted in dimensionless speed and time, equivalent to units of (gL)^0.5 = 3.13 m/s and (L/g)^0.5 = 0.32 s, respectively, using gravitational acceleration g and human leg length L = 1 m.
Role of microglia in fungal infections of the central nervous system
ABSTRACT Most fungi are capable of disseminating into the central nervous system (CNS), and such dissemination is commonly observed in immunocompromised hosts. Microglia play a critical role in responding to these infections, regulating inflammatory processes that are proficient at controlling CNS colonization by these eukaryotic microorganisms. Nonetheless, it is this inflammatory state that paradoxically yields cerebral mycotic meningoencephalitis and abscess formation. As the interactions between peripheral macrophages and fungi have been investigated, aiding our understanding of peripheral disease, ascertaining the key interactions between fungi and microglia may uncover greater abilities to treat invasive fungal infections of the brain. Here, we present the current knowledge of microglial physiology. Reflecting the existing literature, we describe in greater detail the interactions of opportunistic mycoses with these surveillance cells of the CNS, highlighting the need for greater efforts to study other cerebral fungal infections such as those caused by geographically restricted dimorphic and rare fungi.
Introduction
The Central Nervous System (CNS) is protected from toxins, drugs, and pathogens by an anatomical blood brain barrier (BBB) and an intrinsic immune system. Microglia are the major surveillance cells of the CNS, with phagocytic abilities similar to myeloid-derived peripheral macrophages, and respond to local injury and infectious agents. Most fungi are capable of disseminating from the lungs to the CNS, particularly in immunocompromised individuals including AIDS and neutropenic patients. 1,2 Although there are shared characteristics between various fungi and their interactions with host immunity, each fungus has varying structural and virulence factors which result in differing immune responses. 3 After inoculation of a healthy subject, most fungal infections are self-limited due to the individual's immunity. Local immune cells, such as macrophages, dendritic cells, and neutrophils, first initiate the fungal defense through phagocytosis, release of cytokines, and initiation of the complement, humoral, and cell-mediated defenses, keeping the fungus localized and ultimately eliminating it. In fact, it is the patient's immunosuppression, together with fungal immune evasion mechanisms likely acquired from interacting with other organisms in the environment, 4 that enables fungal opportunism and potential dissemination to distal sites such as the CNS, contributing to the conditions associated with this process, including meningitis, encephalitis, abscess formation, and death.
Fungi are ubiquitous eukaryotic organisms found in the environment in association with soil, animal feces, plant debris, or water, requiring only moisture and organic matter for survival. The main route of transmission is inhalation, as many fungi enter the respiratory system in the form of spores. Some fungi are found commensally in the skin and mucosal flora, only becoming a threat to human health during immunosuppression. Moreover, most fungal infections are confined to the skin, respiratory and intestinal tracts, and vaginal mucosa. However, in severe cases, systemic mycosis can arise and disseminate hematogenously into the CNS. 5,6 Direct contiguous spread from the orbits, petromastoid region, and the paranasal sinuses can also occur in some patients. 3 Additionally, fungal infection with or without dissemination may occur during trauma, prosthetic valve placement, and intensive care or intracranial procedures. 3 For example, outbreaks of fungal meningitis caused by the extremely rare human pathogens Exserohilum rostratum 7 and Exophiala (Wangiella) dermatitidis 8 in patients receiving contaminated injectable steroids highlight the opportunistic nature of fungi. These eukaryotic microbes can be divided into two categories according to their usual infectivity in relation to the human host immune system. The first comprises the opportunistic mycoses, such as candidiasis, aspergillosis, cryptococcosis, and mucormycosis, which are almost exclusively diseases afflicting immunosuppressed hosts. The second comprises the thermally dimorphic fungi, which live as filamentous molds in the environment and morphologically switch to yeasts or spherules at mammalian body temperature, causing deep-organ systemic infections. Fungi belonging to this group include Histoplasma capsulatum, Blastomyces dermatitidis, Coccidioides immitis, and Paracoccidioides brasiliensis.
In this review, we will discuss the existing knowledge of microglial physiology, including microglial origin, morphological characteristics, and the ability of these cells to recognize and respond to CNS infections. We are especially interested in describing the current knowledge about the evolving interactions between fungi and resident microglia of the CNS, as this is crucial to understanding the brain's capabilities of defending itself against infectious fungi. Putting in perspective the information available about how microglial cells recognize and combat fungal infections is important to influence and encourage further investigation aimed at mitigating the number of clinical cases involving individuals with impaired immunity afflicted with CNS mycoses.
Microglia, guardians of the CNS against fungi
Origin

Microglial cells are known to comprise approximately 10-20% of the glial population in the CNS. 9 They can be found throughout the brain parenchyma, most commonly in the hippocampus and the retina. 10 The comparison between microglia and macrophages has facilitated our understanding of microglial physiology. However, distinctions must be made between them, since microglial cells are entirely maintained by self-replication and derive from a different ontogeny. 11 Microglia precursors have been identified as early as embryonic day 8.5-9.5 in the fetal yolk sac. 12,[13][14][15] These CD45−, c-kit+ cells in the yolk sac give rise to CX3CR1+, CD45+ microglia within the CNS. 16,17 CX3CR1 is a key regulatory receptor for maintenance of this population and has been characterized as exclusive to microglia within the CNS, even though it is also expressed by peripheral cells including monocytes, dendritic cells, and natural killer (NK) cells. 18 In contrast to bone marrow-derived cells, microglia develop prior to the development of the fetal liver, which becomes responsible for the first circulating myeloid cells. 19 Recent RNA sequencing studies identified 29 specific genes that differentiate microglia from other cells in the CNS and from peripheral monocyte-derived macrophages. 18 In addition, microglial cells have a sensome, or transcriptomic signature, comprising a unique cluster of transcripts encoding proteins that are important for detecting endogenous recognition molecules and microbial antigens. Aging plays an important role in the regulation of the sensome by increasing the expression of microglia-specific antimicrobial genes necessary for neuroprotection. 20 Murine microglial maintenance and proliferation are dependent on the transcription factors IRF8 and PU.1 and on the colony stimulating factor 1 receptor (CSF1R). 21 In particular, IL-34 produced by neurons acts on the CSF1R, 17,22 thus not requiring ligands such as c-Myb and CSF-1, which are essential for bone marrow-derived macrophage preservation. 11 Perivascular microglial cells, also known as perivascular macrophages, are bone marrow-derived cells produced as early as embryonic day 13.5 that remain adjacent to the basement membrane of small CNS parenchymal vessels. Similarly, macrophages alongside the choroid plexus or meninges are also derivatives of this myeloid population. 21,23 This specific population of cells is continuously maintained from the periphery, while the self-replication characteristics of parenchymal microglia safeguard their own maintenance. 11,12,24 In pathological conditions, microglia and perivascular macrophages release chemokines that regulate the neuroinflammatory response by increasing recruitment of dendritic cells, neutrophils, and lymphocytes from peripheral tissue. 10,21,25 It is conceivable that the interaction of microglia and perivascular macrophages may be necessary to bridge the immunological communication between cells in the CNS and cells in the peripheral tissues. However, future and rigorous investigations are needed to unequivocally establish this connection.
States of existence
In the CNS of healthy individuals, microglial cells morphologically show refined branched processes oriented radially to a small elliptical soma (Fig. 1A). 25,26 These cells share similar characteristics with peripheral macrophages, consistently monitoring their microenvironment, awaiting foreign pathogens and neurological insults, maintaining healthy synaptic junctions, and responding to apoptotic neuronal death. 27 Under homeostatic conditions, neurons and astrocytes communicate with microglia via paracrine and autocrine pathways by expressing receptors (e.g., CX3CR1, CD200R, CD45, etc.) that recognize ligands (e.g., fractalkine (CX3CL1), CD200, CD22, etc.) that keep microglia in a ramified state. 28 When neuronal death occurs, the release of adenosine triphosphate and calcium activates microglia to morphologically change into their active form. Many activation signals trigger innate immune responses. Pattern recognition receptors (PRRs) are maintained on their surface or intracellularly and allow for recognition of foreign antigens known as pathogen-associated molecular patterns (PAMPs) and damage-associated molecular patterns (DAMPs). PRRs that are responsive to fungal antigens among peripheral macrophages and naive microglia include major histocompatibility complex II (MHC II), CD45, Toll-like receptors (TLRs), complement receptors (CR-3; CD11b/CD18), CD14, and CSF1R. 29 Upon antigenic interaction, naïve microglia switch reactivity into an activated state, taking on an amoeboid, phagocyte-like shape, with hypertrophy of the cell soma, retraction of ramifications, and upregulation or de novo synthesis of cell surface or intracellular molecules (Fig. 1B). 13,21,24 This cellular transformation occurs in steps, which may include hyper-ramification in order to improve motility and locomotion. 25 Additionally, fungal recognition by TLRs, Dectin-1, mannoproteins, and scavenger receptors on these cells leads to the release of distinctive cytokines, such as IFN-γ, TNF-α, IL-1β, IL-6, and IL-12, which enhance phagocytosis and the production of free radicals in the form of nitric oxide (NO) and superoxide anion. 30,31 Intracellular and extracellular defense against fungi by microglia depends on cytokine release, such as IFN-γ, complement activation, 32 and opsonization of antigens. 33 For example, microglia express the S100B protein, which surrounds the phagosome of opsonized Cryptococcus neoformans and, in the presence of IFN-γ, stimulates transcription of the inducible NO synthase gene, resulting in increased secretion of NO. 34 These cells express a greater number of PRRs, including TLRs and MHC II, allowing for increased communication with other immune cells through the secretion of cytokines and chemokines. 35 Nucleotide-binding oligomerization domain-like receptors on microglia stimulate the production of IL-1β and IL-18, which assist in the recruitment of neutrophils into the CNS infection site. 36 Although the phenotypic variations of microglial cells are complex, they are mainly involved in anti-inflammatory and pro-inflammatory processes. Depending on the stimuli, microglia can polarize into multiple phenotypes and switch activity states. 37 Similarly to the dual role of macrophages in the periphery after Th1 or Th2 lymphocyte stimulation, microglia have also been assigned the M1 and M2 classification to describe and simplify their activation states. 35,38 The M1 cell, or "classically activated" microglia, acts as an antigen-presenting cell.
In contrast, M2 microglial cells downregulate inflammation, aiming to minimize the potentially neurotoxic effects of the immune response. 39 However, recent evidence suggests that the typical and perpetuated polarization comparison of microglia to macrophages and its terminology is insufficient and controversial, reflecting individual bias and reductionism with limited in vivo applications. 24 For instance, elegant studies demonstrated that microglial and monocyte-derived macrophage gene expression profiles, functions, tissue life-spans, and three-dimensional morphology are considerably different, despite similar numbers of cells being exhibited during tissue inflammation. 40,41 These findings were later validated by showing that microglia's transcriptomic activity is unambiguously different from that of peripheral monocyte-derived macrophages, regardless of their anatomical origin. 18 Likewise, numerous studies have demonstrated microglia's own biological identity, including the regulation of synaptic pruning and plasticity, 42-44 the spatial distribution of axonal projections, 45,46 and neuronal homeostasis and survival. 47,48 Alternatively, a current hypothesis proposes that microglial reactivity may be stimulated by damaged neurons with deficient signaling, the presence of circulatory plasma molecules in the CNS due to BBB disruption, and peripheral leukocyte signaling mediated by cytokines after interactions with microbes or their antigens. 24 Major efforts in specifically dissecting the biology of microglia should focus on using epigenomics, comparative transcriptomics, proteomics, and other multidimensional technologies such as computational biology and 2-photon imaging. 24 Following PRR activation due to recognition of fungal PAMPs, adaptor molecules are important for the proper functioning of the signal transduction pathways that lead to inflammatory responses (Fig. 2). MyD88 is the key adaptor molecule in the TLR activation pathway against fungi; it associates with the cytoplasmic part of the TLR 12 and subsequently recruits members of the IL-1 receptor-associated kinase (IRAK) family, most importantly IRAK2 and IRAK4, which through TRAF6 downstream signaling lead to the translocation of NF-κB and ultimately the release of inflammatory cytokines and interferon-inducible genes. 9,12,36,49 Following Dectin-1 receptor activation by fungal cell wall antigens, Syk-CARD9 is the key adaptor molecule, 50 independent of MyD88 pathways, leading to NF-κB expression and Th17 responses. 51 TLR-2, 4, and 9 are associated with recognition of most fungal antigens, including the C. neoformans polysaccharide capsule, Candida albicans pseudohyphae, and Aspergillus spp. conidia. 52 On the surface of microglial cells, TLR-4 predominates, which is known to induce pro-inflammatory processes favoring the development of a Th1 response that is critical in protection against fungi. 53,54 For example, knockout mice for the TLR-4 receptor are susceptible to disseminated candidiasis and show reduced clearance of conidia produced by Aspergillus. TLR-2, Dectin-1, and CR-3 on the surface of microglia and macrophages recognize carbohydrates such as mannose and β-glucans on the surface of A. fumigatus and C. albicans. 49 In this regard, fungal PAMPs are restricted to complex carbohydrates in the cell wall, including chitin, mannoproteins, phospholipomannan, and β-glucans such as zymosan, which may subsequently activate microglia, yielding pro-lymphocytic and humoral responses to control fungal infections. 52
Currently, there is limited knowledge on the modulation of anti-inflammatory processes by microglia in the setting of fungal infection. In this regard, in stressed mice infected with C. neoformans, microglia are stimulated to produce anti-inflammatory chemokines such as CCL-2, increasing the animals' susceptibility to disease. 55 In contrast, certain fungi are also capable of using their interaction with TLRs and the anti-inflammatory response to evade host defenses. For example, C. albicans induces immunosuppression by activating TLR-2, leading to the release of IL-10, an anti-inflammatory cytokine that activates CD4+ CD25+ regulatory T cells. 51
Fungal interaction with the BBB and CNS invasion
Fungi, particularly yeast cells, can spread hematogenously and penetrate into the brain parenchyma transcellularly, paracellularly, or inside circulating macrophages via the Trojan horse mechanism. Upon exposure to the cerebral microcirculation, fungal cells interact with microglia, as well as with astrocytes and endothelial cells, causing leptomeningitis, encephalitis, and granulomas. 56 The BBB serves to protect the CNS from pathogen invasion, particularly by pathogens transported in the bloodstream. Activated PRRs on the surface of luminal endothelial cells of the BBB trigger the release of pro-inflammatory cytokines, leading to the activation of microglia and subsequent defense mechanisms. 57 However, disruption of the BBB by trauma or surgery, or in AIDS patients, increases the risk of CNS invasion. Microglial activation and released cytokines such as TNF-α have also been shown to disrupt the tight junctions of the BBB. 58 Yeast cells may also infect the mucosa and paranasal sinuses, but they do not usually spread intracranially. In contrast, hyphae are capable of contiguous spread through nearby structures such as the cribriform plate or the periorbital and paranasal sinuses. Similarly, Aspergillus and the zygomycetes can cause abscesses in nearby lobar locations, encephalitis, and vasculitis. Cerebral abscesses are typically localized, enhancing masses with varying degrees of surrounding cerebral edema. Focal neurologic deficits, seizures, and dementia are common manifestations of fungal CNS diseases.
Opportunistic fungal infections
Cryptococcosis
The most common cerebral manifestation of cryptococcosis, which is caused by C. neoformans and C. gattii after inhalation of their infectious particles, is meningoencephalitis, characterized by granuloma or cryptococcoma formation. Cerebral cryptococcosis caused by C. neoformans occurs in 90% of infected immunocompromised hosts, 59 whereas C. gattii preferentially infects apparently healthy hosts. 60 The mortality rate of C. neoformans cerebral infection has been reported to be 30%. 61 CNS penetration by Cryptococcus spp. occurs via transcytosis, paracellular, and Trojan horse pathways. 6,62 Post-mortem neuropathological examinations of patients' brains have demonstrated that C. neoformans' main virulence factor, its polysaccharide capsule, which is extensively released during infection, is ingested and localizes inside microglia. 63,64 In the periphery, the capsular polysaccharide can interfere with phagocytosis, antigen presentation, leukocyte migration and proliferation, and specific antibody responses, and can enhance HIV replication. 65,66 Alternatively, the black pigment melanin may play a role in the anti-phagocytic activity of C. neoformans. 67 Whereas mice infected with avirulent non-melanogenic C. neoformans produced numerous key cytokines, as described above, without fatality, the virulent melanogenic fungus elicits little or no cytokine secretion in mice, with massive tissue damage and a number of fatalities. 68
The microglial response is critical against C. neoformans after CNS invasion. Fungal PAMPs are recognized by microglial cells and astrocytes. Cytokines and antimicrobial molecules, including IFN-γ, TNF-α, IL-1β, IL-4, IL-6, IL-10, IL-12, IL-23, NO, and macrophage inflammatory protein-1α (MIP-1α), released upon exposure to fungal antigens, recruit peripheral CD4+ and CD8+ T cells, peripheral macrophages, and neutrophils that are able to seed into the CNS. 69,70 CD40 and IL-2 enhanced the host cell response to C. neoformans in intracerebrally injected mice by up-regulating CD45, CD11, and MHC II on the surface of microglia as well as by promoting infiltration of these cells to the site of injection. 71 Experiments performed with IFN-γ knockout mice revealed that this cytokine is essential for both microglial cell activation and anti-cryptococcal efficacy, especially after anti-CD40/IL-2 administration. In spite of the observed increase in the levels of circulating IFN-γ and in microglial reactivity early after treatment, minimal levels of IFN-γ were detected in brain homogenates. Additionally, the phagocytic function of microglia reduces C. neoformans growth and increases expression of MHC II in the presence of CD4+ cells in murine cell lines. 72 In another study, the addition of IL-23p19 alongside IL-12 enhanced the microglial response to C. neoformans. 73 Differences in the response to NO among animal models have been observed: in contrast to murine microglia, human microglia may not be capable of killing C. neoformans, probably due to insufficient production of NO compared with the high levels of NO produced by murine microglia. 74 Since macrophage physiology and iron metabolism are intertwined, the influence of increased iron levels on the effector functions of untreated and IFN-γ plus lipopolysaccharide (LPS)-treated microglial cells infected with C. neoformans was investigated in vitro using the murine cell line BV-2. 75
A high iron milieu augmented and decreased the anti-cryptococcal activity of basal and IFN-γ plus LPS-treated BV-2 cells, respectively, but had no effect on their phagocytic activity. Likewise, mice supplemented with iron and intracerebrally infected with C. neoformans showed increased fungal burden and reduced IL-12 production throughout the infection, and low levels of IFN-γ especially in the late stages of the infection, relative to untreated control animals. 76
In healthy individuals, the brain is devoid of antibodies given that the BBB is intact and prevents these relatively large molecules from entering the CNS. 77,78 Yet, in cerebral inflammation, the BBB is disrupted and becomes increasingly permeable to these opsonins, which pass through to help fight infection, augmenting microglial anti-cryptococcal activity. Studies of peripheral macrophages have shown that opsonization by IgM and complement activation are important for efficient phagocytosis of cryptococci. 79 For example, increased survival of healthy mice and immunosuppressed mice infected with opsonized cryptococcal cells was observed compared with animals infected with unopsonized yeast cells. 33,80,81 Within the brain, a monoclonal anti-capsular IgG enhances microglial phagocytosis of cryptococci. 82 Immune complexes of C. neoformans with monoclonal antibody promote chemokine production and phagocytosis in primary human microglia via activation of the Fcγ receptor for IgG. Additionally, disruption of the phosphatidylinositol-3 kinase pathway inhibits phagocytosis independently of any effect on MIP-1α secretion. 83 Interestingly, opsonization is required for human microglia to ingest cryptococci, whereas murine and porcine microglia can phagocytize the yeast cells independently of opsonin participation. 84 Furthermore, G-protein-coupled receptors (GPCRs), particularly GPR34, which is highly expressed in murine retinal and cortical microglia, play a crucial role in cryptococcal phagocytosis; GPR34 knockout microglia showed impaired phagocytosis of cryptococcal cells. 85 Studies with complement-deficient animals indicate that the complement system plays a critical role in resistance to cryptococcosis. 86,87 The complement system is an important regulator of the inflammatory response against C. neoformans and contributes to host resistance by opsonization of the yeast to facilitate adhesion and phagocytosis by peripheral phagocytic cells. 32,88 Although all complement components can be locally produced by resident brain cells, including astrocytes, neurons, oligodendrocytes, and microglia, often in response to injury, developmental signals, or infection, 77 there are no data available on the role of complement in controlling cerebral cryptococcosis by either microglia or peripheral macrophages. For instance, C1q is expressed in microglia and is abundantly found in brain tissue, playing a major role in neurodegenerative diseases such as Alzheimer's disease. 89 This is an exciting and promising area of investigation considering the predilection of C. neoformans for the CNS and the role microglia and complement may play in preventing fungal brain colonization. 90
Aspergillosis
A. fumigatus is recognized as the most frequent species to cause aspergillosis and the most common cause of fungal brain abscesses, followed by A. flavus, A. niger, and A. oxyzaei. Cerebral aspergillosis is associated with 90-100% mortality, but only 10-20% of cases of invasive aspergillosis involve the brain. 91
Numerous factors, such as the site of infection, virulence of the strain, and associated immunodeficiency state, as well as limited treatment options, contribute to these poor outcomes. 92 Patients with tuberculosis, neutropenia, asthma or chronic obstructive pulmonary disease, chronic granulomatous disease, or cancer, and those taking prolonged immunosuppressant medications, are most susceptible. 91,93 Nosocomial outbreaks have occurred especially during building renovation or construction, when air conditioning ducts become heavily contaminated and provide an ideal environment for hyphal growth and dispersion of the infectious conidia. 94 After inhalation, dissemination to the CNS occurs through the bloodstream and by contiguous spread from the orbits, periorbital regions, middle ear, or paranasal sinuses. 95 Similar to CNS granuloma formation in murine models of cryptococcosis, the cerebral aspergillosis granuloma is surrounded by microglia, leukocytes, and necrotic neurons. 96 When stimulated by Aspergillus β-glucans, Dectin-1 leads to the release of TNF-α, IL-1β, IL-6, IL-8, IL-12, and CXCL-1. 97 In Dectin-1 knockout mice, an increased mortality rate and impaired cytokine production in the presence of the mold are observed. TLR-2 can recognize both the conidial and hyphal forms of Aspergillus spp., but TLR-4 only recognizes the hyphal form. 98 IL-6 knockout mice are more susceptible to invasive aspergillosis than wild-type animals. 99 This is important because IL-6 is a neuroinflammatory cytokine released by cells of the brain, including microglia, astrocytes, and neurons. IL-6 plays a role in sequestering intracellular free iron, an important nutritional source for microbial growth within phagocytes. 100 The virulence factors of Aspergillus spp. may enhance the evasion of immunity and penetration of the CNS. 101 For example, mycotoxins such as gliotoxin can damage and kill microglial cells, astrocytes, and neurons via apoptosis. Mycotoxins also inhibit phagocytosis, reduce reactive oxygen species (ROS) production by neutrophils, and inhibit T cell responses. 95,102 Secretion of gliotoxin makes the fungal conidia less susceptible to opsonization, increasing the fungus's propensity to invade the CNS through endothelial cell endocytosis. 90,102 When this mold is cultured in human cerebrospinal fluid (CSF), it secretes Alp-1, an alkaline protease that can cleave and inactivate complement proteins. 103 For example, A. fumigatus, the most frequent cause of cerebral aspergillosis, destroyed complement activity more efficiently than other Aspergillus spp. 104 The degradation of complement in CSF results in a drastic reduction of the capacity to opsonize fungal hyphae. The Aspergillus-derived protease also diminishes the amount of CR-3, a surface molecule that mediates eradication of opsonized pathogens, on granulocytes and microglia, and a reduction in CR-3 expression on the microglial cell surface causes a significant reduction in phagocytosis of fungal cells. Moreover, supplementation of CSF with nitrogen sources rescues the complement proteins and abolishes Alp-1-induced cleavage, representing a potential therapeutic approach for aspergillosis.
Candidiasis
C. albicans is a commensal most commonly known to cause oral and vaginal candidiasis in patients with immunodeficiency. C. albicans can disseminate to all organs, including the brain, leading to meningitis, and brain abscesses can occur in 50% of patients. Mortality due to invasive candidiasis ranges from 20 to 50%. 105 Microglial cells are the principal effector cells in invasive cerebral candidiasis and can limit infection and tissue damage after intracerebral inoculation of the fungus. 106,107 Notably, during systemic candidiasis, accumulation of neutrophils predominates in all organs except the brain, where microglia are the major cell type detected interacting with the fungus. 106 In the retina of C. albicans-infected mice, microglia become reactive 3 days post-infection, undergoing significant morphological transformations, increased expression of MHC II, CD11b, and CD45, and close association with blood vessels. 108 Biofilm formation seems to protect C. albicans from microglial damage by impairing fungal cell phagocytosis, cytokine release, and NO production. 109 The β-glucans on the surface of C. albicans are detected by TLR-2 and TLR-4, as well as by Dectin-1, expressed on the surface of retinal and intraparenchymal microglia. 110-112 However, these carbohydrates may attenuate TLR-mediated NF-κB activation, decreasing the capacity of microglia to release inflammatory cytokines in response to the pathogen. 113 CARD9, an adaptor molecule linked to Dectin-1 receptors, responds to the cell wall structures of the yeast, especially carbohydrates, and has been associated with cytokine production by microglia, leading to the recruitment of neutrophils. This observation was confirmed in CARD9 knockout mice, which show reduced neutrophil recruitment and increased CNS fungal burden. 31
Mucormycosis
Rhizopus and Mucor are zygomycetes known to cause CNS infection. Rhinocerebral infection is a complication of mucormycosis that is suspected in patients with the triad of naso-orbital infection, diabetic ketoacidosis, and meningoencephalitis. 114 Cerebral infection occurs in 33-50% of all cases, and 70% of these cases occur in patients with diabetic ketoacidosis, who display defective cellular phagocytic function. 114 In the diabetic murine model, the monocytes are dysfunctional and incapable of suppressing spore germination in serum. 114 Other populations at risk are patients with compromised immunity due to chemotherapy and a known history of haematopoietic stem cell transplantation (HSCT). 115,116 These patients have suppressed expression of Dectin-1 and TLR-2, due to possible polymorphisms, which may increase their susceptibility to this fungal infection. In fact, patients with mucormycosis have shown a deficient population of CD4+ T cells responsive to IL-6 stimulation. 116 Additionally, the adoptive transfer of NK cells is a promising alternative therapy that may restore host immunity after HSCT, decreasing the patients' susceptibility to mucormycosis. 115 NK cells pre-stimulated with IL-2 effectively killed a broad range of mucormycetes; surprisingly, these NK cells displayed reduced production of IFN-γ, which is important in augmenting the fungicidal activity of macrophages. 117 Furthermore, neutrophils in healthy individuals can be chemotactically recruited by monocytes and induce damage to the pathogen by initiating ROS production. In the ketoacidotic state, this process is impaired, enabling contiguous spread from the sinuses, through the cribriform plate, and into the brain. These fungi also cause frontal lobe abscesses and cavernous sinus thrombosis. 118 Although there is a dearth of information in the setting of mucormycosis, the brain parenchyma of a locked-in syndrome patient infected by Mucor spp. showed early ischemic damage with complete neuronal loss, microglial activation, edema, and vascular engulfment. 119
Dimorphic and rare fungi
Dimorphic fungi cause geographically restricted mycoses, corresponding to areas of the world with warm and dry conditions. These fungal pathogens, found in the environment in association with soil and bird or pigeon excreta, primarily infect the skin, bone, or lung parenchyma of immunocompromised hosts or of individuals who practice outdoor activities (e.g., farming, hiking, spelunking) in endemic geographical regions, and the infection can progressively disseminate to the CNS. The prevalence of CNS invasion by the majority of these fungi is 5-25%, with clinical manifestations that may include subacute and chronic meningitis, focal brain or spinal cord lesions, or encephalitis. Despite the importance of understanding the direct interaction between dimorphic fungi and microglia, there is limited information available in this area. Only the H. capsulatum cell wall protein Yps3p has been described to interact with TLR-2 receptors on microglia, promoting the release of CCL-2 after activation of the NF-κB pathway. 120 Likewise, the β-glucan zymosan appears to be engulfed via both CR-3 and Dectin-1 PRRs on microglia regardless of complement opsonization. This differs slightly from macrophages, in which Dectin-1 predominantly mediates phagocytosis of opsonized zymosan, while CR-3 predominates for the non-opsonized carbohydrate. 52 Similarly, rodents intracerebrally infected with P.
brasiliensis demonstrated progressive neuroinflammation characterized by substantial infiltration of peripheral phagocytic cells and increased chemokine expression that resulted in granulomatous meningoencephalitis. 121 Following the inhalation of conidia, the rare but invasive pigmented or black fungi, including Cladophialophora bantiana, Exophiala dermatitidis, and Ramichloridium mackenziei, have been isolated as causative agents of primary cerebral phaeohyphomycosis in individuals with both competent and compromised immunity in the absence of medical intervention. 122 Melanin production may interfere with microglial recognition and eradication of fungi in the brain parenchyma, resulting in high mortality due to meningoencephalitis and granulomatous disease, in which the fungus is located within giant cells walled in by fibrosis and reactive gliosis. 123 Therapy against black fungi in the setting of CNS involvement using combinations of amphotericin B, 5-flucytosine, and itraconazole has demonstrated improved survival. 124 Fusarium verticillioides is a fungus that produces mycotoxins, commonly contaminating corn and other grains, and leading to diseases of both animals and humans. 125 Fumonisin B1, the fungus's most common mycotoxin, has been associated with cases of cerebral fungal invasion leading to neuronal axon demyelination. Fumonisin B1 is cytotoxic to microglia, causing the accumulation of phospholipids in the cell membrane and altering cellular respiration by impairing mitochondrial function. 126,127 CNS infection due to Penicillium marneffei is becoming an increasingly common opportunistic infection in patients with AIDS in Southeast Asia. 128 In contrast, recent studies suggest a neuroprotective role of Penicillium spp.-related components in fermented dairy products. In particular, dehydroergosterol and oleamide reduce microglia-induced inflammation, an important feature in the pathogenesis of dementia and Alzheimer's disease. 129 Furthermore, oleamide reduces Aβ accumulation via enhanced microglial phagocytosis and suppresses microglial inflammation after Aβ deposition in the hippocampus. 130 Although these findings are provocative, further validation is necessary.
Conclusion
We have highlighted the role microglia play in fungal brain infection. Uncovering the innate abilities of microglia-mediated phagocytosis, the factors that impair and enhance this process, and the signal transduction that follows fungal antigen detection will provide a clearer path toward decreasing the burden of pathologies caused by fungi. It is apparent that to optimize microglial function against fungi, the presence of opsonins and T cells is crucial, yet, as we have described, fungal virulence factors and immune deficiencies may hamper the host's ability to properly attenuate these pathogens. Many areas of research on fungi still must be investigated, such as astrocyte and microglia interactions, and improved research models that replicate actual disease processes, for example antifungal responses in the setting of the T cell deficiency seen in HIV/AIDS and in solid organ transplant recipients on long-lasting corticosteroid therapy; in addition, this review has not covered all systemic fungi that can also lead to brain disease. Due to the worldwide prevalence of these organisms and the challenges encountered once vulnerable populations acquire invasive fungal diseases, studying the opportunistic neurotropism of fungi is imperative for prophylaxis and patient care, including the development of efficacious antifungal therapy.
Disclosure of potential conflicts of interest
No potential conflicts of interest were disclosed.
Author contributions
G.W.K. and L.R.M. wrote the manuscript and prepared the diagram presented in Fig. 2. R.L.R. photographed and prepared the images presented in Fig. 1.
Validation of a Laparoscopic Ferromagnetic Technology-based Vessel Sealing Device and Comparative Study to Ultrasonic and Bipolar Laparoscopic Devices
Introduction: Ferromagnetic heating is a new electrosurgery energy modality that has proven effective in hemostatic tissue dissection as well as sealing and dividing blood vessels and vascularized tissue. The purpose of this study was to evaluate a ferromagnetic-based laparoscopic vessel sealing device with respect to sealing and dividing vessels and vascularized tissue and to compare performance against current vessel sealing technologies. Materials and Methods: A laparoscopic vessel sealing device, the Laparoscopic FMsealer (LFM), was studied for efficacy in sealing and dividing blood vessels and in comparative studies against predicate ultrasonic, Harmonic Ace+ (US), and/or bipolar, LigaSure 5 mm Blunt Tip and/or Maryland (BP), devices in vivo using a swine model and in vitro for comparison of seal burst pressure and reliability. Mann-Whitney and Student t tests were used for statistical comparisons. Results: In division of 10 cm of swine small bowel mesentery in vivo, the laparoscopic FMsealer [12.4±1.8 s (mean±SD)] was faster compared with US (26.8±2.5 s) and BP (30.0±2.7 s), P<0.05 LFM versus US and BP. Blinded histologic evaluation of 5 mm vessel seals in vivo showed seal lateral thermal spread to be superior in LFM (1678±433 μm) and BP (1796±337 μm) versus US (2032±387 μm), P<0.001. In vitro, seal burst strength and success of sealing 2 to 4 mm arteries were as follows (mean±SD mm Hg, % success burst strength >240 mm Hg): LFM (1079±494 mm Hg, 98.1% success) versus BP (1012±463, 99.0%), P=NS. For 5 to 7 mm arteries: LFM (1098±502 mm Hg, 95.3% success) versus BP (715±440, 91.8%), P<0.001 in burst strength and P=NS in % success. Five 60 kg female swine underwent 21-day survival studies following ligation of vessels ranging from 1 to 7 mm in diameter (n=186 total vessels). The primary seal was successful in 97%, and in 99% including salvage seals. There was no evidence of postoperative bleeding at sealed vessels at 21-day necropsy. Conclusion: The Laparoscopic FMsealer is an effective tool for sealing and dividing blood vessels and vascularized tissue and compares favorably to current technologies in clinically relevant end points.
The complexity of procedures performed laparoscopically has necessitated the development of more efficient and effective technologies to divide tissue and control bleeding. 1 The advent of energy-based vessel sealing technologies has expanded the arsenal of potential techniques available for hemostasis during laparoscopic surgery. In turn, this has tremendously expanded our ability to perform complex surgeries laparoscopically.
The 2 most common energy modalities employed today for sealing and dividing vascularized tissue are based on bipolar (BP) and ultrasonic (US) technology. 2 Each method possesses its own pros and cons inherent to the nature of its mechanism. However, no single device has been shown to be superior to the other with regard to the most common metrics of performance. 2,3 As a result, usage of one device over the other is largely based on surgeon preference.
The FMwand (Domain Surgical, Salt Lake City, UT) is a commercially available hemostatic dissecting scalpel which uses ferromagnetic heating as its source of energy. 4 To generate ferromagnetic heating, radiofrequency current is delivered from a generator through a conductive alloy loop and back. 4 This loop is coated with a thin, several micron thick ferromagnetic coating material which couples to the high frequency alternating current. 4 As the radiofrequency current passes through the loop and ferromagnetic coating, pure thermal heat is generated by magnetic hysteresis losses and Ohmic heating related to the skin effect. 4 This technology allows for precise temperature control with rapid heating and cooling. 4 The heat generated is restricted to the coating itself and the tissue in contact with the coating. 5,6 No electrical energy or magnetic effects are delivered to the surrounding tissue and thus, the device remains electrically silent with regard to muscle contraction, nerve stimulation and interference with electrically sensitive equipment such as pacemakers or automatic internal defibrillators. 7 The FMsealer was constructed by placing a ferromagnetic heating element in the jaw of a surgical vessel sealing device (Fig. 1). Tissue compression occurs between the actively heated jaw and an opposing thermally inert surface. Heat conducts perpendicularly through the compressed tissue for sealing and dividing purposes. The versatility of a ferromagnetic energy source allows for a device that overcomes the geometrical limitations of US energy and circumvents the need for additional cutting mechanisms as seen in BP devices.
Following the development of the FMsealer, a series of experiments were conducted to compare the efficacy of ferromagnetic heating, sealing, and dividing of arteries and veins in vitro and in vivo with regard to size of vessel, speed of effect, strength of seal/burst pressure of seal, and durability in comparison with existing, commercially available US and BP devices. Compared with the US and BP vessel sealing devices, the FMsealer sealed arteries and veins in vitro ranging between 1 and 7 mm vessel diameter (as measured in a noncompressed naturally pressurized state) with burst pressures consistent with BP sealers and equivalent reliability of vessel seals. 4 The FMsealer compared favorably to the other technologies in speed and efficiency as well as tissue effects including thermal spread and injury. 4 In addition, in vivo survival studies confirmed the reliability of the FMsealer on arteries and veins ranging from 1 to 7 mm as well as sealing abdominal lymphatics. 4 The FMsealer operated at cooler temperatures and had less adjacent heat transfer to surrounding tissues based in part on active cooling of the instrument. 4 These favorable outcomes catalyzed further investigation with regards to the utility of such a device in laparoscopic surgery. The objective of this study was to evaluate common metrics of performance of a laparoscopic FMsealer as compared with the 2 most common energy modalities used today, BP and US.
MATERIALS AND METHODS
The laparoscopic FMsealer prototype was developed using a U-shaped, 15 mm long by 2.5 mm wide, heating element deployed in 1 jaw of the prototype vessel-sealing tool. The opposing jaw contains an insulated compression surface. Tissue is compressed between opposing sides and when activated, heat is transferred to the tissue from the heating element. The device was modeled in size and length after existing US and BP devices for purposes of comparative studies. This device was used throughout the following experiments. The Harmonic Ace + as representative of US energy and LigaSure 5 mm Blunt Tip as representative of BP energy were used for purposes of comparison with other energy modalities.
Two models were developed to facilitate the test experiments: in vivo and in vitro. The swine in vivo model was subdivided into acute, nonsurvival and chronic, survival experiments. These experiments allowed the following end points to be evaluated: (1) proof of the concept that ferromagnetic heating can be used for laparoscopic vessel sealing and dividing in vivo; (2) comparison of clinical effectiveness to US and/or BP devices for speed of 10 cm mesentery division and histology of vessel seals to analyze the extent of adjacent tissue thermal injury; and (3) 21-day survival following surgery using the laparoscopic FMsealer (n = 5). The porcine in vitro carotid artery model was developed to compare seal burst pressure among the vessel-sealing modalities on a common platform. All in vivo experiments were performed by an experienced attending gastrointestinal surgeon under the direct supervision of FDA-approved personnel using good laboratory practice procedures to verify accuracy of data collection and interpretation. In vitro experiments were performed under the direction of the attending surgeon and PhD investigators.
Acute Nonsurvival In Vivo Animal Model
The acute nonsurvival in vivo studies to assess proof of concept and comparison of clinical effectiveness utilized a 45-kg female swine model. With approval from the University of Utah Institutional Animal Care and Use Committee, animals were anesthetized per protocol. The laparoscopic FMsealer was used to seal and divide, in vivo, gastric, splenic, renal, uterine, and mesenteric arteries, veins, and associated lymphatics ranging from 1 to 7 mm in diameter. Four distinct vessels were ligated and divided for each vessel size from 1 to 7 mm in 1 mm increments to establish proof of concept that the ferromagnetic device could seal blood vessels.
Representative specimens of arteries sealed in vivo were submitted for histologic evaluation. The sealed portions of the vessels were harvested, fixed in formalin, sectioned longitudinally, stained with hematoxylin and eosin, and mounted on slides for digital imaging and analysis of thermal damage. An independent pathologist, blinded to the specific device used to seal a respective vessel, made measurements of thermal damage. The extent of adjacent vessel thermal damage was assessed by measuring from the midpoint of the vessel seal down the length of the sealed vessel to the end point of histologically apparent thermal damage. The end point of thermal damage was defined as the point at which intact fibroblast cell membranes are encountered and the most distal thermally damaged cellular structures cease. In every case, the greatest amount of thermal damage was recorded to represent total thermal spread.
Laparoscopic division of small intestine mesentery was used to compare device speed and efficacy. A metric ruler was used to measure a 10 cm length of intestinal mesentery equidistant from the root of the mesentery to the bowel wall. Ferromagnetic, US, and BP devices were compared in speed of division of the 10 cm length of small bowel mesentery and in efficacy (completeness of vessel sealing, need for rescue sealing to salvage bleeding vessels not sealed in the primary pass). Seven separate measurements were made for the FMsealer, US (at setting 5), and BP (3 bars) devices. Time for dissection was measured from recorded video. The animals were then euthanized per protocol.
Survival In Vivo Animal Studies
Five 60-kg domestic swine were utilized in a 21-day survival study to evaluate efficacy and durability of the FMsealer in sealing and dividing 1 to 7 mm arteries and veins. Surgery was performed under sterile conditions and under the direction of the veterinary staff of the Office of Comparative Medicine at the University of Utah, following a University of Utah Institutional Animal Care and Use Committee-approved protocol and FDA-approved good laboratory practice procedures. All swine were anesthetized under general anesthesia. Veterinary staff regularly monitored vital signs. Each animal underwent splenectomy, left nephrectomy, bilateral partial hysterectomy, and selective mesenteric vessel ligation to locate, seal, and divide arteries and veins measuring 1 to 7 mm in relaxed (non-vasospasm) outer diameter, along with associated lymphatics. All vessels were measured with a surgical ruler in their undisturbed state in vivo before manipulation. All sealed vessels were marked with numbered sterile mouse ear tags as fiducial markers for localization at necropsy. Vessels >7 mm (splenic and renal veins) were suture ligated, as these vessels exceeded the diameter believed to be clinically relevant for use of the sealing device. The animals were maintained for >21 days under the direction of the husbandry and veterinary staff of the Office of Comparative Medicine at the University of Utah and then underwent necropsy to assess for surgical bleeding, morbidity, and mortality.
At necropsy, before euthanasia, 10 mm thick by 15 mm wide vascularized tissue bundles consisting of proximal hind and fore limb muscle bundles were isolated, ligated, and divided to evaluate the performance of the laparoscopic FMsealer in dividing vascularized tissue (n = 64 sealed tissue bundles in a total of 5 animals).
In Vitro Model
A computerized test apparatus that automates burst strength testing and reporting after sealing of vessels was developed. This allowed a large number of arteries to be evaluated in identical circumstances by the different technologies. Bench tests were performed on fresh (< 48 h old and refrigerated to 38°F) 2 to 7 mm commercially harvested swine arteries. The outer diameter of each vessel was measured with a caliper after manipulating the vessel to its native tubular shape rather than measuring the diameter of the vessel in its flattened shape so as to more accurately measure the true diameter of the vessel. Each vessel to be sealed was sectioned at 1½ inch length. One end of the artery was fitted over a 16 Ga stub adaptor and clamped to create a seal for pressure testing. The distal end of the vessel was then sealed using the recommended settings for each instrument as follows: LigaSure 5 mm Blunt Tip (3 bars); FMsealer (min setting 3). Upon sealing, each vessel segment was submerged in a saline bath while connected by Luer-lock connector to an automated syringe pump which, under computer control, allowed gradual injection of air to the point of burst of the seal while simultaneously monitoring pressure with an in-line strain gauge pressure sensor. Data acquisition and plotting were performed using a multichannel A/D convertor (NI USB-6009, National Instruments, Austin, TX), plotting time versus pressure at 0.1-second intervals. The data acquisition module and gauge had been previously calibrated to NIST-traceable standards. The peak burst pressure was automatically derived. BP energy will seal vessels 4 to 7 mm to a higher burst pressure and with more success as compared with US energy. 8,9 As the burst strength and reliability performance of the laparoscopic FMsealer most closely resembled that of the BP device, the US device was omitted from comparison in the burst pressure and reliability studies.
Comparisons between devices were made using the mean value of burst pressure based upon the following criteria: fail (seal burst <120 mm Hg), marginal (120-240 mm Hg), and pass (> 240 mm Hg). Multiple measurements were taken (N > 100) for each device.
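To make the automated burst-test post-processing concrete, the following minimal Python sketch shows how a recorded pressure trace could be reduced to a peak burst pressure and classified against the fail/marginal/pass criteria above. It is illustrative only: the function names, the example trace, and the sampling values are hypothetical and do not reproduce the authors' actual acquisition software.

```python
import numpy as np

def peak_burst_pressure(trace_mmHg: np.ndarray) -> float:
    """Peak of a pressure trace sampled at 0.1 s intervals until the seal bursts."""
    return float(np.max(trace_mmHg))

def classify_seal(burst_mmHg: float) -> str:
    """Apply the study's acceptance criteria to a measured burst pressure."""
    if burst_mmHg < 120:
        return "fail"          # seal burst below 120 mm Hg
    if burst_mmHg <= 240:
        return "marginal"      # 120-240 mm Hg
    return "pass"              # > 240 mm Hg

# Hypothetical example trace for one sealed vessel (mm Hg)
trace = np.array([0.0, 60.0, 210.0, 480.0, 820.0, 1015.0, 310.0])
peak = peak_burst_pressure(trace)
print(peak, classify_seal(peak))   # 1015.0 pass
```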
Statistics
Student t test was used for statistical comparisons for data with a Gaussian distribution (burst strength testing, speed of mesenteric division, and thermal spread). Mann-Whitney was used for statistical comparisons of nonparametric data (burst strength reliability, objective tissue effects).
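As an illustration of this statistical workflow (and not the authors' actual analysis scripts), the sketch below shows how the two tests might be applied in Python with SciPy; the sample data are synthetic and only mimic the reported means and standard deviations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Synthetic burst pressures (mm Hg) mimicking the reported 2-4 mm artery results
fm_burst = rng.normal(loc=1079, scale=494, size=100)
bp_burst = rng.normal(loc=1012, scale=463, size=100)

# Approximately Gaussian, continuous metric -> two-sample Student t test
t_stat, p_t = stats.ttest_ind(fm_burst, bp_burst)

# Ordinal outcome (0 = fail, 1 = marginal, 2 = pass) -> Mann-Whitney U test
fm_grade = rng.integers(low=0, high=3, size=100)
bp_grade = rng.integers(low=0, high=3, size=100)
u_stat, p_u = stats.mannwhitneyu(fm_grade, bp_grade)

print(f"t test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```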
Proof of Concept
The laparoscopic ferromagnetic device (FMsealer) was able to seal and divide arteries (mesenteric, gastric, renal, splenic, femoral, and carotid) ranging from 1 to 7 mm in diameter in a live swine model. These data are not shown, as each vessel size in this range, up to 7 mm arteries and veins, was successfully sealed and divided in vivo. The FMsealer was clinically effective across the entire range of artery and vein diameters.
Tissue Thermal Effects
A comparison of the peak heating temperatures of the laparoscopic FMsealer and the BP and US devices is shown in Figure 2. During a single continuous activation to seal and divide a 5-mm carotid artery, the peak external temperatures of the FMsealer (92°C) and BP device (83°C) were significantly lower than that of the US device (235°C). In histologic measurements of the extent of thermal injury to adjacent tissue, the FMsealer showed less thermal damage to the adjacent vessel wall compared with US energy and damage equivalent to that of BP. This is shown in Figure 3 and Table 1.
Speed of Mesentery Division
The FMsealer was superior to the US and BP energy sources in speed of 10 cm mesentery division (mean ± SD s): FMsealer (12.43 ± 1.8 s), US (20.50 ± 2.46 s), BP (30.01 ± 2.65 s) (P ≤ 0.01, FM vs. US or BP). Data shown are speed of mesenteric division to achieve complete hemostasis of divided mesenteric vessels. This is summarized in Table 2.
Sealed Vessel Burst Pressure and Reliability
Data showing burst pressure and efficacy of sealing arteries 2 to 4 mm and 5 to 7 mm in diameter using the laparoscopic FMsealer versus the BP device are shown in Table 3. Data from the sealing of swine carotid arteries were as follows (mean ± SD mm Hg, % success sealing burst strength >240 mm Hg). The laparoscopic FMsealer sealed vessels 5 to 7 mm to a higher burst pressure and with more success: laparoscopic FMsealer (1098 ± 502 mm Hg, 95.3% success) versus BP (715 ± 440, 91.8%). For swine carotid arteries measuring 2 to 4 mm, the laparoscopic FMsealer sealed to equivalent burst pressure and with equivalent success: laparoscopic FMsealer (1079 ± 494 mm Hg, 98.1% success) versus BP (1012 ± 463, 99.0%).
Survival Studies
One hundred eighty-six vessels ranging in size from 1 to 7 mm (Table 4) were sealed in total among the 5 swine in the >21-day survival study. Vessel types sealed were typical of splenectomy, left nephrectomy, bilateral hysterectomy, and selective ligation of mesenteric arteries, veins, and lymphatics. Initial sealing failed in 6 arteries ranging from 4 to 7 mm yielding a 96.8% primary seal rate. Five of the 6 vessel seal failures were successfully resealed with a single application of the FMsealer yielding a 99.5% overall successful seal rate. One vessel required suture ligation for failure of rescue seal and inadequate remaining vessel length to attempt additional rescue seals.
All 5 animals survived to postoperative day 21 without any observed morbidity or mortality. 186 of 186 numbered mouse ear tag fiducial markers were located. No animals showed signs of early or delayed intraperitoneal hemorrhage (no hematoma, hematin staining of adjacent tissues) from any sealed vessel site. All sealed vessels were secure. No lymphoceles or ascites were appreciated indicating successful ligation of lymphatics in association with sealed arteries and veins.
The FMsealer was successful in ligating and dividing the vascularized fore and hind limb 1 cm thick muscle bundles in all cases without bleeding (n = 64 sealed tissue bundles in a total of 5 animals).
DISCUSSION
Advances in electrosurgical technology have driven the complexity of procedures that can now routinely be accomplished laparoscopically. This increasing complexity, in turn, drives the need for further technological advances. This is most notably seen in advanced laparoscopic procedures, where electrosurgical instruments are needed to achieve reliable and rapid hemostasis while dissecting, sealing, and dividing vascularized tissue, and simultaneously minimizing collateral damage to surrounding tissues. 2,10 Current technologies, US and BP, still possess nontrivial shortcomings including limitations in geometry, shape, efficiency of dissection, adjacent tissue damage, and ergonomics. Despite numerous studies comparing these devices, there is no clear evidence to support the use of either in preference over the other. 2,11 This is because, to date, no single device has been shown to be superior to another in all categories of performance. 2,3,11 The use of ferromagnetic heating to dissect and coagulate tissue is a new energy modality born out of this cycle. Application of ferromagnetic technology as a dissecting tool, the FMwand, has resulted in a surgical device that delivers a near instantaneous "on" effect with rapid cooling, minimal collateral damage, and excellent first-pass hemostasis. 5,6 Furthermore, the FMwand has been shown to be superior with respect to tissue distortion, ease, and speed when compared with monopolar electrosurgical devices. 12 Inspired by this new technology, a prototype of a vessel-sealing tool, the FMsealer, was developed and initially evaluated in open surgical applications. 4 On the basis of this favorable outcome, a series of experiments were designed to investigate the FMsealer in the laparoscopic setting. In vivo and in vitro models were used to study proof of concept and comparative performance parameters of the laparoscopic FMsealer compared with predicate US and BP devices.
The laparoscopic FMsealer proved effective using live in vivo swine models, in which abdominal, femoral, and carotid arteries ranging from 1 to 7 mm in diameter were successfully sealed and divided. The laparoscopic FMsealer compared favorably to the other technologies in speed and efficiency as well as tissue effects including thermal spread and injury. Compared with the US and BP vessel-sealing devices, the laparoscopic FMsealer sealed vessels with clinically reliable burst pressures consistent with the other technologies presently available. In addition, survival studies confirmed the reliability seen in the in vitro benchtop vessel-sealing burst pressure and reliability model both in the initial use of the laparoscopic FMsealer on vessels ranging from 1 to 7 mm in diameter and durability of the seal in the survival in vivo studies with no evidence of intraperitoneal bleeding at 21-day necropsy in a total of 186 separate arteries and veins.
In addition to measurable benefits in tissue sealing and dividing performance in this animal model, ferromagnetic technology offers additional theoretical advantages. First, there are few constraints as to size, length, and geometry of the technology, unlike the case with US technologies. The open and laparoscopic FMsealer can be configured for specific uses, including longer, larger-jawed instruments for open abdominal applications, shorter and narrower designs for fine tissue dissection such as head and neck surgery, and curved platforms for use in laparoscopic and pelvic surgery, including urologic, gynecologic, and rectal operations. In this regard, the FMsealer resembles the existing BP platforms. Second, the FMsealer has the ergonomic advantage of allowing separate sealing and division and omits the separate cutting mechanism needed in existing BP platforms. Third, there is no electrical energy conducted to the patient, so the device remains electrically silent with regard to muscle contraction or nerve stimulation and interference with electrical monitoring or electrically sensitive equipment like pacemakers or automatic internal defibrillators. 7 Finally, as the FMsealer does not pass electrical current through the tissue, it can be expected to work across staple lines or in the vicinity of other metallic objects such as clips, similar to US technologies but unlike BP technologies.
As the data presented here represent the proof of concept work and initial validation of efficacy in vivo and in vitro using a porcine model, the utility of this energy platform in the clinical setting in humans remains to be determined.
CONCLUSIONS
Ferromagnetic technology through a novel ferromagnetic alloy-coated heating element is a highly effective and efficient technology for thermal sealing and dividing blood vessels and vascularized tissue. An initial prototype of a laparoscopic sealing instrument utilizing ferromagnetic heating compared favorably to commercially available products based on US and BP technologies. Development of ferromagnetic vessel-sealing technology for this application shows great promise with possible distinct clinical advantages over existing technologies, particularly in the laparoscopic setting.
Digital Literacy Competencies of Mechanical Engineering Vocational Education Teacher Candidates
The era of Industrial Revolution 4.0 combines cyber technology and automation technology. The rapid changes brought about by the 4.0 industrial revolution require anticipatory steps by the government through improving the quality of education and preparing a generation whose competencies equip it for the industrial revolution 4.0 and for 21st-century learning. One of the competencies expected for 21st-century learning is digital literacy. Therefore, this study aims to determine the digital literacy ability of the prospective teachers prepared by the Mechanical Engineering Vocational Study Program, Sarjanawiyata University. The population of prospective teachers consists of students who carried out teaching practices in schools in the 2019/2020 school year. This is quantitative research with a descriptive method approach. Data were analyzed as the percentage of students who use and integrate Information and Communication Technology (ICT) in learning during the practical field experience (PPL). The results indicate that digital literacy skills need to be improved, because students rarely (50%) use and integrate ICT in their learning. With digital literacy skills, students can design learning that encourages students to think critically, creatively, and innovatively.
Introduction
The era of Industrial Revolution 4.0 combines cyber technology and automation technology. The concept of its application is centered on automation carried out by technology without requiring human labor in the application process. It is therefore necessary to prepare reliable human resources (HR) to face this era. Efforts to prepare human resources are closely related to the education sector. Education is the sector considered most responsible for preparing young people for their future, whatever form it takes. In connection with the rapid changes due to the 4.0 industrial revolution, anticipatory steps must be taken by the government by improving the quality of education. From 2020 to 2035 the Indonesian nation will receive a demographic bonus because the productive population will outnumber the non-productive population; in other words, the "dependency ratio" is estimated to reach its lowest level so far, around 44%, especially in 2030. However, the demographic bonus can turn into a demographic disaster if quality education is not properly organized and implemented. The mechanical engineering vocational education study program is tasked with preparing prospective teachers who have scientific competence in the field of engineering and can design, implement, and evaluate mechanical engineering learning creatively, innovatively, reflectively, critically, and adaptively to the development of ICT (Information and Communication Technology). Therefore, to see whether the prospective teachers being prepared already have digital literacy competencies, their learning practices in schools can be examined: to what extent do students implement or integrate digital literacy competencies during learning practices in schools? Digital literacy is the ability to access, manage, understand, integrate, communicate, evaluate, and create information safely and appropriately through digital devices and networked technologies for participation in economic and social life. It includes competencies that are variously referred to as computer literacy, ICT literacy, information literacy, and media literacy (UNESCO, 2018). Digital literacy skills are needed to answer the challenges of the digitalized era. The Covid-19 outbreak that has hit the whole world has caused rapid changes in the use of information technology; almost all fields and aspects of life now make use of it. With Covid-19, the world of education adopted a policy of learning implemented online or as distance education. Therefore, digital competencies are the key to success in implementing digital-based programs. Literacy of representation includes how technology can be understood in our time, and how to use digital tools in a broader context (Johannesen, 2014).
In considering the nature of general digital competence, Janssen et al. (2013) comment that digital competence involves more than knowing how to use devices and applications; it is intricately connected with skills to communicate with ICT, as well as information skills. Sensible and healthy use of ICT requires particular knowledge and attitudes regarding legal and ethical aspects, privacy, and security, as well as an understanding of the role of ICT in society and a balanced attitude towards technology (Janssen et al., 2013). Aspects of digital competence that must be possessed by prospective teachers include: (1) information literacy: access, evaluation, use, and management of information; (2) ICT literacy: integrating ICT in learning, and handling information: identifying, locating, retrieving, storing, organizing, and analyzing digital information, and judging its relevance and purpose (Janssen et al., 2013);
(3) media literacy: designing, using, developing, and evaluating media according to learning needs. Prospective teachers who possess these digital abilities will be better prepared for 21st-century learning, so that as teachers they can prepare graduates with the expected competencies (Masitoh, 2018). Masitoh (2018) examined digital literacy as an effort to improve the quality of learning towards the golden generation of 2045, and concluded that teachers and lecturers in the 21st century are challenged to prepare learning components that can improve school literacy. The advancement of digital devices and technology, as the identity of the industrial era 4.0, provides opportunities for humans to increase productivity and to develop themselves, their organizations, or their communities with specific goals. Twenty-first-century teachers are required to be able to manage modern technology in order to support the learning process. Digital devices have become one of the key instructional technologies in education, especially in light of what we know about today's learner. These devices, including the computer, play multiple roles within the curriculum, ranging from tutor to student creativity resource. Teachers can use the computer as an aid to collect student performance data, as well as to manage classroom activities (Smaldino, 2019). Teacher professional competence related to the knowledge-deepening approach includes the ability to manage information, solve structural problems, and integrate open software and subject-specific applications with student-centered teaching methods and collaborative projects that support students' understanding of complex real-world problems. To support collaborative projects, teachers must use networked and web-based resources to help students collaborate and access information (Munir, 2014).
Method
This research uses a quantitative approach with a descriptive method. Descriptive research is designed to obtain information about the status of phenomena at the time of the study. The population of this study was all students of the Mechanical Engineering Vocational Study Program of Sarjanawiyata Tamansiswa University who had carried out field teaching practices, 98 students in total, from which a sample of 86 was taken by random sampling. The data were analyzed by calculating the percentage of achievement for each indicator, with responses measured on four categories (Always, Often, Rarely, and Never); a simple computational sketch of this tabulation is given after the indicator list. The indicators were:
1. integrating technology media in mechanical engineering learning;
2. creating digital media for mechanical engineering learning;
3. using digital media in accordance with the characteristics of the subjects;
4. using digital media in accordance with the characteristics of students;
5. carrying out mixed (online and offline) learning using a specific platform.
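For illustration only (the variable names and raw response counts below are hypothetical, back-calculated from the percentages reported in the results), a short Python sketch of the percentage tabulation per indicator might look as follows.

```python
from collections import Counter

CATEGORIES = ["Always", "Often", "Rarely", "Never"]

def category_percentages(responses):
    """Percentage of respondents in each category for one indicator."""
    counts = Counter(responses)
    n = len(responses)
    return {c: round(100 * counts.get(c, 0) / n) for c in CATEGORIES}

# Hypothetical responses of the 86 sampled students for the ICT literacy indicator
ict_responses = ["Rarely"] * 45 + ["Often"] * 28 + ["Always"] * 12 + ["Never"] * 1
print(category_percentages(ict_responses))
# {'Always': 14, 'Often': 33, 'Rarely': 52, 'Never': 1}
```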
Results & Discussion
The results of data processing can be presented as follows.
a. Information literacy
Based on Figure 1 (pie chart of the students' information literacy profile), 59% of students fall into the "rarely" category for accessing information and accurately using, processing, and evaluating or critiquing information technology in preparing the learning process during teaching practice. Students with information literacy competence in the "often" category amount to 34%.
b. ICT literacy
Based on Figure 2 (pie chart of the students' ICT literacy profile), the students' ICT literacy abilities fall into the categories of never (1%), rarely (52%), often (33%), and always (14%). In this category, students' ability is measured by the integration of ICT-based media in learning: designing, using, managing, researching, and evaluating ICT-based learning.
c. Media literacy
Based on Figure 3 (pie chart of the students' media literacy profile), the students' media literacy abilities fall into the categories of never (0%), rarely (60%), often (32%), and always (8%). In this category, students' ability is measured by the integration of media technology in learning: designing, using, and developing ICT-based media.
The three pie charts above, which present the results on students' digital literacy skills when carrying out field teaching practices (PPL) in schools, show that these skills are in a low category. This can be seen in the average across the three digital literacy indicators of 57% in the "rarely" category. Students who often take advantage of digital literacy in the learning process amount to 33%. The large percentage of students who rarely integrate or utilize digital literacy in learning suggests that students' digital competence is still low. Based on the survey, students own information technology devices such as laptops/PCs and mobile phones (Android, iOS) and have easy internet access, yet this available device support is not being utilized to integrate ICT in learning during field teaching practices. The expected abilities in information literacy relate to accessing, assessing, and evaluating information from technical learning platforms, critically evaluating information from the internet and other digital sources, using information accurately and creatively to solve problems, and managing information from various technology platforms or software for engineering learning. This ability comprises a person's skills in conducting internet-based information management with the various existing platforms; the more skilled someone is in managing information, the more benefits he or she will gain. Information literacy refers to awareness of one's information needs, and to identifying, accessing effectively and efficiently, evaluating, and legally incorporating information into knowledge and communicating that information (Janssen et al., 2013). Information literacy is the set of skills needed to find, retrieve, analyze, and use information. The twenty-first century has been named the information era, owing to the explosion of information and information sources (Janssen et al., 2013). On the ICT literacy indicator, students are expected to be able to use ICT as a medium for learning, researching, evaluating, and channeling information in learning, and to utilize digital technology as a medium for accessing, managing, integrating, evaluating, and creating information to obtain new knowledge both within and outside their field. Based on the results for this aspect, students rarely (55%) make use of their devices to create ICT-integrated learning. Teachers must have the ICT skills needed to use technology to obtain learning resources outside their fields and the pedagogical knowledge to strengthen professional learning (Janssen et al., 2013). Teachers' digital competence has been defined as the set of capacities and skills that result in the adequate incorporation and use of ICT as a methodological resource, integrated into the teaching-learning process, thus transforming ICT into Learning and Knowledge Technology (LKT) with a clear educational application (Janssen et al., 2013; Romero et al., 2020). The application of technological literacy can be carried out using the Personal Capability Maturity Model (P-CMM) approach; implementation can be done via computers, the internet, and cellular phones to introduce students to new knowledge (Syarifuddin, 2014).
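For reference, the 57% figure is consistent with a simple average of the "rarely" percentages reported for the three indicators: (59% + 52% + 60%) / 3 = 57%.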
Digital literacy skills on the student media literacy indicator are expected to include the competence to integrate technological media into learning, create digital media for machine engineering learning, use digital media according to the characteristics of the subjects and of the students, and carry out
Experimental and Numerical Modeling of Fluid Flow Processes in Continuous Casting: Results from the LIMMCAST-Project
The present paper reports on numerical simulations and model experiments concerned with the fluid flow in the continuous casting process of steel. This work was carried out in the LIMMCAST project within the framework of the Helmholtz alliance LIMTECH. A brief description of the LIMMCAST facilities used for the experimental modeling at HZDR is given here. Ultrasonic and inductive techniques as well as X-ray radioscopy were employed for flow measurements and for visualizing the two-phase flow regimes occurring in the submerged entry nozzle and the mold. Corresponding numerical simulations were performed at TUBAF, taking into account the dimensions and properties of the model experiments. The numerical models were successfully validated against the experimental database. The reasonable, and in many cases excellent, agreement of numerical with experimental data allows the models to be extrapolated to real casting configurations. Exemplary results are presented showing the effect of electromagnetic brakes or electromagnetic stirrers on the flow in the mold and illustrating the properties of two-phase flows resulting from Ar injection through the stopper rod.
Introduction
During the last decades, continuous casting has become the dominant process of steel casting, accounting for approximately 95 % of the annual world steel production [1]. Nevertheless, considerable research effort is still devoted to further developing and improving the process. It has become obvious that the flow of the liquid steel in the tundish, the submerged entry nozzle (SEN) and the mold is one of the major issues and is decisive for the steel quality at the end of the process [2]. In particular, the surface quality of the cast strand and the incorporation of inclusions are influenced by the structure and intensity of the mold flow. An efficient adjustment and control of the fluid flow and the related transport processes ensure a good quality of the product and facilitate a stable and efficient process. The steel flow is usually influenced by plant design, i.e. the geometry of the SEN, but this is fixed throughout the operation of the continuous caster. More flexible tools for flow control are electromagnetic fields, which have been in industrial use for about 30 years. There are basically two types of electromagnetic actuators in operation: so-called electromagnetic brakes (EMBr) generate a static magnetic field for damping the turbulent flow, and electromagnetic stirrers (EMS) use an alternating magnetic field for stirring and pumping the liquid metal. However, the interaction between the turbulent steel flow and these diverse magnetic fields appears to be rather complex and deserves further investigation.
Many numerical simulations have been conducted dealing with various aspects of the casting process, such as the submergence depth of the submerged entry nozzle [3], the design aspects of the nozzle ports [4], the particle-wall interactions and nozzle clogging [5,6] or the effect of electromagnetic fields [7,8]. However, numerical models for multiphase flows and turbulent flows at large scales need to be validated by reliable experimental data. A comparison of the numerical results with plant trials requires the availability of robust and suitable measuring techniques.
Flow measurements in liquid steel are very difficult. Because of the high temperature of about 1500 °C and the harsh environment at the casting machine, there are almost no measurement techniques available. Some rough information might be retrieved from observations of the free surface by dipping paddles or nail-boards into the melt [9-13], but flow measurements from deep inside the liquid steel in a real casting machine are still missing, and no solution is foreseeable for years to come. During the last decades, water experiments have been used for modeling the flow in the SEN and the mold. Respective investigations of the flow field were done by optical methods, e.g. [4,14]. The modeling activities have to consider the significant differences in the material properties of water and liquid steel. While an appropriate scaling of the experimental model can achieve similarity in terms of the Reynolds and Froude numbers, the dramatic deviations in the surface tension and the thermal as well as the electrical conductivity render serious investigations of many flow problems practically impossible (such as heat transfer, the influence of magnetic fields, or two-phase flows). This is illustrated by the mismatch in other dimensionless numbers, e.g. the Morton, Hartmann or Prandtl number.
Model experiments in liquid metals provide an attractive possibility for investigating fluid dynamic aspects of continuous casting. Three experimental facilities have been designed and assembled at HZDR in the framework of the LIMMCAST program [15]. The LIMMCAST project within the LIMTECH alliance deals with the experimental and numerical modeling of the continuous casting of steel. It therefore covers both essential modeling strategies and allows a comparison of results and a validation of numerical models. This paper will give an overview of the experimental and numerical activities within this project and present some recent results.
2. Modeling of the continuous casting process
2.1. The experimental models
The experimental facilities LIMMCAST and mini-LIMMCAST have been built at HZDR for investigations of flow phenomena relevant to the continuous casting process [15]. A third model setup has been realized for the visualization of two-phase flows in the submerged entry nozzle and the mold by X-ray radiography [16].
The design concept of all three experimental facilities contains the main components of a continuous caster: the tundish, the submerged entry nozzle (SEN) and the mold. The facilities were planned and assembled as flexible constructions allowing an easy modification of the setup at reasonable effort and time. The transport of the liquid metal is achieved by induction pumps operating with permanent magnets. The liquid flow rate through the submerged entry nozzle is controlled by a stopper rod device. Experiments have been conducted using rectangular molds of different sizes and aspect ratios. Moreover, various geometrical configurations of submerged entry nozzles can be applied. A photograph of each setup is presented in figure 1 with indications of some major parts.
The dimensions of the model experiments are designed to meet the requirements with respect to similarity. The main dimensionless numbers are the Reynolds number, the Froude number, the magnetic Reynolds number, the Hartmann number and the magnetic interaction parameter; these are compared with the conditions of real casters in [17]. The maximum Hartmann number that can be reached in the experiments with the currently existing electromagnets is slightly above 400 at mini-LIMMCAST [18] and around 1800 at LIMMCAST [15], whereas an exemplary reference from a real caster reaches a value of about 350. Hence, the electromagnets need not be operated at full power in the experiments to achieve similarity with respect to the Hartmann number in the given example. The simple geometric scaling of the model molds is in the range of 1:3 to 1:10. The cross sections of the prototype casters cover a wide region from thick to thin slab geometries. Basically, the experimental setups in this paper model such slab casters. The rectangular cross sections of the molds have an aspect ratio of 1:4 to 1:6.7.
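To make the scaling argument concrete, the following minimal Python sketch evaluates the main dimensionless numbers for a GaInSn mold model. All property values, the characteristic length, the velocity and the field strength are illustrative assumptions, not the exact LIMMCAST parameters.

```python
# Minimal sketch: dimensionless numbers for a liquid metal mold model.
import math

rho   = 6.36e3   # density of GaInSn [kg/m^3] (literature value, assumed)
eta   = 2.2e-3   # dynamic viscosity [Pa*s] (assumed)
sigma = 3.3e6    # electrical conductivity [S/m] (assumed)
g     = 9.81     # gravitational acceleration [m/s^2]

U = 1.0          # characteristic jet velocity [m/s] (assumed)
L = 0.035        # characteristic length, e.g. mold thickness [m] (assumed)
B = 0.31         # magnetic flux density of the EMBr [T] (assumed)

Re = rho * U * L / eta                 # Reynolds number
Fr = U / math.sqrt(g * L)              # Froude number
Ha = B * L * math.sqrt(sigma / eta)    # Hartmann number
N  = sigma * B**2 * L / (rho * U)      # magnetic interaction parameter

print(f"Re = {Re:.3g}, Fr = {Fr:.3g}, Ha = {Ha:.3g}, N = {N:.3g}")
```

With these assumed values the Hartmann number comes out around 420, i.e. of the same order as the maximum quoted above for mini-LIMMCAST.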
LIMMCAST
The LIMMCAST facility is the largest experimental facility available at HZDR for modeling the continuous steel casting process. The tin-bismuth alloy Sn60Bi40 is used as the model liquid here. The physical properties of the alloy were measured and reported in [19]. The liquidus temperature of this alloy is 170 °C, requiring operating temperatures in the range of 200 °C to 350 °C. All components and connecting pipes are made of stainless steel and are equipped with electric heaters and thermal insulation. In the first stage the facility is operated under isothermal conditions. Experiments with cooling and solidification are considered as a future option.
Mini-LIMMCAST
The mini-LIMMCAST setup is a small-scale model operating with the eutectic alloy GaInSn. That alloy is liquid at room temperature. The absence of electrical heaters eases the experimental effort and reduces potential interferences with sensitive measurement techniques significantly. The material properties can be found in [20]. The mold and the SEN are made of acrylic glass. The application of plastic materials enables a very flexible modification and a quick realization of complex geometries. The electrical wall conductivity plays an important role in experiments with an electromagnetic brake. Thin metal plates have been inserted at the inner mold wall for achieving electrically conducting boundary conditions in the mini-LIMMCAST mold.
X-LIMMCAST
The third experimental setup, X-LIMMCAST, was designed especially for the visualization of liquid metal / argon two-phase flows by X-ray radiography. It is also made of acrylic glass and operated with GaInSn. The mold has a rectangular cross section with a maximum mold thickness of 15 mm. This important restriction is caused by the high X-ray attenuation coefficient of the liquid metal. The damping of the X-rays becomes too strong at a larger thickness of the liquid metal domain, and the number of photons is then no longer sufficient to provide adequate image contrast at sufficiently small exposure times. A scintillation screen located just behind the flow vessel under investigation and a CCD camera are used for the image acquisition. The observation window can be shifted, which allows covering either the area of gas injection at the stopper rod or the lower part of the SEN and the mold.
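The thickness limitation follows directly from the Beer-Lambert law. The sketch below illustrates the effect with an assumed, purely illustrative linear attenuation coefficient for a liquid metal at typical tube energies, not a measured GaInSn property.

```python
# Minimal sketch: Beer-Lambert attenuation, showing why doubling the
# melt thickness collapses the transmitted X-ray intensity.
import math

mu = 0.5  # linear attenuation coefficient [1/mm] (assumed, illustrative)

for d in (5.0, 15.0, 30.0):                 # melt thickness [mm]
    transmission = math.exp(-mu * d)        # I/I0 after passing the melt
    print(f"d = {d:4.1f} mm -> transmission = {transmission:.2e}")
```

With this assumed coefficient, going from 15 mm to 30 mm reduces the transmitted intensity by almost four orders of magnitude, which illustrates the loss of image contrast at short exposure times.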
Velocity measurement techniques suitable for liquid metal modeling
The experiments at the liquid metal models of the LIMMCAST family and the investigation of the related flow phenomena require an appropriate tool-box of measurement techniques. Flow measurements in liquid metals are a challenging task but nevertheless an inevitable condition for performing model experiments. The well-established optical methods of fluid dynamics cannot be applied because of the opaqueness of metal melts. Other measuring principles relying on ultrasonic techniques or inductive methods can be used; however, ready-to-use commercial products for spatially and temporally resolved measurements in liquid metals are very rare. Therefore, new measurement techniques have been developed and qualified at HZDR for liquid metal flows. Ultrasonic techniques are a very attractive candidate for substituting the optical methods in the field of non-transparent liquids. In the 1990s, Ultrasound Doppler Velocimetry (UDV) was established as a powerful tool for velocity measurements in fluid dynamics [21-23]. Meanwhile, the equipment (sensors and instruments) was adapted to the specific operational conditions in liquid metal flows, and the application range was extended towards higher temperatures, e.g. [24,25]. This method is used as the main and reference method for the mold flow. A fully new approach for the reconstruction of three-dimensional flow patterns in liquid metals is the Contactless Inductive Flow Tomography (CIFT) [26,27]. The technique relies on the detection of flow-induced magnetic fields outside the liquid metal, which result from the interaction between the fluid flow and an externally applied excitation magnetic field. The capabilities of CIFT for measurements of the mold flow were successfully demonstrated [28]. CIFT results were compared with UDV measurements and showed very good agreement. Further research work has been done to improve the robustness of this method against disturbances. A detailed insight into this measurement principle and its first application to a continuous casting model experiment can be found in [28].
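For orientation, the textbook Doppler relation behind ultrasound velocimetry is shown in the minimal sketch below; all numbers (sound speed, emission frequency, beam angle, Doppler shift) are illustrative assumptions, not LIMMCAST sensor parameters.

```python
# Minimal sketch: velocity from the Doppler relation
# v = c * f_d / (2 * f0 * cos(theta)).
import math

c     = 2740.0   # speed of sound in GaInSn [m/s] (literature value, assumed)
f0    = 4.0e6    # emission frequency of the transducer [Hz] (assumed)
theta = 0.0      # angle between beam and flow direction [rad] (assumed)
f_d   = 1.5e3    # measured Doppler shift [Hz] (assumed)

v = c * f_d / (2.0 * f0 * math.cos(theta))
print(f"velocity along the beam: {v:.3f} m/s")
```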
Another contactless velocimeter is Local Lorentz Force Velocimetry (LLFV). This technique is based on the principle of Lorentz force flow meters [29]. The extension of the measurement devices towards local velocity measurement is a more recent development and has already been tested on a duct flow in a closed liquid metal loop [30]. The feasibility of LLFV measurements was tested at the mini-LIMMCAST facility.
Numerical model
The geometric dimensions for the calculation domain and corresponding material properties are taken from the experimental setups. The flow is assumed to be isothermal, incompressible, turbulent and under the influence of external magnetic fields.
Governing equations
The flow in the continuous casting process can be described by the conservation equations for mass and momentum:

$$\nabla \cdot \bar{u} = 0, \qquad (1)$$

$$\rho\left(\frac{\partial \bar{u}}{\partial t} + (\bar{u}\cdot\nabla)\bar{u}\right) = -\nabla\bar{p} + \nabla\cdot(\eta\,\nabla\bar{u}) - \nabla\cdot\tau_{mod} + \bar{F}_{EM} + \bar{F}_{MP}. \qquad (2)$$

Here, $\bar{p}$, $\bar{u}$, $\bar{F}_{EM}$ and $\bar{F}_{MP}$ are pressure, velocity, electromagnetic forces and multiphase forces. The quantities are either Reynolds-averaged or filtered, depending on the turbulence modeling method used. $\rho$ and $\eta$ denote the fluid density and the dynamic viscosity, respectively. The unknown stresses $\tau_{mod}$ are either Reynolds stresses in the case of RANS-type turbulence modeling or subgrid stresses in the case of LES. They have to be provided by suitable turbulence models.
Turbulence modeling
Two different approaches are considered to describe turbulence. The first turbulence closure is realized through the unsteady Reynolds-averaged Navier-Stokes (URANS) approach. In this type of model, a typical RANS model retains an unsteady term in its transport equations, which enables the resolution of large-scale motions in the inertial range.
The second approach comprises Scale Resolved Simulations (SRS). In this concept, turbulent scales are resolved to some extent, while the rest is modeled. The classic approach is the Large Eddy Simulation (LES), where the larger eddy motions are directly calculated by the filtered equations. The drawback is that the proportion of resolved eddies has to be very high, since the so-called subgrid scale (SGS) models are tailored towards small-scale physics. In this study, the σ-model fulfills all requirements [31,32] for this complex fluid flow. The model is able to switch to a two-dimensional turbulence description. This is important since two-dimensional turbulence can occur in turbulent flows under strong magnetic fields [33,34]. Moreover, the model uses proper wall modeling with cubic decay of the SGS viscosity normal to the wall. It uses the singular values of the resolved velocity gradient tensor and therefore incorporates some structure of the turbulence. Moreover, it is comparable in its behavior to a dynamic Smagorinsky SGS model. In a recent study, this was found to be most suitable for MHD turbulence at low magnetic Reynolds numbers [35]. Nevertheless, the calculation time and the mesh size in LES can become prohibitively large, depending on the flow type. Hybrid turbulence models represent one way to overcome these problems. The Delayed Detached Eddy Simulation (DDES) pursues the concept of blending between near-wall RANS-like modeling and LES-like modeling in the core flow. This eliminates the need for a near-wall refinement and reduces the calculation time. The DDES model used here utilizes the Spalart-Allmaras model both for the RANS task and for the SGS part in the core region [36]. Another idea incorporates more features of the URANS modeling concept. The so-called Scale Adaptive Simulation (SAS) makes use of inherent flow instabilities to generate more unsteadiness than a traditional URANS can render. The reason is that these instabilities are not damped by turbulent viscosity. Hence, large turbulent eddies can be resolved. However, it is necessary to have a finer mesh in regions where these instabilities arise. The method is often called a second-generation URANS because of its close link to the URANS model [37]. The most common formulation is the combination of the SAS method [38] with the Shear Stress Transport (SST) variant of the k-ω RANS model [39]. Following the modeling equations, it can be observed that just one additional term is included in the equation for the specific turbulent dissipation rate that enables the SAS characteristics.
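As a concrete illustration of how the σ-model builds its SGS viscosity from the singular values of the resolved velocity gradient tensor, the following minimal Python sketch evaluates the model at a single grid point. The gradient tensor, filter width and model constant are illustrative assumptions, not values from the simulations reported here.

```python
# Minimal sketch of the sigma SGS model: nu_sgs = (c_sigma * delta)^2 * D_sigma
# with D_sigma = s3 * (s1 - s2) * (s2 - s3) / s1^2, where s1 >= s2 >= s3 >= 0
# are the singular values of the resolved velocity gradient tensor.
import numpy as np

def sigma_model_nu_sgs(grad_u: np.ndarray, delta: float, c_sigma: float = 1.35) -> float:
    s = np.linalg.svd(grad_u, compute_uv=False)   # sorted s1 >= s2 >= s3 >= 0
    s1, s2, s3 = s
    if s1 <= 0.0:
        return 0.0                                # no deformation -> no SGS viscosity
    d_sigma = s3 * (s1 - s2) * (s2 - s3) / s1**2
    return (c_sigma * delta) ** 2 * d_sigma

# Example with an assumed resolved velocity gradient [1/s] and filter width [m]:
grad_u = np.array([[ 0.0, 2.0, 0.0],
                   [-1.0, 0.0, 0.5],
                   [ 0.0, 0.3, 0.0]])
print(f"nu_sgs = {sigma_model_nu_sgs(grad_u, delta=1.5e-3):.3e} m^2/s")
```

Note that the differential operator vanishes whenever the smallest singular value is zero, i.e. for two-dimensional velocity fields, which is consistent with the model's ability to switch off in the quasi-two-dimensional regions produced by strong magnetic fields.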
Electromagnetic forces
The electromagnetic forces $\bar{F}_{EM}$ in equation (2) can in general be described by the induction equation for a varying magnetic field $B$. Since the length scale and the velocities in the experiment are small, the magnetic Reynolds number $Re_m$ is smaller than unity. Hence, the induced currents $\bar{j}$ do not affect the imposed magnetic field $B_{0,y}$, which allows a simplification called the quasi-static approximation. All in all, it represents a one-way coupling of the electromagnetic forces [40]. The electromagnetic forces act as Lorentz forces in the Navier-Stokes equations and require only the solution of an additional Poisson equation for the electric potential $\bar{\psi}$:

$$\nabla^{2}\bar{\psi} = \nabla\cdot(\bar{u}\times\bar{B}_{0,y}), \qquad (3)$$

$$\bar{j} = \sigma\left(-\nabla\bar{\psi} + \bar{u}\times\bar{B}_{0,y}\right), \qquad (4)$$

$$\bar{F}_{EM} = \bar{j}\times\bar{B}_{0,y}. \qquad (5)$$

The subscript 0 in the description of the magnetic field denotes its static character, whereas $y$ denotes the alignment of the magnetic field with the y-direction in this study.
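The following minimal Python sketch illustrates the quasi-static step on a small periodic grid: the Poisson equation for the electric potential is relaxed with a few Jacobi sweeps, after which the current density and the Lorentz force are evaluated. The grid, the velocity field, the material data and the crude solver are all illustrative assumptions; the actual simulations use OpenFOAM with proper linear solvers and boundary conditions.

```python
# Minimal sketch of the quasi-static MHD step on a small uniform grid.
import numpy as np

n, h = 32, 1.0 / 32              # grid points per direction, spacing (assumed)
sigma = 3.3e6                    # electrical conductivity [S/m] (assumed)
B = np.array([0.0, 0.31, 0.0])   # static field along y [T] (assumed)

# Assumed resolved velocity field (a simple shear profile for illustration).
u = np.zeros((n, n, n, 3))
z = np.linspace(0.0, 1.0, n)
u[..., 0] = z[None, None, :]     # u_x grows with z

uxB = np.cross(u, B)             # u x B at every grid point

def div(f):                      # central-difference divergence
    return sum(np.gradient(f[..., i], h, axis=i) for i in range(3))

rhs = div(uxB)                   # source term of the Poisson equation
psi = np.zeros((n, n, n))
for _ in range(200):             # a few Jacobi sweeps (illustrative, not converged)
    lap = (np.roll(psi, 1, 0) + np.roll(psi, -1, 0) +
           np.roll(psi, 1, 1) + np.roll(psi, -1, 1) +
           np.roll(psi, 1, 2) + np.roll(psi, -1, 2))
    psi = (lap - h * h * rhs) / 6.0

grad_psi = np.stack(np.gradient(psi, h), axis=-1)
j = sigma * (-grad_psi + uxB)    # induced current density, eq. (4)
F = np.cross(j, B)               # Lorentz force density, eq. (5)
print("max |F| =", np.abs(F).max(), "N/m^3")
```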
Multiphase modeling
Multiphase flow in the continuous casting mold appears in many forms. Here, a bubbly flow is considered, as it develops from argon gas injection at low and moderate gas flow rates. Resolving the bubble interfaces is not expedient. Hence, a much cheaper technique is the point-particle approach within the Lagrangian frame. The bubble is treated as a point mass which experiences different forces. The bubbles move inside the Lagrangian domain, where the forces on the particle, such as drag, pressure, virtual mass [41], lift [42] and electromagnetic forces [43], are interpolated from the Eulerian domain. Integration of all the forces yields a bubble trajectory.
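To illustrate the force balance behind this approach, the following minimal Python sketch compares buoyancy against a constant-coefficient drag for argon bubbles of different sizes in a downward liquid metal jet. Lift, virtual-mass and electromagnetic forces are omitted for brevity, and all numbers are illustrative assumptions rather than LIMMCAST parameters.

```python
# Minimal sketch: terminal-velocity balance for argon bubbles in a
# downward liquid-metal jet (buoyancy vs. constant-coefficient drag).
import numpy as np

rho_l, rho_g = 6.36e3, 1.6        # GaInSn / argon density [kg/m^3] (assumed)
g, c_d = 9.81, 1.0                # gravity, assumed constant drag coefficient
u_jet = -0.2                      # downward liquid velocity in the jet [m/s] (assumed)

for d_b in (1.0e-3, 2.0e-3, 6.0e-3):          # bubble diameters [m]
    vol  = np.pi * d_b**3 / 6.0
    area = np.pi * d_b**2 / 4.0
    # terminal rise velocity relative to the liquid: buoyancy = drag
    v_rel = np.sqrt(2.0 * (rho_l - rho_g) * vol * g / (rho_l * c_d * area))
    v_abs = u_jet + v_rel                     # vertical velocity in the lab frame
    fate = "rises past the jet" if v_abs > 0 else "dragged downwards"
    print(f"d = {d_b*1e3:3.1f} mm: v_rel = {v_rel:.3f} m/s -> {fate}")
```

Despite its crudeness, this balance reproduces the behavior observed at X-LIMMCAST: small bubbles cannot outrun the jet and are dragged into the lower recirculation zone, whereas sufficiently large bubbles overcome it.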
Since argon bubbles are subject to break-up and coalescence, bubbles of different sizes can be observed. A recent study measured this bubble size distribution in a liquid metal mold model [16]. This distribution was used for the simulations here.
The most important force acting on a moving bubble is the drag force. In a recent study [44] the authors found a large influence of the bubble shape (by means of $C_{D,Dij}$) and of swarm effects (by means of $C_{D,Rog}$) on the quality of the results when using tailored drag models from [45,46]. The drag model was then extended towards MHD effects (by means of $C_{D,Jin}$) using the findings of a recent study [47]. Here, $\alpha_c$ denotes the phase fraction of the continuous phase, $Eo$ the Eötvös number, and $N$ the magnetic interaction parameter.
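As a small numerical illustration of the dimensionless groups entering this extended drag model, the sketch below evaluates the Eötvös number and the magnetic interaction parameter for an argon bubble in GaInSn. All property values, the bubble diameter, the slip velocity and the field strength are assumptions chosen for illustration.

```python
# Minimal sketch: dimensionless groups for an argon bubble in GaInSn.
rho_l, rho_g = 6.36e3, 1.6   # liquid / gas density [kg/m^3] (assumed)
gamma = 0.53                 # surface tension of GaInSn [N/m] (assumed)
sigma_e = 3.3e6              # electrical conductivity [S/m] (assumed)
g = 9.81
d_b = 3.0e-3                 # bubble diameter [m] (assumed)
B = 0.31                     # magnetic flux density [T] (assumed)
u_rel = 0.2                  # bubble slip velocity [m/s] (assumed)

Eo = (rho_l - rho_g) * g * d_b**2 / gamma      # Eotvos number (bubble shape)
N  = sigma_e * B**2 * d_b / (rho_l * u_rel)    # magnetic interaction parameter

print(f"Eo = {Eo:.2f}, N = {N:.3f}")
```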
Numerical method and computational cost
The coupled Navier-Stokes MHD equations (Equations (1)-(5)) were discretized on a hex-dominant unstructured mesh using the finite volume method within the open-source CFD library OpenFOAM. The Lorentz force is added as an explicit source term in the momentum equations. After the flow has developed, the flow fields are collected for 20 s for the mini-LIMMCAST geometry and 80 s for the upscaled mold geometry.
The different turbulence modeling methods demand different mesh resolutions. In a previous study, the authors showed that the SAS needs at least 1.5 million cells in the case of the mini-LIMMCAST geometry [48]. This is achieved by an overall cell size of 1.5 mm and a refined region of 0.75 mm cell size. In this paper, results of the SAS approach on a fine mesh of 3.5 million cells are also shown. The upscaled mold geometry consists of 5 million cells, the majority of which are located in the first 1.5 m of the strand. The simulation of the SAS on the medium-size mesh of the mini-LIMMCAST takes about 6 days on 32 cores of 2.9 GHz Intel Xeon X5670 processors on the high-performance computing cluster at the TU Bergakademie Freiberg, while the fine mesh requires 25 days on 64 cores. The LES of the upscaled mold geometry takes about 30 days on 128 cores.
A short review of LIMMCAST results
The following sections will briefly summarize the research activities conducted by the LIMMCAST project within the LIMTECH alliance during the last five years.
Effect of electromagnetic fields on the mold flow
The impact of a static magnetic field on the mold flow was already a prominent topic in previous studies of the LIMMCAST program, e.g. [18,49]. It was shown that the electrical boundary conditions at the mold wall have a dramatic influence on the properties of the mold flow [17]. Insulating boundary conditions can lead to dramatic flow oscillations, whereas the flow becomes steady in the case of conducting boundary conditions. Regardless of the electrical boundary conditions at the mold walls, a reverse flow close to the jet is triggered by the static magnetic field, and a strong upward flow can be observed at the narrow mold wall above the jet. Moreover, a tendency to form two-dimensional flow structures along the magnetic field direction becomes apparent. The insulating case showed some additional special features, such as a jet oscillation and the transition from a standard double-roll flow in the mold to multiple rolls. In the lower mold one can then find a single roll over the whole mold width, which can also change its rotation direction over time. Generally, the symmetry was broken by the EMBr under insulating boundary conditions. These findings were also obtained by subsequent numerical simulations [7,49-51]. The flow structures are sketched in figure 2. The standard double-roll flow pattern under reference conditions without magnetic field is shown in the left mold half. The effects of the EMBr, especially at insulating boundary conditions, are sketched in the right mold half.
The strategy for applying electromagnetic actuators can vary in the magnetic field design or the positioning of the magnetic coils at the mold. Therefore, experimental studies of the effect of variations of the EMBr position at the mold were performed for different nozzle designs [17,52]. The same problem was investigated again by numerical simulations [53].
Another study addresses the action of a rotary electromagnetic stirrer (EMS) at the mini-LIMMCAST facility in a round mold [54]. The round mold in the experiments was a 1:3 scale model of an actual caster. The flow measurements were obtained by means of the UDV method. Three different situations were investigated in this work: the jet from the nozzle without EMS, a purely EMS-driven flow, and a superposition of both. The EMS-driven mold flow revealed the structure of a primary swirling flow and a secondary flow in the meridional plane consisting of two toroidal vortices. The secondary flow is significantly weaker than the primary flow, but it is responsible for a redistribution of angular momentum in the melt. The combination of the jet flow with the stirrer showed a dramatic increase of the free surface velocities with the creation of vortices and strong surface deflections near the nozzle [54]. The measurement results were used for the validation of numerical models in corresponding simulations [55], which are supposed to provide reliable predictions of the flow in the real caster.
Mold surface
In the industrial process of continuous casting, the mold surface is of particular interest because it is the only access for visual observation by the operator or for the positioning of crude testing probes. Moreover, the stability of the interface between the melt and the slag layer is regarded as a crucial issue with regard to achieving defect-free casting products [2]. Disadvantageous flow conditions can lead to entrainment of impurities, surface freezing, hook formation and other problems. This explains the desire for surface disturbances that are as low as possible, combined with a sufficient heat transfer from the melt to the slag layer, which ensures adequate melting of the flux powder for the lubrication of the strand. Therefore, some melt convection should exist just beneath the melt surface, but the velocities should not exceed a certain limit in order to avoid the entrainment of impurities, slag or flux.
The melt surface was also examined in the mini-LIMMCAST experiments. Here, a slag layer does not exist, but the free surface of the liquid metal is covered by a thin oxide layer. In particular, the surface level in the mold and the tundish can be detected by an ultrasonic distance sensor. The sensor tracks the temporal behavior of the melt level at one location, e.g. [15]. A comparison of such results with an analytical assessment by the extended Bernoulli equation (including friction losses and accelerations) showed good agreement [17]. As a second possibility, a video camera was applied for the visualization of the melt surface [17]. The recorded images revealed a specific oscillation frequency of the free surface in a slab configuration without electromagnetic actuators. This natural frequency is related to a surface wave with a wavelength of twice the mold width [17]. Another approach uses line lasers in combination with video imaging for an improved quantitative evaluation of the surface behavior. A laser scanner provides time series of the surface profile along the central line at the mold surface [56]. For instance, the observation of the mold surface at the mini-LIMMCAST setup also showed the detrimental influence of an unfavorably positioned EMBr at insulating boundary conditions. In this case, the melt surface becomes obviously unstable and shows intensive sloshing and bulging. The differences in the surface behavior are also sketched in figure 2, and an example of the surface profiles measured by the line laser is depicted in figure 3. Such strong level oscillations were not observed in the case of electrically conducting mold walls.
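A quick plausibility check of such a natural frequency can be made with the deep-water dispersion relation for a standing gravity wave whose wavelength is twice the mold width. The sketch below uses an assumed, illustrative mold width rather than the actual mini-LIMMCAST dimension.

```python
# Minimal sketch: natural sloshing frequency of a standing gravity
# wave with lambda = 2 * width, deep-water dispersion omega^2 = g*k.
import math

g = 9.81
width = 0.14                      # mold width [m] (assumed, illustrative)
k = math.pi / width               # wavenumber for lambda = 2 * width
f = math.sqrt(g * k) / (2.0 * math.pi)
print(f"expected sloshing frequency: {f:.2f} Hz")
```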
Argon injection
The effect of argon gas injection has been studied at the X-LIMMCAST facility by X-ray radiography [16]. Two observation windows were chosen, covering the injection point of argon gas at the tip of the stopper rod and the jet region in the mold, respectively. The monitoring of the two-phase flow in the injection region revealed the occurrence of large void zones in the upper part of the SEN just below the stopper rod. An exemplary picture of the injection point, including the clearly visible void zones, can be seen on the left of figure 4. The majority of the bubbles detaching from the stopper rod were captured by the void zones. The detachment of relatively large bubbles was observed at the end of the void zones. Smaller bubbles are formed by strong shearing in the bottom part of the SEN just before the nozzle ports [16]. The rather flat geometry of the X-LIMMCAST mold (see section 2.1.3) restricts the flow to an almost two-dimensional pattern. The liquid metal jets emerging from the SEN divide the mold flow into an upper and a lower part. Because of the high inertia of the liquid metal jet, the small bubbles are dragged into the lower recirculation zone of the mold. A direct ascending motion of the bubbles is blocked by the jet. The increasing gas concentration in the lower part of the mold promotes coalescence, resulting in larger bubbles. The mean bubble size grows until the related buoyancy force becomes strong enough to overcome the jet barrier [16]. A sketch of the two-phase flow phenomena observed at X-LIMMCAST is depicted in the middle of figure 4. The right picture in figure 4 presents a snapshot of the two-phase flow in the mold. The two X-ray images in figure 4 also reveal the challenging task of identifying the fast-moving bubbles.
Application and test of new measurement techniques
The development of the CIFT method for applications in the field of continuous casting has been reviewed in [57]. Further progress has been made to improve the robustness and applicability of the method. A measuring configuration was designed and realized for the LIMMCAST facility. Related reconstructions of the flow field can be found in [58]. The detection of the weak induced magnetic field created by the melt flow is a challenging measurement task, especially in the presence of other disturbing external magnetic fields. A feasibility study on the applicability of the CIFT technique at a mold equipped with an EMBr was carried out for the mini-LIMMCAST setup [59]. It became obvious that the acquisition of the CIFT signals in the presence of a much stronger EMBr can be successfully performed by specific induction coils [58,60]. Another task is the application of gradiometric magnetic field sensors instead of flux-gate sensors or single induction coils. Measurements by gradiometric coils are more robust against electromagnetic disturbances. First test measurements at mini-LIMMCAST are reported in [61].
Local Lorentz Force Velocimetry (LLFV) is dedicated to localized and contactless measurement of the melt velocity. The applied LLFV sensor is able to measure the induced force as well as the torque in all three dimensions, resulting in six measurable parameters. Both force and torque can be used to determine the local velocity in all three directions. Test measurements were carried out at the mini-LIMMCAST setup [62]. These experiments revealed that the torque is less sensitive to noise than the force measurement [62]. The comparison of LLFV data with corresponding UDV measurements obtained in the mold center plane revealed some qualitative differences. These deviations can be associated with the restriction of the LLFV measuring volume to regions close to the mold wall and with three-dimensional flow effects in the flat mold. Better agreement between LLFV and UDV data was obtained for UDV measurements conducted close to the mold wall [62].
Validation and comparison of different turbulence models
Since turbulence modeling concepts are based on canonical flow problems, the models are not universally applicable to every flow problem. Therefore, a first study was intended to find the best-suited RANS turbulence model for mold flows [63]. The study came to the conclusion that the k-ω SST model is preferable to all the other models considered. Moreover, second-order discretization was found to be crucial for reasonable results. The first observation with the OpenFOAM MHD numerical model identified an active flow oscillation once a magnetic field is applied [48]. The study also showed that conventional URANS modeling lacks the MHD turbulence aspect, which either has to be included in the model or tackled by models that allow more unsteadiness of the flow. The latter concept was pursued in subsequent studies [64,65]. First, the SAS concept by Menter et al. [66] was applied to the MHD flow problem. The model exhibits a dependence on the mesh resolution of the refined mesh region. Nevertheless, the flow shows more fluctuation and unsteadiness (see Fig. 5c).
The previously found oscillation is rendered very well, and additional turbulent scales can be resolved, which are then subject to Lorentz forces. Therefore, MHD turbulence can be described to some extent. The vertical component of the velocity along a line from the surface to the bottom of the mold is shown in Fig. 5a and b. In the case without an EMBr, the profiles show a significant peak of the jet and a recirculation with upward-directed velocities in the lower mold. The numerical and experimental data of the temporal average are in very good agreement. In contrast, the profiles of the MHD case show strong differences between the modeling methods used, as well as a left-right asymmetry. The reason is that the oscillation is not perfectly periodic, since a small shift in the magnetic field has drastic effects. In addition, the measurement period only captures a few cycles of the oscillation. Among the turbulence modeling methods, the LES gives the most reasonable results. The upper recirculation and the jet progression match quite well, despite differences in the jet angle. Moreover, deviations from the experimental profile can be found in the lower mold because of the modification of the outlets in the LES case.
Another study incorporates the DDES modeling concept by Spalart et al. [36] into the comparison [65]. Once again, the SRS models appear to be in better accordance with the experimental results, for the average flow as well as for instantaneous flow features. Apart from the turbulence modeling, the dynamics of the free surface level in mold flows was investigated in another study [67] by means of a thin slab mold model at TU Delft [68]. As a next step, a scale-up of the mold model was performed using different EMBr concepts (see section 4.3).
With regard to argon injection at the mold apparatus, a validation study was performed to identify the dependence of the results on bubble drag models and bubble inlet conditions in an air-water system [44]. In particular, the scarcely discussed inlet conditions showed a strong influence on the results. In addition, drag models which incorporate swarm effects of the bubble ensemble showed the best agreement with experimental results. The current investigations used these results to extend the numerical MHD model with the multiphase part of bubbly flow in a full-scale mold model (see section 4.3).
Electromagnetic stirring at the SEN
Recent studies were concerned with the application of rotary electromagnetic stirring at the continuous casting setup. The CIFT method was used to investigate the effect of electromagnetic stirring at the nozzle on the flow in a rectangular mold [69]. Another experimental study on the influence of magnetic stirring in the submerged entry nozzle (SEN) for continuous casting of round blooms has recently been performed at the mini-LIMMCAST facility. The experimental setup consists of a round mold made of acrylic glass (PMMA) with an inner diameter of 80 mm. A rotating magnetic field (RMF) is applied to the SEN by rotating permanent magnets. Velocity measurements have been carried out using Ultrasound Doppler Velocimetry (UDV). Both a horizontal and a vertical arrangement of ultrasonic sensors were employed, where the transducers were mounted at several positions along the side wall of the mold or were dipped directly into the liquid metal through the free surface. The points of origin of the coordinate axes x and y are located on the axis of the cylindrical mold and point radially outwards, while the origin of the z-axis is placed on the top edge of the mold. The free surface is located at z = 25 mm, the SEN outlet at z = 65 mm. The vertical measurements cover a depth of more than 300 mm from the free surface. Because of the highly dynamic conditions, the measured velocities are split into positive and negative components before the time average is computed, to avoid averaging to zero. In figure 6 (left) it can be seen that close to the center of the mold, at y = ±15 mm, downward (positive) velocities are predominant, while closer to the outside of the mold, at y = ±30 mm, upward (negative) velocities can be observed. The application of a rotating magnetic field in the SEN moves the area of highest velocities closer to the free surface (figure 6, right).
A detailed discussion of the phenomena observed in our experiments is currently in preparation and will be published in the near future.
EMBr with conducting walls at LIMMCAST
Static magnetic fields are used for an efficient control of the jet flow and the suppression of surface fluctuations in continuous casting of steel [2]. The effect of a static magnetic field on the mold flow has been investigated experimentally at the LIMMCAST facility. The model mold, made of stainless steel, has a cross section of 400 mm × 100 mm. The SEN has an inner diameter of 35 mm, and the pipe axis of the SEN defines the vertical z-axis with downward orientation. The top edge of the nozzle port marks the horizontal x-axis, which is oriented towards the narrow mold face. The location of the coordinates is also illustrated in figure 8. The melt level is controlled by two electrodes detecting the melt level and by the adjustable pump rotation speed. The mean melt level in the mold was adjusted to a position of about −90 mm. The first ultrasonic sensor is located at a distance of 8 mm from the narrow mold wall. Neighboring ultrasonic sensors have a distance of 37 mm between each other. Thus, the horizontal positions in the coordinate system correspond to x = 115 mm, 152 mm and 189 mm. The tips of the waveguides are positioned at a height of 13 mm above the upper port edge.
The mold flow without an applied magnetic field was calculated by numerical simulations. The velocity magnitude in the center plane is illustrated in figure 7. The figure clearly depicts the jet flow emerging from the nozzle ports. Furthermore, the flow shows the typical double-roll pattern, where the jet splits into an upward and a downward stream after impinging on the narrow mold wall.
These results of the numerical simulations are compared with corresponding measurements obtained by UDV. The ultrasonic measurement configuration is sketched in figure 8. Local flow measurements were carried out by means of Ultrasonic Doppler Velocimetry, applying specific waveguide sensors for high temperatures. Figure 9 presents the time-averaged vertical velocity measured by the waveguide sensors at three different x-positions. The outermost sensor, close to the narrow wall, detected a strong ascending flow, which can be attributed to the upward-directed branch of the upper convection roll. The innermost sensor recorded the smallest velocities, as it is located near the center of the roll pattern. The downward flow appearing at distances of about 100 mm can be related to the jet. The UDV results (solid lines) are compared with the outcome of the numerical simulation (dashed lines) and show quite good agreement.
Next, the static magnetic field was applied to the flow. The top edges of the EMBr pole shoes are located at a height of z = −43 mm. The pole shoes have a height of 200 mm and cover the nozzle ports completely. Figure 10 contains time-averaged profiles of the vertical velocity determined at the sensor position close to the narrow side wall for different values of the magnetic field strength, but otherwise for the same casting geometry as in figures 7 to 9. It becomes obvious that the upward flow of the upper recirculation zone (above the jet and close to the narrow mold wall) is amplified by the magnetic field. Furthermore, the flow appears to be almost steady. This finding coincides very well with the data measured at the smaller mini-LIMMCAST setup for conducting walls [18], which indicates a similarity with respect to the electrical wall conductance ratio.
Figure 7. Snapshot of the velocity magnitude in the midplane of the mold from a numerical simulation under reference conditions (without EMBr).
Upscaling to a real casting geometry
Based on the successful validation of the numerical model using the experimental data from the mini-LIMMCAST facility, the numerical simulations were extended to melt flows in a continuous casting mold of full-scale geometry, as relevant for real casting machines. For that purpose, a rectangular mold of 1500 mm width and 250 mm thickness was chosen. The reduction of the liquid cross section with increasing distance from the free surface due to progressing solidification from the side walls is considered in the model. Other parameters included in the model are a strand length of 5000 mm, a casting velocity u_c of 1.5 m/min and an SEN inlet velocity u_in of 1.91 m/s. The specific geometry of the SEN was taken from [70]. At the top surface, a free-slip boundary condition was applied. Numerical calculations were performed for the case without magnetic field and for two different types of electromagnetic brakes. The first EMBr configuration concerns a ruler brake whose pole shoes cover the entire width of the mold. The other magnetic system represents the so-called Flow Control Mold (FCM) EMBr with two static magnetic fields, which are situated below the jets and at the level of the free surface, respectively. Electrically insulating boundary conditions are assumed for the mold walls. Fig. 11 shows LES results of the mean flow in the midplane of the mold. A double-roll pattern can be identified in the case without EMBr. The jets show a rather flat exit angle and significant spreading. Moreover, a slight asymmetry between the left and the right jet can be observed. The reason might be a self-sustaining long-wave oscillation with a period of about 8 seconds. The limited calculation time allows for capturing only a few oscillation cycles within the simulation. The surface velocities are moderate and in a range that is regarded as tolerable for avoiding adverse phenomena like slag entrainment [71].
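The chosen inlet velocity can be checked for consistency with the casting speed through a simple mass balance. The SEN bore is not stated above, so the sketch below back-calculates it purely for illustration.

```python
# Minimal consistency check of the upscaled-mold parameters: the SEN
# inlet velocity follows from mass conservation between the mold cross
# section (casting speed) and the SEN bore.
import math

width, thickness = 1.5, 0.25          # mold cross section [m]
u_c  = 1.5 / 60.0                     # casting velocity [m/s]
u_in = 1.91                           # SEN inlet velocity [m/s]

q = width * thickness * u_c           # volumetric flow rate [m^3/s]
a_sen = q / u_in                      # required SEN cross section [m^2]
d_sen = math.sqrt(4.0 * a_sen / math.pi)
print(f"Q = {q*1e3:.2f} l/s -> implied SEN bore: {d_sen*1e3:.0f} mm")
```

The implied bore of roughly 79 mm is a plausible SEN diameter for a slab caster of this size, which supports the internal consistency of the chosen parameters.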
The application of the level EMBr to this flow leads to an upward-directed bending of the jets. The flow pattern reveals a strong asymmetry and distinct oscillations of the jets. These features appear to be similar to those found in the experiments at the mini-LIMMCAST facility. Compared to the mini-LIMMCAST experiments, a smaller SEN submergence depth was chosen for the real-scale configuration. This might be the reason why the characteristic oscillations are not found in the upper recirculation rolls. The significant bending of the jets causes large flow velocities at the free surface. Such a situation bears the risk of shear layer instabilities at the slag interface, which could cause the entrainment of slag droplets into the melt [71].
A completely different flow pattern is found for the case of the FCM EMBr. The jets discharge from the nozzle with a flat exit angle and show little spreading. The flow pattern appears to be almost symmetric and stable. The upper vortices are very weak, and the lower recirculation is shifted downwards by the magnetic field. The surface velocity is drastically reduced, which might create the risk of meniscus freezing.
The corresponding turbulent kinetic energy (TKE) is shown in Fig. 12. For the case without magnetic field, it becomes obvious that the TKE is very high in the jet region. The TKE is produced in the jet shear layers, from where it is transported into the zone of the upper recirculation rolls. As a result, the surface TKE is rather high as well. At the top surface, maximum values of the TKE are found in the narrow flow regions on both sides of the SEN. In the case with a level EMBr, the TKE in the jet is lower since the magnetic field dissipates turbulent kinetic energy by Joule heating. This effect reduces the TKE at the free surface, too. However, the high velocities at the surface and the left-right asymmetry cause remarkable flow detachment at the SEN. As a consequence, the development of von Kármán vortices is facilitated, which are detrimental to the steel quality [71]. For the FCM EMBr, the contour plot of the TKE distribution reveals a broad jet, but the striking feature is the very low TKE values occurring at the top surface around the SEN and in the region below the SEN. Due to the low surface turbulence, an entry of slag into the liquid metal appears to be quite unlikely. Another series of URANS calculations for two-phase flows in the mold was conducted using the k-ω SST model for the same three configurations as considered in the single-phase simulations: without magnetic field, with the ruler EMBr and with the FCM EMBr. For that purpose, we assumed the injection of argon bubbles at the SEN inlet. The URANS concept was chosen here because of expected uncertainties in the turbulent dispersion and the bubble trajectories obtained by SAS calculations. Fig. 13 shows the liquid volume fraction at the midplane of the mold and at the free surface, as well as the bubble size distribution along the top surface, for an argon flow rate of 8 % with reference to the melt flow rate. Since larger bubbles ascend more quickly, their point of impact at the free surface is found to be closer to the SEN in all three cases. Smaller bubbles are more affected by dispersion due to the liquid flow and can be detected at larger distances from the SEN. It can be seen that the dispersion of argon bubbles along the x-direction is obstructed by the level EMBr, while such a damping effect on the horizontal bubble transport is not observed for the FCM EMBr. The distribution of the small bubbles in the FCM setup shows two peaks, one close to the SEN and the other almost at the center of the distance between the SEN and the narrow mold face. The second peak, at a larger distance from the nozzle, might be explained by the existence of quasi-two-dimensional vortices which are aligned with the magnetic field direction. Such vortices promote an effective bubble transport towards the side wall. The magnetic field also affects the transport of larger bubbles, whose distribution likewise shows higher bubble numbers at greater distances from the nozzle.
Summary & Conclusions
This paper presents a summary of experimental and numerical results produced during the LIMMCAST project within the Helmholtz LIMTECH alliance. The project aims at the investigation of fluid flow aspects of the continuous casting process of steel. Special attention was paid to the influence of DC and AC magnetic fields on the flow in the mold. Furthermore, the issue of multiphase flows in the SEN and the mold due to argon injection at the stopper rod was addressed.
Three experimental facilities with liquid metals are operated at HZDR for modeling various essential aspects of fluid flow and related transport processes. Such an experimental program requires the availability of suitable measurement techniques for liquid metal flows. The measurements carried out at LIMMCAST rely on ultrasonic Doppler velocimetry and on inductive methods such as Contactless Inductive Flow Tomography (CIFT) and Lorentz Force Velocimetry (LFV). In this respect, a fruitful cooperation was established with the Young Investigators Group of the LIMTECH alliance.
Several numerical schemes and different turbulence models were applied and verified. Finally, it can be concluded that the LES model provides the best results for the specific configurations considered here. This assessment is based on a comparison with the experimental data from the LIMMCAST experiments.
The action of static electromagnetic fields on the mold flow is one of the primary fields of interest in the project. The main finding is the non-trivial action of electromagnetic actuators on the mold flow. The electrical boundary conditions at the mold wall showed a dramatic influence on the temporal behavior of the melt flow under the influence of diverse electromagnetic actuators. For instance, the application of a DC magnetic field under insulating boundary conditions can promote the generation of biased flows characterized by large-scale oscillations and instabilities of the jet. Different types of EMBr usually employ steady horizontal magnetic fields normal to the wide side of the mold. Under such conditions, the mold flow tends to become quasi-two-dimensional, which becomes apparent in the formation of strong recirculation zones. Other experimental research activities are related to the application of AC magnetic fields for electromagnetic stirring in the mold and the SEN.
The coordinated and complementary use of numerical simulations and model experiments allows the acquisition of essential insights into the flow phenomena occurring in the continuous casting of steel. The model experiments provide valuable measurement data which can be used for the validation of numerical models.
Despite the progress in modeling and understanding the mold flow in continuous casting, much work still remains in experimental and numerical simulations, as well as in the enhancement of measurement techniques. There are many different kinds of electromagnetic actuators in industrial operation, and up to now this project has selected just two of them as examples. Furthermore, the experiments on the two-phase flow due to argon injection were just a small beginning, and the combination of argon injection with electromagnetic actuators is an interesting topic, but it is also connected with challenging measurement and numerical modeling tasks. The CIFT measurement technique has made huge progress in the last years, but some more work and development is still necessary to push it towards a potential application at a real caster. The simulations in this study considered insulating walls only. In the real process, the solidified shell allows the induced currents to enter the shell and therefore to influence the Lorentz force distribution in the mold. Future studies should include this feature. Moreover, the model is missing a free surface treatment, which should be added by combining the current model with a Volume-of-Fluid (VoF) approach. By means of this, important questions regarding slag emulsification can be addressed more thoroughly.
Frequency of Adrenal Insufficiency in Patients With Hypoglycemia in an Emergency Department: A Cross-sectional Study
Abstract Context In most patients presenting with hypoglycemia in emergency departments, the etiology of hypoglycemia is identified. However, it cannot be determined in approximately 10% of cases. Objective We aimed to identify the causes of hypoglycemia of unknown etiology, especially adrenal insufficiency. Methods In this cross-sectional study, we evaluated the etiology of hypoglycemia among patients in our emergency department with hypoglycemia [plasma glucose level < 70 mg/dL (3.9 mmol/L)] between April 1, 2016 and March 31, 2021 using a rapid adrenocorticotropic hormone (ACTH) test. Results There were 528 cases with hypoglycemia included [52.1% male; median age 62 years (range 19-92)]. The majority [389 (73.7%)] of patients were using antidiabetes drugs. Additionally, 33 (6.3%) consumed alcohol; 17 (3.2%) had malnutrition; 13 (2.5%), liver dysfunction; 12 (2.3%), severe infectious disease; 11 (2.1%), malignancy; 9 (1.7%), heart failure; 4 (0.8%), insulin autoimmune syndrome; 3 (0.6%), insulinoma; 2 (0.4%) were using hypoglycemia-relevant drugs; and 1 (0.2%) suffered from a non-islet cell tumor. Rapid ACTH tests revealed adrenal insufficiency in 32 (6.1%). In those patients, serum sodium levels were lower (132 vs 139 mEq/L, P < 0.01), eosinophil percentages were higher (14% vs 8%, P < 0.01), and systolic blood pressure was lower (120 vs 128 mmHg, P < 0.05) at baseline than in patients with the other etiologies. Conclusion The frequency of adrenal insufficiency as a cause of hypoglycemia was much higher than we anticipated. When protracted hypoglycemia of unknown etiology is recognized, we recommend that the patient be checked for adrenal function using a rapid ACTH test.
If one considers all patients with altered mentation presenting in the emergency department, hypoglycemia has been identified as the underlying process in approximately 7% of cases [1]. While the most common causes of hypoglycemia are antidiabetes drugs, there are many other causes of hypoglycemia, and in approximately 10% of cases, the etiology of hypoglycemia cannot be determined [2,3], and glucose infusion is routinely used to maintain patients' blood glucose levels.
Adrenal insufficiency (AI) is one cause of hypoglycemia. AI is the lack of cortisol (glucocorticoid) and/or aldosterone (mineralocorticoid) secretion from the adrenal glands. AI is classified as primary (Addison disease), caused by diseases of the adrenal cortex; secondary, caused by impaired adrenocorticotropic hormone (ACTH) secretion due to pituitary abnormalities; and tertiary, caused by insufficient corticotropin-releasing hormone (CRH) secretion and function because of hypothalamic dysfunction [4-6]. The prevalence of primary AI is estimated at between 82 and 144 cases per million population in Western societies [7,8], compared to an estimated 138 to 142 cases per million population in Japan [9,10]. The currently estimated incidences of this disorder are 4.4 to 6.0 and 6.6 new cases per million population per year in Western societies [11] and in Japan [9], respectively, and it presents most often between 30 and 50 years of age [10,12,13]. Secondary AI occurs more frequently than primary AI [10,14]. Its estimated prevalence is 150 to 280 per million [15,16], and affected patients are often diagnosed in their 60s [10,17]. The most common cause of tertiary AI is chronic exogenous administration of synthetic glucocorticoids, which causes prolonged suppression of hypothalamic CRH secretion through negative feedback mechanisms [18,19]. Patients with chronic AI may develop acute AI (adrenal crisis) under stresses such as infection or surgery. Adrenal crisis is a potentially fatal condition that causes circulatory disorders due to an absolute or relative deficiency of glucocorticoids. The estimated incidence is 6 to 8 and 6.3 cases per 100 chronic AI patients per year in Western societies and Japan, respectively [10,16]. Therefore, detecting AI is critical in emergency medical care.
The objective of this study was to identify the cause of hypoglycemia of unknown etiology, especially AI, in emergency departments.
Study Population
This was a 5-year single-center cross-sectional study to investigate the causes of unexplained hypoglycemia. Inclusion criteria were male and female patients aged ≥18 years with hypoglycemia [plasma glucose level < 70 mg/dL (3.9 mmol/L)], with or without hypoglycemic symptoms, who presented to the emergency department between April 1, 2016 and March 31, 2021. A rapid ACTH test was performed on all patients who met the inclusion/exclusion criteria except those whose cause of hypoglycemia was found to be antidiabetes drugs. Exclusion criteria were patients whose plasma glucose level was ≥70 mg/dL with hypoglycemic symptoms, patients who refused a rapid ACTH test, and patients who were pregnant. Also excluded were patients who had participated in other clinical trials and patients who were judged by their physician to be ineligible for participation. This trial was approved by the institutional review board at Shin Komonji Hospital. Written informed consent was obtained from all participants before enrollment in the trial.
Procedures
The cause of hypoglycemia was investigated in those patients who qualified for the study. In the emergency department, a rapid ACTH loading test (250 µg synthetic 1-24 ACTH: tetracosactide acetate administered intravenously) was performed on all patients except those who were taking antidiabetes drugs. Blood specimens were collected by nurses before loading and at 30 and 60 minutes after ACTH administration.
The diagnostic criterion for AI was a peak serum cortisol level of <18 μg/dL after the rapid ACTH test [10,20,21]. An insulin tolerance test, CRH loading test, and/or continuous ACTH loading test were performed on patients who were diagnosed with AI to distinguish primary (adrenal), secondary (pituitary), and tertiary (hypothalamic) AI (Fig. 1).
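As a minimal sketch of this decision rule, the following Python function flags AI when the peak cortisol of the rapid ACTH test stays below the 18 μg/dL threshold. The function name and the example values are hypothetical, for illustration only.

```python
# Minimal sketch of the diagnostic criterion described above.
def has_adrenal_insufficiency(cortisol_ug_dl: list[float]) -> bool:
    """cortisol_ug_dl: serum cortisol at baseline, 30 min and 60 min
    after 250 ug tetracosactide IV; AI is flagged if the peak < 18."""
    return max(cortisol_ug_dl) < 18.0

# Example: a blunted response (peak 11.2 ug/dL) is flagged,
# a normal response (peak 24.5 ug/dL) is not.
print(has_adrenal_insufficiency([4.1, 9.8, 11.2]))   # True
print(has_adrenal_insufficiency([8.0, 19.3, 24.5]))  # False
```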
Outcome Measures
The primary outcome was the rate of AI in patients with hypoglycemia presenting to our emergency department. In addition, we investigated the clinical manifestations in each hypoglycemic etiology group.
Statistical Analyses
Analysis of variance and Dunnett's test were performed to compare the AI-induced hypoglycemia group, as the control group, with the other hypoglycemia groups. Statistical analyses were conducted using R software, version 4.05 (R Foundation for Statistical Computing, Vienna, Austria). Two-sided P-values < 0.05 were considered statistically significant.
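For readers who prefer Python, an analogous many-to-one comparison against the AI control group can be run with SciPy (this requires SciPy ≥ 1.11; the study itself used R). The sketch below uses randomly generated placeholder values, not the study data.

```python
# Minimal sketch: Dunnett's many-to-one comparison in Python.
import numpy as np
from scipy.stats import dunnett

rng = np.random.default_rng(0)
ai_group    = rng.normal(132, 4, size=32)   # placeholder serum Na [mEq/L]
drug_group  = rng.normal(139, 4, size=60)   # placeholder values
alcohol_grp = rng.normal(138, 4, size=33)   # placeholder values

res = dunnett(drug_group, alcohol_grp, control=ai_group)
print(res.pvalue)   # one adjusted p-value per comparison group
```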
One hundred thirty-nine patients whose etiology could not be diagnosed in the emergency department or who did not take any antidiabetes drugs underwent rapid ACTH tests, as the previously noted etiologies may not have been the actual cause of hypoglycemia. Testing revealed AI in 32 (6.1%) of the 528 patients. The differences between the non-AI and AI groups regarding the response to the rapid ACTH test are shown in Table 2. Subsequent tests revealed the following: 3 cases were primary, 27 were secondary (pituitary), and 2 were tertiary (hypothalamic) (Fig. 1). Of the 3 patients with primary AI, 1 had 21-hydroxylase deficiency, 1 suffered from a fungal infection, and 1 had bilateral adrenal metastases of lung cancer. Of the 27 patients with secondary AI, all had an isolated ACTH deficiency. Both patients with tertiary AI were chronic steroid users. AI was found in 2 (5.7%) of the 35 patients with alcohol abuse, 7 (36.8%) of the 19 with severe infection and/or sepsis, 1 (5.6%) of the 18 with malnutrition, and 4 (26.7%) of the 15 with malignancies (Table 1). Of the 4 patients with malignancies and AI, 1 had bilateral adrenal metastases of lung cancer and suffered from primary AI. The others had neither metastases nor adrenal cancer, but 2 had breast cancer and 1 had colorectal cancer, and they suffered from secondary AI.
Discussion
The primary causes of hypoglycemia are insulin and oral hypoglycemic drugs. Patients with type 1 diabetes who used insulin had 1 to 3.2 hypoglycemic attacks per year [23,24], and patients with type 2 diabetes with ≥5 years of insulin treatment experienced an average of 0.7 hypoglycemic attacks per year [24]. Patients prescribed oral hypoglycemic drugs, especially sulfonylureas, were also at high risk of hypoglycemia [25]. The incidence of hypoglycemia in patients with type 2 [26]. Alcohol inhibits gluconeogenesis in the body but does not affect glycogenolysis [27]. Thus, hypoglycemia occurs after several days of alcohol consumption with limited ingestion of food, once hepatic glycogen stores are depleted [28]. Alcohol ingestion is often the cause of, or a contributing factor to, hypoglycemia encountered in patients coming to emergency departments.
In our study, we found that the proportion of secondary and tertiary AI, rather than primary AI, was as high as 29/32 (90.6%) of all AI cases. It is rare for adult primary AI to cause hypoglycemia in the absence of infection, fever, or alcohol ingestion [29]. In contrast, hypoglycemia is more common in secondary AI caused by isolated ACTH deficiency [30-32]. We therefore surmise that Addison's disease is diagnosed and treated quickly, whereas ACTH insufficiency is often overlooked, possibly because the absence of dehydration and hypotension permits patients to tolerate their illness longer. In accordance with the guidelines [10,21,33], administration of hydrocortisone to our AI patients promptly improved their hypoglycemia, and glucose administration was no longer necessary. Early diagnosis of AI is therefore beneficial because it avoids continued, unnecessary glucose administration. In addition, in septic shock, patients with AI show depressed pressor sensitivity to noradrenaline, which may be substantially improved by physiological doses of hydrocortisone [34]. Eventually, 21 (65.6%) of the 32 patients with AI recovered hypothalamic-pituitary-adrenal axis function and no longer needed hydrocortisone to maintain their blood glucose.
Hyponatremia can occur in both primary and secondary AI and is found in 70% to 80% of patients with AI [35]. The underlying etiology differs in each case. In primary AI, hyponatremia and hypovolemia are caused by aldosterone deficiency, whereas in secondary AI, hyponatremia is due to the lack of cortisol, which leads to increased vasopressin secretion and dilutional or hypervolemic hyponatremia [19,35,36]. Hyponatremia can occur early in the disease and may be the initial manifestation [37]. In contrast, hyperkalemia was not notably frequent in our study. Hyperkalemia often occurs due to aldosterone deficiency and is therefore observed in patients with primary AI, who have both aldosterone and cortisol secretory deficiency. Patients with secondary or tertiary AI usually have normal mineralocorticoid function because the renin-angiotensin system remains intact. In our study, hyperkalemia did not occur in the AI group because the proportion of secondary and tertiary AI was overwhelmingly higher than that of primary AI.
Table 3. Clinical manifestation of hypoglycemia

Heart rate and systolic blood pressure in hypoglycemia without AI are slightly raised. In secondary and tertiary AI, hypotension is less prominent [6,30]. However, in most patients with primary AI, the blood pressure is low, and some have postural hypotension. These symptoms are primarily due to volume depletion resulting from aldosterone deficiency [38]. Glucocorticoids are necessary for adrenal medullary epinephrine synthesis, and patients with AI have decreased serum epinephrine and a compensatory increase in serum norepinephrine concentrations [39]. This may cause a slightly lower basal systolic blood pressure and an exaggerated increase in pulse rate in response to upright posture. Relative eosinophilia has been reported to be a marker of AI [10,40]. One study found that the combination of a history of glucocorticoid withdrawal, nausea, hyperkalemia, and eosinophilia was a useful predictor of AI in an inpatient population [41]. In contrast, a subsequent small series suggested that the eosinophil count is >500/µL in <20% of patients with AI [42]. Therefore, when eosinophilia is found incidentally, other causes such as allergy or infection should also be investigated [43].
In our study, there were 2 patients with hypoglycemia whose etiology was undetermined; however, both were undergoing hemodialysis. Chronic kidney disease likely involves impaired gluconeogenesis, reduced renal clearance of insulin, and reduced renal glucose production [44], all of which may increase the risk of hypoglycemia. However, previous data on chronic kidney disease as a risk factor for hypoglycemia have been conflicting. Some [45,46] but not all [47] previous studies have noted such an association.
This was a single-center prospective observational study conducted at a facility with endocrinology and emergency medicine specialists in the Kitakyushu area of Japan. Thus, the generalizability of the proportion of hypoglycemic patients, and of the proportion of patients with diabetes among all patients in emergency departments, is unknown. An ACTH test may not be available in all facilities, and the turnaround time on the test, if available, may preclude this diagnosis being made in the emergency department. In addition, because the rapid ACTH test was not performed in the emergency department on the hypoglycemic patients who used antidiabetes drugs, per the study protocol, the proportion of AI might have been higher if testing had been performed for all patients. However, after admission, rapid ACTH tests were performed on the 4 antidiabetes drug users whose plasma glucose levels did not improve despite an intravenous drip infusion of glucose for ≥2 days, and no AI was found in any of them. Therefore, we consider that AI is unlikely to be the cause of hypoglycemia in patients using antidiabetes drugs.
In conclusion, the probability of AI as a cause of hypoglycemia was much greater than we anticipated. When protracted hypoglycemia of unknown etiology with hyponatremia, hypotension, and/or eosinophilia is recognized, we recommend that adrenal function be checked using the rapid ACTH loading test. There are no untoward side effects, and allergic reactions are extraordinarily rare. The test can be performed without monitoring by an advanced healthcare provider and is relatively inexpensive.
Funding
This study was supported by a grant from the Kitakyushu Medical Association. The sponsor did not contribute to the design, collection, management, analysis, interpretation of data, writing of the manuscript, or the decision to submit the manuscript for publication.
Disclosures
The authors have nothing to disclose.
Author Contributions
T.K. wrote the first draft of the manuscript and was mainly responsible for the design of the methodology. All authors were responsible for the critical revision of the article for important intellectual content. T.K., M.T., N.T., and T.N. were responsible for the collection and assembly of data. M.T. was responsible for data management. T.K. completed the main part of the data analyses, and all authors discussed the analysis plan and results and provided input to the manuscript. All authors had access to the final study results and were responsible for the final approval of the manuscript.
Data Availability
The data that support the findings of this study are not publicly available because they were used under license for the current study only. However, the corresponding author will on request detail the restrictions and any conditions under which access to some data may be provided.
Fragmentation of Beaded Fibres in a Composite
The fibre–matrix interface plays an important role in the overall mechanical behaviour of a fibre-reinforced composite, but the classical approach to improving the interface through chemical sizing is bounded by the materials' properties. By contrast, structural and/or geometrical modification of the interface may provide mechanical interlocking and have wider possibilities and benefits. Here we investigate the introduction of polymer beads along the interface of a fibre and validate their contribution by a single fibre fragmentation test. Using glass fibres and the same epoxy system for both matrix and beads, an increase of 17.5% is observed in the interfacial shear strength of the beaded fibres compared to fibres with no polymer beads. This increase should lead to a similar improvement in the strength and toughness of a beaded fibre composite when short fibres are used. The beads were also seen to stabilise the fragmentation process of a fibre by reducing the scatter in fragment density at a given strain. A case could also be made for a critical number of beads (4 beads in our experimental system) to describe interfacial shear strength, analogous to the critical length used in fibre composites.
Introduction
The mechanical behaviour of a fibre-reinforced composite is as dependent on the characteristics of its fibre-matrix interface as on the properties of its constituent materials [1][2][3][4]. Good fibre-matrix bonding at the interface, for instance, ensures efficient stress transfer in shear from the matrix to the fibre and results in strong, yet typically brittle, composites. Weak fibre-matrix bonding, in contrast, allows for redistribution of stresses around defects and cracks and for good energy dissipation, and results in tough yet weak composites. Tuning of the mechanical behaviour of a fibre-reinforced composite can therefore be achieved through modification of the interface. Usually, this modification is done chemically through polymer sizings or coupling agents applied to the surface of the fibre [4][5][6], but the degree to which the mechanical behaviour can be tuned in this way is limited by the basic properties of the materials used. Furthermore, the strength and toughness achievable by such tuning are typically mutually exclusive, and the enhancement of either property is usually accompanied by the degradation of the other [7].
A number of studies have emerged on the improvement of the mechanical behaviour of composites through the structural design of components [8][9][10][11][12][13][14][15][16][17]. Mechanical interlocking at the interface of these components is often cited as an effective means by which superior mechanical properties and an optimal balance of strength and toughness may be obtained [8,11,12,14]. This approach of property tuning through the structural design of components draws its inspiration from building strategies seen in natural composites. Composite materials in nature have a fairly limited selection of relatively weak component materials, yet due to the complex hierarchical architectures in which these component materials are put together, their overall mechanical properties far exceed what the basic rule of mixtures would predict.
Beaded Fibre Preparation
Beaded fibres were prepared by taking advantage of the Plateau-Rayleigh instability [8][9][10], a phenomenon by which a liquid cylindrical film spontaneously partitions into approximately evenly spaced droplets [10]. Taut glass fibres were glued to a metal frame using tape, as seen in Figure 1a. Using another small fragment of a glass fibre, a droplet of epoxy that had been mixed and deaerated was deposited onto each suspended fibre. As the droplet slid down the fibre under gravity, a thin uniform layer of epoxy was deposited onto the surface of the fibre. Through the Plateau-Rayleigh instability, this layer then almost instantaneously and completely spontaneously separated into fairly evenly spaced, similarly sized beads along the entire length of the fibre. The epoxy beaded fibres were then moved to an oven and cured for 6 h at 100 °C. For further details on the process of the instability, refer to [10].
An alternative method for controlling the bead parameters, which was used suc fully in our previous studies [9], involved dipping and drawing out taut glass fibres an epoxy bath at a controlled velocity. The thickness of the epoxy layer on the fibre thus the bead parameters) was determined by the resin viscosity and surface tension [8][9][10]. A droplet of epoxy is placed on a suspended fibre of diameter d. As the droplet slides down the fibre, a thin layer of epoxy is formed, which spontaneously partitions into fairly uniformly spaced beads of length L, diameter D and wavelength λ. A thin layer of epoxy is observed between beads. (b) Cured EP828 epoxy beads of various diameters on E-glass fibres. Alternating large and small beads are typically seen when bead diameter > 70 µm.
Beaded fibres obtained from this method can be seen in Figure 1b. The beads are fairly uniformly sized, and the bead parameters (diameter, length and wavelength) were successfully regulated by controlling the size, viscosity and surface tension of the initial epoxy droplet applied to the fibre, which in turn affected the thickness of the epoxy layer deposited on the fibre once the droplet had slid down it. A larger initial droplet resulted in a thicker epoxy coating being deposited on the surface of the fibre, which in turn resulted in larger beads with longer wavelengths. A more viscous drop would have had a similar effect. A smaller initial droplet resulted in a thin epoxy layer on the fibre and consequently smaller beads with shorter wavelengths. Typically for beads of diameter greater than 70 µm, smaller beads are seen to alternate between the larger beads. This is due to the fact that an epoxy layer is present on the surface of the fibre between beads [8,10], and for bigger bead diameters, this epoxy layer is sufficiently thick for a secondary instability to occur. For this study, fibres with a bead diameter range of 28-35 µm were chosen, with the average bead diameter taken as being approximately 32 µm. The corresponding bead length range was 60-70 µm, and the bead spacing (wavelength, bead-centre to bead-centre) was 90-160 µm. These bead parameters were chosen because they are more uniform in size.
An alternative method for controlling the bead parameters, which was used successfully in our previous studies [9], involved dipping and drawing out taut glass fibres from an epoxy bath at a controlled velocity. The thickness of the epoxy layer on the fibre (and thus the bead parameters) was determined by the resin viscosity and surface tension, and speed at which the fibres are drawn out of the epoxy bath. In this method, higher draw-out velocities or more viscous resins resulted in a thicker epoxy layer, which resulted in larger beads with higher wavelengths, and vice versa [9].
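For orientation, the trends described in both preparation methods follow standard fibre-coating and Plateau-Rayleigh scalings from the general literature, quoted here only as a rough guide and not as measurements from this work:

$$h \sim r\,\mathrm{Ca}^{2/3}, \qquad \mathrm{Ca} = \frac{\eta V}{\gamma}, \qquad \lambda \propto (r + h),$$

where h is the deposited film thickness on a fibre of radius r, η is the resin viscosity, V the draw-out (or droplet sliding) speed, γ the surface tension, and λ the resulting bead wavelength. Thicker films (higher viscosity or speed) thus give larger beads with longer wavelengths, consistent with the observations above.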
Background
In the single fibre fragmentation test method, a fibre is embedded in a polymer matrix that is usually shaped in the form of a dog-bone for ease of handling and testing (Figure 2). The specimen is placed in a tensile tester and elongated, and as the applied strain increases, the fibre breaks at points along its length where the cumulative shear stress induced at the interface exceeds the fibre strength. Applying further strain, the fragmentation process continues until a stage is reached where the lengths of the fragments are too short to allow a sufficient build-up of stress to equal or exceed the fibre tensile strength. At that stage, no more fragmentation occurs even with further elongation, and the process is said to have reached saturation [3,24,27].
An important parameter obtained from a typical single fibre fragmentation test is the critical length, L_c, which can be thought of as the maximum length a fibre fragment can have before the stress induced via the interface exceeds the fibre tensile strength and breaks it. At saturation, the length of fibre fragments is then anywhere between ½L_c and L_c. Therefore, the average length of fragments at saturation, L_sat, is equal to ¾L_c. The effective interfacial shear strength, τ, can be estimated by a simple force balance, shown in Equation (1). This estimation is based on the constant shear model proposed by Cottrell, Kelly and Tyson [25,30,31].

$$\tau = \frac{\sigma(L_c)\, r}{L_c} \qquad (1)$$
where σ(L_c) is the average strength of the fibre at the critical length and r is the fibre radius. The critical length is unique to a particular fibre-matrix system, and changes in it can be used as a preliminary indication of the direction in which the interfacial characteristics have moved. Shorter critical lengths are typically associated with a fibre-matrix system with good interfacial adhesion, and vice versa. If, for instance, a coating is applied at the interface of a fibre-matrix system and an increase in critical length is observed, it is highly probable that the interfacial adhesion (that is, shear strength) of the system has been reduced, since they are inversely related, as seen in Equation (1). Typical critical lengths are less than 1 mm, and therefore σ(L_c) cannot be determined experimentally but is calculated from the Weibull parameters obtained for the same fibres at higher gauge lengths. A full explanation of the procedure for calculating σ(L_c) may be found in Appendix A.
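As a worked sketch of Equation (1), the snippet below combines the Kelly-Tyson force balance with a weakest-link (Weibull) gauge-length extrapolation of the kind referred to in Appendix A. The exact Appendix A procedure is not reproduced here, and all numerical inputs are illustrative, not the paper's data.

```python
# Illustrative sketch of the Kelly-Tyson estimate in Equation (1), with the mean
# fibre strength extrapolated to the critical length via weakest-link (Weibull)
# scaling. The scaling form and all numbers are illustrative assumptions.


def strength_at_length(sigma_L0, L0, L, m):
    """Mean fibre strength at gauge length L, scaled from length L0 (Weibull modulus m)."""
    return sigma_L0 * (L0 / L) ** (1.0 / m)


def interfacial_shear_strength(L_sat, r, sigma_L0, L0, m):
    L_c = (4.0 / 3.0) * L_sat                 # since L_sat = (3/4) * L_c
    sigma_Lc = strength_at_length(sigma_L0, L0, L_c, m)
    return sigma_Lc * r / L_c                 # Equation (1): tau = sigma(L_c) * r / L_c


# Hypothetical inputs: L_sat = 0.45 mm, fibre radius 8 um,
# mean strength 2.0 GPa at a 20 mm gauge length, Weibull modulus 5.
tau = interfacial_shear_strength(L_sat=0.45e-3, r=8e-6,
                                 sigma_L0=2.0e9, L0=20e-3, m=5.0)
print(f"effective interfacial shear strength ~ {tau / 1e6:.1f} MPa")  # ~54 MPa
```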
Specimen Fabrication
A total of 24 samples with beaded fibres were made and tested in this study. The results were compared to 24 bead-less fibre samples (i.e., glass fibres with no beads), which functioned as the control for the test.
Samples for the single fibre fragmentation test were made by placing a single fibre in the centre of a dog-bone shaped silicone mould and attaching 10 g weights to both ends of the fibre (Figure 3) using a quick-drying cyanoacrylate glue (CN, Tokyo Measuring Instruments Lab, Tokyo, Japan). This particular cyanoacrylate glue was chosen because it could withstand the curing cycle and thus ensure that the weights stayed on the ends of the fibre throughout the cure cycle. Pre-mixed and deaerated epoxy was then added to the moulds to cover the fibre, and the moulds were shifted to an oven and cured for 6 h at 100 °C.

The pre-tensioning of fibres by hanging 10 g weights on their ends during sample fabrication is an important step to ensure fragmentation saturation is achieved. This is due to (1) glass fibres typically having a high strain to failure, and (2) the mismatch in thermal expansion of the epoxy and the fibre resulting in a compressive stress being applied to the fibre once the sample is cooled down to room temperature after curing [24,27,29,32-36]. This mismatch results in more strain being needed to fragment the fibre. Insufficient pre-tensioning (i.e., with insufficient weights) results in saturation not being achieved. See Appendix B for the determination of the strain induced in the fibre due to thermal mismatch and the pre-tensioning weights.
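As a first-order illustration of the residual compressive strain that the pre-tensioning weights must offset (with illustrative expansion coefficients for epoxy and E-glass that are not taken from Appendix B):

$$\varepsilon_{th} \approx (\alpha_m - \alpha_f)\,\Delta T \approx (60 - 5)\times 10^{-6}\ \mathrm{K^{-1}} \times 80\ \mathrm{K} \approx 0.44\%,$$

where α_m and α_f are the thermal expansion coefficients of the matrix and fibre, and ΔT is the drop from the cure temperature (100 °C) to room temperature.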
Test Procedure
The fragmentation tests were carried out on a Minimat tensile test instrument equipped with a 200 N load cell. The wide ends of the dog-bone epoxy sample were placed in a set of slots (Figure 4). Small weighted lids were placed over the wide ends of the sample in the slots to keep the sample in place during the test, especially after the matrix broke. Slots were used instead of clamps to prevent the build-up of stress in the sample ends and to securely hold the sample in place without any slippage for the duration of the extension. Samples were extended until they broke, and their force-displacement curves were recorded. The displacement rate was 1 µm s−1. The number of breaks of a fibre in a sample was counted under an optical microscope only after matrix failure and sample rupture. A stereo-zoom fitted with a video camera and a cross-polariser was used to monitor the fragmentation process along the entire sample. The videos and the force-displacement curves for the samples were synchronised to correlate fragment density to percentage strain in the samples.
Qualitative Analysis under Cross-Polarised Light
Embedded beaded fibres under load were qualitatively analysed under cross-polarised light. Transparent epoxy matrices are typically optically isotropic, but in highly stressed regions, such as those around fibre breaks, the matrix becomes optically anisotropic or birefringent and appears brightly lit and colourful compared to the low-stressed regions. Birefringence in epoxy matrices makes it possible to qualitatively study stress distributions at the interface of fibre-matrix systems. Figure 5a presents a beaded fibre under load before any fibre fragmentation has occurred. Some birefringence is observed at the apparent bead-matrix interface, implying that there is some stress discontinuity (that is, a high stress gradient) across this interface under load. The fact that there is a distinct interface between the bead and the matrix also implies that they behave as two separate entities rather than as one continuous phase, despite being made from the same epoxy system. If the beads and matrix had constituted a continuous phase, the stress at the interface would not be discontinuous but instead continuous as in bead-less fibres, without the 'butterfly' pattern. This is in agreement with our previous observations on the pullout of beaded fibres [8,10]. Figure 5b shows a beaded fibre after several fibre breaks at saturation (see definition in Section 2.3.1). Saturation at this point was presumed because (1) no more fibre breaks were observed at this stage and (2) the ends of the birefringent patterns were seen to almost touch each other, which is an indication of saturation of the fragmentation process being reached [24]. Figure 5c displays a fragment of a beaded fibre between two fibre breaks while still under load. The positions of the beads are marked below for clarity. No debonding is observed between the bead and the matrix (debonding typically appears as a black shadow). It is interesting to note that all the beads are also brightly lit, particularly the two beads far from the centre of the fragment. The pattern of lighting in the bead appears to be distinctly brighter than the matrix surrounding it, implying that there is some stress concentration in the bead compared to the matrix in that region. Such stress concentration indicates that the bead is bearing high stress, acting as an obstacle against matrix displacement, thereby mechanically interlocking the fibre. A video of the fragmentation process of beaded fibres has also been included in the Supplementary Materials for further clarity. The video demonstrates the gradual rise in the beads' stress level with respect to the surrounding matrix as the external load is increased.
Some of the beaded fibre samples showed fibre pullout on completion of the test when the sample ruptured, as seen in Figure 6. The matrix close to the fracture surfaces was necked and thus birefringent, making the contrast between the beads and the matrix in this region weaker than in other parts of the sample. Therefore, for better clarity, the beads are marked out in white ( Figure 6a). The fracture surface was seen to be perpendicular to the fibre (Figure 6b). The beads were not pulled out with the fibres but remained in the matrix. No epoxy residue was observed on the pulled-out fibre surface (Figure 6c), indicating that the failure between the bead and the fibre was adhesive and that between the bead-matrix and bead-fibre interfaces, the bead-fibre interface is likely the weaker interface. This particular observation was consistent with our previous work on the pullout of beaded fibres [8].
Fragmentation Behaviour and Effective Interfacial Strength
Using a cross-polarised optical stereo-zoom, videos of the fragmentation process of 9 randomly chosen beaded fibre samples and 11 control samples were recorded to synchronise fibre breaks with the applied strain in the model composite sample. This was done to examine whether the beads influenced the fragmentation process of the fibre while under load. The number of breaks in each sample at a given strain was translated to fragment density and plotted against percentage strain in the composite (Figure 7a,b). No fitting was done for the data points in these graphs. For ease of comparison between the two configurations, Figure 7c presents a beaded fibre sample and a control, each showing typical behaviour for its batch. It was not possible to plot all the fibre breaks up to saturation. A very low magnification stereo-zoom was used so that the entire length of each specimen could be viewed. Fibre breaks were therefore identified by the birefringence pattern formed in the matrix around the break rather than by the break itself, since the breaks were often too small to detect. Closer to saturation, fibre breaks were very likely formed close to other pre-existing breaks, and any birefringence produced by a new break was, in all likelihood, masked by the patterns already in the matrix. Nonetheless, fibre breaks during the beginning and middle of the fragmentation process could be seen clearly and thus studied. Hence, the plots of fragment density vs. strain provide good insights into how the beads affected the fragmentation process of fibres.

In Figure 7a,b, no fibre breaks were observed at strains below 5.5% in either configuration. However, some of the beaded fibre samples appeared to start fragmentation at lower strains (Figure 7a) than the control (Figure 7b), an expected outcome of the higher stress induced by the beads on the fibre. Fragmentation for the control typically began between 7.6 and 9.8% strain, whereas for beaded fibre samples, fragmentation began anywhere between 5.7 and 8.3% matrix strain. The beads appeared to 'stabilise' the fragmentation process, since less scatter was observed among the beaded fibre samples compared to the control, meaning that the probability of failure at a given load can be predicted more accurately. The beaded fibre samples attained higher overall fibre fragmentation densities than the control, so that the fragment lengths for beaded fibres were shorter than those of the control, implying that stress transfer from the matrix to the fibre was better for the beaded fibre samples than for the control.
The curves, therefore, indicate a successful modification of the interface of a fibre in a matrix due to the presence of the beads.
The average number of breaks in the fibre at saturation was recorded for all 24 beaded fibre samples and 24 control samples after each of the samples had failed. The average fragment length at saturation, L_sat, was calculated for each sample by dividing the length of the fibre under fragmentation (11.7 mm; Figure 3) by the total number of breaks along the fibre of that particular sample. The average of each configuration is recorded in Table 1. Beaded fibres, on average, had a higher number of fibre breaks compared to the control and thus lower values of L_sat. Though the difference between the number of breaks in the fibre appears small, a t-test shows that it is still highly significant. A plot of the distribution of average L_sat for beaded fibres compared to the control can be found in Figure 7d. While there is some overlap between the samples, it can be seen that the beaded fibre samples (red) tend to have lower L_sat than the control. This concurs well with what was observed in the plots of fragmentation density vs. strain in Figure 7a,b, where higher fragment densities were observed for beaded fibre samples compared to the control.

L_c, the critical length, and τ, the effective interfacial shear strength, were calculated by Equation (1) and also recorded in Table 1. The calculation of the fibre strength, σ(L_c), may be found in Appendix A. σ(L_c) was calculated from the Weibull parameters of fibres without beads since the beads and the matrix were made from the same material. From our previous study [8], the bead and the matrix were found to have identical physical and chemical properties, and thus, the system can ultimately be considered as a single fibre embedded in bulk epoxy. The strength of the fibre embedded in the matrix would therefore not be affected by the presence of the beads. The effective interfacial shear strength for beaded fibres was likewise calculated by taking only the fibre radius into consideration and not the radius of the beads, since the beads and the matrix were made from the same epoxy material. Therefore, calculations of τ were done taking into account only the fibre radius for both the control and the beaded fibres.
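The pipeline from break counts to L_sat, together with the significance test on break counts, can be sketched as follows. The counts below are invented placeholders, not the data in Table 1.

```python
# Illustrative sketch: from break counts to L_sat, plus a two-sample t-test on
# the break counts. All counts below are invented placeholders.
import numpy as np
from scipy import stats

GAUGE_MM = 11.7  # length of fibre under fragmentation (Figure 3)

breaks_beaded = np.array([26, 24, 27, 25, 26, 28])    # hypothetical counts
breaks_control = np.array([23, 22, 24, 23, 21, 24])

L_sat_beaded = GAUGE_MM / breaks_beaded    # mean fragment length per sample, mm
L_sat_control = GAUGE_MM / breaks_control

t_stat, p_val = stats.ttest_ind(breaks_beaded, breaks_control, equal_var=False)
print(L_sat_beaded.mean(), L_sat_control.mean(), p_val)
```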
A highly significant increase of 17.5% was calculated in the effective interfacial shear strength of beaded fibres compared to bead-less fibres. The beads, therefore, improved the effective interfacial adhesion of a fibre in a matrix. It must be noted that the calculations of τ for beaded fibres and the control were done using a constant shear model (Equation (1)), which is a simplistic model, and therefore the term 'effective' is used to describe the interfacial strength. We may expect a comparable improvement in strength and toughness (pullout energy) in composites reinforced by beaded fibres when short fibres are used (shorter than the critical length), as both properties are proportional to the interfacial strength [37]. When longer fibres are used, the strength will still improve, whereas the pullout energy might degrade; however, such degradation should be outweighed by the dissipation of plastic and friction energy at the bead-matrix interface due to relative motion between them (see more on this in the discussion, in Section 4.1).
Distribution of Breaks and Critical Number of Beads
To further probe the role of the beads at the interface, the distribution of the position of breaks along a beaded fibre for all the beaded fibre samples was studied. Both distributions are displayed in Figure 8. Figure 8a presents fragments of beaded fibres. The fragment on top is between a break outside a bead (left) and a break inside a bead (right), and the fragment below is between a break at the edge of a bead (left) and a break outside a bead (right). Three whole beads are seen on the fragment on the top, and four whole beads are seen on the fragment below. Figure 8b provides a distribution of the position of fibre breaks for each of the beaded fibre samples. Most of the fibre breaks (44% of all fibre breaks) were found to be outside and far away from beads (green), whereas a large fraction of breaks (29%) was found at the edge of beads (yellow). Only a few breaks were found to be completely inside beads (10%). The matrix cracks caused by fibre breaks were occasionally larger than the beads (as seen in Figure 5c), making it impossible to determine whether a fibre break was in the bead, outside of it or at its edge. These breaks were labelled as 'unsure' on the graph (blue) and accounted for 17% of the fibre breaks in beaded fibres. So, excluding the uncertain cases, 53% of breaks were outside beads, whereas 47% were either inside beads or at their edge; given that the fibre length covered by beads is on average the same as the length not covered by beads, and that the beads distribution along the fibre is widely varied (see Section 2.2), this implies that the locations of breaks and beads are not correlated.
The distribution of the number of beads on fibre fragments was also studied. Figure 8c shows the total number of beads on the fragments of 15 beaded fibre samples after the completion of the fragmentation test. We observe that the number of beads on a fragment at saturation is not random, but appears to follow a log-normal distribution (Figure 8d). The most frequent number of beads on a fibre fragment was n_sat = 3 beads, followed by 2 beads and 4 beads per fragment. In a previous study [10], we had put forth a concept of a critical number of beads, n_c, where the maximum stress in the fibre is determined by the number of beads on the fragment rather than by the length of the fibre. The critical number of beads would be a discrete quantity that could replace the critical length L_c. For this particular system, the critical number of beads is possibly n_c = (4/3)·n_sat = 4. The fact that this is an even number is in line with the observation that most fibre breaks are outside beads (had the critical number of beads been odd, further breaks would most likely have occurred in the centre of the middle bead due to symmetry).
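Written out, the proposed critical bead number mirrors the L_sat = ¾L_c relation from the Background section:

$$n_c = \tfrac{4}{3}\, n_{sat} = \tfrac{4}{3} \times 3 = 4.$$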
Mechanical Interlocking
An increase of 17.5% in the effective interfacial strength, τ, of beaded fibre samples compared to control is quite a surprising outcome since the beads and the matrix are made from the same epoxy system. Supposedly, the beads and the matrix should have acted as one entity, and there should not have been a difference in the fragmentation behaviour or interfacial shear strength of the beaded fibre samples compared to the control. However, this is not what we observe. Instead, we not only see a very significant increase in τ of beaded fibres compared to the control, but we also see that the number of beads on fragments is not random.
The first possible hint as to why we observe these results is seen in Figure 5a, where we see a strong indication that the beads and the matrix act as two separate entities under load, with a distinct interface between them. From our previous work on the pullout of beaded fibres [8], we know that the epoxy beads are fully cured before they are embedded in the matrix during the sample fabrication process in Section 2.3.2. It is, therefore, less likely for the epoxy in the bead to form crosslinks with the epoxy in the matrix during sample fabrication, because diffusion of epoxy oligomers from the liquid matrix into the solid bead is very limited and most bonding sites on the bead side are already occupied, culminating in a distinct interface between the bead and the matrix. This relatively weak interface is reflected in the lower slope of the force-displacement curve (at high strains) during pullout of a beaded fibre compared to a bead-less fibre. It must be noted that though the beads undergo the curing cycle twice (first when the beads are formed on the surface of a fibre, and again when they are embedded in the matrix and cured), we know from our previous study on the pullout behaviour of beaded fibres that the mechanical and chemical properties of the beads are identical to those of the matrix [8].
A second hint is realised by comparing a fragment of beaded fibre between breaks and a bead-less fibre, as in Figure 9. Here, the beads are brightly lit, particularly closer to fibre breaks (which are just outside the frame of the images). This indicates that the beads are under higher stress with respect to the surrounding matrix when under load, implying that the beads act as mechanical interlocks for the fibre against the displacement of the matrix.
The mechanical locking action induced by the beads is demonstrated in Figure 10, which shows a portion of a beaded fibre before and after a fibre break. Before fibre break the stress level in the beads is moderate because the load is distributed over several beads along the fibre. However, after fibre break the stress at the two beads at both sides of the break, which are closest to the edges of the two new fibre fragments, rises distinctively over the matrix stress, reflected by the high light intensity in the beads. The higher stress in the beads with respect to the matrix implies that they are incurring extra load, indicative of their mechanical locking action. A similar observation is seen in the video of beaded fibres, which is found in the Supplementary Materials. We previously put forth a phenomenological model based on mechanical 'friction lock' to describe the behaviour of beaded fibres under load [8]. Since the bead is fully cured and is not likely to form a significant number of bonds with the matrix, some relative motion is possible at the bead matrix interface. Thus, when a beaded fibre is subject to a load, it is thought to shift and push against the matrix, resulting in an equal and opposite stress being exerted on the bead by the matrix. This stress is converted to radial pressure in the bead, which then propagates through the bead to the bead-fibre interface, inducing a friction shear stress (τ f ) at the bead-fibre interface. This friction shear stress is in addition to the existing bonding shear stress between the bead and the fibre, τ i , and so contributes to increasing the total effective τ of the system. The mechanism can be thought of as being very similar to the friction locking of mechanical parts using a wedge. Since the mechanism proposes a friction shear stress, which adds onto the bonding shear stress, the use of total effective shear stress τ to quantify the effect of beads at the interface in Section 3.2 is valid. This train of causes and effects is supported by the two experimental observations described above, namely the existence of a distinct interface, which is most likely weaker than the continuous epoxy, and the stress concentration in the beads with respect to their matrix surrounding. A detailed description of the phenomenological friction lock mechanism may be found in [8].
The mechanical locking action induced by the beads is demonstrated in Figure 10, which shows a portion of a beaded fibre before and after a fibre break. Before the fibre break, the stress level in the beads is moderate because the load is distributed over several beads along the fibre. However, after the fibre break, the stress at the two beads on either side of the break, which are closest to the edges of the two new fibre fragments, rises distinctly above the matrix stress, reflected by the high light intensity in the beads. The higher stress in the beads with respect to the matrix implies that they are incurring extra load, indicative of their mechanical locking action. A similar observation is seen in the video of beaded fibres, which is found in the Supplementary Materials.

A question arises as to whether the effects we see in Section 3.2, namely the increase in fibre breaks (and consequently the increase in interfacial shear strength) in beaded fibres, are because of stress concentrations inside the bead due to double curing of the bead and/or thermal mismatch, or due to the friction lock mechanism we briefly described here. The beads undergo a second cure cycle during sample preparation for the single fibre fragmentation test, and it is known that residual strains are possible in the fibre due to a mismatch in thermal coefficients of expansion of epoxy and glass fibres.
To answer this question, we first consider the effect of double cure on the bead properties. We refer back to our work on the pullout of single beaded fibres in [8], where we studied in detail the effect of a single bead at the fibre-matrix interface (we also refer to ref [8] for characterisation tests of the epoxy system used here). In ref [8], we addressed the issue with the bead undergoing a cure cycle twice and concluded that the properties of the epoxy in the bead did not significantly differ from those of the matrix. While tests on bulk epoxy that had been cured twice showed a slight increase in its glass transition temperature, T_g, compared to epoxy that had been cured once, there was no significant difference between the mechanical properties of epoxy that had gone through the cure cycle twice and that which had gone through the cycle only once. Bulk epoxy that was cured twice (so as to emulate the cure conditions of the bead) was found to have a slight (2.5%) but insignificant (p-value = 0.43) increase in tensile modulus. Running this increase in bead modulus in our micro-mechanical model described in ref [9], by setting the bead material stiffness higher by 2.5% than the matrix, an increase in effective interfacial shear strength of less than 0.1% was obtained. Such an increase is negligible and nowhere near the very significant (p-value = 0.002) increase of 17.5% in the effective interfacial shear strength observed in Section 3.2. Moreover, referring to our work in [8], we observed an increase in pullout force and work with decreasing bead size (down to a limit that was not reached in these tests) because smaller beads enhance the wedging effect of the friction lock mechanism. Had the increase in pullout work and force simply been an artefact of stress concentration due to the bead being cured twice, bigger beads would have resulted in higher pullout forces and work. However, this was not observed.
Next, with regard to stress concentrations in the bead due to residual thermal stresses, we refer to our calculations in Appendix B, where the residual strain induced in the bead due to double curing, and the residual strain induced in the fibre due to the bead, were calculated and found insignificant. Thus, referring to the measures taken to relieve thermal strains during matrix curing (Section 2.3.2), and to our calculations in Appendix B, the 10 g weights used to offset thermal residual strain in bead-less fibres would be sufficient to offset any thermal residual strain induced in the fibre by the beads and the matrix as well.
We also refer to Figure 3c and to our video in the Supplementary Materials, where we see beaded fibres embedded in epoxy under polarised light but not under load. In these images, the birefringent pattern is not strong and appears mainly on the bead surface, indicating a distinct interface between the beads and the matrix. It is only once the beads are under load that significant birefringence is observed. No obvious stress concentration is seen inside the bead before a fibre break (Figure 10). If the increase in effective interfacial shear stress were simply an artefact of residual stress in the bead, we would expect to see significant birefringence in the bead before a fibre break. Furthermore, Figures 5 and 9b show fragments of beaded fibres between fibre breaks, and it can clearly be seen that the stress concentration is not distributed uniformly among the beads. The beads on the outer edges (closest to fibre breaks) have higher stress concentrations, as reflected by them being more brightly lit, while the beads in the centre are not as brightly lit. Therefore, the hypothesis that the double cure of the epoxy or thermal residual stress in the bead is the reason for the results seen in Sections 3.2 and 3.3 must be rejected.
A further question may arise as to whether the stress concentrations in the beads under load, whatever their source, induce local stress concentrations in the fibre, causing it to prematurely break and consequently disrupt the calculation of the effective shear strength in Equation (1). However, this hypothesis is not supported by the breaking statistics in Figure 8b, which shows that fibre breaks appear fairly randomly with respect to the location of beads. A stress concentration induced by the bead on the fibre would tend to occur consistently close to the bead edges, where the bead stress is maximal. Referring, for example, to Figures 9b and 10, most fibre breaks are far from the bead edges. Furthermore, such hypothetical stress concentrations should be negligible compared to the stress buildup along the fibre length due to the cumulative effect of the shear stress applied by the matrix and the beads.
Yet, the most convincing evidence for the effectiveness of the beads in transferring stress from the matrix to the fibre, so that the stresses we observed are not a mere artefact of stress concentration, comes from our pullout tests in [8]. In these tests, which were carried out on a fibre fragment with a single bead, we measured an average increase of 0.15 N in the pullout force due solely to the bead, equivalent to an increase of about 30% in the effective interfacial shear strength. Hence, the possibility of an artefact should be rejected, because an artefact would have decreased the pullout force rather than increased it. Furthermore, the following calculation accurately predicts the expected effective shear strength in the fragmentation tests, based on the results of the pullout test. Comparison of the effective interfacial strength of beaded fibres, τ, calculated from the pullout test in [8] and from the fragmentation test presented herein reflects a significant difference: the former predicts an increase of about 30% in τ compared to bead-less fibres (see Appendix C), whereas the latter predicts a mere 17.5%. We note that the pullout test was a pure test with only a single bead on a single fibre, compared to multiple beads in the fragmentation test. The difference in τ becomes clear when observing the stress intensity profile along a fibre fragment, seen in Figure 9. The stress in both the beaded and bead-less cases is very high at the fragment ends (high light intensity) and gradually decreases towards the fragment centre, reminiscent of the well-known shear-lag stress distribution. The predicted contribution to the shear stress by a single bead is added on top of the shear-lag stress in the region of the bead, so that the contribution of the outer beads is significant, whereas that of the inner beads is much lower. In other words, most of the 17.5% increase predicted by the fragmentation test is due to the two outer beads, out of the total of four beads on the fragment, resulting in a rough average contribution of about 15% (i.e., 30% × 2/4), whereas the additional 2.5% (i.e., 17.5% − 15%) is due to the inner beads.
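As a quick check on this accounting, the sketch below reproduces the outer/inner bead decomposition using only the figures quoted above (the 30% single-bead gain from the pullout test, four beads per fragment, and the 17.5% fragmentation result); it is simple bookkeeping, not a mechanical model.

```python
# Bookkeeping for the bead contributions discussed above (figures from the text).
single_bead_gain = 0.30    # ~30% gain in effective tau from one bead (pullout test [8])
beads_per_fragment = 4     # typical number of beads on a fragment in this study
active_outer_beads = 2     # the beads nearest the fragment ends carry most of the load

# Averaged over the fragment, the two outer beads alone contribute about:
outer = single_bead_gain * active_outer_beads / beads_per_fragment
print(f"outer beads: {outer:.1%}")                  # -> 15.0%

# The fragmentation test measured 17.5%, leaving the inner beads with the rest:
measured_gain = 0.175
print(f"inner beads: {measured_gain - outer:.1%}")  # -> 2.5%
```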
Effect of Bead Size and Fibre Volume Fraction
In this study, we examined only a single size of beaded fibres, i.e., fibres with beads that ranged from 28 to 35 µm in diameter (approx. 32 µm on average) with similar bead densities. To further understand the role of the beads at the interface, other bead diameters and spatial distributions will also be studied in future work, to see whether there is an optimal bead size and distribution for any potential enhancement of the interfacial shear strength, and to understand the influence on the position of breaks or the critical number of beads on fibre fragments. The combination of these two effects, bead size and bead distribution, is quite complex. Although, intuitively, a larger bead should provide a stronger mechanical interlock, we have shown in our previous work [8] that when the bead diameter is much reduced, its wedging effect is more pronounced. Similarly, although it seems that a higher density of beads (smaller wavelength) should provide better mechanical interlocking, this is not necessarily the case because, as shown above, not all the beads contribute equally to the overall interlocking.
The strength and toughness of a composite are linearly dependent on the fibre volume fraction (V_f). The maximum volume fraction is achieved when the fibres are tightly packed, often in the form of prepregs. Therefore, given the presence of the beads on a fibre, can a practical volume fraction be achieved for a beaded fibre composite? Typical continuous glass-fibre composites have volume fractions within the range of 30-70%, depending on the application. For instance, aerospace applications using prepregs have volume fractions of about 60%, whereas applications using the hand-layup method, such as in boat building, would only have a volume fraction of 30-40%. From our previous study on beaded fibres [10], the maximum bead locking effect is achieved when the bead diameter is about 1.5 times the fibre diameter. At that size, according to the volume fraction analysis in [10], the achievable fibre volume fraction is 40-50%, depending on whether the packing is continuous (bead-to-bead contact) or staggered (bead-to-fibre contact), which is suitable for a variety of composite applications. A higher volume fraction is still achievable for beads with smaller diameters, provided that the expected reduction in locking effectiveness is compensated for by using beads made of a stiffer and stronger material than the matrix. With such beads, a staggered tight packing of beaded fibres, where each bead is in contact with a neighbouring fibre, could result in enhanced mechanical locking due to dovetailing (like that seen in nacre) when the material is under load [10]. We note that although the diameter of the beads used in this study was on average 2.2 times the fibre diameter (i.e., above the optimal size), their locking effectiveness was measurable and significant.
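The packing analysis of [10] is not reproduced here, but a rough geometric estimate along the same lines can be sketched by placing fibres on a hexagonal lattice whose pitch is set by the bead diameter (the continuous, bead-to-bead packing case); the function below is an illustrative assumption, not the model used in [10].

```python
import math

def vf_hexagonal(d_fibre: float, d_bead: float) -> float:
    """Fibre volume fraction for fibres on a hexagonal lattice whose
    centre-to-centre pitch equals the bead diameter (beads in contact).
    A purely geometric, illustrative estimate."""
    packing_limit = math.pi / (2 * math.sqrt(3))  # ~0.907, hexagonal maximum
    return packing_limit * (d_fibre / d_bead) ** 2

d_f = 16.8  # fibre diameter used in this study (um)
print(f"{vf_hexagonal(d_f, 1.5 * d_f):.0%}")  # ~40%: optimal bead size from [10]
print(f"{vf_hexagonal(d_f, 2.2 * d_f):.0%}")  # ~19%: average bead size in this study
```

The ~40% figure for the optimal bead size matches the lower (continuous-packing) end of the 40-50% range quoted above; staggered packing would allow a higher fraction.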
For short fibre (or discontinuous fibre) composites, the benefits of using beaded fibres may be more pronounced. First, for many applications with short fibres, such as sheet moulding composites for structural parts in the automotive industry, the fibre volume fractions are much lower than those of composites using continuous fibres, and can even be as low as 22% [38]. Therefore, obtaining beaded fibre composites of practical volume fraction is feasible and is not impeded by the presence of beads on the fibre. Moreover, as seen in this study, not all the beads in the composite appear to bear stress to the same degree, such that the beads at the ends of short fibres are expected to bear a higher stress concentration. Thus, if short fibres are used, this would result in more 'active' beads, or more beads contributing to stress transfer to the fibre, potentially improving the strength of the composite compared to using fibres with no beads. In fact, a similar structure, fibres with enlarged ends known as bone-shaped short fibres, has been studied in the past [16], where strength increases were observed and attributed to better stress transfer through mechanical interlocking of the enlarged ends. Therefore, short-beaded fibre composites have ample potential, and studies on such composites are anticipated in the future.
Conclusions
Polymer beads at the interface appear to be a promising way of increasing the interfacial shear strength of a fibre-reinforced composite. Using a single fibre fragmentation test on a model system of glass fibres with epoxy beads embedded in an epoxy matrix, an increase of 17.5% was observed in the interfacial shear strength of beaded fibres compared to the control. A similar improvement is expected for a multifibre composite. The beads therefore improved the effective interfacial adhesion of a fibre in a matrix. The beads also appear to make the fragmentation process more uniform and predictable, and for beaded fibres fragmentation started earlier. The beads and the matrix had a distinct interface between them, implying that despite being of the same material, they were not a continuous phase but functioned as two separate entities. Where fibre pullout occurred, the beads were not pulled out with the fibre but stayed inside the matrix. These findings suggest that the beads serve as interfacial obstacles against matrix displacement, providing a mechanical interlock for the fibres.
In this study, we limited ourselves to using epoxy beads that were chemically and physically identical to the matrix. This was done so as to isolate and investigate the effect of geometry alone. However, as predicted in our previous theoretical study [9], using beads of different materials could be beneficial and should be studied and pursued. For instance, using a stiffer material for the beads could result in an overall increase in the stiffness of the composite, and using thermoplastic materials for the beads could potentially increase the toughness of the overall composite. Future work will include a detailed model of how beaded fibres behave under fragmentation, and a study of the effect of bead diameter and distribution on the interfacial shear strength, critical bead number and fragmentation process of beaded fibres. Further tests are planned in this direction, including expansion to full-scale composites.
Appendix A

Background
The determination of the average fibre strength at the critical fibre length, σ(L_c), is a controversial subject [25]. The strength of reinforcement fibres is stochastic in nature and cannot be described by a single value, as it depends on the sporadic presence of harmful defects. Therefore, the failure of the fibres is described by the weakest link model, which takes into account the fibre length and the density of critical defects. Reinforcement fibres, such as the E-glass fibres used in the current study, exhibit a size effect in which shorter fibres have higher tensile strengths, because they are likely to contain fewer severe defects that could cause failure. A number of studies calculate σ(L_c) from the Weibull distribution parameters obtained by tensile tests conducted on several filaments at one gauge length [1,2,6,24,26,28], whereas other studies suggest the use of three or four gauge lengths for extrapolation of σ(L_c) [25,28]. Here, we investigated both methods in the determination of the average fibre strength at critical length, σ(L_c).
Method 1: Obtaining Weibull Parameters from Fibres at a Single Gauge Length L_0
The two-parameter Weibull distribution, combined with the weakest link model, is used to present the strength data of the E-glass fibres used in this study:

P = 1 − exp[−(L/L_0)(σ/σ_0)^m]    (A1)

where P is the probability of failure at the applied tensile strength σ, L is the fibre gauge length and L_0 is the reference length. The Weibull modulus m, also known as the shape parameter, is a measure of the scatter in the tensile data (higher m means narrower dispersion) and is dimensionless, and σ_0 is a constant known as the scale parameter or characteristic strength [6,26]. For n filaments tested at gauge length L_0, the strengths of the filaments were ordered from least to greatest strength, and each data point was assigned a rank i. P was determined for each of the n data points by a rank-based estimator of the form

P = i/(n + 1)    (A2)

For the particular reference gauge length L_0 used in the test, Equation (A1) can be rearranged to the following:

ln[−ln(1 − P)] = m ln(σ) − m ln(σ_0)    (A3)

The Weibull parameters m and σ_0 were determined by plotting ln[−ln(1 − P)] vs. ln(σ). The slope of the plot yielded the shape parameter, m, and the scale parameter, σ_0, was calculated from the y-intercept and m. Using the distribution mean, σ_L = σ_0 (L_0/L)^(1/m) Γ(1 + 1/m), where Γ is the gamma function, the average fibre strength σ(L_c) at critical length L_c was determined:

σ(L_c) = σ_(L_0) (L_0/L_c)^(1/m)    (A4)

where σ_(L_0) is the average fibre strength at gauge length L_0.
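A minimal numerical sketch of method 1 is given below. The rank estimator P = i/(n + 1) and the strength data are illustrative assumptions (only the gauge length, filament count and fibre diameter follow the study), and `Lc_mm` is a hypothetical critical length.

```python
import numpy as np
from math import gamma

def weibull_method1(strengths_mpa, L0_mm, Lc_mm):
    """Fit the two-parameter Weibull from filaments tested at one gauge
    length L0, then extrapolate the mean strength to the critical length
    Lc via Equation (A4). A sketch; P = i/(n+1) is an assumed estimator."""
    s = np.sort(np.asarray(strengths_mpa))
    n = len(s)
    P = np.arange(1, n + 1) / (n + 1)            # assumed rank-based estimator
    m, c = np.polyfit(np.log(s), np.log(-np.log(1 - P)), 1)
    sigma0 = np.exp(-c / m)                      # intercept c = -m * ln(sigma0)
    mean_L0 = sigma0 * gamma(1 + 1 / m)          # distribution mean at L0
    return mean_L0 * (L0_mm / Lc_mm) ** (1 / m)  # Equation (A4)

# Synthetic strengths standing in for the 29 filaments tested at L0 = 30 mm.
rng = np.random.default_rng(0)
demo = 1450 * rng.weibull(5.5, 29)
print(f"{weibull_method1(demo, L0_mm=30, Lc_mm=0.5):.0f} MPa")
```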
Method 2: Obtaining Weibull Parameters from Filaments at Three Different Gauge Lengths
In order to minimise errors in calculating σ(L_c) from Equation (A4), tensile tests were carried out at three different gauge lengths [28]. Then, the data for ln(σ_L) vs. ln(L) was plotted, where σ_L is the average strength at gauge length L, given by the distribution mean. The mean Weibull shape parameter, m, was calculated as −1/slope from the linear regression obtained from this graph. The scale parameter, σ_0, was obtained by inverting the mean function:

σ_0 = σ_L / [(L_0/L)^(1/m) Γ(1 + 1/m)]

The average fibre strength at critical length, σ(L_c), could then be calculated from the mean [26,29]:

σ(L_c) = σ_0 (L_0/L_c)^(1/m) Γ(1 + 1/m)
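A companion sketch for method 2 (the mean strengths at 10 and 20 mm below are placeholders; only the 1344 MPa value at 30 mm is quoted in the text, and `Lc_mm` is again hypothetical):

```python
import numpy as np
from math import gamma

def weibull_method2(gauge_mm, mean_mpa, L0_mm, Lc_mm):
    """Fit m from the size effect of the mean strength across several gauge
    lengths, recover sigma0 by inverting the distribution mean, and
    extrapolate to the critical length. A sketch of method 2."""
    slope, intercept = np.polyfit(np.log(gauge_mm), np.log(mean_mpa), 1)
    m = -1 / slope                                  # ln(mean) vs ln(L) slope = -1/m
    mean_at_L0 = np.exp(intercept + slope * np.log(L0_mm))
    sigma0 = mean_at_L0 / gamma(1 + 1 / m)          # invert the mean at L = L0
    return sigma0 * (L0_mm / Lc_mm) ** (1 / m) * gamma(1 + 1 / m)

# Placeholder means at the three gauge lengths used here (10, 20 and 30 mm).
print(f"{weibull_method2([10, 20, 30], [1650, 1480, 1344], 30, 0.5):.0f} MPa")
```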
Single Fibre Tensile Tests and Results
From previous work in [8], the tensile properties of standalone bead-less glass fibres did not significantly differ from those of beaded fibres. Moreover, since the beads on the fibre are of the same material as the matrix, once the beaded fibre is embedded in the matrix, the strength of fragments of the beaded fibre can be thought of as the same as that of bead-less fibre fragments embedded in an epoxy matrix. Therefore, only the strength of bead-less fibres in air was considered and used going forward.
Single fibre tensile tests were performed on bead-less glass fibres in air on an Instron (model 5965) at a rate of 1 µm s−1. Only fibres of diameter 16.8 ± 0.5 µm were used, since this was the chosen fibre diameter for the single fibre fragmentation tests. Glass fibres were stretched taut and glued to a plastic tab using a stiff cyanoacrylate (CN, Tokyo Measuring Instruments Laboratory, Tokyo, Japan), as seen in Figure A1a. Double-sided tape was then placed over the tab and the fibre (Figure A1b) before the sample was placed within the clamps of the Instron, to prevent the tab from slipping through the clamps. One side of the tab was cut before loading onto the Instron for ease of handling. Once both sides of the fibre were placed in the clamps, the tab was cut on the other side as well, so that the fibre was free-standing (Figure A1c). The fibre was then stretched until failure occurred, and the maximum tensile stress at failure (i.e., tensile strength) was recorded.

Method 1: An initial gauge length of L_0 = 30 mm was chosen for this method and 29 filaments were tested at this gauge length (i.e., n = 29). A plot of ln[−ln(1 − P)] vs. ln(σ) is given in Figure A2a. The shape parameter, m, the scale parameter, σ_0, and the average fibre strength at critical length for the fibres, σ(L_c), were determined and recorded in Table A1. The average fibre strength at L_0 = 30 mm was found to be 1344 ± 283 MPa.
Method 2: Filaments at two additional gauge lengths, 20 mm and 10 mm, were tested, with 21 and 25 filaments, respectively, and a plot of ln(σ_L) vs. ln(L) is seen in Figure A2b. The shape parameter, m, the scale parameter, σ_0, and the average fibre strength at critical length for the fibres, σ(L_c), can be found in Table A1. The values of σ(L_c) obtained by the two methods for this epoxy-fibre system were similar in magnitude; however, a t-test comparing the values of σ(L_c) calculated by each of the methods revealed a p-value of 0.048, implying a statistically significant difference between them. Thus, for the purpose of consistency, only the values of σ(L_c) from method 1 were taken in the calculation of τ in Section 3.2.
Similarly, σ(L_c) of beaded fibres was calculated using both methods, as seen in Table A1, which essentially amounts to calculating the strength of the glass fibre at a lower L_c. This is based on our assertion that the fibre strength of a beaded fibre embedded in a matrix should not be affected by the presence of the beads at the surface. Comparing the σ(L_c) of beaded fibres obtained from methods 1 and 2 using the t-test revealed a p-value of 0.009, again implying that there is a significant difference between the values obtained from method 1 and method 2. Therefore, we used the results of method 1, as for the bead-less fibres.
Appendix B
During the fabrication process of the specimens, residual stresses and strains are introduced in the fibre and matrix due to the mismatch in thermal coefficients of expansion of the matrix and glass fibres. A typical value of the coefficient of thermal expansion (CTE) of a glass fibre used in this study is 5 × 10^−6 °C^−1 [32]. The epoxy system used in this study had a glass transition temperature (T_g) of about 78 °C [8] and was cured at 100 °C, i.e., above T_g. It is known that the CTE of polymers above T_g is much higher than below it, due to the polymers being in a more rubbery state above T_g [34,35]. Typical CTE values of epoxy are 60 × 10^−6 °C^−1 below T_g and 180 × 10^−6 °C^−1 above T_g [36]. An assumption was made that there was no relaxation of thermal stresses even above T_g, and so Equation (A7) (modified from [33]) was used to calculate the thermal residual strain in the fibre on cooling the sample to room temperature:

ε_th,f = (α_m1 − α_f)(T_g − T_cure) + (α_m2 − α_f)(T − T_g)    (A7)

where α_m1 and α_m2 are the CTE of the matrix above and below T_g, respectively, α_f is the CTE of the fibre, T is room temperature (25 °C) and T_cure is the temperature at which the sample was cured. ε_th,f was found to be −0.67%. To calculate the percentage strain (ε_w) induced in the fibres by hanging 10 g weights on either end when making the samples for the single fibre fragmentation tests, as described in Section 2.3.2, Equation (A8) was used:

ε_w = F/(A E)    (A8)

where F is the force induced in the fibre, 10 × 10^−3 × 9.8 N, A is the cross-sectional area of the fibre of diameter 16.8 µm, and E the Young's modulus of the fibre, which from the single fibre tests at gauge length 30 mm in Appendix A was found to be 68.2 GPa. The ε_w induced in the fibre by the 10 g weights was calculated to be 0.65%, which effectively counterbalances the residual strain induced by thermal stresses. Therefore, pre-straining the fibres with 10 g weights is important in order to offset the thermal residual strain induced during sample preparation.
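The two results above can be checked numerically; the sketch below simply evaluates the reconstructed Equations (A7) and (A8) with the CTE, temperature, geometry and modulus values quoted in this appendix.

```python
import math

# Equation (A7): thermal residual strain in the fibre on cooling to room temperature.
alpha_f  = 5e-6     # glass fibre CTE (1/degC) [32]
alpha_m1 = 180e-6   # epoxy CTE above Tg (1/degC) [36]
alpha_m2 = 60e-6    # epoxy CTE below Tg (1/degC) [36]
Tg, T_cure, T_room = 78.0, 100.0, 25.0
eps_th = (alpha_m1 - alpha_f) * (Tg - T_cure) + (alpha_m2 - alpha_f) * (T_room - Tg)
print(f"eps_th,f = {eps_th:.2%}")    # -> -0.68%, matching the quoted -0.67%

# Equation (A8): strain from the 10 g pre-tensioning weights.
F = 10e-3 * 9.8                      # force from a 10 g weight (N)
A = math.pi * (16.8e-6 / 2) ** 2     # fibre cross-section, 16.8 um diameter (m^2)
E = 68.2e9                           # fibre Young's modulus (Pa)
print(f"eps_w = {F / (A * E):.2%}")  # -> 0.65%, offsetting the thermal strain
```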
Compressive Residual Strain in Fibre due to Beads
We ask the question of whether or not the beads also contribute to the residual strain in the fibre, especially since they are cured once on the fibre and then cured a second time during sample preparation (see Section 2.3.2). We observed in our previous study [8] that epoxy cured twice was found to have a slightly higher T_g of 80 °C (2 °C above the matrix value). We also estimate that the values of α_m1 and α_m2 for the beads are the same as for the matrix, because at high levels of curing a small difference in T_g is only negligibly associated with the degree of cure [39]. Using this T_g, the compressive strain induced in the fibre by the bead would be −0.65% (Equation (A7)), a difference of only 0.02% strain from that of bulk epoxy cured once, equivalent to about 0.43% of the fibre strength, a negligible stress level. Furthermore, this value is highly exaggerated because the bead is small compared to the bulk matrix, and so most of the difference in thermal expansion will be borne by the bead and not by the fibre.
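Repeating the Equation (A7) evaluation with the bead's T_g of 80 °C reproduces the figures above; note that the conversion of the 0.02% strain difference to a fraction of the fibre strength implies an average strength of roughly 3.2 GPa at the critical length, which is an inference backed out from the quoted 0.43%, not a value reported here.

```python
# Equation (A7) with the twice-cured bead's Tg of 80 degC.
alpha_f, alpha_m1, alpha_m2 = 5e-6, 180e-6, 60e-6
Tg_bead, T_cure, T_room = 80.0, 100.0, 25.0
eps_bead = (alpha_m1 - alpha_f) * (Tg_bead - T_cure) \
         + (alpha_m2 - alpha_f) * (T_room - Tg_bead)
print(f"{eps_bead:.2%}")              # -> -0.65%

# The ~0.02% strain difference from the once-cured case, expressed as stress:
E = 68.2e9                            # fibre Young's modulus (Pa)
print(f"{0.0002 * E / 1e6:.1f} MPa")  # ~13.6 MPa, ~0.43% of the fibre strength
```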
Appendix C
We calculated in our previous paper [8], based on pullout tests of fibres with single beads, that the friction interfacial stress at the bead front half is τ_f = 64.8 MPa, and that the ratio between the friction interfacial stress (τ_f) and the bonding interfacial strength (τ_i) is τ_f/τ_i ≅ 1.24. Thus, in a fibre with several beads, the average interfacial strength can be calculated by weighting the stress components by their respective action lengths, so that τ_i acts along λ, the distance between bead centres (i.e., the bead period), whereas τ_f acts along L, the bead half-length:

τ_beaded = (λ τ_i + L τ_f) / λ    (A9)
The equation can be normalised by τ_i:

τ_beaded/τ_i = 1 + (L/λ)(τ_f/τ_i)    (A10)

From this study, an average value for 2L was calculated to be 65 µm, and the average λ was taken to be 125 µm. Thus, the ratio of bead half-length to bead period is L/λ ≅ 0.26. Using the ratio τ_f/τ_i from above, we calculate the ratio between τ_beaded and the control as:

τ_beaded/τ_i = 1 + (0.26 × 1.24) = 1.32    (A11)

Thus, this calculation predicts that the effective shear strength will be higher by about 30% in the beaded fibres compared to the bead-less ones. This prediction, however, is not realised in fibre fragments with more than two beads, because the outer beads carry most of the load whereas the inner beads bear much less (see discussion in the main text, Section 4.1).
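Evaluating the reconstructed Equations (A10) and (A11) with the values quoted above:

```python
# Evaluate Equations (A10)-(A11) with the values quoted in this appendix.
tau_f_over_tau_i = 1.24   # friction-to-bonding interfacial stress ratio [8]
two_L_um = 65.0           # average bead length 2L (um)
lam_um = 125.0            # average bead period lambda (um)

L_over_lam = (two_L_um / 2) / lam_um
print(f"L/lambda = {L_over_lam:.2f}")       # -> 0.26
ratio = 1 + L_over_lam * tau_f_over_tau_i   # Equation (A11)
print(f"tau_beaded / tau_i = {ratio:.2f}")  # -> 1.32, i.e. ~30% higher
```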
Cognitive Control Associated with Irritability Induction: an Autobiographical Recall fMRI Study

Objective: Despite the relevance of irritability emotions to the treatment, prognosis and classification of psychiatric disorders, the neurobiological basis of this emotional state has been rarely investigated to date. We assessed the brain circuitry underlying personal script-driven irritability in healthy subjects (n = 11) using functional magnetic resonance imaging. Method: Blood oxygen level-dependent signal changes were recorded during auditory presentation of personal scripts of irritability in contrast to scripts of happiness or neutral emotional content. Self-rated emotional measurements and skin conductance recordings were also obtained. Images were acquired using a 1.5T magnetic resonance scanner. Brain activation maps were constructed from individual images, and between-condition differences in the mean power of experimental response were identified by using cluster-wise nonparametric tests. Results: Compared to neutral scripts, increased blood oxygen level-dependent signal during irritability scripts was detected in the left subgenual anterior cingulate cortex, and in the left medial, anterolateral and posterolateral dorsal prefrontal cortex (cluster-wise p-value < 0.05). While the involvement of the subgenual cingulate and dorsal anterolateral prefrontal cortices was unique to the irritability state, increased blood oxygen level-dependent signal in dorsomedial and dorsal posterolateral prefrontal regions was also present during happiness induction. Conclusion: Irritability induction is associated with functional changes in a limited set of brain regions previously implicated in the mediation of emotional states. Changes in prefrontal and cingulate areas may be related to effortful cognitive control aspects that gain salience during the emergence of irritability.
Objective: Despite the relevance of irritability emotions to the treatment, prognosis and classification of psychiatric disorders, the neurobiological basis of this emotional state has been rarely investigated to date. We assessed the brain circuitry underlying personal script-driven irritability in healthy subjects (n = 11) using functional magnetic resonance imaging. Method: Blood oxygen level-dependent signal changes were recorded during auditory presentation of personal scripts of irritability in contrast to scripts of happiness or neutral emotional content. Self-rated emotional measurements and skin conductance recordings were also obtained. Images were acquired using a 1,5T magnetic resonance scanner. Brain activation maps were constructed from individual images, and between-condition differences in the mean power of experimental response were identified by using cluster-wise nonparametric tests. Results: Compared to neutral scripts, increased blood oxygen level-dependent signal during irritability scripts was detected in the left subgenual anterior cingulate cortex, and in the left medial, anterolateral and posterolateral dorsal prefrontal cortex (cluster-wise p-value < 0.05). While the involvement of the subgenual cingulate and dorsal anterolateral prefrontal cortices was unique to the irritability state, increased blood oxygen level-dependent signal in dorsomedial and dorsal posterolateral prefrontal regions were also present during happiness induction. Conclusion: Irritability induction is associated with functional changes in a limited set of brain regions previously implicated in the mediation of emotional states. Changes in prefrontal and cingulate areas may be related to effortful cognitive control aspects that gain salience during the emergence of irritability. Introduction Several imaging studies in healthy human subjects using positron emission tomography (PET) or functional magnetic resonance imaging (fMRI) have been conducted during the provocation of specific emotional states, including happiness, sadness, fear, anger, anxiety, guilt or disgust. These studies have demonstrated the involvement of multi-focal brain circuits in emotional processing, frequently involving portions of the frontal and temporal neocortices, anterior cingulate gyrus, medial temporal structures, amygdala, insula and the basal ganglia.
Introduction
Several imaging studies in healthy human subjects using positron emission tomography (PET) or functional magnetic resonance imaging (fMRI) have been conducted during the provocation of specific emotional states, including happiness, sadness, fear, anger, anxiety, guilt or disgust. 1-4 Despite the wealth of such literature on emotional processing, not all kinds of human emotions have as yet been investigated using functional imaging techniques. For instance, no imaging study to date has assessed the neural circuits involved specifically in the mediation of irritability states. Irritability is a distinct negative emotional state that involves a subjective reduction in the control over temper in response to sensorial or psychic stimuli. 5,6 Classic conceptualizations differentiate irritability from anger by the absence, in the former, of the actual intention to hurt. 7 Other authors have emphasized irritability as a broader construct than anger, involving a greater range of behavioral, and in particular cognitive, features. Such cognitive aspects may be critical to the management of feelings of irritability in ways other than through the expression of aggression. 6,8 Irritability emotions may appear either in healthy individuals 9,10 or in the context of psychiatric conditions including mood disorders, substance misuse disorders and borderline personality disorder. 11 In particular, irritability is highly prevalent in major depressive disorder, and is associated with greater severity of depression, suicidal tendencies, and impairments in overall functioning. 12 Irritability emotions are also a frequent feature of bipolar spectrum disorders, particularly in association with younger age, presence of other axis I psychiatric conditions, atypical depressive features and mixed mood states. 13,14 Such data suggest that irritability is relevant not only in terms of the prognosis and treatment planning for mood disorders, but may also be a potential marker of nosological subtypes of these conditions. 14 We have recently devised an fMRI protocol aimed at detecting brain activity patterns associated with the induction of emotional states by presentation of autobiographical scripts. Using this type of paradigm in a group of 12 healthy volunteers, we recently reported significant foci of activation in the prefrontal cortex, insula, dorsal anterior cingulate cortex, thalamus, hypothalamus and middle temporal gyrus during auditory presentation of happiness scripts. 15 We describe here the specific changes in brain activity provoked by induction of irritability states using this paradigm, both in comparison to presentation of neutral or happiness-inducing personal scripts. By delineating the brain circuitry involved in the normal experience of irritability, we aimed to provide clues that may help in the future elucidation of the neural basis of pathological irritability states associated with psychiatric disorders. In accordance with the categorization of irritability as an emotional state of negative valence, we predicted the engagement of brain regions previously implicated in the expression of negative emotions, including the ventral prefrontal cortex, subgenual anterior cingulate cortex and insula. 1,2,16
In addition, given the proposed prominence of cognitive aspects in the processing of irritability, we aimed to verify whether the irritability state would necessarily involve the anterior cortical brain regions previously implicated in the effortful cognitive regulation of emotion, including the dorsomedial and dorsolateral prefrontal cortices, and the dorsal anterior cingulate gyrus. 2,4
Method

Participants
We studied eleven healthy subjects (5 females, 6 males), aged 21-50 years (mean age = 32.4, sd = 7.2), all right-handed according to the Edinburgh Handedness Inventory, 17 and who had completed at least elementary school (mean number of years of education = 10.5, sd = 1.0).
Subjects were recruited through newspaper and radio advertisements, and were screened by a team of psychiatrists using the following exclusion criteria: current or previous history of neurological and/or general medical conditions, as assessed by non-structured clinical interviewing, physical examination, electrocardiogram and blood and urine tests; current or previous history of psychiatric disorders including substance abuse or dependence, according to Diagnostic and Statistical Manual of Mental Disorders criteria, 11 based on information obtained with the Structured Clinical Interview for DSM-IV; 18 first-degree family history of psychiatric disorders including psychosis, recurrent mood disorders and dependence, using the Family History Screen; 19 current use of other drugs with potentially psychoactive effects; and for female subjects, history of pregnancy or lactation within the last six months.
The Ethics Committee of the Universidade de São Paulo Medical School approved the study (Process number 048/01), and written informed consent was obtained from all subjects.
Interview for the selection of personal experiences and preparation of scripts
Within two weeks of fMRI scanning, subjects were interviewed by one of the researchers (C.T.C.) with the purpose of obtaining information for the later construction of one-minute-long autobiographical scripts eliciting feelings of either irritability or happiness, as well as control scripts of neutral emotional content (three scripts for each emotional category).
A list of situations that could potentially be associated with the feeling of irritability (e.g. waiting in long queues, dealing with bureaucracy, traffic jams) was presented, prepared based on the Hassles and Uplifts scale 20 and the Buss-Durkee Hostility inventory. 5 Subjects were asked to recall episodes of their lives involving those situations within the past six months, and to select the emotion most intensely felt, from a list including irritability, fear, anxiety, sadness, frustration and anger/aggressiveness (with the latter emotion attested by the presence of an "intention to hurt" someone physically or morally). 7 The personal events in which irritability was the main emotion recalled were then selected, and subjects were asked to give ratings from 1 to 10, in each of those events, for the degree of irritability experienced (mean = 7.5, sd = 1.8), as well as for other negative emotions that might have also been felt (including anger/aggressiveness, sadness, disgust, fear). Personal episodes in which any of the additional negative emotions were rated as more severe than the irritability scores were discarded. Episodes for which subjects gave ratings greater than 1 for the presence of anger/aggressiveness were also excluded. After applying those criteria, the three remaining situations that showed the highest scores for irritability were selected.
In order to allow the selection of three personal experiences for the preparation of the happiness scripts, subjects were asked to recall previous events (within the last 6 months) in which they experienced feelings of happiness, prompted by examples of situations such as festivities, personal achievements, the birth of new family members, etc. For all situations chosen by the subjects, they were asked to give ratings from 1 to 10 for the degree of happiness experienced (mean scores = 9.0, sd = 1.2), as well as for the above emotions of negative valence. We excluded episodes in which there were associated negative emotions. 21 The same procedure was repeated in order to allow the selection of three emotionally neutral personal events, in order to provide information for the control scripts. None of the emotions above were self-rated as being present in association with the selected neutral situations by any of the subjects.
Finally, the interview was also used to select general contextual or conceptual subject matter from local newspapers, magazines and internet sites, rated by the subjects as emotionally neutral to them. This information was used in the construction of nine non-personal texts for each subject, in addition to the nine personal scripts. During the fMRI session, these neutral, non-personal texts were presented in auditory form preceding the presentation of each of the nine personal scripts. This strategy was aimed at helping to dissipate the previous emotional reaction elicited by the presentation of the personal scripts.
A high level of visual imagery capacity is desirable for an individual to be able to display prominent emotional reactions during recall of autobiographical events. Therefore, at the beginning of the interview, the Vividness of Visual Imagery Questionnaire, translated from its Spanish version, 22 was given to all subjects, in order to provide a measure of their capacity for visual imagery. The current sample presented with a mean score of 31.9 (sd = 11.4), which indicates a good capacity for visual imagery. 22 The nine personal scripts for each subject were written in the second person, using the present tense, 23 by a professional writer. A predetermined text structure was employed, dividing each script into three separate paragraphs, describing respectively: the sensorial context in which the experience developed; the temporal, personal and interpersonal contexts; and the details of the emotional reaction elicited. Each paragraph was written with 40 (±4) words and 240 (±30) characters (including spaces). Scripts were read by a professional narrator of the same gender as the subject, in a normal tone of voice, 21 and were recorded digitally. Audio editing of scripts was conducted using Protus® software in order to adjust the duration of each paragraph to 20 seconds without distortions (60 seconds for the entire script), and to a mean volume of 46 dB, after normalization.
Emotion induction procedure
Before scanning, room lights were turned down, and auditory instructions were provided while subjects lay down on the scanner bench. The fMRI data acquisition was conducted during the presentation, via non-magnetized earphones (Commander-XG®, Resonance Technology, Los Angeles, USA), of the three kinds of autobiographical scripts (irritability, happiness, and neutral emotional content). Subjects were asked to keep their eyes open and pay attention to the content of each script, recalling the emotions felt during the situation as if it was occurring at the moment of the fMRI scanning; 24 they were also instructed to avoid thinking about other memories not specifically cited in the scripts. Three separate runs were performed, each including different scripts of irritability, happiness and neutral content.
The scanning session comprised three functional runs, involving scripts of irritability, happiness and neutral content. A functional run included three trials of 80 seconds, preceded by a baseline period of no stimulation (80 seconds). Trials were composed of a non-personal script (20 seconds) and a personal script (60 seconds). Non-personal scripts were used to dissipate the previous emotional reaction. In each run, irritability and happiness trials were presented either first or last relative to the neutral script, which was always presented in the mid-position. The position of the two emotional scripts was inverted in the subsequent runs. Also, the order of presentation of the irritability or happiness scripts in the first run (and the subsequent ones) was counterbalanced across subjects. These strategies were used to avoid a beginning-to-end bias of emotional processing for one of the two affective states (irritability or happiness).
Subjective ratings of five different emotions (happiness, irritability, sadness, fear and anxiety) were obtained immediately after the presentation of each personal script (in pseudorandomized order), as well as after the initial baseline period.
Responses were provided using a set of purpose-built conductors previously installed, with velcro, on the ventral face of each of the five fingertips of the right hand. For each visual scale, subjects chose a rating from 1 (not at all) to 4 (high), by pressing the conductor placed on the thumb against one of the four conductors on fingertips 5, 4, 3 or 2, respectively. This apparatus was used in order to speed up the process of response selection and to minimize errors. A desktop computer recorded subjects' choices and response times, with its screen displaying to the examiners the same scales as those seen by the subjects. In four subjects, visual scales were projected on a screen and visualized by the supine participant using a mirror mounted on the head coil of the fMRI scanner, at a distance of 390 cm from the projection screen. In the remaining seven subjects, scales were displayed using goggles with binocular vision (MRIVision2000®, Resonance Technology, Los Angeles, USA), worn from the onset of the fMRI examination.
Immediately before and after image acquisition, subjects responded to the State scale of the State-Trait Anxiety Inventory (STAI), 25 in order to determine the levels of state anxiety across the scanning procedure. This assessment was aimed at investigating whether there would be differences in the degree of state anxiety before and after the fMRI procedure that might influence the BOLD signal patterns detected across the three functional runs.
Image acquisition
For each run (including baseline, emotional/neutral personal scripts, subjective scales and neutral non-personal scripts), a total of 220 gradient-echo T2* echo planar imaging (EPI) sets were obtained using a GE LX-MR 1.5T scanner (General Electric, Milwaukee WI, USA), each consisting of 15 interleaved non-contiguous 3 mm-thick transaxial slices, parallel to the intercommissural line. Imaging parameters were: TE = 40 ms, TR = 2 s, 64 × 64 matrix, interslice gap = 0.3 mm, field-of-view = 200 × 200 mm and flip angle = 90°. Stimulus presentation was synchronized with image acquisition via an optical relay, triggered by the radiofrequency pulse. Purpose-written software was used for synchronizing the presentation of stimuli, visual scale display, subject responses, and image acquisition.
Within five days before the fMRI session, subjects were trained in a sham session, lying down inside a scanner simulator that replicated the MRI environment and the sounds emitted during image acquisition. This procedure aimed to accustom the participants to the MR environment and to the task format used for eliciting emotional reactions, thus minimizing habituation effects over the three functional runs of the actual fMRI scanning session.
Data analysis
Image processing involved, firstly, data realignment to minimize motion-related artifacts, 26 and Gaussian smoothing at FWHM = 7.2 mm. Changes in blood oxygen level-dependent (BOLD) signals in association with each condition were modeled using the General Linear Model, assuming the haemodynamic response function as the convolution of the experimental design by two gamma-variate functions (4 and 8 seconds after onset). The weighted sum of these two convolutions providing the best fit to the time series at each voxel was calculated, and a goodness-of-fit statistic (SSQ ratio) was computed at each voxel. 27 The SSQ ratio is defined as the quotient between the sum of squares of residuals under the constrained model (assuming there is no activation) and the sum of squares of residuals for the complete model. The SSQ ratio distribution under the null hypothesis (of no activation) was obtained by using wavelet-based permutation. 27 This permutation approach has been shown to provide good Type I error control with minimal distributional assumptions.
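This model can be illustrated with a minimal sketch: a boxcar design is convolved with two gamma-variate responses peaking near 4 and 8 s, their weighted sum is fitted by least squares, and the SSQ ratio is the residual sum of squares of the constrained (no-activation) model divided by that of the full model. The gamma parameterization, timing and noise below are illustrative assumptions, not the exact implementation of the cited method.

```python
import numpy as np

TR = 2.0                                  # s, as in the acquisition above
t = np.arange(0, 30, TR)                  # HRF support (illustrative)

def gamma_hrf(t, peak_s):
    """Simple gamma-variate response peaking near peak_s (illustrative shape)."""
    h = (t / peak_s) ** 4 * np.exp(-4 * t / peak_s)
    return h / h.max()

# Boxcar design for one 60 s script block within a run (illustrative timing).
n_vol = 110
design = np.zeros(n_vol)
design[20:50] = 1.0                       # volumes 20-49: script "on"

# Regressors: the design convolved with the 4 s and 8 s responses, plus a constant.
X = np.column_stack(
    [np.convolve(design, gamma_hrf(t, p))[:n_vol] for p in (4.0, 8.0)]
    + [np.ones(n_vol)]
)

def ssq_ratio(y, X):
    """Residual SS of the constrained (no-activation) model over the full model."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ss_full = np.sum((y - X @ beta) ** 2)
    ss_null = np.sum((y - y.mean()) ** 2)  # constrained model: constant only
    return ss_null / ss_full

rng = np.random.default_rng(1)
y = 0.8 * X[:, 0] + rng.normal(0, 1, n_vol)  # synthetic voxel time series
print(f"SSQ ratio = {ssq_ratio(y, X):.2f}")  # values well above 1 suggest a response
```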
In order to extend statistical inferences to the group level, the SSQ ratio maps were spatially normalized to Talairach standard space by first applying a rigid-body transformation of the fMRI data into high-resolution morphological images of the same subjects, followed by an affine transformation onto a template. 26 In order to identify voxel clusters showing significant BOLD response differences between conditions, the median differences of SSQ ratios over all subjects between conditions were initially tested at voxel-wise p-values < 0.05. The "activated" voxels were then assembled into 3D-connected clusters and the sum of the SSQ ratios (statistical cluster mass) was determined for each cluster. The same procedure was repeated for the median SSQ ratio maps obtained by wavelet permutation of the data for the specific script conditions, in order to compute the null distribution of statistical cluster masses under the null hypothesis. This distribution was then used to determine the critical threshold for the cluster mass under the null hypothesis at a Type I error level of cluster-wise p-value < 0.05.
Skin conductance response (SCR) acquisition and analysis
Skin conductance signals were recorded simultaneously with fMRI acquisition, in order to provide objective measures of psychophysiological changes associated with the emotional reactions elicited by the presentation of scripts. Standard fingertip AgCl leads 28 were placed on the middle phalanges of the index and middle fingers of the left hand. The electrode leads were connected through a high-pass filter on the penetration panel to a SCR transducer connected to a stand-alone monitor unit (Psylab®) outside the scanner room. Analog signals were recorded at 100 Hz, passed to an AD converter, and recorded using Psylab software (Psylab®) on a purpose-configured laptop. Measurements were expressed as the difference between the skin conductance level (SCL) obtained during the presentation of a given personal script and the respective baseline value. Runs during which the curves' variation in signal intensity was lower than 0.05 µS were discarded.
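A minimal sketch of this SCL reduction (the array layout, run duration and windowing below are illustrative assumptions):

```python
import numpy as np

FS = 100                     # Hz, sampling rate as above
MIN_RANGE_US = 0.05          # discard runs varying by less than this (uS)

def scl_change(script_us, baseline_us, run_us):
    """Mean SCL during a personal script minus the run's baseline level;
    returns None when the whole-run variation is below 0.05 uS."""
    if np.ptp(run_us) < MIN_RANGE_US:
        return None                          # run discarded, as described above
    return float(np.mean(script_us) - np.mean(baseline_us))

# Illustrative run: 440 s of SCL at 100 Hz; baseline 0-80 s, first personal
# script assumed at 100-160 s (after the 20 s non-personal script).
rng = np.random.default_rng(2)
run = 2.0 + 0.2 * rng.standard_normal(440 * FS)
print(scl_change(run[100 * FS:160 * FS], run[:80 * FS], run))
```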
Results

Subjective self-reported ratings of emotions
Mean emotion intensity scores provided by subjects at the end of each condition are presented in Table 1. As expected, there were significant differences in subjective scores for irritability across the four different conditions (baseline, happiness, irritability and neutral state), with subjects reporting feeling significantly more irritated immediately after presentation of the irritability scripts relative to all other conditions (Table 1). There were also significant differences in the subjective scores for sadness across conditions, with subjects showing higher sadness scores after presentation of the irritability scripts relative to all other conditions (Table 1).
The comparison of STAI scores before and after the fMRI procedure did not show significant differences (t = 1.04; df = 10, p = 0.325, paired t-test).
Patterns of BOLD signal differences between conditions
Results of the comparison of BOLD effects between the irritability, happiness and neutral conditions are displayed in Figure 1. Table 2 provides the coordinates for the voxels of maximal statistical significance in each cluster showing BOLD signal differences between conditions, as well as the size of those clusters and the corresponding statistical test values.
In the comparison of the irritability condition relative to the neutral condition, four foci of regional increases in BOLD effect in association with the irritability scripts were detected (Figure 1 and Table 2), located respectively in: the left subgenual anterior cingulate cortex (BA25) extending towards the left caudate nucleus; the left dorsomedial prefrontal cortex (BA9); the left dorsal anterolateral prefrontal cortex (BA10); and the left dorsal posterolateral prefrontal cortex (BA44) extending towards the precentral gyrus (BA6/4).
The findings of the comparison of the irritability condition versus the happiness condition have been reported elsewhere. 15 These results are presented again here, as this contrast is relevant to the delineation of the brain activity patterns specifically related to the emergence of irritability, as opposed to an emotional reaction of positive valence. When contrasted with the happiness condition, the presentation of irritability scripts was associated with increased BOLD signal in similar locations as described in the paragraph above (Figure 1 and Table 2), including the left subgenual anterior cingulate gyrus (BA25) extending towards the left head of the caudate nucleus, the dorsal anterolateral prefrontal cortex (BA10), and the dorsomedial prefrontal cortex (BA9). However, the focus of increased BOLD signal in the left dorsal posterolateral prefrontal cortex (BA44) was no longer present. In addition, this contrast showed activation of the left cerebellum and fusiform/lingual gyri (BA19/37) in association with the irritability condition (Table 2).
Areas of decreased BOLD effect during the irritability condition relative to the presentation of neutral scripts (Figure 1 and Table 2) were seen mainly in the temporal and occipital areas, including the right middle temporal gyrus (BA21), left middle and superior temporal gyri (BA21/22), right temporal pole (BA38), left inferior temporal gyrus (BA20/37), left fusiform gyrus (BA19/37), and left hippocampus (BA66). Relative to the happiness scripts, the irritability condition was also associated with temporal lobe areas of decreased BOLD signal, involving the right temporal pole (BA38), right inferior and middle temporal gyri extending to the posterior insula (BA20/21), and the left inferior and middle temporal gyri (BA20/21/37). There were also foci of decreased BOLD signal during irritability relative to happiness scripts in the left anterior insula extending to the dorsal posterolateral prefrontal cortex (BA44), as well as in the left thalamus and hypothalamus (Table 2).
Skin conductance levels
The SCR measures of 3 subjects had to be discarded, due to variation in signal intensity lower than 0.05 µS. The results for the remaining 8 subjects, over the three runs, are presented in Table 3. Mean scores for the SCL difference relative to the baseline condition were positive for the three types of scripts, showing that all of those personal script conditions were associated with greater SCR in comparison to the baseline state. Regardless of the type of condition, there was an overall tendency towards lower values from the first to the third run. A trend towards significant differences between conditions was seen during the first run, with a tendency to lower irritability SCL values relative to both happiness and neutral script values (Table 3).
Discussion
To the best of our knowledge, this is the first fMRI study to have used autobiographical scripts to investigate the brain circuitry involved in the mediation of irritability states in healthy subjects. There were significant differences in self-reported ratings of subjective emotions across the irritability, happiness and emotionally neutral conditions, as well as a trend towards differences in skin conductance levels, thus suggesting that the paradigm was successful in eliciting distinct emotional states. When the irritability condition was contrasted against either the happiness condition or the emotionally neutral control state, significant differences in BOLD signal were detected in a limited set of brain regions. These changes included increased activity in the subgenual anterior cingulate cortex and specific portions of the dorsal prefrontal cortex, as well as decreased activity in inferior temporal regions.
We detected a focus of significant left subcallosal cingulate activation during the irritability condition relative to both the happiness and emotionally neutral states. Contemporary models of emotional processing place the subgenual cingulate cortex as an important component of a ventral neuronal network that is critical to the actual generation of normal and pathological emotional states, as well as to the automatic regulation of autonomic responses in the context of those states. 2,3 Using this type of autobiographical recall paradigm, we found activation of the subgenual cingulate cortex during presentation of irritability scripts but not during presentation of happiness scripts. 15 This distinction supports the view that the engagement of this brain region is more closely related to the generation of negative rather than positive emotions. 1 It is also interesting that the pattern of subcallosal cingulate cortical activation reported here, together with the de-activation of inferior temporal regions, resembles the results of previous imaging studies that investigated the brain circuits involved in normal sadness or major depression. 1,2,29,30 Thus, the engagement of the subgenual cingulate cortex in our study could be due to the fact that the presentation of irritability scripts elicited significant feelings not only of irritability, but also of sadness. Such overlap between irritability and sadness is consistent with the recognized relevance of irritability emotions to the clinical profile of depressive disorders. 11,12 Also, recent imaging studies have suggested the involvement of the subgenual cingulate cortex, together with prefrontal areas, in the neurobiology of bipolar disorder 31 and manic and hypomanic states. 32 These findings may suggest that the abnormal functioning of the brain regions implicated herein could be related to the expression of irritability not only in the context of depression, but also in that of manic and/or mixed states. 14 The pattern of increased activity in the BA9 portion of the left dorsomedial prefrontal cortex, which was present during the presentation of irritability scripts relative to the neutral emotional state, is unlikely to be specifically related to the emergence of irritability emotions. This portion of the frontal lobe has been engaged in several studies that used autobiographical scripts to elicit different kinds of emotion, both of negative and positive valence. 1,4 The lack of specificity of left dorsomedial prefrontal activation to the irritability condition is confirmed by the fact that the use of this paradigm during presentation of happiness scripts elicits increased activity in this prefrontal region relative to the presentation of neutral scripts, as we reported previously. 15 This has supported the notion of a general role for this brain region in emotional processing, possibly related to the interoceptive awareness of emotions, and the conscious regulation of emotional arousal and autonomic responses. 1,2,4 One feature specifically related to the emergence of irritability in our study was the pattern of increased activity in the left anterior dorsolateral prefrontal cortex (BA10), which was not engaged during the happiness condition relative to the neutral state, as we reported previously. 15 It has been suggested that the involvement of specific dorsolateral prefrontal regions in functional imaging studies of emotional processing in humans could vary as a function of the valence of the specific emotions evaluated in each study. 1
However, it is unlikely that the differential involvement of the anterior dorsolateral prefrontal cortex (BA10) between the irritability and happiness conditions in our investigation would have been determined simply by the opposite valence of those two emotional states; studies using experimental designs to specifically address this issue have suggested a greater involvement of the left anterior dorsolateral prefrontal cortex in direct proportion to the degree of positive valence of the emotional response evaluated. [33] Also, it is unlikely that the engagement of the anterior dorsolateral prefrontal cortex would have been influenced by the emergence of sadness during the presentation of irritability scripts; previous functional imaging studies of normal or pathological sadness have actually shown decreased rather than increased activity of the prefrontal cortex in association with the emergence of the latter kind of emotion. [29] One other possibility, which is in accordance with the predictions of our study, is that the engagement of the anterior dorsolateral prefrontal cortex would have been determined by the salience of cognitive-related aspects during the stimulation paradigm. Such engagement could, for instance, be related to the role of the anterior dorsolateral prefrontal cortex in the retrieval of personal memories and/or the attention being paid to one's own emotions. [36] This would not be sufficient, however, to explain the more extensive activation in the anterior portions of the dorsolateral prefrontal cortex specifically during the irritability condition, as subjects were able to recall autobiographical events and re-experience equally the emotions of irritability and happiness during the respective scripts, as attested by the high subjective rating scores given for those two emotions during both conditions. On the other hand, previous PET and fMRI studies of healthy subjects have consistently shown the engagement of the anterior dorsolateral prefrontal cortex during the performance of cognitive tasks involving aspects other than autobiographical and long-term episodic memory, such as executive functioning/working memory. [34,35] Taking into account the latter imaging findings and those of lesion studies, current models of emotional processing have implicated dorsolateral prefrontal regions as critical to the cognitively effortful regulation of emotional states. [2,4] This proposition is entirely consistent with the relevance attributed to cognitive aspects in current conceptualizations of irritability. [8] In this context, a robust engagement of anterior dorsolateral prefrontal portions specifically during the emergence of irritability emotions could be related to the use of cognitive-based strategies aimed at, for instance, re-evaluating the magnitude of the emotional response in the face of the stimuli that elicited irritability, or appraising the potentially adverse social consequences that might follow if aggressive reactions were to be expressed in response to such stimuli.
One additional aspect that suggests a salience of cognitive control strategies particularly during the presentation of irritability scripts is the tendency that we found towards lower SCL during the irritability condition compared to during the presentation of neutral scripts (specifically during the first fMRI run). [15] Recent fMRI studies have shown SCR decrements (in direct proportion to increased BOLD signal in the prefrontal cortex) when cognitive effortful processing aspects are added to emotion-provoking paradigms in healthy humans. [28,37] The above arguments favor the view that the increased anterior prefrontal activity that we found during the processing of irritability emotions would be specifically related to cognitive regulation strategies.
This hypothesis should be evaluated in future studies directly comparing the construct of irritability with other negative emotions that supposedly lack such cognitive control aspects, including anger and sadness. Also, imaging studies directly comparing anger and irritability conditions would be desirable, to clarify other issues raised by the present investigation. For instance, the activation patterns obtained during the induction of irritability were distinct from the findings of previous brain imaging studies that investigated the functional circuitry underlying the induction of anger: in PET and fMRI studies using autobiographical scripts of anger experiences, [24] healthy subjects consistently display increased activation of the orbitofrontal cortex, and also often engage the rostral anterior cingulate gyrus and the temporal pole. In the present study, rather than activation of the orbitofrontal cortex or the temporal pole, we actually detected a site of de-activation of the right temporal pole. If confirmed in subsequent studies directly comparing irritability and anger, the lack of activation of the functional circuitry seen as critical to the processing of anger may indicate that irritability and anger emotions can be distinguished from each other not only in terms of psychopathological characteristics, [6][7][8] but also in regard to their underlying brain mechanisms.
Finally, one finding not consistent with our initial predictions was the attenuation of BOLD signal in the anterior and posterior insula during the irritability condition relative to the happiness state. The insula is thought to be relevant to the cortical mapping of information pertaining to bodily responses that accompany emotional reactions. [38] Our pattern of results indicates that, during paradigms of autobiographical recall, emotions of positive valence may elicit greater activity increases in brain regions involved in the central mapping of emotion-based somatic markers than emotions of negative valence.
The interpretation of the findings reported here has to be made with caution, due to several limitations of our study. The size of the sample was modest, and the inclusion of both males and females may have increased the variability of regional brain activity measurements during emotional processing. Also, the gradient EPI protocol employed may be subject to susceptibility-induced signal losses and geometric distortions that could complicate the assessment of brain areas thought to be relevant for the processing of negative emotions, such as the orbitofrontal cortex and amygdala. [39,40] However, it should be mentioned that we did detect signal changes in other brain regions that would have been equally susceptible to those artifacts, such as the temporal poles. Nevertheless, replication of our findings is warranted using optimized EPI protocols aimed at improving the signal sensitivity in brain regions subject to susceptibility-induced artifacts. Finally, it should be mentioned that the pattern of brain activity changes during irritability provocation may be specific to the induction method that we have used. For instance, studies investigating other emotions have found that the amygdala is not engaged when emotional states are induced by internally/cognitively-generated imagery recall as in our paradigm, [29] but may respond robustly when emotions are induced by externally-cued perceptual stimuli. [1]
Conclusion
Our study provided evidence that the induction of irritability, using autobiographical scripts, elicits functional activity changes in several brain regions thought to be critical both to the generation and to the cognitive control of emotional states. There were changes in BOLD signal in the prefrontal cortex, subgenual anterior cingulate gyrus and inferior temporal neocortex, all of which have been previously implicated in the emergence and/or autonomic regulation of negative emotions. Also, we detected specific foci of significantly increased activity in anterior dorsolateral prefrontal areas, which are most probably related to the cognitively effortful modulation and management of the irritability reaction. If replicated in subsequent studies, these findings may provide a basis for future investigations aimed at delineating the dysfunctional brain circuits that mediate the processing of irritability reactions in pathological mood conditions, of depressive, manic or mixed nature.
Economic Assessment of Irrigation with Desalinated Seawater in Greenhouse Tomato Production in SE Spain
This study assesses the impact of irrigating with desalinated seawater (DSW) on the profitability of greenhouse tomato in south-eastern Spain, comparing different water-quality sources in both traditional sanding cultivation and soilless hydroponic production. The assessment is based on the combination of partial crop budgeting techniques with field data from the LIFE DESEACROP Project experimental activities. Our results show that the exclusive use of DSW for tomato production increases fertilization costs by 20% in soilless systems and by 34% in traditional sanding cultivation, and water costs by 30% in soilless systems and by 48% in traditional soil cultivation. As a result, production costs increase by 5% in soilless cultivation and 3% in soil cultivation, increases that are reduced when DSW is blended with brackish water. However, the lower salinity of DSW, compared with conventional water resources in the area, increases both crop yield and profitability. Soilless cultivation would also increase tomato profitability but only if good quality water is available. The materialization of the potential benefits of soilless production requires improving water quality through the increased use of DSW. Otherwise, the traditional sanding production system, better adapted to the area's poor soils and bad quality water, would be more profitable.
Introduction
South-eastern Spain is one of the most water-scarce regions in Europe. The high profitability of agricultural activity, coupled with semi-arid climatic conditions, means that the demand for irrigation water far exceeds the availability of water resources [1], generating a situation of chronic shortage that mainly impacts agriculture. Responses to this situation have been multiple. First, water scarcity has encouraged the widespread adoption of pressurized irrigation systems, together with the modernization of water distribution infrastructures, maximizing its efficiency to levels that are difficult to improve from a technical point of view [2] and generating significant benefits for irrigated agriculture [3]. Second, another traditional response to water scarcity in south-eastern Spain has been the transfer of water resources from other areas, first through the Tajo-Segura transfer and later through the Negratín-Almanzora transfer, both of which provide significant volumes for both urban supply and irrigation [4]. Third, there has been an important development of non-conventional water resources, including a high level of reuse of domestic wastewater for irrigation purposes [5][6][7], as well as the recent and growing use of seawater desalination for irrigation [8,9].
Despite all these actions, the scarcity of water resources continues to be a reality in south-eastern Spain. Moreover, it will likely intensify in the future as a result of climate change, with scenarios that forecast a reduction in the average water runoff in the natural regime of between 5% and 11% [6,10]. At the farm level, adaptive responses include the use of more sustainable irrigation strategies based on regulated deficit irrigation techniques and remote-control systems to optimize irrigation management and the shift to less water-demanding crops or varieties [11][12][13]. At the institutional level, there is little margin for improving the distribution of water, increasing the resources from wastewater reuse (limited by urban and industrial consumption) or constructing new water transfers. In addition, future scenarios of water availability in the basins of origin forecast a reduction in the contributions of these transfers [14]. All this reduces the feasible policy alternatives for increasing the resilience of irrigated agriculture in the face of the progressive depletion of hydrological systems and for dealing with the current and future water shortages in the area.
As opposed to other options for managing water demand through economic mechanisms, the main commitment of the Spanish national hydrological authorities has been the development of the availability of desalinated seawater (DSW). Indeed, through the AGUA Program, Spain has invested heavily in the construction of seawater desalination plants (SWDP) over the last two decades in order to cover the structural water deficit, meet the demand for irrigation and guarantee urban supply [15]. In this sense, Spain and Israel are the only countries in the world to commit to this water planning strategy [8].
Currently, there exist eleven SWDPs that supply water for irrigation in south-eastern Spain, with a joint production capacity of 362 Mm³/year, of which up to 268.3 Mm³/year are available for use in irrigation [16]. During the first years of operation of the SWDPs, the demand for DSW for agricultural use was low, between 20 and 25 hm³/year. From 2013 onwards, when several large public SWDPs started to operate, the agricultural use of DSW rapidly increased, reaching 177.3 Mm³/year in 2017 [16]. This boom in the use of DSW is explained by several favorable circumstances: a large number of SWDPs financed by the public AGUA Program, which also use modern and quite efficient desalination technologies that reduce the cost of DSW production; the growing need to provide new water resources to help alleviate the structural water deficit; and the high profitability of irrigated agriculture in many irrigated areas. It also coincides with the 2013 policy agreement that changed the operation rules of the Tagus-Segura Transfer (the main source of water supply in SE Spain), resulting in lower transferred volumes and reducing the water supply reliability for agricultural users [17]. Nowadays, the volume of DSW resources supplied for irrigation is remarkable, supplying more water for irrigation than the reuse of wastewater and approaching the historical average irrigation water volumes supplied from the headwaters of the Tagus [16]. Based on the existing demand and the downward trends in other sources of water supply, the agricultural use of DSW in SE Spain is likely to increase in the future.
This large availability of DSW resources has undoubted advantages for irrigation, some of which are precisely those that have justified the significant public investment that has allowed its development in south-eastern Spain. On the one hand, DSW is a new source of water supply, which increases total water availability for irrigation in a specifically water-stressed area where highly profitable export-oriented crops are grown. Alternatively, new DSW resources can be used to replace groundwater from depleted and/or salinized systems, of which numerous examples exist in south-eastern Spain, thus helping to recover degraded aquifers and reducing the impact of balancing aquifer pumping/recharge rates [18,19]. DSW is also a stable and inexhaustible source of water, without the climatological and hydrological uncertainties associated with conventional water resources, whose incorporation into the pool of resources increases water supply reliability [20], encouraging productive investments and allowing better production planning.
Another potential advantage of DSW is related to the improvement in the quality of irrigation water in some areas resulting from the use of DSW. The quality of DSW is significantly better than that in many Mediterranean coastal areas, where groundwater resources have significant levels of electrical conductivity. The reduced salinity of DSW increases crop yields with respect to low-quality water, as shown in different Mediterranean horticultural crops by [7,21,22]. Water salinity reduces water uptake and plant transpiration because of the physiological adaptation of roots to water stress and the reduction in root density [23]. Using less saline water, such as DSW, changes the spatial distribution of the rates of root water uptake, which increases transpiration [24]. This also reduces vertical hydraulic fluxes, thus reducing the water leaching fraction [25] and nitrate leaching below the root zone [24,25]. In addition to increasing crop yields and reducing water and nitrate leaching, lower electrical conductivity allows the development of crops that are more sensitive to water salinity. For example, in the Campo de Níjar area in the Almería province, where poor groundwater quality has traditionally led to a predominance of tomato cultivation, the improvement in water quality due to the incorporation of DSW is allowing more crop diversification.
On the other hand, the main disadvantage of DSW is the high cost of its supply, including its production and its transportation to irrigable areas, which reduces the profitability of agricultural activity. The final cost of DSW for farmers can double or, in some cases, even triple the cost of the standard water pool, depending on the SWDP production costs, the transportation costs to each irrigated area and the level of public subsidy to DSW [16].
DSW may also cause agronomic problems that arise from its particular physicochemical characteristics, which might affect crop yields, fertilization needs and the conservation of agricultural soils [26]. One of these problems is derived from its low concentration of nutrients, such as calcium, magnesium and sulfate, which are essential for crop development, and whose presence in continental waters makes their supply through fertilization unnecessary [21,26,27]. These deficiencies force farmers to add these nutrients, which increases fertilization costs and impacts farm profitability [28]. Likewise, its high boron and chloride content can generate toxicity in sensitive crops, such as citrus [26]. However, all these impacts are significantly reduced when DSW is blended with resources from other origins. The nutritional imbalances that DSW has for its agricultural use can be corrected by blending with other inland waters, remineralization post-treatments in SWDPs and by reprogramming in-plot fertigation [26]. The first option is the most economical and most frequently used. When DSW is almost the only available resource and blending is not an option, incorporating the nutrients in the SWDP is less costly than reprogramming fertigation, but it is barely done as SWDPs are not interested in further increasing the cost of DSW.
Reprogramming fertigation can increase fertilization costs. Experimental studies in south-eastern Spain's horticulture report increases in fertilization costs ranging between 6% and 22% when irrigating with DSW, depending on the crop and cropping system [28,29]. However, the negative impact of reprogramming fertilization on farm profitability is small when compared to that of the cost of DSW. For instance, [28] calculates a reduction in farmer's profit for soil lettuce cultivation of 26% when using a 50% DSW mixture and of 55% when irrigating exclusively with DSW, most of it caused by the higher cost of DSW. The authors of [29] find that changes in fertilization for several horticultural crops in SE Spain would reduce crop profit by 1-3% in soil production systems and by 4-18% in soilless production systems, depending on the crop considered, and that such impact is small when compared to that of the cost of DSW. They also show that if DSW were blended with other sources of water, fertilization costs would not increase at all for soil production systems.
In this sense, the aim of this study is to analyze the economic impact of using desalinated seawater in greenhouse tomato cultivation, one of the main horticultural crops in south-eastern Spain, looking at the implications both in terms of changes in input use and prices and in terms of improved water quality. This study considers both conventional sanding cultivation and hydroponic systems with reuse of drainage, which are the major production systems used in the area. Apart from the cost for farmers themselves, seawater desalination can also impose costs on society as a whole; the most relevant ones are its environmental impact due to the high energy consumption required for its production, around 4 kWh/m³, and the associated GHG emissions [30]. Consequently, this study also looks at the implications not only in terms of GHG emissions but also in terms of nutrient lixiviation, a major problem in aquifers across the Mediterranean area.
Agricultural use of DSW is a relatively recent topic for agronomic research, and scientific evidence is still limited to a few published references and crops, mainly from Israel and Spain, where several research groups are developing projects to generate a better understanding of the physiological and agronomic response of crops irrigated with DSW and its impacts on soil, aquifers and crop profitability. This paper contributes to this growing literature by looking at the economic impact of irrigation with DSW in Spanish greenhouse tomato production using very detailed experimental data. Previous studies have looked at this issue in other horticultural crops [21,28,29]. This paper builds on [31] and [32], which look at the environmental and food quality implications of using DSW in greenhouse tomato production by using data and results from the same experimental activities. The study continues with a detailed description of the methodology used and the results obtained, finishing with the major conclusions that can be drawn.
Materials and Methods
This study looks at the impact on farm profitability and input productivity of the changes in the tomato production process resulting from the use of DSW under two alternative production technologies, basically changes in input use, input cost and crop yields. The approach for such assessment is based on the combination of partial crop budgeting techniques with experimental field data. Partial crop budgeting consists of calculating the effect on the profitability of changes in the crop production process, either in terms of changes in production costs, crop yields or farm prices. It is the most common tool used to analyze the profitability of alternative farming practices and agricultural technologies [33]. The basis for any partial budgeting is the elaboration of a detailed technical-economic characterization of the standard crop production process in terms of farming practices, crop yields, input use and production costs, from which detailed budgets can be built to integrate changes in input use and output for their comparison.
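To make the partial-budgeting logic concrete, the minimal Python sketch below compares a baseline crop budget against an alternative in which only the items that change (here, water and fertilizer costs and yield) are edited, while everything else is carried over unchanged. All cost items and figures are illustrative placeholders of our own, not the study's data.

```python
# Minimal sketch of partial crop budgeting: compare a baseline budget
# against an alternative that edits only the items that change.
# All figures below are illustrative placeholders, not study data.

def crop_profit(yield_kg_ha, price_eur_kg, direct_costs, indirect_costs):
    """Farm profit per hectare = revenue - direct costs - indirect costs."""
    revenue = yield_kg_ha * price_eur_kg
    return revenue - sum(direct_costs.values()) - sum(indirect_costs.values())

baseline = {
    "yield_kg_ha": 100_000,
    "price_eur_kg": 0.60,
    "direct": {"water": 3_000, "fertilizer": 4_000, "labor": 20_000},
    "indirect": {"greenhouse_amort": 8_000, "irrigation_amort": 1_500},
}

# Partial budget: only the changed items are edited; the rest is
# inherited unchanged from the baseline characterization.
alternative = {
    **baseline,
    "yield_kg_ha": 110_000,  # e.g., better water quality raises yield
    "direct": {**baseline["direct"], "water": 4_500, "fertilizer": 5_000},
}

for name, budget in [("baseline", baseline), ("alternative", alternative)]:
    profit = crop_profit(budget["yield_kg_ha"], budget["price_eur_kg"],
                         budget["direct"], budget["indirect"])
    print(f"{name}: profit = {profit:,.0f} EUR/ha")
```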
Analyzed Productive Strategies and Experimental Data
Data on the response of greenhouse tomato to irrigation with DSW, both on traditional soil cultivation and hydroponic soilless cultivation, comes from experiments carried out within the LIFE+ DESEACROP project, which deals with the use of desalinated seawater in soilless tomato production systems in south-eastern Spain. These experiments consider both different sources of water and greenhouse tomato production technologies. A detailed description of the experimental design and set-up can be found in [31,32].
To analyze the effect of irrigation with desalinated seawater, three types of irrigation treatments with different water salinity were considered: T1, desalinated seawater (DSW) from the Carboneras SWDP, with a 0.5 dS/m electrical conductivity; T2, a mix of 83.36% DSW and 16.64% saline water, with a final electrical conductivity of 1.5 dS/m; and T3, a mix of 44.56% DSW and 55.44% saline water, with a final electrical conductivity of 3 dS/m, similar to the usual source of supply in the area (brackish groundwater).
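To give a first-order sense of how such blends can be designed, the sketch below inverts a linear electrical-conductivity (EC) mixing rule to obtain the DSW volume fraction needed to reach a target EC. Linear mixing is an idealization, and the function name and the assumed ~5.0 dS/m brackish EC are ours (back-calculated from T3's composition): the rule reproduces T3's reported share closely but deviates from T2's, a useful reminder that real blending chemistry can depart from this approximation.

```python
def dsw_fraction_for_target_ec(ec_dsw, ec_saline, ec_target):
    """Volume fraction of DSW so the blend reaches ec_target, assuming
    EC mixes linearly with volume fraction (an approximation only)."""
    if not (min(ec_dsw, ec_saline) <= ec_target <= max(ec_dsw, ec_saline)):
        raise ValueError("Target EC must lie between the two source ECs.")
    return (ec_saline - ec_target) / (ec_saline - ec_dsw)

# DSW at 0.5 dS/m (T1); a brackish EC of ~5.0 dS/m is our back-calculation
# from T3 (44.56% DSW -> 3 dS/m) under the linear assumption.
ec_dsw, ec_brackish = 0.5, 5.0
for target in (1.5, 3.0):
    share = dsw_fraction_for_target_ec(ec_dsw, ec_brackish, target)
    print(f"target {target} dS/m -> {100 * share:.1f}% DSW")
```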
Two different greenhouse tomato production systems were considered to analyze the effect of the cultivation system: H, a hydroponic soilless cultivation system with recirculation and reuse of drainage flows using coconut fiber substrate bags, the most commonly used substrate in SE Spain protected horticulture; and S, traditional soil cultivation using a sanded soil ("enarenado") without the reuse of drainage flows, which percolate to the subsoil. The "enarenado" consists of three layers of clay or gravel, manure and sand that allow cultivating over the commonly very poor soils of the area using low-quality water.
Experimental plots were set up in a greenhouse located in Retamar (Almería) in SE Spain. The greenhouse is a traditional Almerian-type plastic greenhouse without heating and with automated natural rooftop and lateral ventilation. The experiment consisted of eighteen demonstrative subplots with an area of 80.8 m², each with a plantation density of two plants per m². The experimental setting consisted of six repetitions per type of water (T1, T2 and T3), three of them for each productive system (H and S) on a random block design. Each repetition included four rows of plants with two additional rows in the borders of the repetitions to avoid possible border effects in the measurements. The experiment was carried out between September 2018 and June 2020 and included four sequential short productive cycles (4-5 months long): two autumn-winter cycles of tomato (Solanum lycopersicum L. cv. Ramyle) and two spring-summer cycles of tomato (Solanum lycopersicum L. cv. Racymo).
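For readers who wish to reproduce the layout logic, a minimal sketch of one plausible randomized block assignment follows. It reflects our reading of the design (three blocks, one replicate of each of the six water-by-system treatments per block, eighteen subplots in total); the actual randomization used in the experiment is not reported here.

```python
import random

# Sketch of a randomized block layout consistent with the design above:
# 6 treatments (3 water sources x 2 cultivation systems), 3 blocks,
# one replicate of each treatment per block. Illustrative only.
random.seed(1)  # fixed seed for a reproducible illustration
treatments = [(w, s) for w in ("T1", "T2", "T3") for s in ("H", "S")]

layout = []
for block in range(1, 4):
    order = treatments[:]
    random.shuffle(order)  # randomize treatment order within each block
    layout.extend((block, plot + 1, w, s)
                  for plot, (w, s) in enumerate(order))

for block, plot, w, s in layout:
    print(f"block {block}, subplot {plot}: water {w}, system {s}")
```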
The results presented in this study correspond to the average values for the four tomato experimental production cycles. Crop yield variability has already been analyzed by [32], which concluded that differences in tomato yield across experimental treatments (water source and cultivation system) are statistically significant. Additionally, tomato quality was not considered in the present study, as [31] did not find a statistically significant relationship between tomato quality and the experimental irrigation treatments for any of the production processes considered using data from the same experiment.
Technical-Economic Characterization of the Standard Production Processes
The first step is characterizing the crop production processes, for both production systems, in technical terms. To that end, the productive process is defined on the basis of the natural crop cycle in order to obtain the income and production costs associated with each production activity in a more direct and realistic manner [34]. Moreover, the production cycle/process for each crop would be considered in isolation, even if the farms have more than one crop or production process. Therefore, based on these methodological criteria, the production cycle must be understood as a double process, both agronomic and economic.
The technical characterization of the standard production processes requires collecting detailed information on all the different farming operations implemented along the temporal sequence of the productive cycle. To that end, the farming operations were organized sequentially, starting right after harvesting of the previous crop cycle and ending in the harvesting of the crop, and per type of farming operation (plowing, irrigation, fertilization, weed and pest control, pruning, harvesting, etc.). Each farming operation can imply the use of labor and machinery, consumption of water and energy or the use of different materials. Such information is expressed in physical units to characterize output and input use (hours, m 3 , liters, kilograms, etc.).
In this study, the standard technical characterization of both tomato production processes was based on the farming operations performed in the experimental plots, cross-validated with the relevant literature [7,35-37] and consultation with technical agricultural experts from the area. Technical data from the experimental plots used in the analysis include: (1) quantity of tomato production; (2) use of inputs such as fertilizers, pesticides and other agrochemicals, energy and irrigation water (type of input, quantity applied, hours/number of applications); (3) farm machinery used (type of machinery, crop operations, hours of use, fuel consumption, etc.); and (4) labor (crop operations, working hours/days per operation, etc.). Because of the different nature of the two productive systems considered, which involve traditional soil cultivation ("enarenado") and hydroponic substrate cultivation, technical data collection includes both variable inputs (consumed in each crop cycle) and fixed inputs (used in different crop cycles). This allows both a more accurate assessment of crop profitability for each experimental treatment and the comparison of both cultivation systems.
Next, the economic characterization of the production processes was built to define the standard cost structure and crop budget. The standardization of costs allows the reduction of biases and variabilities resulting from differences in the prices of inputs and eases the analysis of water use and the comparison of the different alternatives analyzed. The standard economic characterization was obtained from the technical characterization using input market prices and average market product prices to allow for the standardization of production costs. Therefore, only technical information was collected from experimental plots, while economic information (such as wages, cost of inputs, O&M costs of machinery or irrigation equipment, crop selling price, input prices, labor cost, etc.) was obtained from public statistics and market prices from commercial input suppliers. The definition of the cost structure follows the crop production cost assessment methodology and cost items used by the Spanish Ministry of Agriculture [35], in accordance with standards set for the European Farm Accountancy Data Network.
The standard direct cost structure includes the following cost items: raw materials (plants and seeds, fertilizers, plant protection products and herbicides, other materials); irrigation, if applicable (water, energy, maintenance and repair); machinery, if applicable (consumables, such as fuel and lubricants, maintenance and repair, external contracting); labor; other miscellaneous. Similarly, the standard indirect costs structure for each productive system was defined based on the characterization of the productive structure of the standard greenhouse farm in the area of study in terms of equipment and infrastructures (greenhouse, cropping system, irrigation systems, etc.).
Direct costs arise from the use of inputs that are used in only one crop cycle. These include tomato seeds and seedling trays, fertilizers, pesticides, irrigation water, electricity, labor, a plastic soil cover for weed control used in traditional soil cultivation, pollinators (Bombus terrestris), tutoring ropes and natural predators (Nesidiocoris tenuis) for the main tomato pests in the area. Table 1 presents the inventory of the productive inputs used in each crop cycle plus the average crop yields obtained and the environmental impacts considered in the analysis, while Table 2 details the unitary prices of variable inputs. More detail on the data used on crop yields, water use and drainage, fertilizers, manure and pesticide use, substrate materials, etc., and on the environmental impacts considered can be consulted in [32]. Direct costs do not include any machinery item, as machinery is only used in the preparation of traditional sanded soil and substrate and, therefore, is included in the cost of these operations, which, as they concern several years and productive cycles, are accounted for as indirect costs.
Indirect costs arise from the use of productive inputs that are used in more than one crop cycle. In our case, these include the following: common to both cultivation systems, the greenhouse (including both the structure and the plastic cover), the irrigation water reservoir and the shed for the irrigation system; the irrigation system itself, which is different for the hydroponic (H) and the traditional soil cultivation system (S); the sanded soil ("enarenado") in the traditional soil cultivation system (S); the substrate in the hydroponic system (H), which includes the substrate sachets and the sachet holders; and a plastic soil cover for the control of weeds in hydroponic cultivation (H).
The total cost of the equipment and infrastructures considered and their imputation per crop cycle are shown in Table 3. The cost of the traditional sanded soil includes both the cost of materials and the cost of building the "enarenado" structure. However, it does not include the cost of the manure layer, whose amortization period is different as it is replaced every three years. The cost of the manure layer of the "enarenado" includes both the cost of the manure itself and the cost of substituting it every three years. The cost of the hydroponic substrate corresponds to 4837 sachets of coconut fiber substrate per hectare with a unitary cost of 2.27 EUR/sachet. Sachets are used, on average, for two years. The cost of the base of the hydroponic substrate corresponds to the cost of the substrate holders (4837 units at a unitary price of 1.94 EUR/unit). Last, the cost of the plastic base for weed control used in hydroponic production, which is substituted every two years, corresponds to 680.64 kg of plastic with a unitary price of 2.29 EUR/kg. The standard crop budget includes both costs and revenues. To calculate farm revenue, the market crop price was computed as the average detrended yearly crop price using data from the official agricultural databases. The results from the experimental plots were integrated into the standard crop budget for each productive process. The different experimental treatments imply changes in farming operations, input use and crop yields (Table 1) that result in changes in production costs and revenues. The integration of such changes in the standard crop budget results in a separate cost structure and budget for each experimental treatment.
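As an illustration of how such multi-cycle inputs can be imputed to a single crop cycle, the sketch below applies straight-line allocation over each input's useful life, using the quantities and prices quoted above. The allocation rule and the assumption of two 4-5 month cycles per year are ours; the paper's actual imputation figures are those in Table 3.

```python
# Straight-line imputation of multi-cycle (indirect) input costs to one
# crop cycle. Quantities and prices are those quoted in the text; the
# two-cycles-per-year figure is our assumption (two 4-5 month cycles).

def cost_per_cycle(total_cost_eur, life_years, cycles_per_year=2):
    """Allocate a fixed input's total cost evenly over the crop cycles
    that fit within its useful life."""
    return total_cost_eur / (life_years * cycles_per_year)

items = [
    # (name, total cost in EUR/ha, useful life in years)
    ("substrate sachets", 4837 * 2.27, 2),     # two-year life per the text
    ("sachet holders", 4837 * 1.94, 2),        # life assumed equal to sachets
    ("plastic weed cover", 680.64 * 2.29, 2),  # replaced every two years
]

for name, total, life in items:
    print(f"{name}: {cost_per_cycle(total, life):,.0f} EUR/ha per cycle")
```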
Economic Assessment of the Experimental Treatments
The assessment of the impact of the different experimental treatments on crop profitability and input productivity for each tomato productive system is based on the calculation of several financial and economic indicators from the standard crop budgets built, indicators that are then used to compare the different experimental treatments and production systems. First, cost measures were calculated based on the data collected on the crop's production process, expressed both as average per-hectare values and as average unitary values per kilogram of tomato production (unitary production cost or break-even price). Second, crop profitability was measured through the farm profit, calculated by subtracting direct costs, asset depreciation and other indirect costs (e.g., the land rent) from farm revenue, following the methodology in [35]. Third, different relevant partial productivity measures were calculated, such as average land productivity (revenue per hectare), average water productivity (revenue per unit of irrigation water), average labor productivity (revenue per unit of labor) and average energy productivity (revenue per unit of energy consumed). Fourth, some indicators, such as labor use per input use (land, water, energy), were calculated to account for the social profitability of the resources used in tomato production.
Last, in addition to assessing the impact of the analyzed productive strategies in terms of crop profitability and partial input productivity, the environmental implications of the different production processes and experimental treatments were also analyzed. More specifically, we look at the environmental impact in terms of the balance of CO2 emissions and the eutrophication potential, which are identified, together with water use, as the most relevant environmental issues related to seawater desalination and intensive horticultural production. In this sense, partial productivity and labor use measures per unit of CO2 emissions and per unit of eutrophication potential were calculated. Both CO2 emissions and eutrophication potential for each water source and productive system are those calculated by [32] and are shown in Table 1. The balance of CO2 emissions is measured in kilograms of CO2. Eutrophication potential is measured as kilograms of equivalent phosphate anion (kg PO4³⁻ eq), and its main contributors are ammonia, nitrogen oxides, nitrate and chemical oxygen demand.
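The indicator battery described in this subsection reduces to a handful of ratios computed over the crop budget. The sketch below gathers them in one function; the function, its arguments and all the input values are our illustrative placeholders, not the study's results.

```python
# Indicator set described above, computed for one treatment's budget.
# All input values below are illustrative placeholders, not study results.

def indicators(revenue, direct, indirect, yield_kg, water_m3,
               labor_h, energy_kwh, co2_kg, po4_eq_kg):
    """Financial, partial-productivity and environmentally normalized
    indicators for a single treatment, all on a per-hectare basis."""
    return {
        "farm profit (EUR/ha)": revenue - direct - indirect,
        "break-even price (EUR/kg)": (direct + indirect) / yield_kg,
        "land productivity (EUR/ha)": revenue,
        "water productivity (EUR/m3)": revenue / water_m3,
        "labor productivity (EUR/h)": revenue / labor_h,
        "energy productivity (EUR/kWh)": revenue / energy_kwh,
        "productivity per kg CO2 (EUR/kg)": revenue / co2_kg,
        "productivity per kg PO4-eq (EUR/kg)": revenue / po4_eq_kg,
        "labor use per kg CO2 (h/kg)": labor_h / co2_kg,
    }

example = indicators(revenue=70_000, direct=45_000, indirect=12_000,
                     yield_kg=110_000, water_m3=5_000, labor_h=4_000,
                     energy_kwh=9_000, co2_kg=30_000, po4_eq_kg=150)
for name, value in example.items():
    print(f"{name}: {value:,.3f}")
```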
All the indicators calculated were used to assess the social and economic implications of the use of DSW for tomato cultivation under both traditional soil and hydroponic production, alternatives that, as commented, also have environmental implications. For example, greater productivity of water or higher use of labor per kg of CO2 emitted imply a more efficient use of scarce resources.
Results and Discussion
The average crop yields in Table 1 show a positive impact both of water quality and of the use of soilless productive systems. The reduced conductivity of water increases average crop yields by 15% in traditional sanded soil and by 46% in soilless cultivation with respect to using the worst-quality water. This is consistent with [40], who obtained a 44% yield increase in greenhouse tomato when using DSW rather than brackish groundwater. The differences in the average crop yields between T3 and T2 are significant, while the difference between T2 and T1 is smaller (but statistically significant, as shown by [32]). In the case of traditional soil production, the average crop yield is slightly greater for T2 than for T3. It must be noted that the increase in average crop yield when irrigating with less saline water is significantly greater for soilless production with the recirculation of drainage flows (H) than for traditional sanded soil cultivation (S). The greater difference in crop yields between T2 and T3 for soilless cultivation (H) with respect to soil cultivation (S) can be explained by the different nature of the two cultivation technologies. Traditional soil production systems in the area were developed to accommodate poor soils and bad quality water; therefore, the use of soilless cultivation barely increases crop yields if saline water is used. Table 4 summarizes the average cost structure calculated for each experimental treatment and presents profitability indicators, while Table 5 summarizes all the productivity and social indicators calculated. Both direct and indirect production costs are greater for soilless cultivation (H) than for traditional soil cultivation (S), as shown in Table 4, while only direct costs increase with the use of DSW (T1 > T2 > T3). Differences in direct costs are explained by the higher cost of DSW, the higher water, energy and fertilizer consumption of soilless production (H) and the harvesting cost that depends on crop yield. The soilless production system has the advantage of avoiding the percolation of nutrients to the soil, as drainage is recirculated, but at the same time consumes more water, fertilizers and energy. On the other hand, soilless production also increases crop yield. Differences in indirect costs are explained by the amortization cost of the substrate and the water recirculation system. Because of the above, unitary production costs per kilogram are lower for soilless production (H) in T1 and T2, while they are greater for more saline water (T3) because of the lower yields obtained (Table 4).
Looking at the cost of fertilization, a major concern when irrigating with DSW, it can be seen that reprogramming fertigation because of the nutritional deficiencies of DSW increases fertilization costs (T1 > T2 > T3). Fertilization costs increase when using DSW by 20% for soilless cultivation (H) and by 33% for traditional soil cultivation (S). These figures are greater than those in [28], which obtained fertilization cost increases of 10% in open-air lettuce and 6% in hydroponically grown greenhouse lettuce, but in a similar range to those evidenced in [29], which estimated fertilization cost increases of 15% in lettuce, both hydroponically and in soil, 12% in soil-grown sweet pepper and 22% in hydroponically grown sweet pepper.
With respect to profitability, Table 4 shows that per-hectare farm profit increases significantly with water quality in the case of soilless production (T1 > T2 > T3) but not in the case of traditional soil cultivation, where profit increases from T3 to T2 but not from T2 to T1. The comparison between soilless (H) and soil cultivation (S) shows that farm profit is greater for soilless cultivation for better quality water (T1 and T2) but not for lower-quality water (T3), where the opposite occurs because, with saline water, the small crop yield advantage of soilless production does not compensate for its greater production costs. Additionally, it can be seen that differences in farm profit between soilless and soil production are greater for T1 than for T2 and T3. Again, this result shows that the benefits of DSW in terms of improved water quality are greater in soilless production but also that the benefits of hydroponic production with respect to traditional sanded soil cultivation require high-quality water to be realized.
It is difficult to frame these results within previous studies (e.g., [28,29]), as these authors compare the use of DSW with fair-quality surface resources and do not consider the effect on crop yields of improving water quality. However, our results are consistent with previous findings in terms of the increased cost of fertilization and water use when using increasing proportions of DSW, both impacts being greater in hydroponic production systems.
Looking at partial productivity measures, Table 5 first shows that partial input productivities increase with water quality for soilless production (T1 > T2 > T3) but not for traditional soil cultivation, where productivities increase from T3 to T2 but not from T2 to T1. Second, comparing productive systems, it can be seen that both land and labor productivity are greater for soilless production (H > S) for all water qualities. However, differences between soilless production and traditional soil production for low-quality water (T3) are small because of the above-mentioned similar crop yields. On the contrary, both water and energy productivities are greater for traditional soil production for T2 and T3 because of the greater water and energy requirements of soilless production. Only in the case of T1 does the increase in crop yields provided by hydroponic production compensate for the associated increase in water consumption and, therefore, water productivity for H surpasses that for S. Turning to the social indicators that look at labor demand per unit of the different inputs, Table 5 shows that the improvement of water quality through the use of DSW and the use of soilless productive systems with drainage recirculation result in a slight increase in labor use per hectare, but with very small differences between the different water sources (T1, T2 and T3). On the contrary, the more intensive use of water and energy in soilless cropping systems (H) reduces labor use per m³ and per kWh with respect to soil cultivation (S). Labor use per unit of water and energy consumption increases with water quality for soilless cultivation (H) but barely changes for conventional soil cultivation (S).
Moving to the environmental impact in terms of GHG emissions, because of the greater energy, water and fertilizer consumption of soilless production (H), the associated CO2 emissions balance is greater than for traditional soil cultivation (S) (Table 1). This causes both the productivity per kg of CO2 and the demand for labor per kg of CO2 to be significantly lower for soilless production than for conventional soil cultivation (Table 5). Likewise, the CO2 emissions balance increases with the use of DSW for both productive systems (Table 1). However, the productivity per kg of CO2 increases with water quality in soilless production (H), despite the increasing use of more energy-demanding and CO2-emitting DSW, because of the increases in crop yield that the use of better-quality water allows (Table 5). Regarding labor demand per kg of CO2, it is barely affected by the use of DSW. In the case of traditional soil production (S), both the productivity per kg of CO2 and labor use per kg of CO2 sharply decrease with water quality because the resulting increases in crop yields and labor requirements do not compensate for the increase in CO2 emissions that the increasing use of DSW causes.
Last, the reduced lixiviation and associated eutrophication potential that soilless cropping systems allow for (Table 1) result in significantly higher productivities and labor use per kg of equivalent phosphate anion with respect to traditional soil production (Table 5). The impact of the use of DSW on the eutrophication potential is relatively limited (caused by the higher fertilization needs) and, because of its positive effect on crop yields, the productivity per kg of equivalent phosphate anion slightly increases with the use of DSW, i.e., with water quality, for both soil and soilless production. However, the differences between T2 and T3 are not significant.
Conclusions
The Spanish national water authorities have made a clear commitment to desalination as a reliable source of resources for the continuity of irrigation in south-eastern Spain, where conventional water resources are already compromised. This commitment is materialized in investments to increase the production of DSW and interconnect infrastructures for its distribution. In addition to being a new source of water supply, DSW may, in some areas, reduce water salinity and increase crop yields. However, its higher cost and the need for more specialized and expensive fertilization to cover nutritional deficiencies increase production costs and thus impact farm profitability. In that sense, this study has assessed the economic impact of the use of desalinated seawater (DSW) for tomato production in soilless greenhouse cropping systems of the Almería province, based on the results of the LIFE DESEACROP Project experimental activities, and comparing the use of different water sources in both traditional soil and soilless protected agricultural production systems.
Our results first show that the use of DSW increases tomato production costs but also crop yields, as water salinity is reduced, resulting in higher crop profitability. Using only DSW for tomato production increases fertilization costs by 20% in soilless systems and by 34% in soil cultivation, and water costs by 30% in soilless systems and by 48% in traditional soil cultivation. This results in an increase in production costs of 5% in soilless cultivation and 3% in soil cultivation, increases that are smaller when DSW is blended with saline groundwater. Despite this, the use of DSW in tomato production in the area is profitable. Additionally, all input productivities increase with the use of better-quality water.
Secondly, regarding the effect of the cultivation system, soilless cropping systems are more intensive in terms of input use, especially water, energy and fertilizers, which results in higher production costs that, in this specific case, are compensated by higher crop yields and higher crop profitability. However, crop profitability when more saline water is used is greater for the traditional soil production system. Additionally, the use of soilless production systems increases land and labor productivity with respect to traditional soil systems but results in lower productivity of water and energy.
In sum, both the use of DSW and soilless production systems would increase the profitability of protected tomato production in SE Spain. However, the materialization of the potential benefits of soilless production requires the use of better-quality water resources. In the study area, where available natural water resources are highly saline, improving irrigation water quality implies using DSW. Otherwise, the traditional soil production system, which is better adapted to poor soils and low-quality water, would be more profitable.
However, from the societal perspective, the advantages of irrigating with DSW are more ambiguous. The use of DSW improves input productivity, and thus resource use efficiency and the demand for labor, but significantly increases CO2 emissions. Despite this, the productivity of CO2 emissions increases with water quality for soilless production, whereas it decreases for traditional soil production, as the increase in tomato production does not compensate for the increase in CO2 emissions. The use of DSW also increases nutrient lixiviation due to the higher fertilization needs. However, because of its positive effect on crop yields, productivity per kg of equivalent phosphate anion slightly increases with the use of DSW for both productive systems.
Our results suggest that the benefits, both private and social, of using DSW for irrigation are linked to an improvement in the quality of water resources that might improve crop yields. The deficient quality of water resources in the area of study results in significant increases in crop yield when better quality water (i.e., DSW) is used. In other areas without this positive effect, the use of DSW would definitively increase production costs; the cheaper the conventional water resources, the greater the increase. In general, only high-value, intensive fruit and vegetable crops could withstand the increased costs of irrigating with DSW; for other crops, these costs would be unbearable, which makes integrated management of DSW with other sources of supply necessary in order to maintain their economic viability.
A similar conflict arises with the use of hydroponic systems. If the available conventional water resources present a high electrical conductivity, soilless systems seem to take greater advantage of the water quality improvement that DSW provides than conventional soil systems and, in any case, soilless systems increase farm profitability and labor demand. In environmental terms, they drastically reduce lixiviation, and thus soil and water pollution, but at the cost of increasing the use of very limiting productive resources (water and energy) and CO2 emissions.
To finish, we highlight that this study has presented results based, like previous studies in other areas and crops, on experimental activities on the use of DSW for irrigation. As a relatively novel topic, experimental studies on the issue are few but increasing. However, we think that, in order to allow for more complete economic assessments, there is a need for more research on the modeling of root water uptake and plant transpiration and on the optimization of water consumption when irrigating with DSW.
Viral Vector Induction of CREB Expression in the Periaqueductal Gray Induces a Predator Stress-Like Pattern of Changes in pCREB Expression, Neuroplasticity, and Anxiety in Rodents
Predator stress is lastingly anxiogenic. Phosphorylation of CREB to pCREB (phosphorylated cyclic AMP response element binding protein) is increased after predator stress in fear circuitry, including in the right lateral column of the PAG (periaqueductal gray). Predator stress also potentiates right but not left CeA-PAG (central amygdala-PAG) transmission up to 12 days after stress. The present study explored the functional significance of pCREB changes by increasing CREB expression in non-predator stressed rats through viral vectoring, and assessing the behavioral, electrophysiological and pCREB expression changes in comparison with handled and predator stressed controls. Increasing CREB expression in right PAG was anxiogenic in the elevated plus maze, had no effect on risk assessment, and increased acoustic startle response while delaying startle habituation. Potentiation of the right but not left CeA-PAG pathway was also observed. pCREB expression was slightly elevated in the right lateral column of the PAG, while the dorsal and ventral columns were not affected. The findings of this study suggest that by increasing CREB and pCREB in the right lateral PAG, it is possible to produce rats that exhibit behavioral, brain, and molecular changes that closely resemble those seen in predator stressed rats.
Introduction
Study of the neurobiology of long-lasting changes in affect occurring after stressful events is of interest, an interest heightened by the fact that fearful events may precipitate affective psychopathologies [1,2]. In extreme cases, a single aversive experience may induce posttraumatic stress disorder (PTSD) [3,4]. Animal models are useful to enhance understanding of the impact of stress on brain and behavior, permitting simulation of a human condition in a controlled setting allowing study of disorder development. Conditioned fear paradigms, behavior in unfamiliar situations that are fear or anxiety provoking, and more recently, predator stress, are all models used to understand the neurobiology of the impact of fearful events on affect.
Predator stress in our hands involves the unprotected exposure of a rat to a cat [5]. Predator stress may model aspects of PTSD for several reasons. First, predator stress has ecological validity due to the natural threat posed by the predatory nature of the stressor. Second, the duration of anxiety-like effects in rats after predator stress, as a ratio of life span, is comparable to the DSM-IV duration of psychopathology required for a diagnosis of chronic PTSD in humans. Third, predator stress has neurobiological face validity in that right amygdala and hippocampal circuitry are implicated in behavioral changes produced by predator stress, and these areas are consistent with brain areas thought to be involved in PTSD [6][7][8][9]. For example, brain imaging implicates hyperexcitability of the right amygdala in response to script-driven trauma reminders in the etiology of PTSD [10][11][12][13][14]. Fourth, parallel path analytic studies using data from Vietnam veterans suffering from PTSD and predator stressed rodents find that in both humans and rodents, features of the stressor predict the level of anxiety [6]. For example, in predator stressed animals, the more cat bites received, the higher the level of anxiety measured a week later. Finally, similar lasting changes in startle and habituation of startle are seen in both predator stressed rats and humans with PTSD [6,[15][16][17][18].
Predator stress is fear provoking and stressful [19][20][21][22]. Moreover, cat exposure produces long-lasting increases in rat anxiety-like behavior (ALB) [5,23], with some behavioral changes lasting three weeks or longer [5,6,24]. Behavioral effects of predator stress have been evaluated in a number of tests including hole board, elevated plus maze (EPM), unconditioned acoustic startle, light/dark box, and social interaction. Anxiogenic effects of predator stress are NMDA receptor-dependent. Systemic administration of both competitive and non-competitive NMDA receptor antagonists 30 minutes before, but not 30 minutes after, predator stress prevents lasting changes in ALB [16,25]. Moreover, local NMDA receptor block in the amygdala prevents predator stress-induced increases in ALB [26].
In addition to the behavioral changes, amygdala efferent and afferent neural transmission is altered after predator stress. Specifically, predator stress causes a long-lasting potentiation in neural transmission from the right amygdala (central nucleus-CeA) to the right lateral column of the periaqueductal gray (PAG), and from the hippocampus via the right ventral angular bundle (VAB) to the right basolateral amygdala (BLA) [9,23,27]. Moreover, potentiation in these pathways is NMDA receptor-dependent [7]. In addition, NMDA receptor antagonists produce anxiolytic-like effects when microinjected into the dorsolateral PAG [28,29]. The PAG is also implicated in rodent ALB [30], and is activated by predator stress [31]. Together, these data suggest NMDA receptor-dependent long-term potentiation (LTP)-like change in amygdala afferent and efferent transmission following predator stress contribute to the lasting anxiogenic effects of cat exposure [7,9,16]. In support of this conclusion are the findings that amygdala afferent and efferent LTP-like changes are highly predictive of severity of change in ALB following predator stress [9,23,27].
Predator stress induced changes in ALB and amygdala neural transmission are accompanied by changes in phosphorylated cAMP response element binding protein (pCREB). Specifically, pCREB-like-immunoreactivity (lir) is elevated in the basomedial (BM), BLA, CeA, and lateral (La) amygdala after predator stress compared to control rats [32]. This is consistent with the elevation of pCREB-lir in the amygdala after forced swimming stress [33,34], fear conditioning in mice [35], retrieval of a cued-fear memory [36], and electric shock [37]. In addition to the amygdala, predator stress increases pCREB-lir in the right lateral column of the PAG (lPAG) [23].
As mentioned, NMDA receptor antagonism prior to predator stress blocks increases in ALB and potentiation of amygdala afferent and efferent neural transmission. Since phosphorylation of CREB may be regulated by NMDA receptors [38,39] and pCREB-lir is increased after predator stress [23,32], the question of whether NMDA receptor antagonism can block predator stress-induced enhancement of pCREB-lir was recently tested. Blocking NMDA receptors with the competitive blocker CPP, 30 minutes prior to predator stress, prevented stress induced increases in pCREB expression in the amygdala and right lPAG [40]. Of importance, the same dosing regimen also blocks predator stress effects on affect and amygdala afferent and efferent transmission [7,16,25,26].
Together these findings provide compelling evidence that predator stress induced increase in pCREB is an important contributor to the changes in brain and behavior of predator stressed rodents. The purpose of the present study was to directly manipulate CREB and pCREB expression to confirm this notion.
Local changes in gene expression in the brain can be achieved with viral vectoring as a method of delivering recombinant genes directly into neurons [41]. There are a variety of viral vectors available, but several characteristics of the herpes simplex virus (HSV) make it an ideal candidate for this study. The non-toxic, replication-defective HSV vector is capable of infecting most mammalian differentiated cell types, accepts very large inserts, and has high efficiency in infecting neurons, being naturally neurotropic [41,42]. One of the earliest studies to utilize this method and apply it to rodent anxiety tests found that HSV-vectored expression of CREB in the BLA increased behavioral measures of anxiety in both the open field test and the EPM, and enhanced cued fear conditioning [43].
The present study was designed to test the functional significance of pCREB changes within the right lateral column of the PAG. To do this we genetically induced increased expression of CREB in the right lPAG with HSV vectors and determined the effects of these manipulations on behavior and amygdala efferent transmission (CeA-lPAG). We transfected the neurons of the right PAG in an area where pCREB levels and CeA-PAG transmission are elevated after predator stress (see Adamec et al. [8,23]).
Ethical Approval.
The procedures involving animals reported in this paper were reviewed by the Institutional Animal Care Committee of Memorial University and found to be in compliance with the guidelines of the Canadian Council on Animal Care. Every effort was made to minimize pain and stress to the test subjects while using as few animals as possible.
Animals.
Subjects were male hooded Long Evans rats (Charles River Canada). Rats were housed singly in clear polycarbonate cages measuring 46 cm × 24 cm × 20 cm for one week prior to any testing. During this week, rats were acclimatized to their cage and handled. Handling involved picking up the rat and gently holding it on the forearm. Minimal pressure was used if the rat attempted to escape, and grip was released as soon as the rat became still. Rats were handled in the same room as their home cage for one minute each day during the week-long adaptation period. Rats were given food and water ad lib and were exposed to a 12-hour light/dark cycle with lights on at seven a.m. The rats weighed approximately 200 g on arrival and between 230 and 280 g on the day of testing.
Groups.
After lab adaptation and handling, the 12 subjects were randomly assigned to one of three groups of four. One group served as a handled control (Handled GFP) while another was predator stressed (Predator Stressed GFP). Both these groups were injected in the right lateral PAG (described in what follows) with the HSV-GFP vector before further treatment. This vector consisted of an HSV virus carrying a green fluorescent protein gene (GFP), a reporter used to visualize vector placement and virus induced gene expression. This injection also served to control for any effects that GFP per se might have. The third group was also handled (Handled CREB) and before further treatment received an injection in the right lateral PAG with an HSV-CREB vector. This vector included genes for both CREB and GFP. The GFP served as a reporter of gene expression, and the CREB gene elevated CREB levels in the target area.
It is recognized that a group size of four is small for behavioral studies of this nature. The small numbers were necessitated by the availability of the virus. The implications of the small group size are addressed further in the discussion of statistical power below.
Surgical Microinfusion of Viral Vectors.

Virus injections were done in the lateral column of the right PAG, where pCREB increases in predator stressed rats have been observed [23,32,40]. The injections involved lowering a sterile 25 gauge needle attached to a microliter syringe into the brain using a stereotaxically mounted microliter syringe holder. The coordinates for the microinfusion, according to the atlas of Paxinos and Watson [44], were 6.3 mm posterior to bregma, 0.5 mm lateral from the midline, and 5.5 mm below the skull. The injection of 0.5 μL (at a concentration of 4.0 × 10⁷ infectious units/mL, supplied by the University of Texas Southwestern Medical School) was given at a rate of 0.5 μL per five minutes, with the needle left in place for five minutes post injection. This dose and rate were derived from the experience of one of us (Berton) with the vector. Moreover, in pilot studies with HSV-GFP, a 0.5 μL injection at this rate produced GFP expression localized to the right lateral column of the PAG over an AP plane range of 0.7 mm at three days post injection, the time of maximal protein expression induced by this vector [43,45].
Injections were performed under chloral hydrate anesthesia (400 mg/kg, IP) using aseptic technique. Preanaesthetic doses of atropine were given (1.2 mg/kg). Local anesthesia of wound edges was achieved with marcaine with epinephrine (2%) infusion and supplemented as needed. Holes in the skull were closed with sterile gel foam and sealed with sterile bone wax, and scalp wounds were sutured. Rats were kept warm under a lamp after surgery until they began to walk and groom, at which time they were returned to their home cage. Surgery took approximately one hour for each subject.
Cat Exposure and Handling Procedures.

Three days after virus (HSV) injection, when viral expression is peaking [43,45], rats were either handled or predator stressed. On the day of testing, predator stressed rats were exposed to the same adult cat as described elsewhere [5]. The cat exposure lasted 10 minutes and was videotaped to capture the activities of both the cat and the rat. The cat generally observed the rat at a distance with intermittent approaches and sniffing. On occasion, the cat would mildly attack the rat, but no injuries were ever observed. At the end of the test, the rat was placed back into its home cage and left undisturbed. Rats in the other two groups, whose treatment included only handling, did not come into contact with the cat, cat odors, or rats that had previously been exposed to cats. On the day of testing, rats in these groups were weighed and handled as previously described for 1 min. After this handling period the rats were returned to their home cage and left undisturbed. Home cages of handled and predator stressed rats were kept in separate rooms.
Behavioral Testing and Behavioral Measures.

Four days after HSV injection and one day after treatment, ALB was measured in the hole board, EPM, and startle tests. The hole board test took place just before the EPM as an independent test of activity and exploratory tendency [46].
Hole Board and Elevated Plus Maze Testing and Measures.
The hole board and EPM were constructed and used as described elsewhere [5]. The behavior of the rats in the hole board and EPM was videotaped remotely for later analysis. Rats were first placed in the hole board for 5 minutes. At the end of this time period they were transferred by gloved hand to the EPM for a further 5 minutes of testing. At the end of this testing period the rats were returned to their home cages.
Several measures of activity and exploration were taken while the rat was in the hole board. They included frequency of rearing (activity) and head dips, a measure of exploratory tendency scored when the rat placed its snout or head into a hole in the floor. Fecal boli deposited were also counted. A measure of thigmotaxis was time spent near the wall of the hole board. This measure was quantified as the rat having all four feet in the space between the wall and the holes for head dipping. Time spent in the center of the hole board was also recorded. A rat was considered to be in the middle when all four feet were in the center space defined by a square drawn through the four holes in the floor of the box.
In the EPM, exploration and activity were scored as the number of entries into the closed arms of the maze (closed arm entries). An entry was only recorded when the rat had all four feet inside one arm of the maze. Other measures of exploration included head dips, scored when a rat placed its snout or head over the side of an open arm, and rearing as a measure of activity. These behaviors were divided into three types: protected (rat had all four feet in closed arm for rearing or hindquarters in the closed arm for head dips), center (rat has all four feet in center of maze), and unprotected (rat has all four feet in an open arm). Time spent grooming was also recorded using the same three subdivisions.
Measures of anxiety-like behavior were also taken. Two measures assessed open arm exploration: ratio time and ratio entry. Ratio time was the time spent in the open arms of the maze divided by the total time spent in any arm of the maze. The smaller the ratio, the less the open arm exploration, indicating a more "anxious" rat. Ratio entry was the number of entries into the open arms of the maze divided by the total entries into any arm of the maze. Again, the smaller the ratio, the less the open arm exploration, the more "anxious" the rat.
Adamec and Shallow [5] were the first to adapt the concept of risk assessment to the EPM. This measure was scored when the rat poked its head and forepaws into an open arm of the maze while keeping its hindquarters in a closed arm. The frequency of risk assessment was measured and converted to relative risk assessment by dividing these frequencies by the time spent in the closed arms. Fecal boli deposited in the EPM were also counted.
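For concreteness, a minimal sketch (in Python; the variable names and example numbers are illustrative assumptions, since the original scoring was done from videotape) of how these EPM measures can be computed:

```python
def epm_anxiety_measures(open_arm_time, closed_arm_time,
                         open_entries, closed_entries,
                         risk_assessment_count):
    """Compute EPM anxiety measures as defined in the text.

    Times are in seconds; entries and risk assessments are counts.
    Smaller ratio_time / ratio_entry values indicate a more
    'anxious' animal (less open arm exploration).
    """
    total_arm_time = open_arm_time + closed_arm_time
    total_entries = open_entries + closed_entries

    ratio_time = open_arm_time / total_arm_time if total_arm_time else 0.0
    ratio_entry = open_entries / total_entries if total_entries else 0.0
    # Relative risk assessment: frequency divided by time in closed arms.
    relative_risk = (risk_assessment_count / closed_arm_time
                     if closed_arm_time else 0.0)
    return ratio_time, ratio_entry, relative_risk

# Example: a rat spending 30 s in open arms and 210 s in closed arms,
# with 2 open and 8 closed arm entries and 5 risk assessments.
print(epm_anxiety_measures(30.0, 210.0, 2, 8, 5))
# -> (0.125, 0.2, 0.0238...)
```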
Startle Testing and Measures.
Startle testing was conducted on the same day as the hole board and EPM. The startle response was determined using a standard startle chamber (San Diego Instruments). The apparatus was fitted with a 20.32 cm Plexiglass cylinder used to hold the animal during the test, as well as a speaker for producing the sound bursts. A piezoelectric transducer positioned below the cylinder detected motion of the animal in the cylinder. The output from this transducer was fed to a computer for sampling.
Prior to startle testing, animals were adapted to the apparatus for 10 minutes with a background white noise level of 60 dB. Rats were then subjected to 40 trials (one trial every 30 seconds) of 50-millisecond bursts of 120 dB white noise rising out of a background of 60 dB. Half the trials were delivered while the chamber was dark, while the other half were delivered with an accompanying light (light intensity of 28 foot candles or 300 lux). The light trials were randomly interspersed among the dark trials. During the light trials, the lights would come on 2.95 seconds prior to the sound burst and remain on for the duration of the sound burst, terminating at sound offset (lights on for a total of 3 seconds). The chamber was in darkness between trials. A computer attached to the transducer recorded 40 samples of output. Samples included a 20-millisecond baseline and a 250-millisecond sample after onset of the noise burst. Average transducer output just prior to the noise burst was saved as a baseline (V_start). The computer then found the maximal startle amplitude within each of the samples (V_max). Both these measures were saved for later analysis. Peak startle amplitude was expressed as V_max − V_start for analysis. At the end of the startle session the rats were returned to their home cages. The apparatus was washed between rats.
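As a concrete illustration, a minimal sketch (Python with NumPy; the sampling rate and synthetic trial data are assumptions, not taken from the original acquisition software) of the V_max − V_start computation described above:

```python
import numpy as np

def peak_startle_amplitude(samples, baseline_ms=20, samples_per_ms=1.0):
    """Peak startle amplitude for one trial, as described in the text.

    `samples` is the digitized transducer output covering a 20 ms
    pre-stimulus baseline followed by a 250 ms post-onset window.
    V_start is the mean output just before the noise burst; V_max is
    the maximal output after onset. Returns V_max - V_start.
    """
    n_baseline = int(baseline_ms * samples_per_ms)  # baseline window length
    v_start = np.mean(samples[:n_baseline])
    v_max = np.max(samples[n_baseline:])
    return v_max - v_start

# Example with synthetic data: flat baseline plus a decaying startle transient.
trial = np.concatenate([np.full(20, 0.1),
                        0.1 + np.exp(-np.arange(250) / 40.0)])
print(peak_startle_amplitude(trial))  # ~1.0
```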
Electrophysiological Recording Procedure.

Five days after HSV injection and two days after treatment, all rats were anaesthetized with urethane (1.5 g/kg) given in three divided doses separated by 10 minutes. The rats were then placed in a stereotaxic instrument and injected under the scalp with marcaine (2% epinephrine) to locally anesthetize and reduce bleeding. The skull was exposed and holes drilled to permit stereotaxically guided insertion of stimulating electrodes into the central amygdala (CeA). Recording microelectrodes were placed into the PAG. Stimulating and recording electrode pairs were placed in both hemispheres; average electrode placements appear in Table 1. Rats were placed in a shielded box for stimulating and recording experiments. Temperature was maintained between 36-37 °C by a rectal thermistor connected to a digital thermometer and feedback control to a DC heating pad (Frederick Haer) under the rat. CeA was stimulated using a single biphasic constant current pulse (width 0.2 milliseconds) at one pulse per 5 seconds over a range of intensities (0.025-2.5 mA), 10 stimulations per intensity. Evoked potentials were sampled by computer and later analyzed from data stored on computer using DataWave software (see Adamec et al. [27] for further method details). At the end of recording, rats were overdosed with chloral hydrate (1000 mg/mL, 1 mL, IP) and perfused with cold phosphate buffered saline and 4% paraformaldehyde. Brains were extracted, sunk in 20% sucrose overnight at 4 °C, and then stored at −70 °C. Subsequently brains were examined histologically for electrode locations, under green fluorescence microscopy to visualize GFP production, and immunohistochemically to study pCREB expression.
Electrophysiology Analysis Methods.
The main measure of the size of the evoked potential was peak height (PH). The peak height at each intensity was taken by computer from field potential averages as illustrated in Figure 4. The raw PH at each intensity was expressed as a ratio of PH observed at threshold (see [23,27]).
Immunocytochemistry.
Thick frozen coronal sections (40 μm) were cut from 5.8 to 6.8 mm posterior to bregma [44] to capture the same areas of the PAG studied in past predator stress experiments, and to capture the targets of virus injection and electrophysiological recording. Anterior-posterior (AP) plane location was determined by counting sections from the decussation of the anterior commissure (AP −0.26 from bregma, [44]) to the desired AP plane. This counting of sections allowed for an estimation of the AP plane position to the nearest 40 μm during cutting. Every second section was saved, which provided 12 sections from each brain for processing. To ensure even distribution, brains were cut and processed in sets of three (one brain from each group).
After sectioning, one section from each group was placed in a plastic tube with nylon covering at one end and then immersed in a plastic well containing phosphate-buffered saline (PBS). Each tube contained three sections, which were processed at the same time. The tubes were removed, blotted, submerged in a solution of normal goat serum and Triton X-100, and placed on a rocker for 1 hour. The sections were washed with PBS, blotted, and incubated at 4 °C for either 24 or 48 hours (reused antibody) in the primary phospho-CREB antibody (Upstate/Chemicon). Consistent with past work [23,32], a dilution of 1/500 for the primary antibody was used. After incubation, sections were washed again with PBS, blotted, and then immersed in the secondary biotinylated antibody (goat anti-rabbit) for 1 hour. Sections were washed, blotted, and placed in the ABC (Vector Stain kit) solution for 1 hour on a rocker. Finally, sections were washed with PBS for a third time, blotted, and submerged in diaminobenzidine (DAB) solution for 5-25 minutes, monitoring for staining. Sections were then washed with PBS again before being mounted onto slides, dehydrated, and coverslipped.
Image Analysis (Densitometry).
Stained sections were analyzed blind to group assignment using image analysis software (MOKA software, Jandel). Hemispheres were measured separately. The PAG was divided into ventral, dorsal, and lateral areas to reflect the functional columnar organization described by Bandler and Depaulis [47]. This was done using the aqueduct of Sylvius as a guide. Horizontal lines were drawn from the top of the aqueduct to the outside edge of the PAG and from the bottom of the aqueduct to the outside edge of the PAG for both left and right hemispheres (see also [23]). The area above the upper line was considered dorsal PAG, the middle area lateral PAG, and the area below the lower line ventral PAG.
Raw pCREB-lir densitometry data of each column in each hemisphere were converted to optical density (OD) units relative to the whole section. This was done by converting the raw PAG and raw whole section densitometry data to OD units via a calibrated step wedge. An image of the calibrated step wedge was taken at the same time as section images for each rat. Exponential fits of raw transmission values (x) to calibrated OD values were done by computer (Table Curve program, Jandel). All fits were good (all df-adjusted r² > .9, P < .01). The exponential was then used to interpolate and convert raw transmission values to OD units. Analysis was performed on the ratio of average OD values in particular PAG areas to average OD values for the entire section.
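To make the calibration step concrete, a minimal sketch (Python with SciPy; the wedge values and the exact model parameterization are illustrative assumptions, not the original Table Curve fit) of converting raw transmission to OD and computing the relative OD ratio:

```python
import numpy as np
from scipy.optimize import curve_fit

# Calibrated step wedge: raw transmission values (x) and their known
# optical densities. These numbers are illustrative; the real
# calibration comes from the wedge imaged alongside each rat's sections.
wedge_transmission = np.array([250.0, 200.0, 150.0, 100.0, 60.0, 30.0])
wedge_od = np.array([0.05, 0.15, 0.30, 0.55, 0.90, 1.40])

def od_model(x, a, b, c):
    """Exponential fit of raw transmission to calibrated OD."""
    return a * np.exp(-b * x) + c

params, _ = curve_fit(od_model, wedge_transmission, wedge_od,
                      p0=(2.0, 0.01, 0.0))

def to_od(raw_transmission):
    """Interpolate raw transmission values into OD units via the fit."""
    return od_model(np.asarray(raw_transmission, dtype=float), *params)

# Relative OD: mean OD of a PAG column divided by mean OD of the section.
column_raw = np.array([120.0, 110.0, 130.0])   # hypothetical pixel means
section_raw = np.array([180.0, 170.0, 190.0])
relative_od = to_od(column_raw).mean() / to_od(section_raw).mean()
print(round(relative_od, 3))
```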
Anxiety-Like Behavior.

Predator stress reduced open arm exploration (increased anxiety) the most relative to controls (Handled GFP). Injection of HSV-CREB in the right PAG alone was also anxiogenic in the EPM, reducing ratio time and entries in Handled CREB rats to a level between Handled GFP and Predator Stressed GFP rats (Figure 1; Tukey-Kramer test, P < .05).
With regard to ratio frequency of risk assessment, though there was no group effect (F(2,9) = 2.58, P < .13), a planned t-test contrasting the predator stressed group with the two handled groups combined (which did not differ) revealed that predator stress reduced risk assessment relative to both Handled groups (Figure 2; t(9) = 2.19, P < .029, one-tailed). This finding of reduced risk assessment following predator stress is consistent with many previous studies.
Exploration and Activity in EPM and Hole Board.

There were no differences between groups in closed arm entries (activity) in the EPM (Figure 1(a)). Groups also did not differ in rears (activity) or head dips (exploration) in the hole board (Figure 1(b)). These data indicate that group differences in open arm exploration seen in the EPM are not the result of changes in activity or exploration.
Acoustic Startle Response.

Startle in the light and dark trials did not differ, so data were combined across light and dark trials for analysis.
Startle Amplitude.
Between groups, startle data were not normally distributed (Omnibus Normality Test = 148.07, P < .0001). Therefore, Kruskal-Wallis one-way nonparametric ANOVA on medians of peak startle amplitude over trials was used. Groups differed (χ²(2) = 119.90, P < .001). Planned comparisons (Kruskal-Wallis multiple comparison z-test, z > 3.98, P < .01) revealed that predator stress increased startle over both handled groups (Figure 3(b), left panel). Nevertheless, startle amplitude of Handled CREB rats was also higher than that of Handled GFP rats, but lower than that of predator stressed animals (Figure 3(b), left panel).
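For illustration, a minimal sketch (Python with SciPy; the synthetic amplitudes are placeholders, not the recorded data) of this testing sequence — an omnibus normality check followed by a Kruskal-Wallis test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical peak startle amplitudes (arbitrary units) per group.
handled_gfp = rng.gamma(2.0, 50.0, size=160)
handled_creb = rng.gamma(2.0, 80.0, size=160)
predator_stressed = rng.gamma(2.0, 120.0, size=160)

# Omnibus (D'Agostino-Pearson) normality test on the pooled data.
stat, p = stats.normaltest(np.concatenate([handled_gfp, handled_creb,
                                           predator_stressed]))
print(f"normality: stat={stat:.2f}, p={p:.4g}")

# Non-normal data -> Kruskal-Wallis one-way nonparametric ANOVA.
h, p_kw = stats.kruskal(handled_gfp, handled_creb, predator_stressed)
print(f"Kruskal-Wallis: H={h:.2f}, p={p_kw:.4g}")
```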
Habituation of Acoustic Startle Response.
Predator stress prolongs habituation to startle [6,15,16,48]. Therefore, habituation to startle in the three groups was determined and compared. Exponential decline functions of the form

y = y_0 + a · e^(−t/τ) (1)

were fit to the peak startle amplitude mean data from each group across 20 trials (combined light and dark startle trials) using Jandel Table Curve v4.0. In (1), y and y_0 are peak startle amplitude, a is a constant, e is the base of the natural logarithm, t is the trial number, and τ is the trial constant, or the number of trials to decline to 37% of the maximal peak startle amplitude. To improve the fit, an FFT smoothing function provided by the program (20% FFT smooth) was applied. Care was taken to ensure the smoothing did not distort the data (Figure 3(a)). All fits were good (degrees of freedom adjusted r² > .84; all fits F(2,17) > 58.3, P < .001; t(38) ≥ 6.18, P < .01 for all t-tests of differences of τ from zero). The estimate of τ included a standard error of estimate. These standard errors were used to perform planned two-tailed t-tests between groups on the τ values (Figure 3(b), right panel). The pattern of the findings from this analysis was surprising. Both the Handled CREB and predator stressed groups took significantly longer to habituate than Handled GFP controls. While this result was expected for the predator stressed group given previous work, the fact that the Handled CREB group took longer to habituate than the predator stressed group was uncharacteristic of the amplitude findings.
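A minimal sketch (Python with SciPy, rather than the Table Curve program used in the study; the amplitude series and the second group's τ are synthetic) of fitting (1) and comparing τ estimates via their standard errors:

```python
import numpy as np
from scipy import stats
from scipy.optimize import curve_fit

def decline(t, y0, a, tau):
    """Exponential decline in (1): y = y0 + a * exp(-t / tau)."""
    return y0 + a * np.exp(-t / tau)

trials = np.arange(1, 21)
# Hypothetical group-mean peak startle amplitudes over 20 trials.
mean_amplitude = (40.0 + 160.0 * np.exp(-trials / 5.0)
                  + np.random.default_rng(1).normal(0, 4, 20))

popt, pcov = curve_fit(decline, trials, mean_amplitude,
                       p0=(mean_amplitude.min(),
                           np.ptp(mean_amplitude), 3.0))
tau, tau_se = popt[2], np.sqrt(pcov[2, 2])
print(f"tau = {tau:.2f} +/- {tau_se:.2f} trials")

def tau_t_test(tau1, se1, tau2, se2, df):
    """Two-tailed t-test between two tau estimates using their SEs."""
    t = (tau1 - tau2) / np.sqrt(se1**2 + se2**2)
    return t, 2 * stats.t.sf(abs(t), df)

# Compare against a hypothetical second group's tau estimate.
print(tau_t_test(tau, tau_se, 3.5, 0.4, df=38))
```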
Figure 4: In the lower right is a computer average of a CeA-PAG evoked potential illustrating how peak height (PH) was measured by computer. Plotted in the graphs are means ± SEM of PH of CeA-PAG evoked potentials expressed as a ratio of threshold PH versus intensity of stimulation in μC/pulse (calculated as intensity in μA times pulse width in microseconds to take pulse width into the intensity measure). Means are plotted separately by group and, within a group, separately by hemisphere.

Electrophysiology.
A three-way ANOVA was done on ratio PH of the CeA-PAG evoked potential data. The factors examined were Group (Handled GFP control, predator stressed, and Handled CREB), Hemisphere (right and left), and Intensity of stimulation. There was a significant Group × Hemisphere × Intensity interaction (F(12,54) = 2.24, P < .04). The interaction is displayed in Figure 4. Intensity of stimulation was expressed in μC (microcoulombs) per pulse. All groups were stimulated using the same intensity series, so group differences cannot be attributed to differences in the intensity of stimulation.
Planned comparison t-test mean contrasts were used to examine the interaction by comparing the three groups at each intensity in each hemisphere. All groups showed the same ratio PH values at intensity 1 in both hemispheres. Moreover, ratio PH values in Handled GFP controls were equal in both hemispheres and unchanged over intensity of CeA stimulation (Figure 4(c)). Similarly, left hemisphere ratio PH of Handled CREB rats did not change over intensity and did not differ from ratio PH in the right or left hemisphere of Handled GFP controls. In contrast, right hemisphere ratio PH of Handled CREB rats rose over intensity (t(54) = 5.61; Figure 4(b), top right panel). Therefore, CREB injection per se selectively potentiated right hemisphere CeA-PAG evoked potentials relative to the left hemisphere and relative to Handled GFP controls, which did not differ from Handled CREB rats in the left hemisphere. As might be expected from previous work, predator stress potentiated right and left hemisphere CeA-PAG evoked potentials (Figure 4(a), upper left panel). Ratio PH in left and right hemispheres rose over intensity (all t(54) > 4.68, P < .01, comparing intensities 1 and 10). However, the right hemisphere response exceeded the left at intensities 4-9 (all t(54) > 2.09, P < .05). This suggests that left hemisphere potentiation in predator stressed rats was fading relative to right hemisphere potentiation two days after treatment. Nevertheless, predator stress potentiated left CeA-PAG ratio PH over that seen in the left hemisphere of Handled CREB rats or in the left or right hemispheres of Handled GFP control rats, in that left ratio PH of predator stressed rats exceeded left ratio PH of Handled CREB rats (and left and right ratio PH of Handled GFP control rats) at intensities 3, 5, and 9-10 (all t(54) > 2.04, P < .05).
Comparing right hemisphere ratio PH of Handled CREB and predator stressed rats suggests nearly equal potentiation. Groups did not differ at intensities 1-2 and 4-8, but Handled CREB ratio PH did exceed that of predator stressed rats at intensities 3, 9, and 10 (all t(54) > 2.15, P < .05). Therefore, right PAG CREB injection per se is as effective as, or even more effective than, predator stress in potentiating right CeA-PAG evoked potentials.
Histological Verification of Electrode and Cannula Placements.
Tips of stimulating and recording electrodes were visualized microscopically from tissue sections and plotted onto rat atlas sections [44]. Rats from all three groups had correctly placed electrodes, allowing the use of each subject for data analysis. Two-way ANOVAs were done examining group and hemisphere factors, with separate analyses for the coordinates of each plane (AP, lateral, and vertical) for each electrode target. Lateral and vertical coordinates were taken from the atlas sections while the AP plane was calculated from section number. No group or hemisphere effects, nor group × hemisphere interactions, were observed. The CeA stimulating electrodes were correctly placed in the medial central nucleus while the recording electrodes were in the lateral columns of the right and left PAG. Average locations of tips for both the stimulating and recording electrodes appear in Table 1. Verification of cannula placement was completed in much the same way; average coordinates appear in Table 2. Furthermore, the absolute distance of the cannula from the recording electrode was very small (Table 2), indicating that electrophysiological recordings were taken from a position close to the viral injection.
pCREB lir Immunohistochemistry Densitometry Analysis.
Relative OD data were analyzed separately for each of the three columns in the PAG. The lateral column was of primary interest since this was the area where CREB protein expression was enhanced (Figure 5(a), top left panel). A one-way ANOVA of right hemisphere data revealed a significant difference between the groups (F(2,41) = 3.30, P < .05). In contrast, groups did not differ in the left hemisphere (F(2,41) = 1.88, P < .17). Predator stressed rats had significantly more pCREB lir than Handled GFP controls, with the Handled CREB rats falling in between these two groups, differing from neither (Tukey-Kramer test, P < .05). The mean of pCREB lir in Handled CREB rats measured here at 5 days post HSV injection is likely an underestimate of its value at peak expression of CREB, which occurs at three days after HSV injection, when treatments occurred (stress or handling), and which fades thereafter [43].
One-tailed t-tests were used to compare within groups across hemispheres, based on the prediction that right column pCREB-lir would be increased in predator stressed rats given previous findings, and on the prediction that increased CREB expression in Handled CREB rats would increase pCREB-lir. Both the predator stressed and Handled CREB rats exhibited more pCREB lir in the right hemisphere than the left (all t, P < .04, one-tailed), whereas there were no hemisphere differences in the Handled GFP control group.
Data from the dorsal column of the PAG were analyzed in the same way with somewhat differing results (Figure 5(b), top right panel). A one-way ANOVA of right hemisphere data revealed a significant difference between groups (F(2,41) = 3.66, P < .04) while the left hemisphere again showed no group difference (F(2,41) = 0.74, P < .49). Comparison of the groups in the right hemisphere revealed that the predator stressed rats showed elevated pCREB lir which was greater than that of the Handled groups, which did not differ (Tukey-Kramer tests, P < .05). Furthermore, comparison of groups across the two hemispheres revealed that, like the lateral column, both the stressed and Handled CREB groups had elevated pCREB lir in the right hemisphere as compared to the left (all t, P < .04, one-tailed), with the Handled GFP control group again showing no difference between hemispheres.

Expression of pCREB in the ventral column of the PAG presented another pattern of results (Figure 5(c), bottom panel). A one-way ANOVA in the right hemisphere revealed a significant difference between groups (F(2,41) = 6.93, P < .003). In this case, however, the stressed rats had significantly lower pCREB expression than Handled CREB rats, with the Handled GFP control group falling in between, differing from neither (Tukey-Kramer tests, P < .05). Much like the other two columns, no difference was seen between groups in the left hemisphere (F(2,41) = 1.36, P < .28). Comparisons within groups across hemispheres showed that both the Handled GFP control and Handled CREB rats had increased pCREB in the right over the left hemisphere (all t, P < .01), while there was no hemisphere difference in predator stressed rats. These ventral column results mirror previous findings with the exception of the hemisphere differences [40]. The fact that the Handled CREB group did not differ from the Handled GFP controls indicates that CREB may not be having an effect in this column. This also suggests that regional differences in the pathways controlling phosphorylation of CREB may be dependent on predator stress.

Figure 5: The left side of each panel displays data from the right hemisphere while the right side of the panel illustrates left hemisphere data. For a given column, means marked with the same letter do not differ, but differ from those with different letters, while means marked with two letters do not differ from means marked with either of the letters (Tukey-Kramer tests, P < .05). Means marked with "@" show a within-group difference between hemispheres (P < .05, one-tailed test).
Visualization of GFP.
Verification of gene expression was achieved by examining all PAG sections taken for green fluorescence as evidence of expression of the reporter GFP. Green fluorescence in the right PAG verified that gene expression of GFP occurred after HSV injection in the vicinity of the injection cannulas and PAG recording electrodes (Figure 6, five days after HSV injection; panels show the right lateral column at 6.5×, the left lateral column at 6.5×, and the right lateral column at 25×). Since fluorescence ranges from cannula to PAG electrodes, one can derive a sense of the AP plane range of gene expression from PAG electrode position relative to cannulas. Referring to Table 2, evidence of gene expression five days post HSV injection appears over a range of ±0.28 ± 0.034 mm (mean ± SEM) from the cannula in the AP plane. This represents a range nearly as extensive as in previous pilot work, which found that at the time of peak gene expression (three days post HSV injection), GFP expression was localized to the right lateral column of the PAG over an AP plane range of ±0.35 mm from the cannula.
Power Associated with Significant Results.

Given the small n of groups, power (α = .05) of all significant findings was calculated. Significant behavioral and electrophysiological findings all had power values in excess of .90. Power associated with pCREB expression analyses varied with column of the PAG, ranging from .82 to .91 in the dorsal and ventral columns, with a reduced power of .60 for the lateral column results.
The power of a test depends on the value of the type I error (here α = .05), the sample size, the standard deviation, and the magnitude of the effect being tested, reflected here in the magnitude of mean differences. Most findings appear quite robust, with power values in excess of .80, suggesting robust effects of predator stress and virally induced CREB expression on brain and behavior. The reduced power for the lateral column pCREB findings suggests a fading effect in this column.
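As an illustration of such a calculation, a minimal sketch (Python with statsmodels; the effect size shown is a hypothetical Cohen's f, not a value reported in the study) of post hoc power for a one-way ANOVA with three groups of four:

```python
from statsmodels.stats.power import FTestAnovaPower

# Post hoc power for a one-way ANOVA with k = 3 groups of n = 4
# (total N = 12) at alpha = .05. In practice, the effect size would
# be derived from the observed group means and standard deviations.
power = FTestAnovaPower().power(effect_size=1.2, nobs=12,
                                alpha=0.05, k_groups=3)
print(f"power = {power:.2f}")
```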
Discussion
The primary purpose of this study was to examine the functional significance of pCREB changes within the right lateral column of PAG. This was accomplished by genetically inducing an increased expression of CREB, through viral vectoring, and determining the behavioral, electrophysiological and pCREB expression changes in comparison to predator stressed and Handled GFP control rats.
Behavioral Effects of Viral Vectoring CREB.

Viral vectoring to induce CREB expression in the right lateral column of the PAG produced behavioral effects resembling those seen in predator stressed rats. Handled CREB rats showed increased open arm avoidance in the EPM (decreased ratio time and entry) as compared to Handled GFP controls. However, predator stress was even more effective, increasing open arm avoidance over that seen in Handled CREB rats. Despite this graded change in anxiety between groups, measures of activity and exploration in the plus maze or hole board did not differ (Figure 1). This pattern of results suggests changes in open arm exploration (anxiety) in the EPM are not due to changes in activity or exploratory tendencies, consistent with previous findings using predator stressed rats in similar testing situations [5,9,15,23,25,32].
The ability of CREB per se to increase open arm avoidance in the absence of any predator stress is a remarkable finding. It suggests a direct role for CREB and possibly pCREB expression [32] in behavioral changes produced by stress. Predator stress likely induces CREB signaling change, and then behavioral changes via NMDA receptor activation in the PAG [7,23,25,40]. In the present study, stress effects were mimicked by bypassing the NMDA receptor activation and directly activating CREB mediated processes.
Not all effects of predator stress were mimicked by PAG CREB induction, however. Normally predator stress reduces ratio frequency of risk assessment in an NMDA receptor-dependent manner [5,7,16,25,26]. While predator stress in the present study also reduced risk assessment, the risk assessment of Handled CREB rats was unaffected and did not differ from Handled GFP controls (Figure 2). The lack of a predator stress type response in the Handled CREB rats suggests that increasing CREB expression in PAG may not be the only factor that mediates suppression of risk assessment, or alternatively may only affect some EPM behaviors. Other necessary factors at play could include changes in amygdala pCREB expression and potentiation of ventral hippocampal to BLA transmission, both of which follow predator stress [32]. In addition, risk assessment changes produced by predator stress are highly predicted by right hemisphere changes in transmission in both CeA-PAG and hippocampal to BLA pathways [9]. Since only PAG was manipulated in Handled CREB rats, it is likely these other factors were not engaged, but were engaged in predator stressed rats. Perhaps changes in risk assessment require all of these changes to occur together. A change in hippocampal spatial information transfer to BLA might make sense, since risk assessment is described as a form of sampling the immediate environment for potential threats [49]. Other possible reasons include the following. Handled CREB rats were more anxious than Handled GFP controls in the EPM, but their level of anxiety was not as great as that of predator stressed rats. Greater levels of anxiety may be associated with less risk assessment [49], and so the more anxious predator stressed rats displayed reduced risk assessment. Further testing will be required to decide between these possibilities.
Handled CREB rats also had elevated median peak startle amplitude in comparison to the Handled GFP control group. Moreover, the predator stressed group showed startle amplitudes that surpassed those of the Handled CREB rats. This graded response of enhanced startle over groups is reminiscent of open arm avoidance in the EPM, and supports the notion that inducing CREB expression per se induces an anxious state which is milder than that produced by predator stress. Reasons for the milder effects of direct PAG manipulation in comparison to predator stress may parallel those raised above to explain risk assessment discrepancies. Finally, the enhancement of startle amplitude in predator stressed rats is consistent with past studies [6,23,26,27].
Predator stress also reliably decreases rate of habituation of the acoustic startle response [15,16,48]. Present data are consistent with these findings in that predator stressed rats took significantly longer to habituate than Handled GFP controls ( Figure 3). This replication furthers the validity of predator stress as a model of hyperarousal aspects of PTSD, since delayed habituation to startle is also observed in PTSD patients [50][51][52][53].
Surprisingly, the Handled CREB group took even longer than the predator stressed rats to habituate to startle. This finding implicates CREB-dependent mechanisms in delay of startle habituation, which are likely NMDA receptor-dependent, given that CPP administered 30 minutes prior to predator stress blocks delay of startle habituation as well as increased right lateral PAG pCREB expression [16,40]. However, this finding also suggests some difference in the mechanisms of induction of neural changes by CREB in PAG underlying enhanced startle amplitude and delay of habituation. Delay in startle habituation has been observed in the absence of increased startle amplitude, making it likely that different neural circuits/mechanisms mediate changes in these two responses to acoustic startle [7,16]. Additionally, recent studies suggest that separate portions of the CeA-PAG pathway mediate the stress induced changes in startle amplitude and startle habituation [7]. Another possible explanation could be the following. Though NMDA receptor-dependent potentiation of efferent transmission from amygdala to PAG mediates increases in startle amplitude [9,23,26], it is homosynaptic depression in brain stem startle pathways that underlies habituation [54], and direct CREB expression in PAG may have more powerfully engaged such depression than predator stress per se.
Effects of Viral Vectoring CREB on CeA-PAG Transmission.
A fascinating finding was that viral vectoring of CREB induced a potentiation of the CeA-PAG pathway in the right hemisphere ( Figure 4) analogous to that seen after predator stress. Moreover, potentiation in this group was restricted to the same hemisphere as injection. In fact the evoked potentials in the left hemisphere of the Handled CREB rats did not differ from those observed in Handled GFP controls. This implies that any behavioral changes observed in this group can be attributed to the change in transmission due to CREB induction in the right hemisphere.
In past studies, CeA-PAG potentiation by predator stress has been shown to be NMDA receptor-dependent. CPP administration prior to predator stress blocks both anxiogenic effects and CeA-PAG potentiation [7,16]. Moreover, given that predator stress induces NMDA receptor-dependent right PAG pCREB expression, it has also been suggested that long lasting right CeA-PAG pathway potentiation is dependent on pCREB expression [7,40]. Present findings in Handled CREB rats support this hypothesis.
The present study also adds new data on the time course of CeA-PAG pathway potentiation in predator stressed rats. Current results show that, as expected, predator stressed rats exhibited potentiation in the right CeA-PAG pathway two days after predator stress (Figure 4), complementing those studies that have replicated this finding at 1, 9, and up to 12 days post predator stress [9,23,32]. A novel finding was the fading, but still present, potentiation in the left CeA-PAG of predator stressed rats. The presence of potentiation in the left hemisphere adds to previous studies showing left CeA-PAG potentiation one day after predator stress [27] that fades completely by 9 days [7]. Present findings suggest a left hemisphere potentiation lasting at least two days.
The presence of bilateral CeA-PAG pathway potentiation in predator stressed rats and the unilateral induced right CeA-PAG pathway potentiation in Handled CREB rats at the time of anxiety testing may account for some of the differences in open arm avoidance, risk assessment, and startle response between groups. This especially concerns the absence of reduced risk assessment in the Handled CREB group, since NMDA block in the left dorsolateral amygdala 30 minutes prior to predator stress prevents stress effects on risk assessment [26]. Moreover, path analyses suggest that changes in open arm exploration and risk assessment may depend on bihemispheric changes in limbic transmission in the early stages after predator stress [27].
Long lasting potentiation in the right CeA-PAG pathway by predator stress has been suggested to reflect some, but not all, of the anxiogenic neuroplastic changes after predator stress [9,23]. Taken together present findings lend strong support to this view.
Effects of Viral Vectoring CREB on pCREB lir.

Given that predator stress increases pCREB lir selectively in the right lateral column of the PAG, and that CeA-PAG potentiation persists longer in the right hemisphere, it has been suggested that increased production of pCREB underlies right CeA-PAG potentiation. Furthermore, degree of pCREB expression and right CeA-PAG potentiation correlate highly with the same measures of the predator stress experience, suggesting a strong relationship between these two phenomena [23].
In the present study, densitometry analysis revealed a right over left lateral PAG increase in pCREB lir in both Handled CREB and predator stressed groups. Thus, increasing CREB expression directly and genetically in the right lateral PAG also increased pCREB in a pattern similar to predator stress in a group which had not been predator stressed. Moreover, in Handled CREB rats, the increase of pCREB in the right but not left lateral column of the PAG is consistent with potentiation in the right but not left CeA-PAG pathway in this group. The fact that pCREB expression in the right lateral column in the Handled CREB group was intermediate, differing from neither the predator stressed group nor the Handled GFP controls, is consistent with their milder increase in anxiety in the EPM and acoustic startle tests relative to predator stressed rats. Taken together, these results support the suggestion that elevated pCREB leads to neuroplastic changes that induce right CeA-PAG potentiation and increased anxiety [23,32].
This conclusion must be tempered by the reduced power associated with lateral column significant findings. The reduced power here likely reflects a reduced effect evidenced in the small mean differences encountered in the analyses. As pointed out above (Section 3.6) the mean of pCREB lir in Handled CREB rats measured at 5 days post HSV injection is likely an underestimate of its value at peak expression of CREB, which occurs at three days after HSV injection, when treatments occurred (stress or handling), and which fades thereafter [43]. Moreover, effects of predator stress on pCREB expression are evident at 20 minutes post stress and fade thereafter (20 and unpublished observations). Since transient NMDA receptor block prevents predator stress effects on brain and behavior and suppresses pCREB expression [7,25,26,40], it is likely that changes in brain and behavior depend on immediate effects of increased pCREB expression, which in this study would have likely begun before the time of pCREB measurement. Further studies examining CREB and pCREB expression in lateral PAG at 1-3 days post HSV injection are required to clarify present findings.
Present findings mirror those seen in previous work with respect to the lateral column of the PAG. The dorsal column results, in comparison, require greater interpretation. The pattern of dorsal column pCREB changes stands in contrast to findings that predator stress alone does not alter pCREB lir in this column when measured 20 minutes after predator stress [23,40]. In the current study, predator stressed rats had elevated pCREB expression in the right dorsal PAG, while the two Handled groups had lower and similar levels of expression two days after treatment (Figure 5). A right over left hemisphere expression effect was observed in both the predator stressed and Handled CREB groups, similar to the lateral column. The fact that the right exceeds the left in the Handled CREB rats suggests that right lateral column pCREB enhancement may have spread to the dorsal column, but not enough to differ from the Handled GFP control. Other explanations include a potential leak up the cannula tract or the possibility that this is a function of CREB induction, since the predator stressed group demonstrated similar though more pronounced effects. The increase of right over left pCREB expression in predator stressed rats suggests that the EPM is having an effect on the dorsal column up to 24 hours later. This extends previous findings which showed that dorsal column pCREB was elevated bilaterally in predator stressed rats 20-25 minutes after exposure to the EPM which took place 7 days after predator stress [55]. Previous and present findings differ, however, in that in the present study there was no pCREB increase over control in the left hemisphere in predator stressed rats. This suggests that an increased time interval between the predator stress experience and EPM testing may allow for left hemisphere pCREB levels to increase. Conversely, in the present study 24 hours elapsed between EPM testing and pCREB testing. Perhaps left dorsal column pCREB expression faded over this time interval. Further research into the time course of pCREB changes following predator stress and EPM exposure seems warranted.
Though lateral and dorsal column findings are somewhat in line with previous work, the results of the ventral column are not. In the present study, pCREB expression in predator stressed rats was decreased in comparison to both Handled groups in the right hemisphere, and right and left hemisphere expression did not differ in predator stressed rats. Moreover, Handled groups displayed increased pCREB expression in the right over the left hemisphere (Figure 5). There are discrepancies and similarities with previous work examining pCREB expression 20 minutes after handling or predator stress. Previous work showed no differences in pCREB expression between predator stressed and handled controls in the ventral PAG of both hemispheres, with right hemisphere expression elevated over the left [23]. Perhaps differences in the time of sampling pCREB expression account for the discrepancies between past and present findings, since pCREB in the present study was measured two days after treatment.
If decreases of pCREB expression in ventral PAG are normally delayed after predator stress (for which we have preliminary evidence, unpublished data), then present findings suggest such decreases are independent of enhanced pCREB expression in lateral PAG, at least as induced by direct genetic manipulation. If increase in lateral column and decrease in ventral column pCREB expression parallel enhancement and suppression of normal functioning, then one might suspect a shifting of defensive response bias toward avoidance of threatening stimuli and away from relaxed immobility, along the lines of the functional columnar differences in the dorsolateral and ventral PAG described by Bandler and Depaulis [47]. Further time course studies of shifting defensive response bias following predator stress seem warranted.
Summary and Conclusions.
In summary, the present study demonstrated that directly inducing CREB (and pCREB) expression in the right lateral PAG reproduced behavioral, brain, and molecular changes that closely resemble those seen in predator stressed rats. These findings suggest increased CREB (and perhaps pCREB) expression in the lateral PAG is at least sufficient to produce brain and behavioral changes normally induced by a brief predator stress. Moreover, similar effects of inducing CREB expression in the basolateral amygdala, on EPM anxiety at least, have been reported by Wallace et al. [43]. Together these data support the idea that the CREB-pCREB pathways in the right lateral PAG, and perhaps amygdala, are important entry-level molecular paths to lasting anxiogenic effects of predator stress. To the extent that predator stress models some aspects of PTSD, present findings point to CREB and pCREB pathways as possible new therapeutic targets.
Identifying changes in e-cigarette use among a longitudinal sample of Canadian youth e-cigarette users in the COMPASS cohort study, 2017/18–2018/19
Highlights
• Those who use e-cigarettes may increase, decrease, or keep the same frequency of use.
• Half of current youth e-cigarette users increased their frequency of use.
• One third of current youth e-cigarette users decreased their frequency of use.
• E-cigarette use patterns differed by gender and ethnicity.
Introduction
The prevalence of e-cigarette use (or vaping) among adolescent populations in Canada and the United States (US) has increased over the last decade (Cole et al., 2020; Johnston et al., 2021; Levy et al., 2018). For example, between 2013 and 2018, the prevalence of current e-cigarette use among a sample of Ontario, Canada high school students increased from 7.6 % to 25.7 % (Cole et al., 2020), and nationally representative data suggest that the prevalence of current e-cigarette use among Canadian youth aged 16 to 19 years doubled between 2017 (8.4 %) and 2019 (17.8 %) (Hammond, Rynard, et al., 2020). Other nationally representative data indicate that the prevalence of current e-cigarette use among Canadian youth in grades 7 to 12 doubled from 10 % in 2016/17 to 20 % in 2018/19 (Health Canada, 2019). It was during this period that nicotine-containing devices became legally available for sale in the Canadian market (Tobacco and Vaping Products Act, 2018) and vaping devices and brands were more widely advertised online and in stores (Hammond, Reid, et al., 2020). Under this Act, the minimum legal age for purchasing e-cigarettes was 18 or 19 years (depending on the province); however, labelling, packaging, and advertising restrictions were not in place. While the long-term consequences of e-cigarette use are relatively unknown, it is known that nicotine is highly addictive and has a negative impact on the developing adolescent brain (U.S. Department of Health and Human Services, 2014), and as such, e-cigarettes with nicotine should not be used by youth.
Despite the rising prevalence of youth e-cigarette use, relatively few studies report the frequency of vaping (e.g., number of days used e-cigarettes in the past month), which is necessary to fully understand the potential public health impact of vaping. Those that do report vaping frequency indicate that 68-80 % of current youth e-cigarette users vape infrequently (<10 days per month) (Bold et al., 2017; Villanti et al., 2017). Other repeat cross-sectional data indicate that 1.8-5.7 % of youth aged 16 to 19 years in Canada reported using e-cigarettes 20 or more days in the past 30 days between 2017 and 2019 (Hammond, Rynard, et al., 2020). While these cross-sectional and repeat cross-sectional data provide a glimpse into youth vaping behaviours, they do not provide in-depth insight into how youth e-cigarette use changes over time. Students who use e-cigarettes may keep the same level (i.e., maintain), increase (i.e., escalate), or decrease their level of vaping (i.e., reduce or stop), and knowledge of these e-cigarette use patterns can be used to develop more robust prevention and cessation efforts.
There are few studies that have examined longitudinal changes in e-cigarette use patterns. These studies estimate that 4-34 % of students (aged 12-19 years) escalate the frequency of vaping nicotine over 2-5 years (Harrell et al., 2021; Lanza et al., 2020; Park et al., 2020; Westling et al., 2017). There are limited data indicating that some youth stop vaping (Stanton et al., 2020). Given the relative lack of longitudinal data describing changes in vaping patterns among current youth e-cigarette users and the changes in vaping devices since these longitudinal studies were conducted, there is a need for more current data showing changes in e-cigarette use patterns among current youth e-cigarette users. In particular, given that the prevalence of youth e-cigarette use in Canada increased significantly between 2017 and 2019 when nicotine-containing devices became legally available (Cole et al., 2020; Hammond, Rynard, et al., 2020), data showing changes in vaping patterns among current youth e-cigarette users can help public health professionals fully understand the youth vaping epidemic, including how many youth are interested in quitting or reducing e-cigarette use, and aid in evaluating the impact of e-cigarette prevention policies and programs.
Therefore, the objective of this study was to identify one-year changes in e-cigarette use patterns among a large, longitudinal sample of Canadian youth who were current e-cigarette users between 2017/18 and 2018/19 when the largest increase in e-cigarette use prevalence occurred. We also explored whether e-cigarette use patterns differed among sociodemographic groups (e.g., grade, gender, ethnicity) in the sample.
Materials & methods
The current study used two years of linked longitudinal data from students participating in the COMPASS study. The COMPASS study is a CIHR-funded (2012-2027) school-based prospective cohort study that collects data annually from a rolling cohort of students in grades 9-12 (Secondary I-V in Quebec) and the schools they attend in a convenience sample of schools in Canada. Participating schools permitted active-information passive-consent parental permission protocols (passive consent), whereby parent(s) or guardian(s) of students were provided an information letter describing the study and asked to email study staff if they did not want their child to participate (Thompson-Haile & Leatherdale, 2013). All students in participating schools were eligible to participate. A full description of the COMPASS study methods can be found in print or online (https://www.compass.uwaterloo.ca). The COMPASS study received ethics approval from the University of Waterloo Research Ethics Board (#30118) and appropriate school board review committees. This secondary data analysis received ethics approval from the Ontario Tech University Research Ethics Board (#15884).
Participants
This study used data from a sample of current (past 30-day) youth e-cigarette users in Year 6 (2017/18, baseline) and Year 7 (2018/19, follow-up) of the COMPASS study. These study waves were selected given that the legal sale of e-cigarettes with nicotine occurred during this time and these waves were prior to the COVID-19 pandemic when changes to data collection methods occurred. At baseline, n = 40,887 students in grades 9-11 (secondary III-IV in Quebec) attending n = 112 secondary schools across Ontario (n = 58), Quebec (n = 31), British Columbia (n = 14), and Alberta (n = 8) participated (81.1 % student participation rate). Student non-response was primarily a result of student absenteeism the day of the data collection.
A series of steps were used to narrow the sample to baseline current youth e-cigarette users with data at both baseline and follow-up (flow diagram in Supplementary Fig. 1). Given our focus on identifying changes in vaping behaviours among current youth e-cigarette users, in Step 1, students who had not used e-cigarettes in the last 30 days at baseline were removed from the sample (n = 31,396), as were students who had missing e-cigarette use data at baseline (n = 633). This left a sample of n = 8,858 current (past 30-day) youth e-cigarette users.
Step 2 involved linking these current youth e-cigarette users between baseline and follow-up using a unique code generated by each student (Battista et al., 2019). Of those current youth e-cigarette users (n = 8,858), 46.3 % (n = 4,102) could be linked at follow-up. Loss to follow-up was primarily a result of student absenteeism the day of the data collection. Of the 4,102 students with linked data, n = 31 had missing e-cigarette use data at follow-up and were removed (Step 3), leaving a final linked sample of n = 4,071. Given that very few students (n = 2) had missing values for grade, gender, or ethnicity, they were retained in the analytic sample. Current e-cigarette users who could be linked at follow-up tended to be in grade 9 or 10, female, white, and reported lower vaping frequencies at baseline relative to current e-cigarette users who could not be linked (Supplementary Table 1).
Measures
Student-level data were collected during class time using the COMPASS questionnaire (Cq), a machine-readable paper survey (Bredin & Leatherdale, 2014). At the time of the survey, the Cq referred to vaping devices as "e-cigarettes" and did not include a definition or examples of devices. The survey also did not differentiate between e-cigarettes with or without nicotine. Consistent with validated measures for cigarette smoking (Wong et al., 2012), students reported the number of days in the last 30 days that they used e-cigarettes [response options: none, 1 day, 2 to 3 days, 4 to 5 days, 6 to 10 days, 11 to 20 days, 21 to 29 days, and 30 days (every day)]. Based on responses to this question at baseline and follow-up, current e-cigarette users who escalated vaping reported increasing the number of days they used e-cigarettes between baseline and follow-up; those who reduced vaping reported decreasing the number of days they used e-cigarettes between baseline and follow-up but still reported using e-cigarettes in the last 30 days; those who stopped vaping did not report e-cigarette use in the last 30 days at follow-up; and those who maintained vaping reported using e-cigarettes for the same number of days at baseline and follow-up. The Cq also included demographic questions consistent with other Canadian surveys, including student grade (9, 10, 11), self-reported ethnicity (White, Black, Asian, Latin American/Hispanic, Other), and gender (female, male).
Analysis
Statistical analyses were conducted using SAS software, Version 9.4 (SAS Institute Inc, 2012). We identified the prevalence of escalating, reducing, stopping, and maintaining vaping in the sample of current e-cigarette users overall and by demographic characteristics. We also examined changes in e-cigarette use frequencies across e-cigarette use patterns and gender to better understand one-year changes in student behaviours among current e-cigarette users. Finally, we identified associations between demographic characteristics and e-cigarette use patterns using multilevel logistic regression models, which accounted for the clustering of students within schools. Models were stratified by gender. Due to the small number of respondents in some categories, ethnicity was grouped as "White" versus "non-White" (which included any student who reported an ethnicity other than "White", including multiple ethnicities) in regression analyses. Similarly, e-cigarette use frequencies were grouped as 1-5 days, 6-10 days, 11-19 days, and > 20 days in regression analyses due to the small number of respondents in some categories.
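To make the modelling step concrete, a minimal sketch in Python is shown below. It is not the study's SAS code: the file name and column names (compass_linked.csv, pattern, grade, gender, ethnicity, baseline_freq, school_id) are hypothetical, and GEE with an exchangeable within-school correlation structure is used as a stand-in for the multilevel logistic regression described above.

```python
# A minimal sketch (assumed names throughout), not the study's SAS code:
# odds of escalating (vs. maintaining) e-cigarette use, with students
# clustered within schools.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("compass_linked.csv")  # hypothetical linked data file

# Contrast the two patterns compared in the models: escalated vs. maintained.
sub = df[df["pattern"].isin(["escalated", "maintained"])].copy()
sub["escalated"] = (sub["pattern"] == "escalated").astype(int)

# GEE with exchangeable within-school correlation approximates the
# school-level clustering that a multilevel model accounts for.
model = smf.gee(
    "escalated ~ C(grade) + C(gender) + C(ethnicity) + C(baseline_freq)",
    groups="school_id",
    data=sub,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
)
result = model.fit()

# Exponentiating coefficients and confidence limits yields odds ratios
# with 95% CIs, as reported in the text.
or_table = np.exp(result.params.to_frame("OR").join(result.conf_int()))
print(or_table)
```

An analogous model with a different binary outcome (reduced or stopped vs. maintained) reproduces the other contrasts, and stratifying by gender simply means fitting the same model on each gender subset.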
Results
Just over half of the sample of current e-cigarette users were male (54.3 %), with 33.2 % of students in grade 9, 41.3 % in grade 10, and 25.6 % in grade 11; 77.1 % were White and 53.8 % were from Ontario, Canada.
At baseline, 29.2 % of current e-cigarette users reported vaping 1 day, 25.2 % reported vaping 2-3 days, 11.9 % reported vaping 4-5 days, 9.0 % reported vaping 6-10 days, 9.9 % reported vaping 11-20 days, 6.5 % reported vaping 21-29 days, and 8.3 % reported vaping 30 days (every day) in the past 30 days. Over one year (2017/18-2018/19), 49.2 % of baseline current youth e-cigarette users escalated their frequency of use, 12.8 % reduced their frequency of use, 20.2 % stopped using e-cigarettes, and 17.8 % maintained the same frequency of e-cigarette use. At follow-up, 20.2 % of students reported vaping 0 days, 9.0 % reported vaping 1 day, 11.3 % reported vaping 2-3 days, 6.8 % reported vaping 4-5 days, 7.5 % reported vaping 6-10 days, 9.4 % reported vaping 11-20 days, 11.3 % reported vaping 21-29 days, and 24.6 % reported vaping 30 days (every day) in the past 30 days. Table 1 presents the prevalence of e-cigarette use patterns according to sociodemographic characteristics.

Table 2 presents the number of days current youth e-cigarette users reported using e-cigarettes in the past 30 days at baseline among those who maintained and stopped using e-cigarettes over one year. While half of students (50.2 %) who maintained e-cigarette use reported using e-cigarettes 1-5 days at baseline, more than one third (36.7 %) reported using e-cigarettes > 20 days in the past 30 days. Female students tended to report lower frequencies of e-cigarette use than males. The vast majority of students (83.4 %) who stopped using e-cigarettes reported using e-cigarettes 1-5 days at baseline, and, similar to the pattern among those who maintained e-cigarette use, female students tended to report lower frequencies of e-cigarette use than males.

Fig. 1 presents the change in frequency of past 30-day e-cigarette use among those who escalated their e-cigarette use over one year. While the majority of students (72.6 %) reported using e-cigarettes 1-5 days at baseline, at follow-up only 16.9 % of students reported using e-cigarettes 1-5 days in the past 30 days; in contrast, the proportion of students who reported using e-cigarettes > 20 days increased almost 10-fold over one year (6.5 % at baseline vs 57.8 % at follow-up). The pattern of increasing frequency of use was similar across gender, although male students tended to report a higher frequency of e-cigarette use at both baseline and follow-up relative to female students.

Fig. 2 presents the change in frequency of past 30-day e-cigarette use among those who reduced their e-cigarette use over one year. While over one-third of students (37.2 %) reported using e-cigarettes 1-5 days at baseline, this proportion more than doubled to 76.9 % at follow-up. While almost one-third of students (30.3 %) reported using e-cigarettes > 20 days at baseline, only 7.1 % reported this frequency of e-cigarette use at follow-up. The pattern of reduced frequency of use was similar across gender, although female students tended to report a lower frequency of e-cigarette use at both baseline and follow-up relative to male students.
Multilevel logistic regression models identified few sociodemographic characteristics associated with each e-cigarette use pattern (Supplementary Table 2). Relative to those who maintained the same level of e-cigarette use, male current youth e-cigarette users had higher odds of escalating e-cigarette use than female students (OR 1.27, 95 % CI [1.05-1.54]). A ceiling effect was observed among current youth e-cigarette users who vaped > 20 days in the past 30 days; these youth had lower odds of escalating e-cigarette use relative to those who vaped 1-5 days in the past 30 days (OR 0.13, 95 % CI [0.06-0.16]). Relative to those who maintained the same level of e-cigarette use, current youth e-cigarette users who vaped 6-10 and 11-19 days in the past 30 days had over 3 times higher odds of reducing e-cigarette use relative to those who vaped 1-5 days in the past 30 days (OR 3.86, 95 % CI [2.17-6.88], and OR 3.13, 95 % CI [1.89-5.22], respectively). Finally, relative to those who maintained the same level of e-cigarette use, current youth e-cigarette users in grade 10 had higher odds of stopping e-cigarette use relative to those in grade 9 (OR 1.44, 95 % CI [1.05-1.97]), those indicating a non-White ethnicity had higher odds of stopping e-cigarette use relative to those indicating a White ethnicity (OR 1.34, 95 % CI [1.03-1.74]), and those who vaped 11-20 and > 20 days in the past 30 days had lower odds of stopping e-cigarette use relative to those who vaped 1-5 days in the past 30 days (OR 0.43, 95 % CI [0.24-0.74], and OR 0.10, 95 % CI [0.06-0.16], respectively). Similar results were obtained when the sample was stratified by gender (Supplementary Tables 3 & 4).
Discussion
We found that between 2017/18 and 2018/19, most student e-cigarette users in our longitudinal cohort reported significantly changing their e-cigarette use frequency. While about half of baseline e-cigarette users reported that they increased their frequency of e-cigarette use, a significant number (33.0 %) also reported decreasing or stopping e-cigarette use completely during a time when the largest increase in youth e-cigarette use prevalence was reported by Canadian surveillance systems (Cole et al., 2020; Hammond, Rynard, et al., 2020). We observed some differences in the demographic characteristics and e-cigarette use frequencies of students within each e-cigarette use pattern. These results highlight the need for longitudinal data to continue to monitor and evaluate changes to e-cigarette use patterns that may be in response to changing public health policies, such as recent federal policies that limit the nicotine concentration in vaping products. Although two-thirds (66.3 %) of youth reported infrequent e-cigarette use at baseline (i.e., < 5 days in the last 30 days), indicating some level of experimentation, it is concerning that many youth already reported high frequencies of e-cigarette use at baseline. For example, 21.4 % of female students and 48.1 % of male students who maintained e-cigarette use reported using e-cigarettes > 20 days in the last month at baseline. Using e-cigarettes > 20 days in the last month was even more common among female and male students who escalated vaping (49.3 % and 64.3 %, respectively). The results of our regression models further support the lower likelihood of escalating (relative to maintaining) e-cigarette use when students report using e-cigarettes > 20 days in the last month. Given the high dose of nicotine in many e-cigarette products (EL-Hellani et al., 2018) and the risk of nicotine addiction, there is a need for continued monitoring of the frequency of e-cigarette use among youth, particularly as public health policies change.
Our results reinforce current literature suggesting that escalating e-cigarette use among adolescents is occurring and is a significant issue (Bold et al., 2016; Goldenson et al., 2017; Lanza et al., 2020; Park et al., 2020; Westling et al., 2017). The prevalence of escalating e-cigarette use over one year in our sample was quite high compared to other studies. Evidence from the US indicates that the prevalence of continuing and escalating e-cigarette use ranged between 4 % and 34 % (Harrell et al., 2021; Lanza et al., 2020; Park et al., 2020; Westling et al., 2017). Escalating e-cigarette use was also more common than the escalating cannabis use (29.5 %) found in another study using COMPASS data (Zuckermann et al., 2019). Our data were collected during a period when devices with nicotine (such as the brand Juul®) became legal and increasingly available in the Canadian market (Tobacco and Vaping Products Act, 2018). Different data collection periods and regulatory environments may account for some of the differences in the proportions of youth who escalated e-cigarette use, highlighting the importance of local and national surveillance systems for evaluating the potential impact of e-cigarette policies on youth e-cigarette use patterns.
Surprisingly, about one-third of current youth e-cigarette users in our study reported reducing (12.8 %) or stopping (20.2 %) e-cigarette use over a one-year period. Other cross-sectional studies from the US have reported that many students have seriously thought about quitting and reported past-year quit attempts (Dai, 2021; Smith et al., 2021), and recent Canadian data indicate that 63 % of youth aged 15 to 19 years have tried to quit using e-cigarettes in the last year (Health Canada, 2020). However, to the best of our knowledge, these are the first data identifying the prevalence of youth actually reducing or stopping e-cigarette use, filling a critical knowledge gap. The regression model results indicate that students who used e-cigarettes less frequently at baseline were more likely to reduce (relative to maintain) e-cigarette use at follow-up, while students who used e-cigarettes more frequently at baseline were less likely to stop (relative to maintain) e-cigarette use at follow-up. Given these findings, it is possible that youth who vape less frequently are less addicted and may not require e-cigarette cessation programs to quit vaping but could still benefit from public health messaging encouraging them to stop using e-cigarettes. Additional research is needed to understand why these youth reduced and stopped e-cigarette use, which could help to inform future cessation interventions. While it is encouraging that a significant number of students were able to reduce the frequency of, or stop, using e-cigarettes, it is apparent that this was not sufficient to prevent a rise in youth e-cigarette use during this time. Other data from this cohort study indicate that almost one third of students who had not yet initiated e-cigarette use tried e-cigarettes over one follow-up year (Williams et al., 2021). Given that youth e-cigarette use continues to be a significant public health concern, evidence-informed cessation and prevention programs are urgently needed.
Strength and limitations
A key strength of this study is the use of a large, school-based longitudinal data set to identify one-year changes in e-cigarette use patterns. To our knowledge, these are the first Canadian data to present changes in e-cigarette use patterns among current youth e-cigarette users, and these data could serve as a baseline for future evaluations of e-cigarette policies. Our study is not without limitations. Since the Cq did not include a definition of an e-cigarette or examples of common brands, our results may underreport the prevalence of youth e-cigarette use. Furthermore, the use of a convenience sample and attrition of current e-cigarette users between baseline and follow-up limit the generalizability of the findings. The survey question included defined categorical responses, which may not represent the usual e-cigarette use pattern of students. Our results may underestimate the proportion of youth who escalated or reduced e-cigarette use and overestimate the proportion of youth who maintained e-cigarette use, given that the distances between successive response options were not the same (i.e., a change in e-cigarette use from "1 day" to "2 to 3 days" in the past 30 days is not equivalent to a change from "11 to 20 days" to "21 to 29 days"); there may be students who escalated or reduced their frequency of e-cigarette use but are still captured by the same response category. Future studies should consider alternative ways of capturing e-cigarette use frequency.
Conclusions
Over one year, most youth in our sample who reported current e-cigarette use at baseline reported changing their e-cigarette use frequency at follow-up. While about half of these students increased their frequency of e-cigarette use, about one-third reported decreasing or stopping e-cigarette use completely. Few sociodemographic characteristics differentiated vaping patterns. Additional longitudinal data are needed to monitor and evaluate changes to e-cigarette use patterns that may be in response to changing public health policies.
Role of Funding Sources
The COMPASS study has been supported by a bridge grant from the CIHR Institute of Nutrition, Metabolism and Diabetes (INMD) through the "Obesity - Interventions to Prevent or Treat" priority funding awards (OOP-110788; awarded to STL), an operating grant from the CIHR Institute of Population and Public Health (IPPH) (MOP-114875; awarded to STL), a CIHR project grant (PJT-148562; awarded to STL), a CIHR bridge grant (PJT-149092; awarded to KP/STL), a CIHR project grant (PJT-159693; awarded to KP), a research funding arrangement with Health Canada (#1617-HQ-000012; contract awarded to STL), and a CIHR-Canadian Centre on Substance Abuse (CCSA) team grant (OF7 B1-PCPEGT 410-10-9633; awarded to STL). A SickKids Foundation New Investigator Grant, in partnership with the CIHR Institute of Human Development, Child and Youth Health (IHDCYH) (Grant No. NI21-1193; awarded to KP), funds a mixed methods study examining the impact of the COVID-19 pandemic on youth mental health, leveraging COMPASS study data. The COMPASS-Quebec project additionally benefits from funding from the Ministère de la Santé et des Services sociaux of the province of Québec and the Direction régionale de santé publique du CIUSSS de la Capitale-Nationale. This work was supported by an operating grant from CIHR (#170256; grant awarded to AGC).
The funding sources had no role in the study design; in the collection, analysis and interpretation of data; in the writing of the manuscript; or in the decision to submit the article for publication.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Sentinel lymph node biopsy for high-risk cutaneous squamous cell carcinoma: clinical experience and review of literature
High-risk cutaneous squamous cell carcinoma (SCC) is associated with an increased risk of metastases. The role of sentinel lymph node (SLN) biopsy in these patients remains unclear. To address this uncertainty, we collected clinical data on six patients with clinical N0 high-risk SCC that underwent SLN biopsy between 1999 and 2006 and performed a literature review of SLN procedures for SCC to study the utility of SLN biopsy. There were no positive SLN identified among six cases and there was one local and one distant recurrence on follow-up. Literature review identified 130 reported cases of SLN biopsy for SCC. The SLN positivity rate was 14.1%, 10.1%, and 18.6%; false negative rate was 15.4%, 0%, and 22.2%; and the negative predictive value was 97.8%, 100%, and 95.2% for all sites, head/neck, and truncal/extremity sites, respectively. SLN biopsy remains an investigational staging tool in clinically node-negative high-risk SCC patients. The higher false negative rate and lower negative predictive value among SCC of the trunk/extremity compared to SCC of the head/neck sites suggests a more cautious approach when treating patients with the former. Given the paucity of long-term follow up, an emphasis is placed upon the need for close surveillance regardless of SLN status.
Introduction
Cutaneous squamous cell carcinoma (SCC) is overall the second most common skin cancer, with approximately 200,000 new cases diagnosed each year in the U.S., and accounts for nearly 25% of annual skin cancer deaths [1][2][3][4]. Fortunately, the majority of cases are associated with a favorable prognosis and are often curable by surgical or local destructive therapy. However, a small subset of SCC tumors can be characterized by aggressive biologic behavior with an increased risk of locoregional recurrence and distant metastases. Numerous studies have identified high-risk factors in SCC patients [5][6][7] associated with a worse prognosis, including large size, rapid growth rate, irregular borders, moderate/poor differentiation, perineural invasion, recurrent lesions, sites of prior radiotherapy or chronic inflammation, immunocompromised states, and genetic disorders including albinism and xeroderma pigmentosum. In terms of size and location, SCC tumors are considered high-risk when measuring greater than 2 cm on the trunk and extremities; > 1 cm on the cheeks, forehead, scalp and neck; and > 0.6 cm on the "mask areas" of the face, genitals, hands and feet. More recent studies have suggested that tumor thickness (Clark's level IV), desmoplastic growth, and development of nodal metastases are the strongest predictors of survival, resembling cutaneous melanoma [8,9]. Patients with cutaneous SCC associated with high-risk tumor features reportedly have higher rates of local recurrence, ranging between 10% and 47.2%, and rates of regional and distant metastases between 11% and 47.3% [5,10].
Prognosis is generally poor in patients who develop nodal metastases with an expected 5-year survival of 26-34% and a 10-year survival rate of only 16%, underscoring the importance of early detection and treatment [5,10]. Recognizing that SCC typically spreads first to regional lymph nodes prior to the development of distant metastases [10][11][12], there may be a beneficial role to identify subclinical nodal metastasis for prognostic staging and guide further therapy including therapeutic lymph node dissection and adjuvant radiation. Currently, there is no consensus agreement on the standard of care staging practice for patients with high-risk cutaneous SCC.
Sentinel lymph node (SLN) biopsy has been widely accepted as a minimally invasive and highly accurate technique for detecting occult nodal metastases in breast cancer and cutaneous melanoma and has been validated as an independent prognostic factor for survival [13][14][15][16][17]. The utility of SLN biopsy for the staging of cutaneous SCC remains unproven, and there is a lack of evidence-based practice guidelines. We contribute our institutional experience with SLN biopsy in patients diagnosed with high-risk cutaneous SCC and perform a review of the current medical literature to define the predictive value and role of SLN biopsy in the management of occult nodal metastases from cutaneous SCC.
Materials and methods
We reviewed our cumulative experience with SLN biopsy in patients diagnosed with high-risk cutaneous SCC undergoing surgical treatment between 1/1/1999 and 12/31/2006 at the VA Puget Sound Health Care System and the University of Washington Medical Center. Institutional review board approval was obtained from both institutions to conduct this retrospective study. Data were collected based upon retrospective review of the medical record and institutional tumor registry. A total of 6 patients were identified with clinically node-negative cutaneous squamous cell carcinoma associated with at least two high-risk features, as shown in Table 1. The diagnosis of SCC was verified on histological examination, and all patients had no clinical evidence of nodal metastases on physical examination or imaging studies.
All patients underwent preoperative lymphoscintigraphy using technetium-labeled sulfur colloid. Skin landmarks were marked to assist intraoperative SLN localization. Lymphazurin 1% isosulfan blue was injected intradermally surrounding the primary tumor site at the beginning of the procedure in 4 of 6 SCC patients. Two patients with cutaneous SCC lesions of the head and face did not undergo intraoperative blue dye injection. A small skin incision was made overlying the SLN location as determined by preoperative lymphoscintigraphy and intraoperative hand-held gamma probe guidance. All SLNs and any additional palpable nodes were harvested for pathologic examination. Surgical excision of the primary tumor was performed in 5 patients with a minimum 1 cm wide margin. One patient with a recurrent SCC of the temple was excised with a 0.4 cm narrow margin due to anatomic constraints. Submitted candidate sentinel lymph nodes were step-sectioned with the microtome at intervals of 150 micrometers (µm) and examined under light microscopy with conventional H&E staining. Three patients underwent additional immunohistochemical staining using a pancytokeratin marker.
We conducted a literature review of sentinel lymph node procedures performed for the primary diagnosis of cutaneous SCC. The Medline, Ovid and Cochrane Library databases were searched using the following terms: sentinel lymph node, squamous cell carcinoma, cutaneous. All publications available in English were reviewed and data recorded, including: number of cutaneous SCC cases, SLN results, adjuvant treatments, and follow-up status. Using these cumulative results, we evaluated the utility of SLN biopsy to predict nodal disease/recurrence and excluded those studies without follow-up information from this analysis. We calculated the probability of sentinel lymph node positivity, based upon the total number of patients undergoing successful SLN biopsy, for all sites, head/neck, and truncal/extremity sites. The accuracy of SLN biopsy could not be assessed since completion lymph node dissection (LND) was not routinely performed following negative SLN biopsy. Previous studies in melanoma have also applied the SLN failure rate, defined as the percentage of recurrences in SLN-negative biopsied nodal basins, to estimate the overall rate at which SLN biopsy fails to detect regional spread of the disease [14]. We also calculated the SLN failure rate for high-risk cutaneous SCC. The false negative rate, defined in previous studies [18,19] as the ratio of nodal recurrences to the number of false negative and true positive SLN cases, was also calculated along with the negative predictive value.
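Stated symbolically (our restatement of the verbal definitions above, not notation taken from the cited studies), with TP the SLN-positive cases, FN the SLN-negative cases that later developed a nodal recurrence, and TN the SLN-negative cases that did not:

```latex
\[
\text{false negative rate} = \frac{FN}{FN + TP}, \qquad
\text{NPV} = \frac{TN}{TN + FN}, \qquad
\text{SLN failure rate} = \frac{FN}{FN + TN}.
\]
```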
Results
Six patients (5:1, M:F) with high-risk cutaneous SCC underwent SLN biopsy (mean age = 72 years, range 51-89 years). All patients had at least two previously described high-risk factors, two patients had 3 high-risk factors, and one patient had 4 high-risk factors. One patient developed a cutaneous SCC of the extremity during immunosuppression following successful heart transplantation. Mean tumor size in this case series was 3.2 cm (range: 1.3-7 cm), and tumors were located on the extremities (n = 2), head/face (n = 2), chest wall (n = 1) and perineum (n = 1, Figure 1). Three patients were referred for recurrent SCC tumors that had been treated within one year prior to the SLN procedure. Preoperative lymphoscintigraphy was performed in all 6 patients and identified 10 suspected SLNs. Intraoperative blue dye injection was used in 4 patients with extremity, truncal and perineal lesions. SLN exploration identified a combined total of 11 SLNs (mean: 1.8 nodes per patient; range 1-3), as shown in Table 1. Upon pathologic examination with conventional H&E staining, there was no evidence of metastatic carcinoma in any of the submitted lymph nodes. Immunostaining was performed with pancytokeratin in three cases, which showed no evidence of micrometastatic disease (Figure 2). There were no surgical complications following wide excision and SLN biopsy. None of the patients received further adjuvant therapy, and no completion LNDs were performed following negative SLN biopsy. Four patients are alive without evidence of disease progression after a median follow-up of 10.1 months (range 1.3-15.5 months). One patient with a high-risk recurrent SCC of the right temple developed a second local recurrence 15.2 months following narrow-margin excision with negative SLN biopsy. A second patient with a high-risk, large and deep perineal SCC developed metastatic lesions in the lung and vertebral bone 6.6 months after undergoing negative-margin wide excision and negative SLN biopsy.
A review of the literature identified a total of 161 patients worldwide in 14 case series including this study [9,10,20-30] and 5 case reports [31][32][33][34][35] describing the use of SLN biopsy in patients with cutaneous SCC. Three case series [27][28][29] and one case report [31] were excluded since these patients were later combined into larger institutional case series, resulting in a total of 130 evaluable cases (Table 2). All of the studies except Hatta et al. [30] clearly designated cutaneous SCC cases with at least one high-risk feature. SLNs were successfully identified in 128 cases (98.5%). The probability of SLN positivity for all sites, head/neck, and truncal/extremity sites was found to be 14.1%, 10.1% and 18.6%, respectively. An evaluation of SLN outcomes from all available studies was performed (Table 3). Three studies [20,22,30] did not provide follow-up status after SLN biopsy, and only three studies [9,21,34] had a median follow-up exceeding 2 years. A total of 100 SCC patients in 12 studies who underwent SLN biopsy had useful follow-up information. Despite this limitation, an analysis of all documented recurrences showed an overall negative predictive value (NPV) of 97.8% for SLN status in high-risk patients. Among the head and neck cases (n = 51), the NPV for SLN biopsy was 100%, i.e. there were no regional nodal recurrences in any patient found to have a negative SLN. On the other hand, SLN biopsy for patients with high-risk lesions of the trunk and extremities (n = 49) had a noticeably lower NPV of 95.2%. Two patients in this high-risk group developed recurrent nodal disease despite undergoing a negative SLN biopsy. Also of note, there were two patients who relapsed with distant metastases despite a negative SLN biopsy (not included in the NPV calculation).
The SLN failure rate was 2.2%. There were no false-negative SLNs among the group of head/neck SCC tumors, while two patients with truncal/extremity SCC developed nodal recurrences despite negative SLN biopsy, resulting in a SLN failure rate of 4.8% for that group. The false negative rate was 15.4% for all cases and 22.2% for the truncal/extremity group.
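As a cross-check of these figures, the short Python sketch below recomputes the reported rates from the definitions given in the Methods. The counts are an assumption: the paper reports only percentages, so the values used here (11 true positives, 2 false negatives and 87 true negatives overall, split 4/0/47 for head/neck and 7/2/40 for trunk/extremity) were chosen to be consistent with the 100 followed patients and the reported rates.

```python
# Recomputing the pooled accuracy measures from assumed counts that are
# consistent with the percentages reported in the text.
def sln_metrics(tp, fn, tn):
    """tp: SLN-positive cases; fn: SLN-negative cases with later nodal
    recurrence; tn: SLN-negative cases without nodal recurrence."""
    return {
        "false negative rate": fn / (fn + tp),  # recurrences / (FN + TP)
        "NPV": tn / (tn + fn),                  # TN / (TN + FN)
        "SLN failure rate": fn / (fn + tn),     # recurrences among SLN-negative
    }

groups = {
    "all sites":       sln_metrics(tp=11, fn=2, tn=87),  # 15.4%, 97.8%, 2.2%
    "head/neck":       sln_metrics(tp=4,  fn=0, tn=47),  # 0%, 100%, 0%
    "trunk/extremity": sln_metrics(tp=7,  fn=2, tn=40),  # 22.2%, 95.2%, 4.8%
}

for site, metrics in groups.items():
    print(site, {name: f"{value:.1%}" for name, value in metrics.items()})
```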
Discussion
Though metastases from SCC of the skin are uncommon, with a cumulative incidence between 2% and 6%, high-risk skin lesions are reported to have metastatic rates exceeding 30% [2]. It has been shown that regional nodal involvement increases both the risk of recurrence and mortality [9]. Metastases from cutaneous SCC tend to spread first to regional nodal basins and generally appear within the first 2 years of follow-up [36]. Aggressive surgical treatment has been shown to benefit selected patients with locoregionally confined advanced SCC, and long-term survivors have been reported following radical salvage resection and therapeutic LND, though complication and mortality rates were reported in one study to be as high as 42% and 11%, respectively [6,9]. The role of elective LND in high-risk SCC remains undefined, with most studies limited to head and neck primary sites. For these reasons, SLN biopsy is an unproven yet theoretically appealing surgical technique to accurately stage high-risk SCCs with minimal morbidity, identify early occult nodal disease, and select patients who might benefit from therapeutic LND or other adjuvant therapy. The optimal management of clinical N0 patients with cutaneous SCC remains unclear. It appears that the overall SLN positivity rate (14.1%) for high-risk SCC is comparable to studies of high-risk melanoma, in which it ranges from 13.9% to 29.4% [18]. The SLN failure rate, false negative rate and NPV for SCC also resemble rates described in numerous melanoma studies. The standardized use of serial sectioning and immunostaining has significantly improved staging of occult lymph node metastases in melanoma patients, with one group reporting improved SLN positivity rates from 17.2% to 34% [37]. However, the benefit of routine immunostaining with cytokeratin markers for SCC patients has not been established. Given the distinct morphologic appearance of SCC, characterized by very large and clustered cells [10], routine immunohistochemistry may not provide additional benefit. In fact, none of the studies reporting a positive SLN (Table 2) described a case where cytokeratin markers identified micrometastases not readily apparent on conventional H&E staining.
Regional node involvement in SCC is associated with an increased risk of recurrence and decreased survival. LND is recommended for patients with regional lymph node disease, though no significant studies have shown whether this impacts overall survival in SCC patients. In a larger series of patients from the M. D. Anderson Cancer Center [9], 52% of patients who underwent LND for SCC regional nodal disease (n = 23) had disease recurrence, and 75% of these patients later developed distant metastases. Unfortunately, there are no published prospective studies comparing LND with close observation in patients with clinical N0 high-risk SCC. Further studies on the utility of SLN biopsy, as well as the survival benefit of undergoing an elective LND after a positive SLN biopsy, are needed.
We found that, compared to head/neck sites, there was an increased false negative rate and a lower NPV for high-risk SCC of the trunk and extremities. This may be secondary to differences in important prognostic factors for metastasis such as tumor thickness, immunosuppression, desmoplasia, and increased horizontal size [38]. This was not evaluable given that many studies lacked this information. We cannot rule out the possibility that there may be inherent differences in tumor biology between the two sites, and we suggest a more cautious approach when treating patients with high-risk SCC of the trunk and extremities. In addition, considering the relatively short follow-up in the majority of studies, the calculated NPV of SLN biopsy may in fact be overestimated. Considering the rarity of this tumor and the lack of long-term follow-up in the majority of studies, including our own, a clear emphasis is placed upon the need for close surveillance regardless of SLN status. This study and review of the literature highlight the potential limitations of SLN biopsy for SCC and the critical importance of careful long-term follow-up in these high-risk patients.
Though cytokeratin immunostaining may not directly impact the sensitivity or specificity of SLN status, recent studies have suggested that other pathologic markers can provide additional insight into tumor biology and cancer prognosis. A prospective study of non-well-differentiated SCC and matched controls confirmed that tumor thickness is the strongest prognostic risk factor in these SCCs [39]. This study also identified the potential value of Ki-67 expression to predict recurrence. Ki-67 is a cell-cycle protein that is upregulated during cellular proliferation and has been shown to correlate with the differentiation status of skin cancers. There is ongoing research to identify novel tumor biomarkers to define cancer prognosis and promote individualized therapies.
Conclusions
We conclude that SLN biopsy remains an investigational staging tool in clinically node-negative high-risk cutaneous squamous cell carcinoma patients. It is obvious that larger, prospective studies with longer follow-up times are needed to establish the efficacy of SLN biopsy and define the optimal treatment of occult nodal metastasis in high-risk cutaneous SCC. It is unlikely that a large randomized controlled trial can be accomplished considering the relatively low incidence of high-risk SCC and the long accrual period that would be required. An alternative approach would be to contribute to and analyze large prospective databases to define the role and limitations of SLN biopsy in this unique subset of SCC patients. Meanwhile, it is incumbent upon treating physicians and teams to closely follow these high-risk patients at greater risk for recurrence, whether they undergo SLN biopsy or not.
Facial Sculpturing by Fat Grafting
Autologous fat grafting is one of the most demanded facial cosmetic procedures. Fat reservoirs are usually available in large amounts in most patients. The procedure of fat grafting may be repeated several times without any considerable complications. Facial tissues readily accept autologous fat without any fear of immune reaction or carcinogenicity. It is a popular technique that may be used in maxillofacial esthetic surgery. This procedure may be done as an isolated procedure or as an adjunct to any facial esthetic operation, such as face lifting, to enhance the final esthetic outcome. The main drawback of this procedure is the possibility of resorption and unpredictable results of the augmentation; however, it is generally believed that the prognosis of fat grafting is directly related to proper case selection and meticulous surgical technique. This chapter provides an overview of current concepts and key points in fat harvesting, refinement and injection that may potentially lead to long-lasting, predictable results. Common complications are discussed, and an effort is made to explain ways to avoid these events and to solve the problems when they happen.
History of fat grafting
The story of fat grafting started in 1893, when a German surgeon (Adolf Neuber) reported a new technique for treating a depressed scar in the infraorbital region of a young man. He harvested a small piece of subdermal fat from the patient's upper arm and inserted it to elevate the depressed scar; notably, he also described his frequent failures in treating larger defects and suggested reserving fat grafting for defects the size of a bean. This effort was occasionally repeated by other surgeons, and graft results remained extremely controversial until 1983, when suction lipectomy was introduced. This technique provided a safe and conservative method for fat harvesting and transfer. At this point a new drawback of fat grafting appeared: resorption and unpredictable results. Coleman described structural fat grafting with long-lasting results. His concept was mainly a refinement of the known technique, with great attention to atraumatic handling of fat cells during harvesting, processing and grafting. [1][2][3][4][5][6] This concept opened a new era in the field of facial esthetic surgery and became popular in a very short time; nowadays fat grafting is a well-known technique, and studies are underway to turn structural fat grafting into a regenerative procedure using stem cells, platelet derivatives and other additives to fat grafts.
Surgical technique
Fat grafting may be divided into three dominant steps: first, fat is extracted from a donor site; then it is processed and purified by one of the known techniques to separate vital fat cells from the other, redundant ingredients; and finally it is injected or transferred to the recipient site. Each step needs crucial attention and plays a role in the success of the surgery.
Selection of donor site
Fat harvesting may be done from the lateral thigh, medial thigh, abdomen, suprapubic area or any other part of the body that has a considerable amount of fat tissue. Some authors believe that the medial knee has the least amount of elastic fibers and will yield better-quality fat, though this finding is not supported by other clinical studies. It is assumed that all donor sites can provide an acceptable amount of vital fatty tissue, so patient compliance, the surgeon's preference and donor site contours are the main concerns when selecting a donor site. In massive fat harvests or in lean patients, it is sometimes recommended that bilateral donor sites be used to prevent contour deformities (Fig. 1).
Donor site preparation and local anesthesia infiltration
A small 2-3 mm stab incision is made, a small cannula is inserted, and 20 to 30 cc of local anesthetic (lidocaine with 1:200,000 epinephrine) is dispersed in the donor site. After 10 to 15 minutes, fat harvesting may be started through the same stab incision (Fig. 2).
Fat harvesting
Historically, fat harvesting was performed by an open approach and direct resection of fatty tissue; the introduction of microcannulae in 1981 changed fat harvesting into a simple, conservative procedure. Cannulae may be connected to a suction machine, whose negative pressure draws fat parcels from the donor site into a sterile reservoir; some authors instead connect the cannula to a 10 cc syringe to generate the negative pressure. By withdrawing the plunger, negative pressure is created, and back-and-forth hand movements gather fat into the syringe. It is believed that vigorous negative pressure endangers vital fat cells, and it has been proposed that fat harvesting should rely on curettage through the several openings located on the lateral sides of the cannula. Slight negative pressure on the plunger of a 10 cc syringe connected to the cannula (withdrawing the plunger only 1-3 mm), combined with gentle back-and-forth hand movements over a relatively long period of suctioning, will gather a considerable amount of fat in the syringe (Fig. 3). [7][8][9][10] After 10 to 15 minutes, a blunt-tip cannula is inserted and, with gentle back-and-forth movements of the dominant hand, fat is extracted from the donor adipose tissue while the non-dominant hand holds and stabilizes the donor tissue (Fig. 4).
Perils and pitfalls
1. Vital fat cells are very sensitive, so strict attention to sterility and infection control principles is mandatory; any contamination may lead to infection or destroy the vital cells and result in early resorption.
2. Small-diameter cannulae (2-3 mm) transfer fat particles easily and impose minimal trauma on the cells; larger cannulae accelerate the procedure but take larger particles, which is not desirable for facial tissues and may potentially deform the donor site.
3. Low negative pressure (1-3 mm, achieved by withdrawing the plunger up to the 3 mm mark) will take longer but is less traumatic to vital fat cells.
4. Fluid injection: It is usually recommended to infiltrate 1 cc of local anesthetic for each cc of fat to be harvested; larger quantities of local anesthetic may be added to Ringer's solution. A super-wet environment (injection of tumescent solution), which is routinely used in liposuction operations, causes the fat cells to float and may rupture them, and should therefore be avoided.
5. Blood in harvested fat: It is believed that blood leads to easier and faster degradation of viable fat cells, so it is recommended to stop the harvesting process when blood is seen in the harvesting syringe and to proceed to another donor site to obtain fat.
Fat processing
A usual harvest is a mixture of three main components. The first part is local anesthetic and Ringer's solution: the solution injected preoperatively, part of which is carried over into the harvested fat and must be separated to eliminate the devastating effects of epinephrine on fat cells. The second part is an oily liquid that lacks vital fat cells; this liquid has no adverse effect when injected, but it disturbs intraoperative judgment, increases postoperative swelling and lengthens recovery time, so it is best separated from the third and main part, which is the vital fat cells. Fat processing includes any procedure that helps to separate the fat cells from the two redundant components. Many methods have been introduced for fat processing, but the main two are (1) centrifugation and (2) washing and filtering.
Centrifugation: harvested fat is poured into 10 cc syringes (Fig. 5 a,b), which are placed in their slots in a centrifuge and spun at 3000 rpm for 3 minutes to separate the components (Fig. 6 a,b). The lowest part is a liquid that is easily discarded by gentle pressure on the plunger; the middle part contains the viable fat cells, which, after the upper lysed-fat layer is poured off, are transferred to several 1 cc syringes and made ready for injection (Fig. 7 a,b,c,d).
Washing and filtering
Harvested fat is poured into a strainer and washed several times with normal saline; some surgeons close both sides of the strainer and stir it for a few minutes to provide a more concentrated fatty component. One side of the strainer is then opened and the fat is transferred to 1 cc syringes with a sterile surgical spoon or spatula, ready for lipoinjection (Fig. 8 a,b,c).

Selecting a processing technique: many studies have tried to compare the known techniques; so far, none of these trials has convinced surgeons to abandon one technique and unanimously accept the other, but it is clear that skill and expertise, gentle handling of fat, and sterility directly affect the success rate of either technique.
Fat transfer or injection
Injection sites are carefully designed and marked preoperatively; the planned pathways of the injection cannulae are drawn with a marker, and then the usual preparation and draping are performed (Fig. 9). The diameter of the injection cannula, the amount of graft placed at each recipient site and the injection technique may directly affect graft viability; these determinant factors are discussed in detail below.
Injection technique
A stab incision is made at the pre-planned site and the cannula is gently inserted. A tunnel is formed by gentle movement of the cannula; a small amount of fat (0.3 to 0.5 cc) is injected while withdrawing the cannula, and the process is repeated several times until the total pre-planned amount of fat is delivered to the recipient site. A 40 to 60 percent overcorrection may be done to compensate for possible delayed resorption and relapse; for example, if 4 cc is the desired final volume, roughly 5.6 to 6.4 cc would be placed.
Size or diameter of cannula
The size of the cannula determines the size of the transferred fat particles; cannulae range from delicate 0.7 mm instruments, used to fill the tear troughs, to larger ones (up to 1.5 mm) used in cheek and chin augmentation.
Regional approaches
The lips: Lips are mobile and extremely sensitive elements that are challenging sites for augmentation; some authors believe their mobility leads to early resorption, while others report long-term stability in their cases. To augment the lips, a stab incision is made in the center of the lip; the left and right sides are separately penetrated by a delicate cannula and 0.5 cc of fat is placed in each side, then 0.5 cc is separately inserted in the middle portion.
Tear troughs: Thin skin with very delicate underlying tissue makes this region a critical area in fat grafting; use of a delicate cannula, incremental fat placement in small drops or parcels and a meticulous injection technique may guarantee an acceptable result in periorbital rejuvenation.
The cheek and chin: Malar pads sag with aging, which may lead to flattening of the malar contours. This unpleasant deformity may easily be camouflaged by fat grafting; 4 cc of fat may be enough to recontour the cheeks. These are the most commonly treated sites. A relatively large (1.2-1.5 mm) cannula is usually used to augment the chin and cheeks. Chin and cheek augmentation will moderately improve soft tissue contours and should not be regarded as an alternative to hard tissue augmentation (genioplasty, chin implants, malar prostheses).
Paranasal creases: Elimination of a deep paranasal crease is a big challenge in facial rejuvenation. Filling of the nasolabial folds by fat grafting may be added to any face lift procedure or performed as a sole procedure; 2-3 cc of fat in each site will improve deep nasolabial grooves.
Jaw lines: The gradual appearance of jaw lines and deepening of the marionette lines are frustrating sequelae of aging; these sites may be accessed through the small stab incisions made for the paranasal creases, or a separate small incision may be made at the mandibular border to approach these areas.
Sharp needle injections:
Sharp needle injection is a controversial modification of the original fat grafting technique. In this procedure, fat is injected transdermally; the main indication is to fill deep skin creases or scars.
Amount of injection:
The amount of graft is determined by the characteristics of the specific case, though it is generally recommended to follow known guidelines and make small modifications from case to case.
Indications for fat grafting
Fat grafting has been used for many different purposes, but it can generally be said that fat grafts rehydrate facial skin and improve the patient's skin quality; fat is also a good filler, which may be used to fill a defect, to correct a contour and to augment facial volume. Thus, the main indications for fat grafting are based on these two dominant properties of fat grafts.
Rejuvenation and soft tissue augmentation
Aging is a complex phenomenon, and it has recently been shown that volume loss is one of the main factors that produce the characteristics of an aging face; fat grafts may therefore restore volume deficits. This procedure may be done alone or added to other rejuvenation procedures such as face or brow lifting (Fig. 10). Fat grafting may also serve as an adjunct to other major maxillofacial procedures such as rhinoplasty and orthognathic surgery: the role of soft tissue in the overall esthetic appearance of the face cannot be underestimated, and fat injection may improve soft tissue conditions and help the patient obtain a more pleasant appearance (Figs. 11 and 12). To augment and fill the lips, paranasal tissues and cheeks, there is a common trend toward the use of fillers to shape and augment facial tissues; infection, foreign body reactions and the carcinogenicity of some fillers have made the fat graft an ideal material. As a filler it may easily be provided in larger amounts, it is cheaper when used in larger amounts, and it is easily accepted by most patients (Fig. 13).
Fat injection to the nose
Fat grafting in rhinoplasty is rapidly gaining popularity. Dorsal irregularities after rhinoplasty are extremely challenging in revision rhinoplasty; the use of crushed or morselized cartilage or delicate rasping is not usually sufficient and sometimes exaggerates the problem. Fat injection was recently reported to be effective in these cases, and some recent studies advocate the use of fat grafts in selected primary cases: fat may be used in radix augmentation, dorsal refinement and alar pinch deformities, though this field is open to future studies. This harmless but unpredictable technique may be best used, as an ancillary procedure, in patients with other clear indications for fat grafting, in the hope of obtaining the desired results. [10,11]
Complications
Fat grafting is a relatively safe procedure. It is usually followed by some swelling, bruising and ecchymosis at both the donor site and the facial recipient site; these sequelae are self-limiting and will subside spontaneously within two or three weeks at most.
1. Accumulation of fat particles and visible lumps under the skin: Sometimes small irregularities and lumps are easily seen and palpated under thin skin, leading to an unesthetic appearance. Like most other complications, this is best prevented by preoperative planning and delicate surgical technique: small cannulae should be used for harvesting to obtain smaller fat parcels, and injection and transfer should likewise be done with small cannulae so that the surgeon can delicately place the graft in the recipient tissue. In thin-skinned areas such as the lower eyelids and tear troughs, injection may be done in deeper layers.
2. Resorption and relapse: Resorption of grafted fat is commonly reported; some authors believe the procedure should be repeated several times in these cases, though some studies report long-lasting results after one-stage surgery. It is unanimously accepted that the surgeon's skill and expertise directly affect the predictability of results. It is sometimes suggested to perform 40 to 50 percent over-contouring to achieve the best results after the usual estimated resorption.
3. Facial asymmetry: Asymmetries may be due to uneven injection; this complication is best prevented by proper planning and preoperative mapping of the face. If asymmetry remains after six months, a secondary revision fat grafting may be scheduled. Immediate postoperative asymmetry after a precise surgical procedure may be due to the asymmetric edema common in facial surgery and is usually expected to resolve as the edema subsides.
4. Fat emboli: Fat may be placed into medium or large vessels; these particles may be carried to vital organs and lead to severe, life-threatening problems. Blindness and respiratory dysfunction are among the reported cases. The use of blunt cannulae instead of the sharp needles previously used for fat injection has considerably reduced this possibility. [12][13][14][15][16][17][18]

Donor site complications: Surface depressions and contour irregularities: Careless fat resection from a limited area and massive harvesting from a single site may disturb the surface integrity of the donor site and may also lead to body asymmetry; it is recommended to harvest the fat in a radial fashion from the insertion site so as to include a wider donor surface. Massive fat resection may be done from two bilateral sites; if problems remain after several months, they may be corrected by a separate fat transfer to the damaged donor tissue. Asymmetric limbs: The total amount of fat usually needed in facial fat augmentation will not cause limb asymmetry in normal patients; in thin patients, or in those who have undergone extensive liposuction, both sides should be prepared and a bilateral, symmetrical harvest considered to prevent this unwanted effect. Any congenital or developmental asymmetry should be identified preoperatively, and using the larger limb in patients with asymmetric limbs may help prevent exaggerating the asymmetry. [19][20][21][22][23]
Figure 1 .
Figure 1.In thin patients bilateral multiple donor sites should be considered; in this patient bilateral medial, lateral thighs and medial knees are prepared.
Figure 2 .Figure 3 .
Figure 2. Local anesthetic solution is dispersed into the donor site; it may be done using a 1.5 to 2mm cannula.
Figure 4 .
Figure 4. Non-dominat hand holds and stabilizes donor site while dominant hand starts the harvesting procedure.
Figure 5 .
Figure 5. a. Harvested fat is poured in 10cc syringes.b.Then the syringes are placed in their special slots in the centrifuge machine.The syringes are inserted in their special slots in a centrifuge and spun at 3000 rpm for 3 minutes to separate different components (Fig.6 a,b).
Figure 6 .
Figure 6.a-Harvested fat is a mixture of lysed fat, local anesthetics and vital fat cells.b-The same view after centrifuge shows the lower part which is local anesthetics and preoperatively injected solutions, middle part is viable fatty tissue and the third upper part is lysed fat cells and triglyceride.
Figure 7 .
Figure 7. a.In all centrifuged syringes the middle part which is viable fat should be separated by slight pressure over plunger until the first part (local anesthetic) is depleted.b.By gradual turning of syringes the upper part which is lysed fat is easily separated and discarded.c.The middle part which is the main part is transferred to 1cc syringes.d. 1cc syringes are set and ready for injection to recipient sites.
3. 5 . 1 .
Washing and filteringHarvested fat is poured in a strainer and washed several times with normal saline; some surgeons close both sides of a strainer and stir it for few minutes to provide a more concentrated fatty compartment.Then one side of the strainer is opened and fat is transferred to 1 cc syringes by a sterile surgical spoon or spatula to get it ready for lipoinjection (Fig8 a,b, c) .
Figure 8. a. Harvested fat is gently poured into a strainer. b. It is washed several times to remove redundant material. c. A sterile instrument is used to transfer the purified fat to 1 cc syringes.

Selecting a processing technique: Many studies have compared the known techniques; so far none of these trials has convinced surgeons to abandon one technique and unanimously accept another, but it is clear that skill and expertise, gentle handling of the fat and sterility directly affect the success rate of each technique.
Figure 9. Careful preoperative drawing and mapping will prevent many postoperative complications.
Figure 10. In this 41-year-old woman, severe characteristics of early aging such as volume loss, deepening of the facial creases and loss of skin quality are seen; esthetic nasal surgery and conservative rejuvenation by fat grafting were performed. The 1-year follow-up shows acceptable rejuvenation and improvement of skin quality.
The use of fillers to shape and augment facial tissues is widespread; however, infection, foreign-body reactions and the carcinogenicity of some fillers have made the fat graft an ideal material. As a filler it may be easily obtained in large amounts, it is cheaper when used in large amounts, and it is easily accepted by most patients [Fig. 13].
Figure 12. In this 44-year-old woman, a masculine face with exaggerated borders and contours was planned for feminization; simultaneous forehead lifting, mandibular angle reduction and total facial fat grafting were performed. The 10-year follow-up shows acceptable long-term results.
Figure 13. This young class III woman underwent mandibular setback to correct the skeletal deformity. Lack of vermilion show was a frustrating complaint. The 3-year follow-up shows the long-term effect of fat grafting of the upper lip.
Simulation and measurement of spectra of reference filtered X radiation
The energy spectrum is one of the most effective methods of characterising the quality of reference filtered X radiations. To obtain the energy resolution, the authors chose seven sealed standard radioactive sources to carry out energy calibration of a high-purity germanium detector whose energy resolution is within 3%. They then used this spectrometer to measure the reference filtered X radiations with energies from 55 to 125 keV of the low air-kerma rate series, while BEAMnrc was used to simulate the spectra of the same four radiation qualities. By analysing the spectral distributions, they determined the mean energy and spectral resolution of these reference radiation qualities. Comparison of the simulation results with the actual measurements shows that the spectra simulated by BEAMnrc are consistent with the spectra measured by the high-purity germanium detector, and the deviation of the mean energy is within 4.0%. The spectral resolution of the reference filtered X radiations is 22.8, 22.4, 22.5 and 22.8%, respectively.
Introduction
Since Roentgen discovered X radiation in 1895, X radiation has been widely utilised in various fields such as disease treatment, medical diagnosis, radiation protection and environmental protection. X-rays are a double-edged sword: used excessively, they cause radiation damage, so the total dose must be controlled while their advantages are exploited for human benefit. In order to unify the methods of calibrating the dosimeters and dose rate meters used to measure radiation precisely in different countries, the International Organization for Standardization revised ISO 4037 and published four technical specifications: ISO 4037-1, ISO 4037-2, ISO 4037-3 and ISO 4037-4 [1-4]. Following ISO 4037-1, which describes the radiation characteristics and production methods, we have established four reference filtered X radiations of the ISO 4037 low air-kerma rate series in the range from 55 to 125 kV. Generally, the quality of a filtered X radiation is characterised by the following parameters: (a) the mean energy of the beam, expressed in kiloelectronvolts; (b) the resolution, expressed as a percentage; (c) the half-value layer, expressed in millimetres of Al or Cu. In practice, the quality of the radiation obtained depends primarily on: (d) the high voltage of the X-ray tube; (e) the thickness and quality of the total filtration; (f) the quality of the target.
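Among these parameters, the half-value layer follows directly from the effective linear attenuation coefficient of the beam: under exponential attenuation, the filter thickness that halves the air-kerma rate is HVL = ln 2 / mu. A minimal sketch of this relation, using a made-up attenuation coefficient rather than a value from this work:

import math

def first_hvl_mm(mu_per_mm):
    """First half-value layer: the filter thickness that halves the
    air-kerma rate, assuming exponential attenuation I = I0 * exp(-mu * x)."""
    return math.log(2.0) / mu_per_mm

# Hypothetical effective linear attenuation coefficient of 0.28 per mm Cu.
print(f"first HVL = {first_hvl_mm(0.28):.2f} mm Cu")   # ~2.48 mm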
By experiment, we obtain the half-value layers of the four reference radiations, then use the Monte Carlo method to simulate the spectra and calculate the mean energy and spectral resolution [5-7]. Moreover, a high-purity germanium detector is used to measure these reference radiations and verify the simulation results.
Experimental principle
When energetic electrons strike a metal target, they are decelerated and lose energy, which is emitted as bremsstrahlung. Because the energy loss is random, bremsstrahlung is characterised by a continuous distribution of X radiation. Characteristic X-rays, by contrast, are emitted from heavy elements when their electrons make transitions between the lower atomic energy levels such as K and L. The X-ray unit used in the experiment is a bipolar industrial unit with various advantages, such as adjustable tube voltage, small temperature drift, continuous exposure, long-term use and stable performance. The model of the X-ray tube is COMET MXR-320/26. The X-rays generated by the unit therefore comprise both a continuous spectrum and discrete characteristic lines.
The experimental device for the reference filtered X radiation, shown in Fig. 1, is mainly composed of an X-ray shielding box, the X-ray unit, a primary diaphragm, a filtration system, a secondary diaphragm, removable rails, a calibration platform, controlling software and other components. The shielding box has three shielding layers: aluminium, lead and aluminium. The 5 mm thick lead, as the main shielding material, effectively reduces leakage and part of the scattered radiation. The primary and secondary diaphragms are made of 20 mm thick tungsten alloy, which effectively reduces the impact of scattered radiation on the experimental results. The filtration system consists of additional filters, filter discs, controlling software and other components. The purity of the additional filtrations, including tin, copper and aluminium, is >99.99%. By using additional filtration, the desired reference radiations shown in Table 1 are obtained. The deviations of the first half-value layer are all within 5%, which complies with the conditions given in ISO 4037-1.
Energy calibration of gamma spectrometer
Nuclear radiation detectors detect nuclear radiation through the ionising effects, luminescence, or physical or chemical changes it produces in gases, liquids or solids. Common ionising radiation detectors can be divided into gas detectors, scintillation detectors and semiconductor detectors.
Gas detectors measure nuclear radiation by collecting the ionised charge generated by the radiation in a gas [8]. The main types are the ionisation chamber, the proportional counter and the Geiger-Muller counter. Their structures are similar: they are generally cylindrical containers with two electrodes, filled with some kind of gas, with a voltage applied between the electrodes. Each has its own characteristics and fields of application; the main difference between them is the operating voltage range.
In a scintillation detector, a charged particle striking the scintillator ionises and excites atoms or molecules, which emit light as they de-excite; an optoelectronic device converts the optical signal into a measurable electrical signal [9]. The scintillation counter has a short resolution time and high efficiency, and it can also determine the energy of the particles from the size of the electrical signal.
Semiconductor detectors are radiation detectors that use semiconductor materials as the detection medium [10]. The most common semiconductor materials are germanium and silicon. The basic principle is that charged particles generate electron-hole pairs in the sensitive volume of the detector; the pairs drift under the external electric field and produce electrical signals, from which the radiation is measured. Common semiconductor detectors include P-N junction detectors, lithium-drifted detectors and high-purity germanium detectors. High-purity germanium detectors have the advantages of high energy resolution, a short manufacturing cycle and storage at room temperature. The use of ultrapure germanium also facilitates the fabrication of X- and gamma-ray detectors with very sensitive volumes and very thin dead layers, suitable for detecting both X and gamma rays.
In order to use the high-purity germanium detector to detect X-rays precisely, the most important step is energy calibration. When the gamma spectrometer detects X-rays, it records each channel and its total counts, and energy is a function of channel number. First, we selected six sealed standard radioactive sources for energy calibration: 57Co, 133Ba, 241Am, 137Cs, 109Cd and 139Ce. Second, as shown in Fig. 2, we used the gamma spectrometer to detect the energies of the rays emitted by the six sources and calibrated the relationship between energy and channel [11]. During the measurements, the centres of the radioactive sources and the probe were aligned, with a distance of 5 cm between them. The functional relationship between energy E and channel ch can generally be expressed as a linear function:

E = a + b·ch,

where the intercept a is the energy corresponding to channel zero and the slope b represents the energy per channel. If some non-linear factors in the spectrometer and electronic system are considered, a quadratic term can be added:

E = a + b·ch + c·ch²,

where c is the coefficient of the quadratic term. Because the linear relationship between energy and channel of the gamma spectrometer is good enough for detecting the energy of the rays, we chose the linear form for the energy calibration. The measurement results for these sources are shown in Table 2. We then used the gamma spectrometer to detect the X-rays emitted by 55Fe, whose characteristic X-ray energy is 5.9 keV. More than 10,000 characteristic rays were recorded to reduce the error from statistical fluctuations. Finally, data processing gives the relationship between energy and channel shown in Fig. 3. The linear correlation coefficient R² is equal to 1, and the deviation of the energy emitted by 55Fe is within 0.79%, which proves that the linear fit is good enough to obtain the correct energies of these reference radiations.
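As a concrete illustration, the linear calibration above amounts to a least-squares fit of known line energies against measured peak channels. A minimal sketch with numpy, in which the channel values are placeholders rather than the measured data (the line energies are the well-known emissions of the six sources):

import numpy as np

# Placeholder peak channels; energies (keV) of the calibration lines of
# 57Co, 133Ba, 241Am, 137Cs, 109Cd and 139Ce.
channels = np.array([118.0, 348.0, 57.0, 648.0, 85.0, 161.0])  # assumed
energies = np.array([122.1, 356.0, 59.5, 661.7, 88.0, 165.9])

b, a = np.polyfit(channels, energies, 1)     # E = a + b*ch
pred = a + b * channels
print(f"E(ch) = {a:.3f} + {b:.4f}*ch; max residual "
      f"{np.max(np.abs(energies - pred)):.3f} keV")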
Simulation and measurement of spectra
According to the experimental device shown in Fig. 1, we established four radiation qualities of the ISO 4037 low air-kerma rate series in the range from 55 to 125 kV that conform to the given specifications. The spectra can then be obtained by simulation and measurement. The EGS (Electron-Gamma-Shower) system of computer codes is a general-purpose package for the Monte Carlo simulation of the coupled transport of electrons and photons in an arbitrary geometry, for particles with energies from a few keV up to several hundred GeV. In this experiment, we use the enhanced version, EGSnrc, in which the radiation transport of electrons or photons can be simulated in any element, compound or mixture [12]. BEAMnrc is built on the EGSnrc code system for modelling radiotherapy sources with various independent modular components [13,14]. The X-ray tube is modelled in BEAMnrc with a tungsten target and a target inclination of 20°. The energy of the incident electrons is set by the tube potential, and the number of histories is set to 1.0 × 10⁹. All information on the coupled transport of particles is stored in a phase-space file, which BEAMDP analyses to derive the spectral distribution [15]. Moreover, the high-purity germanium detector was used to measure the actual spectra of these radiation qualities. The spectra simulated by EGSnrc and measured by the high-purity germanium detector are shown in Figs. 4-7. By comparison, the simulated spectra agree well with the measured spectra. By analysing the spectral distributions, the mean energy of each radiation is obtained. As shown in Table 3, the deviations of the mean energies are all within 4.0%. The spectral resolutions of the reference filtered X radiations, shown in Table 4, are 22.8, 22.4, 22.5 and 22.8%, respectively, whose deviations are all within 8.6% of the values given in ISO 4037-1.
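The two reported figures of merit follow directly from the spectral distribution: the mean energy is the fluence-weighted average, and the spectral resolution is the full width at half maximum divided by the mean energy, in percent. A minimal sketch, using a toy Gaussian spectrum as a stand-in for a BEAMDP output:

import numpy as np

def mean_energy_and_resolution(E, N):
    """Fluence-weighted mean energy (keV) and spectral resolution
    (FWHM / mean energy * 100 %) from a binned spectrum N(E)."""
    E, N = np.asarray(E, float), np.asarray(N, float)
    e_mean = np.sum(E * N) / np.sum(N)
    above = np.where(N >= N.max() / 2.0)[0]   # coarse FWHM on the grid
    fwhm = E[above[-1]] - E[above[0]]
    return e_mean, 100.0 * fwhm / e_mean

E = np.linspace(20.0, 120.0, 201)             # keV grid
N = np.exp(-0.5 * ((E - 65.0) / 7.0) ** 2)    # toy spectrum
print(mean_energy_and_resolution(E, N))       # ~65 keV, ~25 %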
Conclusion
The spectrum is one of the most effective ways to characterise a reference filtered X radiation. In this experiment, we obtained the spectra by both simulation and actual measurement, and they are consistent. The deviations of the mean energy and the spectral resolution are within 4.0% and 8.6%, respectively, conforming to the requirements of the standard specifications. This confirms that the reference filtered X radiations established by experiment conform to the requirements of the standard specifications, and lays the foundation for follow-up studies on the methods of traceability and transmission of X-rays at low dose rates.
Acknowledgments
This work was supported by the National Key R&D Plan of China under grant no. 2017YFF0205100, Research Fund for the Research on key technology of measurement of low dose rate X-rays and γ-rays.
Fixed-Time Synchronization for Dynamical Complex Networks with Nonidentical Discontinuous Nodes
This article investigates the fixed-time synchronization issue for linearly coupled complex networks with discontinuous nonidentical nodes by employing state-feedback discontinuous controllers. Based on the fixed-time stability theorem and linear matrix inequality techniques, novel conditions are proposed for the concerned complex networks, under which fixed-time synchronization onto any target node can be realized by using a set of newly designed state-feedback discontinuous controllers. To some extent, this article extends and improves some existing results on the synchronization of complex networks. In the final numerical example section, the Chua circuit network is introduced to demonstrate the effectiveness of our method by showing its fixed-time synchronization results under the proposed control scheme.
Introduction
As we know, in the last few decades complex networks have become ubiquitous in the real world, for example in electrical power grids, metabolic pathways, neural networks, food webs and the World Wide Web [1-5]. Synchronization is a well-known, crucial collective behavior of complex networks, and it has received much attention due to its many important applications [6-8] in information processing, secure communication and biological systems [9-12]. Up to now, most research on complex network synchronization has focused on asymptotic synchronization behavior [13-15] and exponential synchronization results [16], but both kinds of synchronization belong to the infinite-time category [17-19].
Since finite-time control can greatly enhance the rate of convergence, with synchronization achieved within a settling time by designing appropriate finite-time synchronization controllers, research on the finite-time synchronization [20-25] of complex networks has been carried out extensively [26-30]. In [20], finite-time synchronization between complex networks with non-delayed and delayed coupling is studied using pulse control and periodic intermittent control. By use of aperiodically intermittent control, Liu et al. [21] considered the finite-time synchronization problem in dynamic networks with time delay. The global stochastic finite-time synchronization issue is investigated in [22] for discontinuous semi-Markov switched neural networks with time delay and noise interference. The finite-time synchronization of linearly coupled complex networks with discontinuous nonidentical nodes is discussed in [23].
The convergence rate of classical finite-time synchronization is fast in contrast to asymptotic and exponential synchronization. However, it has an obvious disadvantage: the synchronization convergence rate of complex networks depends on the initial states of all nodes. Unfortunately, it is very difficult or even impossible to know the state of some chaotic systems in advance, and in such cases finite-time control methods may be ineffective. Taking advantage of the benefits of finite-time control, a special finite-time synchronization, namely fixed-time synchronization, is proposed in [20]. For this novel fixed-time synchronization, the settling time has no relation to the initial conditions of the network system and depends only on the control parameters of the system controller; see [21,22]. Thus, synchronization can be accomplished within a specified time by using a fixed-time controller.
Moreover, if the dynamics of the nodes are nonidentical, the synchronization problem becomes more complex and challenging than in the identical-node case. Using a free-matrix approach, [35] studies the synchronization of all isolated nodes to an equilibrium solution together with the synchronization of nonidentical nodes to the average state trajectory. An intermittent controller is employed to synchronize complex networks with nonidentical nodes in [36]. In [37], the cluster synchronization problem is investigated for complex dynamic networks with time-delay coupling and nonidentical nodes by the pinning control method. Furthermore, the finite-time synchronization of coupled complex networks with discontinuous nonidentical nodes is considered in [23].
Recently, complex networks with perturbations have attracted increasing attention because of their wide applications [38-43]. In [40], the global exponential synchronization of linearly coupled neural networks with impulsive disturbance and time-varying delay is studied. A cluster synchronization scheme for uncertain delayed complex networks is investigated in depth in [41]. An adaptive pinning control design is proposed in [42] for the cluster synchronization of coupled complex networks with uncertain disturbances.
Until now, research on the finite-time synchronization of complex networks with nonidentical nodes or uncertain disturbances has mostly concerned asymptotic or exponential synchronization. The fixed-time synchronization of heterogeneous networks with uncertain disturbances, however, has not been fully investigated, and relevant results are rarely reported. In short, it is indispensable and significant to consider the fixed-time synchronization problem of complex networks with nonidentical nodes and uncertain disturbances, which has profound theoretical and practical significance. From the above analysis, we face two difficulties: (i) what conditions are applicable and easy to verify for general complex networks with nonidentical nodes and uncertain disturbances? (ii) How should the controller be designed to overcome the heterogeneity and uncertain disturbance of the network nodes? This paper tries to conquer these two difficulties and realize the fixed-time synchronization of a class of linearly coupled complex networks with nonidentical nodes and uncertain disturbances, thereby further enriching the theoretical results on network synchronization.
Applying a discontinuous control scheme, the fixed-time synchronization problem is analysed for complex networks with uncertain disturbances and nonidentical nodes. Our main contributions can be summarized as follows: (1) for a class of heterogeneous networks with uncertain disturbances, a novel state-feedback discontinuous controller is designed to overcome the influence of heterogeneous nodes and uncertain disturbances on fixed-time synchronization simultaneously; (2) several criteria are proposed for deducing the fixed-time synchronization of the considered networks; unlike most existing results, the obtained conditions are expressed as linear matrix inequalities, which are easy to verify; (3) as special cases, the fixed-time synchronization of complex networks without uncertain disturbances is also considered using some existing controllers, and the corresponding results are given as corollaries. The rest of the paper is arranged as follows. In Section 2, a network model with uncertain disturbances and nonidentical nodes is established, the fixed-time synchronization problem is described, and some necessary definitions and assumptions are given. The fixed-time synchronization conditions are derived in Section 3. Several numerical examples are given in Section 4 to indicate the effectiveness of the proposed results. Section 5 summarizes the conclusions of this paper and puts forward future research directions.
Problem Formulation and Preliminaries
A class of nonlinear systems of N nonidentical nodes with diffusive linear coupling is considered, in which each node is an n-dimensional dynamical system:

ẋ_i(t) = A_i x_i(t) + f_i(t, x_i(t)) + h_i(t, x_i(t)) + c Σ_{j=1}^{N} G_{ij} Γ x_j(t) + u_i(t),  i = 1, …, N,  (1)

where x_i(t) = [x_{i1}(t), …, x_{in}(t)]^T ∈ R^n denotes the state vector of the ith dynamical node; A_i x_i(t) + f_i(t, x_i(t)) describes the dynamics of the ith uncoupled node; h_i(t, x_i(t)) = [h_{i1}(t, x_i(t)), …, h_{in}(t, x_i(t))]^T : R_+ × R^n → R^n is an uncertain vector representing the disturbance; and u_i(t) is the control input. The constant c > 0 is the coupling strength of the concerned networks; Γ = (c_{ij}) ∈ R^{n×n} is the inner coupling matrix, which describes how the components of each pair of nodes are connected, with c_{ij} ≥ 0; and G = (G_{ij})_{N×N} is the coupling configuration matrix, which describes the topological structure and has the diffusive structure G_{ij} ≥ 0 for i ≠ j and G_{ii} = −Σ_{j=1, j≠i}^{N} G_{ij}. In this paper, the driven dynamical node of (1) satisfies

ẋ_0(t) = A_0 x_0(t) + f_0(t, x_0(t)) + h_0(t, x_0(t)),  (2)

where A_0 ∈ R^{n×n}, f_0(t, x_0(t)) ∈ R^n and h_0(t, x_0(t)) ∈ R^n. In fact, most of the well-known chaotic systems can be described by such a dynamical equation, for example the Sprott circuit, Chua circuit, Rössler system and Chen system [42].
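As an aside, the diffusive structure of G (zero row sums) is easy to build from any nonnegative adjacency matrix. A minimal sketch, with a made-up 4-node ring as the example topology:

import numpy as np

def diffusive_coupling(adj):
    """Diffusive coupling configuration matrix: keep off-diagonal
    weights G_ij >= 0 and set G_ii = -sum_{j != i} G_ij."""
    G = np.array(adj, dtype=float)
    np.fill_diagonal(G, 0.0)
    np.fill_diagonal(G, -G.sum(axis=1))
    return G

ring = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
print(diffusive_coupling(ring))        # every row sums to zero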
Definition 1 (see [44]). Complex network (1) is said to be synchronized onto (2) in finite time if there exist a designed feedback controller for system (1) and a constant t* > 0 such that

lim_{t→t*} ‖x_i(t) − x_0(t)‖ = 0 and x_i(t) = x_0(t) for t ≥ t*, i = 1, …, N,   (3)

where t* > 0 is called the settling time and in general depends on the initial state vector values.

Definition 2 (see [44]). Complex network (1) is said to be synchronized onto (2) in fixed time if there exists a fixed settling time T* > 0 such that the synchronization condition (analogous to (3), with settling time T*) holds, where T* is independent of the initial synchronization errors.

In this paper, the goal is to synchronize the state of network (1) onto the driven node (2) in fixed time by designing feedback controllers.
Obviously, controlled complex network (1) can be rewritten in terms of the synchronization errors. Introducing the synchronization errors

e_i(t) = x_i(t) − x_0(t),  i = 1, …, N,   (5)

and subtracting (2) from (1), the error dynamical network model is given by

ė_i(t) = A_i x_i(t) − A_0 x_0(t) + f_i(t, x_i(t)) − f_0(t, x_0(t)) + h_i(t, x_i(t)) − h_0(t, x_0(t)) + c Σ_{j=1}^{N} G_{ij} Γ e_j(t) + u_i(t),   (6)

where the form of the coupling term follows from the zero row sums of G. In order to obtain our main results, some necessary assumptions are listed as follows.
Assumption 1. There exist a constant M_i > 0 and a uniformly symmetric positive definite matrix L_i such that f_i(t, x) satisfies the growth condition (7).

Assumption 2. There exist a constant M_0 > 0 and a uniformly symmetric positive definite matrix L_0 such that f_0(t, x) satisfies the analogous condition (8).

Assumption 3 (see [43]). There exists a time-varying function μ(t) ≥ 0 satisfying condition (9).

Assumption 4. For any i = 1, 2, …, N, the uncertain function vector h_i(t, x_i(t)) is assumed to be continuous and bounded; moreover, there is a known nonnegative number h_max such that

‖h_i(t, x_i(t))‖ ≤ h_max.   (10)

Remark 1. Assumptions 1 and 2 are general and are satisfied by most well-known chaotic systems, for instance the Chua circuit [43], Rössler's system and the discontinuous Chen system. In fact, these systems satisfy condition (11) for some positive constants and for any x = (x_1, x_2, …, x_n)^T ∈ R^n and y = (y_1, y_2, …, y_n)^T ∈ R^n. Using condition (11), one can verify that, with α = max_{1≤i≤n} l^i_{jj}, j = 1, 2, …, n, and M = max_{1≤i≤n} β_i, Assumptions 1 and 2 cover conditions (H2) and (H3) in [25]. Moreover, continuous chaotic systems are also covered as the special case M_i = 0 in Assumption 1 or M_0 = 0 in Assumption 2, for instance the continuous Rössler system, Chua's circuit, the Chen system, the Lorenz system and the logistic differential system. Hence, Assumptions 1 and 2 are quite general, and most popular chaotic systems satisfy them. Assumptions 3 and 4 impose conditions on the activation functions of the kind widely used in the literature [23,35-37].
According to Definition 2, it is clear that the fixed-time synchronization of dynamical network (1) onto (2) can be reduced to the fixed-time stabilization of the error dynamical system (6).
Fixed-Time Synchronization Analysis
In this part, controllers are designed for the fixed-time synchronization problem of complex network (1), and the concerned network can realize fixed-time synchronization under the appropriately designed controllers. First, we give the synchronization controller design for complex network (1); fixed-time synchronization criteria are then obtained based on error system (6). Several corollaries are also obtained for (1) and (2) with identical nodes. For the concerned complex network (1), the control input u_i(t) ∈ R^n, i = 1, …, N, is designed as in (13), where sign(e_i(t)) = diag(sign(e_{i1}(t)), …, sign(e_{in}(t))) and the real numbers p, q satisfy 0 < q < 1, p > 1. The following lemmas are needed to derive the main results.
Lemma 1. Denote υ(t) = V(x(t)). Suppose there exists a continuous function c(·) such that, for any t > 0 with υ(t) > 0, υ(t) is differentiable at t and satisfies the differential inequality (14). Then υ(t) = 0 for t ≥ t_1. In particular, if c(υ) = Qυ^μ, where μ ∈ (0, 1) and Q > 0, then the settling time is estimated by (15).

Lemma 2 (see [47]). For matrices A, B, C and D with appropriate dimensions and a scalar α, the following assertions hold: (A + B) ⊗ C = A ⊗ C + B ⊗ C, (αA) ⊗ B = α(A ⊗ B) and (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD), where ⊗ is the Kronecker product.
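The Kronecker identities in Lemma 2 are easy to sanity-check numerically; a minimal sketch with randomly generated matrices:

import numpy as np

rng = np.random.default_rng(0)
A, B, C, D = (rng.standard_normal((2, 2)) for _ in range(4))

# (A + B) ⊗ C = A ⊗ C + B ⊗ C
assert np.allclose(np.kron(A + B, C), np.kron(A, C) + np.kron(B, C))
# (αA) ⊗ B = α (A ⊗ B)
assert np.allclose(np.kron(0.7 * A, B), 0.7 * np.kron(A, B))
# (A ⊗ B)(C ⊗ D) = (AC) ⊗ (BD)
assert np.allclose(np.kron(A, B) @ np.kron(C, D), np.kron(A @ C, B @ D))
print("all Kronecker identities hold")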
Theorem 1. Consider the concerned complex network (1) with control input (13). If Assumptions 1-4 hold and the control parameters satisfy the associated matrix inequalities, then the driven-response complex networks (1) and (2) achieve fixed-time synchronization under controller (13), with a settling time T* that is independent of the initial conditions.
Proof. For the error dynamical system (6), a Lyapunov function V(t) of the synchronization errors is chosen, and its derivative along the trajectories of (6) is computed and split into several terms. According to Lemma 2, the coupling term can be rewritten using the Kronecker product. For the disturbance term I_2(t), condition (10) in Assumption 4 gives a bound in terms of h_max. From (7) in Assumption 1, the nonlinear terms are bounded with L = diag(L^s_1, L^s_2, …, L^s_N), and by (9) in Assumption 3 the remaining uncertainty is bounded via μ(t). Substituting the bounds (28)-(32) into (27) yields a differential inequality for the derivative of V(t); by use of Lemma 2 and (24), the controller terms contribute negative feedback with D = diag(d_1, d_2, …, d_N). Substituting (36), (37) and (43) into (33) finally gives a differential inequality of the fixed-time type. According to Lemma 4, V(t) converges to zero within a settling time T* as defined in Definition 2, so under controller (13) the considered complex network (1) is fixed-timely synchronized onto the driven node (2) within the fixed time T*. Therefore, the error vectors e_i(t) converge to zero within T*, and the driven-response complex networks (1) and (2) are fixed-timely synchronized under controller (13) within the fixed time T*. The proof is completed.
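To see what fixed-time convergence means in practice, consider scalar error dynamics driven only by a power-law term of the kind appearing in controller (13). A minimal numerical sketch, assuming hypothetical gains (this is not the paper's Chua-circuit example): the convergence time stays below the same bound no matter how large the initial error is.

import numpy as np

def settling_bound(a, b, p, q):
    """Initial-condition-independent bound for
    e' = -sign(e)*(a*|e|**p + b*|e|**q), with p > 1 and 0 < q < 1:
    T <= 1/(a*(p-1)) + 1/(b*(1-q))."""
    return 1.0 / (a * (p - 1.0)) + 1.0 / (b * (1.0 - q))

def time_to_converge(e0, a=2.0, b=2.0, p=1.5, q=0.5, dt=1e-4):
    """Forward-Euler simulation until |e| drops below a small threshold."""
    e, t = float(e0), 0.0
    while abs(e) > 1e-6:
        e -= dt * np.sign(e) * (a * abs(e) ** p + b * abs(e) ** q)
        t += dt
    return t

print("bound:", settling_bound(2.0, 2.0, 1.5, 0.5))        # 2.0
for e0 in (0.1, 10.0, 1e4):                                 # spread initial errors
    print(f"e0 = {e0:g}: converged at t = {time_to_converge(e0):.3f}")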
Remark 2. In recent years, extensive research has been conducted on the finite-time and fixed-time synchronization of complex networks, and many breakthroughs have been made. However, as far as we know, few published papers deal with the fixed-time synchronization of heterogeneous complex networks. Theorem 1 suggests a way to choose the controller to realize fixed-time synchronization for a heterogeneous complex network. The controller consists of three sections: the first two terms overcome the influence of the linear part of the node dynamics; the term −η_i(t)sign(e_i(t)) is introduced to compensate the influence of the disturbance h_i(t, x_i(t)); and the last section, sign(e_i(t))(a|e_i(t)|^p + b|e_i(t)|^q), forces the considered networks to achieve fixed-time synchronization. Now, if M = max_{1≤i≤N} M_i and η(t) = max_{1≤i≤N} η_i(t), then the controllers u_i(t) ∈ R^n, i = 1, …, N, can be simplified to the form (46), where the parameters d_i, a, b, p and q are defined as in (13). Therefore, by the same analysis as in Theorem 1, we obtain Corollary 1, a conclusion similar to [25].

Corollary 1. For the concerned complex networks (1) and (2) under controller (46), if Assumptions 1-4 hold and the control parameters η_i(t) and d_i in (46) satisfy the corresponding inequalities, then the driven-response complex networks (1) and (2) achieve fixed-time synchronization under controller (46), with a settling time independent of the initial conditions.

If the uncertain disturbance is not considered in the complex network model, i.e., h_1 = h_2 = ⋯ = h_N = 0, then the network model degenerates to (49). Let x_0|_{t=0} = x_0(0); the driven network node is then governed by (50), and the corresponding error dynamical system can be rewritten accordingly. With the same controllers as before, a criterion can be obtained for the fixed-time synchronization of the concerned complex networks with nonidentical nodes. By taking h_max = 0 in Theorem 1, one easily obtains the following corollary, whose proof is omitted here.

Corollary 2. Consider complex network (49) with drive node (50) under the set of controllers (46). If Assumptions 1-3 hold and the controller parameters satisfy the corresponding matrix inequalities, then (49) can be synchronized to the state of drive node (50) within a fixed time T*. Furthermore, if h_i = 0, A_i = 0 and f_i = f for i = 0, 1, …, N in (49), then complex network (1) is further reduced to a network with identical nodes, and driven network node (2) changes correspondingly. In this case, the proposed fixed-time synchronization scheme applies to the corresponding complex networks with identical nodes, and the criteria are given in a further corollary.
Conclusions
The fixed-time synchronization problem has been studied for a type of dynamic complex network with nonidentical nodes and uncertain disturbances. By employing Lyapunov function theory, some novel sufficient conditions are provided and further applied to some special cases, such as the identical-node case. Future work may center on applications of the synchronization of complex networks with nonidentical nodes and uncertain disturbances.
Data Availability
The data used to support the findings of this study are included within the article. No other data are used beyond this article.
Conflicts of Interest
The authors declare that they have no conflicts of interest.
HUMAN SEXUALITY AND BREAST CANCER PATIENTS
Sexuality reflects a person's personality, and cancer, regardless of its location, can affect sexuality. Cancer and its treatment have a bio-psycho-social impact on a patient [3]. Research has shown that poor physical health and emotional distress can affect sexual health [4], and cancer survivors have been reported to have sexual problems after cancer therapy [5], following changes in body image. Materials and Methods: Subjects for the study were drawn from those who had come for consultation regarding their physical health, including sexual health. 65 subjects with breast cancer were included in the study. Informed consent was obtained from the cases, and the study was approved by the Institute Ethics Review Board attached to the institute. Basson's sexual response cycle formed the basis for the worksheet given to the patients to record breaks in their sexual response cycle following a sexual encounter with their partners (husbands). Basson's model [5] takes into account the role of intimacy in understanding the women's sexual response cycle and is non-linear in nature; this makes the model suitable for studying the sexual response cycle in women in health and disease. Based on the model, the worksheet was created to understand the sexual response cycle of women with breast cancer. The breaks in the sexual response cycle were found to be due to biological factors such as body image, fatigue and drug therapy, along with psychological factors such as pain, anxiety and depression. The main motivators of sexual response in these patients were physical intimacy and care.
Introduction
"I hate society's notion that there is something wrong with sex. Something wrong with a woman who loves sex." -Alessandra Torre Human sexuality is a complex phenomenon that reflects our personality. According to WHO (2002), sexuality includes sexual orientation, biological instinct, and well-being of the individual. 1 Http://www.granthaalayah.com ©International Journal of Research -GRANTHAALAYAH [208] It can be influenced by biological, psychological, socio-cultural and religious factors. Even though sexuality is an important element in the health-illness continuum, little or no attention is paid sexuality during cancer care 2 .
Because sexuality reflects a person's personality, cancer, regardless of its location can affect sexuality. Cancer and its treatment have a bio-psycho-social impact on a patient. 3 Research has shown that poor physical health and emotional distress can affect sexual health 4 .Cancer survivors were reported to have sexual problem after cancer therapy. 5 Following changes in body image.
Basson's sexual response cycle 5 takes into account the role of intimacy in understanding the women's sexual response cycle and it is non-linear in nature. This makes the model suitable for studying sexual response cycle in women in health and disease. Based on the model the work sheet was created to understand the sexual response cycle of women with breast cancer. The model takes into account the role of intimacy as one of the major factors that make women appreciate and enjoy sex with the partner. Such intimacy is not felt by majority of the breast cancer patients and it was indeed one of the much neglected areas of women's health in breast cancer patients. 6 ( Figure 1)
Methods
The women who participated in the present study were drawn from a larger group of patients with various psycho-sexual problems who had come for sex counseling at Salem, TN, India. The sex counseling center was part of The Salem Clinical Diagnostic Center, which specializes in hormonal assays. Informed consent was obtained from the respective patients. 65 patients who underwent chemotherapy and radiotherapy after mastectomy were included in the study. A worksheet was prepared following Basson's model of the sexual response cycle [5] and given to the participants. They were asked to recollect and reflect on a recent sexual encounter and to address it in a stepwise manner: starting with the reasons for sex; initiating and continuing the sexual experience based on those reasons; followed by the stimuli and context helping the arousal phase, helped or hindered by biological and psychological factors; leading to a combined response reflected as sexual arousal and increased sexual desire; and finally the outcome of the experience resulting from the initial sexual encounter.
The women selected for the study were interviewed for eligibility and evaluated by certified counselors and a psychiatrist. Written informed consent was obtained from the patients. The sexual response cycles entered in the worksheets, returned by email, were studied to identify any breaks in the cycle.
Data Analyses
Breaks in the sexual response cycle were identified using conceptual content analysis, with a negative response following the sexual encounter between the partners considered a break in the cycle [8,9]. Any other reason to avoid continuing the cycle, absence of arousal, medications or tiredness hindering the response, and mastectomy or the loss of hair or body weight were taken as factors hindering the continuation of the sexual response cycle and were counted as breaks. The age of the participants ranged from 45 to 60 years (M = 46; SD = 12.5). All of the participants were married (100%).
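The counting step of this content analysis is simple enough to express directly. A minimal sketch, with entirely made-up worksheet codings (the phase names and values are illustrative, not the study's data):

# Hypothetical worksheet codings: one dict per participant; "NO" marks
# a phase where the cycle broke following the encounter.
worksheets = [
    {"reasons": "NO", "initiation": "NO", "arousal": "YES", "desire": "NO"},
    {"reasons": "YES", "initiation": "NO", "arousal": "NO", "desire": "NO"},
]

def count_breaks(sheet):
    """Count the phases coded 'NO', i.e., the breaks in one cycle."""
    return sum(1 for value in sheet.values() if value == "NO")

breaks = [count_breaks(s) for s in worksheets]
print(breaks, sum(breaks) / len(breaks))   # per-participant counts and mean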
Breaks in the Cycle
Interruptions in the participants' sexual response cycle were analyzed through concept content analysis. Out of a possible 11 breaks, an average of 6.4 breaks in the cycle was observed (M = 6.4; SD = 1.85); the findings are summarized in Table 1. The reasons for sex that carried negative emotions or a lack of desire to have sex or participate in sexual activity, and the fear following the diagnosis of cancer and its therapy, are shown in Table 2. The initiation of engaging in sex then followed with a NO or YES, and there were more NO than YES responses (Tables 1 and 2).

Stimuli like touch, cuddling or a kiss and the context or environment were not significant players in the sexual response cycle. Rather, the loss of a breast and the sudden bodily changes in the patient, followed by the partner's hesitation to indulge in sex, had a negative impact, and receptivity to such negative responses mitigated the desire to have sex. The need for intimacy felt by the patients, the wish to be cuddled and the protective embrace of the partner were the psychological factors that promoted sexual activity.
Feedbacks
The frequency of feedback in the different phases of the cycle was calculated and is expressed in Table 3.
• Reasons to have sex: mostly intimacy.
• Stimuli and context: touching, cuddling, sometimes kissing. The initiation of the sexual response cycle brought the patient close to the partner and, to some extent, made her partner happy.
• Biological factors: loss of body image, loss of a body part, pain and fatigue were the major factors that hindered and broke the sexual response cycle.
Discussion
Masters and Johnson described a "sexual response cycle" with four phases: 1. excitement, 2. plateau, 3. orgasm and 4. resolution. This model captured only the physiological changes during the response cycle and assumed that each phase occurs one after the other without any overlap. This linear model was improved by Kaplan's model, which included desire as a component along with the excitement and orgasm phases; this model was also linear in nature and envisioned orgasm as the end of the cycle. Basson's model was found to be relevant for the present study as it incorporates intimacy as one of the major components of the cycle.
The following figure shows that receptivity to sexual stimuli is hindered by depression, drugs and fatigue, apart from low self-esteem due to body image. This inhibition did not enhance subjective sexual arousal, producing a moderate aversion to sexual activity and therefore preventing regular sexual activity. During the sexual response cycle, genital arousal and the subjective cognitive appraisal of sexual stimuli need to be synchronous for enhanced sexual activity and an orgasmic response. Such an orgasmic response leads to the release of oxytocin, minimizing menstrual tension, relaxing the body and reducing the production of toxins that may be carcinogenic. It also includes a unique sexual behavior which, if it culminates in an ejaculatory response, results in ecstasy, sometimes spiritual communion, and relaxation. A desynchrony between the genital and subjective emotional responses to sexual stimuli may lead to loss of interest in sexual activity. The loss of genital sensation along with subjective arousal affects the whole health of a cancer patient. Therefore, it is worthwhile to understand the sexual response cycle of a breast cancer patient so as to suggest or intervene therapeutically to salvage the cycle. The intimacy component will therefore help propagate the sexual response.
It is suggested that enhanced sexual arousal followed by an orgasmic response releases oxytocin and endorphins, which have a sedative effect (Figure 4). Such a sedative effect may help a cancer patient overcome anxiety expressed as frigid behavior. One must therefore understand that human sexuality is more than a biological phenomenon: it is a lived experience that shapes how a woman views her personality and her body. Most of the time, health professionals concentrate on treating the disease rather than addressing the patient's sexuality. Many health professionals feel uncomfortable discussing sexuality with the patient because of cultural issues as well as a lack of information regarding human sexuality in cancer patients.
Human sexuality is one of the major components of the well-being of an individual, and therefore studies related to sexuality (WHO, 2000) in cancer patients need to be undertaken to promote the health of cancer patients.
ORIGINAL RESEARCH: HEAD AND NECK SARCOMAS: OUR EXPERIENCE AT A TERTIARY CARE CENTER IN RABAT, MOROCCO
Introduction: Head and neck sarcomas are rare, malignant and very heterogeneous tumors. The difficulty of managing these sarcomas requires the intervention of a multidisciplinary team to improve the prognosis. The aim of our study is to report our series (epidemiological, histological and progressive characteristics) and evaluate our treatment results. Patients and methods: This is a retrospective study of 42 cases of head and neck sarcoma, assembled at the ENT and Maxillofacial Surgery department of the University Hospital of Rabat over a period of 5 years (2010-2015). All sarcomas were confirmed by histological examination with immunohistochemical study. Results: There were 29 men and 13 women. The average age of our patients was 35.5 years (extremes of age: 13 and 70 years). All patients received a CT scan, with an MRI scan in 21 cases. The remote extension assessment showed lung metastases in 8 cases. The most frequently found histological type was synovial sarcoma, noted in 13 patients (30.9%), followed by osteosarcoma (21.2%). Treatment was curative in 19 cases, based on surgery with radiotherapy. Total remission was noted in twelve patients. Conclusion: The therapeutic approach combines surgery and chemoradiation. However, in the absence of adequate and effective treatment protocols, it is necessary to establish the surgical indication in time to ensure an excision as complete as possible.
INTRODUCTION:
Sarcomas of the head and neck are rare tumors, representing only 1% of all malignant tumors of the head and neck and 5% of all sarcomas [1,2]. Their incidence is 3 to 4.5 per 100,000. They are a heterogeneous group of malignant tumors that share the same mesenchymal origin and are characterized by slow growth, loco-regional aggressiveness and distant metastatic potential. The etiology of sarcoma is not yet fully elucidated.
The aim of our study is to determine, through our series of cases selected and managed in the ENT and Maxillofacial Surgery department of the University Hospital of Rabat, together with a review of the literature, the epidemiological, histological, clinical and progressive characteristics and, above all, the treatment modalities of head and neck sarcomas and their prognosis.
MATERIAL AND METHODS
The medical records of all patients with head and neck sarcomas diagnosed and/or managed in the ENT department of Rabat from October 2010 to April 2015 were reviewed. All patients whose diagnosis was confirmed by a pathological report with immunohistochemistry were included in this study, including those with radiation-induced sarcomas. We recorded demographic and clinicopathological characteristics, including age, sex, symptoms, tumor site, size, histology, treatment modalities and evolution. Histological grade was evaluated according to the classification of the French Federation of Comprehensive Cancer Centers (UNICANCER Federation) [2]. The diagnosis of radiation-induced sarcoma was based on the criteria of Arlen et al. [2]. All statistical analyses were performed using SPSS software.
RESULTS
Over a period of 5 years, 42 patients were diagnosed with sarcoma in the head and neck region. The series included 23 men and 19 women, aged from 3 to 67 years, with an average age of 31.6 years. The average time to consultation was eight months, with extremes ranging from 1 month to 4 years. Regarding medical history, 7.1% of our patients were smokers and 9.5% had been treated for a previous cancer. The symptoms prompting consultation were the presence of a mass in 30 patients (71.4%), limitation of mouth opening in 5 cases (11.9%), dysphonia in 3 cases (7.1%) and epistaxis in 4 cases (9.5%) (see Figures 1 and 2). The most frequently affected sites were bone: mandibular sites (12 cases, 28.6%) and maxillary sites (4 cases, 9.5%); the salivary glands were affected in 7 patients (16.7%: 5 parotid and 2 submaxillary cases), and pharyngolaryngeal sites represented 7.2% (3 cases). The exact distribution of the different tumor sites is detailed in Table 1 and Graph 3. All of our patients received a CT scan, with an MRI scan in 21 cases (Figure 2). The remote extension assessment showed lung metastases at diagnosis in eight patients, whose evolution was marked by rapid death after diagnosis (Figure 3). Treatment was curative in 19 cases, based on surgery, followed by radiotherapy in 4 cases and preceded by neoadjuvant chemotherapy in 6 cases (Figure 4); the remaining patients received palliative chemoradiotherapy. Most tumors were greater than 5 cm in size (75%). The most common modality was surgery (30 cases, 71.4%), followed by radiotherapy (24 patients, 57.1%) and chemotherapy (23 patients, 54.8%). The surgical margins were clear in 17 patients, close in 4, involved in 5 and not determined in 11 patients. Total remission was noted in 16 patients; there were 10 cases of local recurrence, 5 cases of distant metastasis and 6 deaths (Figure 5). The average survival was 20 months. The 3-year survival was estimated at 50% and the 5-year survival at 9.5%. A statistical analysis was performed using univariate and multivariate logistic regression to determine the most important prognostic factors. This analysis indicated that surgery was the only factor determining a favorable evolution (OR 1.8; 95% CI 1.1-2.5; P = 0.01).
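For readers unfamiliar with how such odds ratios and confidence intervals are produced, a minimal logistic regression sketch follows; the data below are randomly generated placeholders, not the series' data, and the covariates are purely illustrative:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 42                                    # cohort size, as in the series
surgery = rng.integers(0, 2, n)           # hypothetical binary covariates
size_gt_5cm = rng.integers(0, 2, n)
favorable = rng.integers(0, 2, n)         # placeholder outcome

X = sm.add_constant(np.column_stack([surgery, size_gt_5cm]).astype(float))
fit = sm.Logit(favorable, X).fit(disp=False)
print(np.exp(fit.params))                 # odds ratios
print(np.exp(fit.conf_int()))             # 95% confidence intervals
print(fit.pvalues)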
DISCUSSION
Head and neck sarcomas are very rare: they account for only 1% of primary tumors of the head and neck region [3] and 4-10% of sarcomas overall [4]. In most studies, primary sarcomas of the head and neck represent only 5% to 15% of all sarcoma cases in adults [5]. In the pediatric population, however, 35% of all sarcomas occur in the head and neck [6]. Sarcomas occur at any age, but head and neck sarcomas occur at a young age [7]. A biphasic presentation has been noted by some authors, with 80-90% affecting adults and 10-20% affecting young people [8,9]; in the pediatric population, one in three sarcomas occurs in the head and neck [10]. In our series, the average age was 31 years, i.e., young adulthood, which is consistent with the literature. The predominance of one sex varies from one series to another; in our series, male predominance was clear (Table 4). Sarcomas have various cellular origins but are grouped together because of their clinical, progressive and treatment similarities as well as their prognosis [7-9,11]. They are characterized by slow growth, loco-regional aggressiveness and distant metastatic potential [10,11]. The etiology of sarcoma is not yet well elucidated [7], but some factors may be responsible, such as exposure to ionizing radiation, exposure to certain chemicals and association with genetic mutations [7]. Environmental and immunological factors have also been implicated [11,12], and some authors suggest that trauma and chronic infections may play a role in the development of sarcomas [11]. Most sarcomas of the head and neck present with nonspecific symptoms; in 65 to 95% of cases, they manifest as a palpable mass. In our series, a visible or palpable swelling was the most common presenting sign (71.4%), followed by dysphonia (7.1%), epistaxis (9.5%) and limitation of mouth opening (11.9%). The lung parenchyma is the preferred metastatic site of soft tissue sarcoma: an estimated 20-38% of patients will develop pulmonary metastases during their illness. The diagnosis of primary sarcomas is often difficult because of the rarity of these tumors, the wide variety of histogenetic types, and the existence of benign, pseudosarcomatous and sometimes misleading lesions. The majority of head and neck sarcomas are soft tissue sarcomas, with only 20% arising from bone or cartilage [8-10,13].
In descending order, osteosarcoma, rhabdomyosarcoma, malignant fibrous histiocytoma, fibrosarcoma and angiosarcoma are the most frequently encountered histologic types in the head and neck region and represent approximately 50% of all sarcomas of this region [10,11]. Over the past ten years, many genetic alterations have been described, allowing a molecular classification [14]. Treatment depends on the histologic type, stage, location, tumor size and patient age [11,15], and includes several modalities: surgery, radiotherapy and chemotherapy. The overall five-year survival is between 44% and 80%, and disease-free survival varies between 45% and 62% [11]. This variability is due to the heterogeneity of these tumors and the lack of standardization of treatment modalities. For head and neck sarcomas, tumors greater than 5 cm, high histologic grade and involved resection margins correlate with an increased local failure rate and decreased disease-free survival [8,11,12]. In our series (see Table 3), the survival rate was low (less than 25% at 5 years), which can be explained by the late presentation, by which time the tumor was already advanced, exceeding 5 cm in size in 68% of cases.
PAINT I: the effect of art therapy in preventing and managing delirium among hospitalized older adults in the PAINT I study—a proof-of-concept trial
Key Summary Points
Aim: The aim of the study was to determine the effectiveness of art therapy as a preventive and therapeutic approach in geriatric patients at high risk for delirium.
Findings: The study was not able to prove the hypothesis that a specific art therapy intervention can prevent delirium in patients on an acute geriatric ward, but the intervention seemed to have a positive effect on the duration of delirium. No adverse events were registered in relation to art therapy.
Message: Art therapy might be an innovative additional non-pharmacological approach in the management of delirium.
Supplementary Information: The online version contains supplementary material available at 10.1007/s41999-022-00695-5.
Introduction
Delirium is one of the most common complications in hospitalized older patients. Its consequences are far-reaching, with an increased risk of long-term cognitive and functional decline as well as a 1-year mortality of up to 30% [1]. Non-pharmacological and individually tailored approaches are widely accepted to be effective in delirium prevention [2]. In contrast to pharmacological prevention strategies, which currently lack robust evidence, there is strong research evidence to support the promotion and further evaluation of non-pharmacological interventions to prevent delirium in hospital [3,4]. In clinical practice, NICE guidance recommends the provision of multicomponent interventions tailored to the individual patient's needs and care setting [5]. Recommended interventions include careful evaluation of daily medication, provision of vision and hearing adaptations, hydration, nutrition, maintenance of a structured sleep rhythm, as well as stimulation, reorientation and therapeutic activities.
Many of the studies evaluating non-pharmacological delirium interventions focus on the prevention of delirium and do not address patients with delirium on admission [6,7]. In addition, many of the proposed interventions require professional expertise and, therefore, increased staff resources. The development of new, innovative non-pharmacological delirium interventions is thus highly relevant to ensure age-friendly hospital care.
The WHO report "Health Evidence Synthesis report: what is the evidence for the role of the arts in improving health and well-being in the WHO European region" emphasizes the role of the arts, including visual arts, as effective, safe, and cost-effective in healthcare settings [8]. Nonetheless, the integration of this multifaceted therapy into European health systems is still lacking.
Currently, there is a lack of evidence regarding the effectiveness of art therapy, particularly in managing delirium [9,10]. Art therapy is facilitated by professional therapists and can be tailored to the individual patient's needs, with a low risk of adverse events. It offers a potential therapeutic option in the management of patients at high risk of delirium. This proof-of-concept trial is part of the PAINT-study (Preventive Art INtervention Therapy), a large-scale research project evaluating the effectiveness of art therapy for older adults in different care settings (PAINT I on an acute geriatric ward / PAINT II in a geriatric day clinic). The following study description only refers to PAINT I.
The aim of this study was to determine the preventive effect of a newly developed concept of art therapy on the incidence of delirium among older patients hospitalized on an acute geriatric ward. The secondary goal was to evaluate its impact on the duration of delirium in patients with existing delirium.
Study design
This single-center controlled trial was designed to determine the effectiveness of an adapted art therapy intervention in patients ≥ 70 years old admitted to an acute geriatric ward. The duration of the study was two years (09/2017-08/2019). We used sequential assignment of the study participants: 3 months of recruitment for the intervention group followed by 3 months of recruitment for the control group. While the study nurses recruited the control group, our art therapists delivered an intervention for another trial (PAINT II), which took place in a geriatric day clinic with a different study population and a different intervention (see figure in the supplementary material). After obtaining informed consent, patients in the intervention group received a twice-daily, individually tailored art therapy intervention in addition to usual care (control group) during weekdays. The intervention followed a newly developed therapy concept, which comprised structure-giving templates, theme-centered work, a reduced choice of materials, orientation to individual needs, and facilitation of non-verbal expression. All patients were screened daily for delirium using the German version of the Nu-DESC (Nursing Delirium Screening Scale) [11].
The study was approved by the local Institutional Review Board and the ethical committee (Freiburger Ethikkomission International, Nr.017/1504) and registered in the German Clinical Trials Register (DRKS00012417).
Setting and selection of participants
The study was conducted in a 60-bed acute geriatric ward of a German urban university hospital. During a pilot phase, which included 10 patients, the assessments and intervention concept were tested for feasibility. All patients admitted during the given time period were screened for eligibility. Inclusion criteria were age ≥ 70 years, given informed consent, and at least one of the following three conditions: pre-existing dementia according to the patient's records or medical history, including information given by relatives and caregivers; delirium in the past medical history; or any formal care or dependency in activities of daily living. An initial positive screening for delirium, whether conducted in the Emergency Department (4-AT, cut-off ≥ 4) or on the geriatric ward (Nu-DESC, cut-off ≥ 2), was not a contraindication for participation. Patients were excluded from the study if informed consent could not be obtained, the patient did not speak German, was isolated for infection control reasons, required end-of-life care, was admitted from a hospital ward other than the A&E department, or if art therapy was not feasible, for example during an acute psychotic episode. Poor vision or blindness was not an exclusion criterion.
Interventions
Following a comprehensive geriatric assessment conducted by a trained multidisciplinary team, participant baseline characteristics were recorded, including sociodemographic data, comorbidities (CIRS-G), frailty (Clinical Frailty Scale), mobility before admission (Parker Mobility Score), ability to perform basic activities of daily living (Barthel Index) and cognitive status (Mini-Mental State Examination, MMSE). Presence of delirium on admission was screened with the 4-AT (Emergency Department) and the Nu-DESC (geriatric ward, study nurses and geriatric nurses) [11,12]. In case of dissent or doubt, a geriatrician re-evaluated the diagnosis using the DSM-IV criteria. All participants were screened daily for delirium by a trained study nurse using the Nu-DESC (Monday-Friday). The Nu-DESC was conducted as an evaluation of the last 24 h, consulting patients as well as nurses of different shifts to obtain the relevant information. On weekends, the Nu-DESC was conducted by trained ward staff and retrospectively verified by a geriatrician (KS) following a review of patients' medical records. Neither the study nurses nor the ward staff were blinded, as many patients mentioned the interventions in their communication with the ward staff. Both the control and the intervention groups received usual care, which included delirium-preventive elements such as avoidance of dehydration, nutritional support, regular mobilization, and cognitive stimulation. These care aspects were delivered by geriatric nurses, physiotherapists and occupational therapists.
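For illustration, the screening rule described above can be written as a short function. The cut-offs (4-AT ≥ 4 in the Emergency Department, Nu-DESC ≥ 2 on the ward) are taken from the text; the function and variable names are hypothetical.

```python
from typing import Optional

def screen_positive(four_at: Optional[int] = None, nu_desc: Optional[int] = None) -> bool:
    """Return True if either delirium screening instrument is positive.

    Cut-offs as used in the study: 4-AT >= 4 (Emergency Department)
    or Nu-DESC >= 2 (geriatric ward). None means not administered.
    """
    positive_4at = four_at is not None and four_at >= 4
    positive_nudesc = nu_desc is not None and nu_desc >= 2
    return positive_4at or positive_nudesc

# Example: 4-AT of 2 in A&E but Nu-DESC of 3 on the ward -> positive screen
assert screen_positive(four_at=2, nu_desc=3) is True
```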
In the intervention group, additional individual art therapy took place twice daily for 25 min using a mobile studio. The intervention followed a study-specific adapted concept of art therapy as described above. To facilitate orientation, enable creative work, and serve as a recognition factor, the patient chose from two templates (circle or square) at the beginning of each intervention. The therapeutic approach was tailored individually to patients' medical condition and resources (stimulating, stabilizing, reducing anxiety and relaxing), but followed an underlying structure of: (1) description of patients' mood, (2) creative activity, and (3) discussion of the artwork including patients choosing a title for it. Observations made by the art therapists as well as feelings and thoughts expressed by the participants were recorded throughout every intervention.
The intervention took place at the bedside using a mobile studio and a defined set of material ( Fig. 1). Art therapy intervention was suspended if the patient declined to participate, the present medical condition did not allow participation, or if the patient required urgent medical intervention that could not be delayed. The art therapy intervention ended at time of patients' discharge.
Outcomes and data analysis
The primary outcome measure was the incidence of delirium. The secondary outcome measure was the duration of delirium in patients with delirium during hospitalization. Statistical analysis was conducted by statisticians not involved in the data collection process.
Due to the lack of similar estimates in the literature (given the exclusion of patients transferred from another ward and of infectious patients), we mainly considered the feasibility of the project when deciding on the sample size. We aimed to include 360 patients, 180 in the intervention group and 180 controls.
Data were excluded from the final analysis if endpoints were not reported due to transfer to another ward, or if the length of hospitalization was exceptionally short or long (≤ 4 days or ≥ 21 days; Fig. 2). It was deemed that during short stays the art therapy interventions were too few to influence the outcome, whereas longer stays are associated with severe illness and complications, which may affect both outcome and intervention; hence patients with exceptionally short or long stays were excluded. Continuous variables are presented as means or medians and categorical variables as numbers and percentages. A Mann-Whitney U test was performed to determine the effect of art therapy on the number of days spent with delirium. In a later step, we repeated the Mann-Whitney U test after stratifying by dementia diagnosis.
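As a sketch of this analysis, the snippet below shows how the Mann-Whitney U test and the rank-biserial effect size reported in the Results could be computed. The day counts are hypothetical stand-ins, not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical days-with-delirium per patient (not the study data)
control = np.array([5, 5, 6, 7, 7, 8, 9, 10, 10, 12, 7])
intervention = np.array([2, 2, 3, 3, 4, 4, 4, 5, 6, 7, 8, 9, 10, 4])

u_stat, p_value = mannwhitneyu(intervention, control, alternative="two-sided")

# Rank-biserial correlation as effect size; the sign convention
# depends on which group is passed first.
n1, n2 = len(intervention), len(control)
rank_biserial = 2 * u_stat / (n1 * n2) - 1

print(f"U = {u_stat:.1f}, p = {p_value:.3f}, rank-biserial r = {rank_biserial:.2f}")
```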
Results
During the study period, 906 patients aged ≥ 70 years were admitted to the acute geriatric ward and screened for eligibility. 655 did not meet the inclusion criteria and 113 declined to participate in the study. 138 patients were enrolled, with 72 patients in the intervention group and 66 in the control group. Six patients in the intervention group had to be excluded because the endpoints were not reported, owing to unplanned transfers to other departments or new onset of an infectious disease requiring isolation for infection control. An additional 25 patients were excluded because they had an exceptionally short or long length of stay (13 patients in the intervention and 12 patients in the control group). Therefore, 53 patients in the intervention and 54 patients in the control group were included in the main analysis (Fig. 2).
The median age of the study cohort was 86 years (interquartile range (IQR) 81-90). 75 participants (70.1%) were female. During the initial comprehensive assessment, the median Clinical Frailty Scale score was 6.0 (IQR 5.0-6.0), the median Barthel Index was 65 (IQR 45-75) and the median Parker Mobility Score was 4.0 (IQR 3.0-6.0). Patients in the intervention group participated on average in 9.8 (SD 4.8) art therapy sessions. Participants' characteristics, as displayed in Table 1, were well balanced between the intervention and the control group.
Incidence of delirium and length of delirium (days with delirium)
Of the 107 patients included in the final analysis, 17 participants (15.9%) had delirium during the first screening. Of the 90 participants who did not have delirium on admission, 8 (7.5%) subsequently developed delirium during their hospital stay. These were equally distributed between the intervention (n = 4, 7.5%) and the control group (n = 4, 7.4%). Most of our study population (n = 82, 77%) did not spend any days with delirium. Due to the very low incidence of delirium in both groups, statistical analysis for the incidence of delirium was not performed.
We observed a statistically non-significant reduction in the number of days patients spent with delirium in the intervention group compared with the control group. Among patients with delirium, the median duration of delirium was 7 days (IQR 5-10, n = 11) in the control group vs. 4 days (IQR 2.25-8.75, n = 14) in the intervention group (Mann-Whitney U test, p = 0.26, rank-biserial r = -0.27; see Fig. 3).
In the sensitivity analyses, there was no significant difference in the number of days with delirium in the intervention group compared to the control group (see supplementary material).
Discussion
Delirium is a common neuropsychiatric syndrome among hospitalized older people and is associated with adverse outcomes including prolonged hospital admission and increased risk of mortality [16].
Non-pharmacologic strategies, frequently implemented by nursing staff, have been proven to be effective in the primary prevention of delirium and typically comprised of multicomponent interventions [17]. To our knowledge, no data exist on the effectiveness of art therapy as part of a tailored multicomponent intervention on delirium prevention. Our proof-of-concept trial addresses this research gap by determining the preventive effect of art therapy on the development of delirium among hospitalized older adults on an acute geriatric ward, who are a high-risk group, and on the incidence and duration of delirium.
Our key study findings are as follows: (1) the study was not able to provide evidence for the hypothesis that the art therapy intervention lowered the incidence of delirium in patients of an acute geriatric ward; (2) the adapted art therapy intervention seems to have a positive effect on the duration of delirium; and (3) no adverse events were registered in relation to art therapy in this patient group.

Despite the statistically non-significant results (most likely due to the small number of patients with delirium), our study suggests that supplementing usual care in the acute geriatric setting with art therapy may have a positive effect on the duration of delirium. This association was more pronounced in patients without dementia.
Multicomponent interventions have been proposed for inclusion in delirium management strategies, and their implementation has been recommended in several practice guidelines [5,18]. Our study adhered to the NICE recommendations of assessing the risk of delirium within 24 h of admission and administering individually adapted multicomponent interventions. Both the control and the intervention group received comprehensive geriatric care that included delirium-preventive elements. The additional intervention of art therapy as a psychotherapeutic treatment enabled an individually tailored intervention focusing on stimulation, (re)focusing, as well as relaxation and anxiety reduction. Art therapy forms part of the arts therapies together with music, dance and drama therapy, but scientific research on art therapy for older people is scarce.
Although delirium research has increased exponentially over the last decade, RCTs on non-pharmacological delirium interventions are still lacking, with many of the studies providing only moderate-quality evidence. Several of the studies randomized fewer than 100 participants [3,19]. In our trial, 906 patients were assessed for eligibility, but only 107 complete data sets could be analyzed. This high exclusion rate can be explained by the applied exclusion criteria, such as missing consent, prior hospital admission with secondary transfer to the geriatric ward, or an acute infectious disease that ruled out art therapy for infection control reasons. The high exclusion rate also illustrates one of the main challenges of our study, the inclusion of participants; as a result, we failed to reach the expected sample size.
Incidence of delirium during the study period was observed in only 7.5% (n = 8) of patients, so we were not able to show a primary preventive effect of art therapy in this patient group. The low rate of new-onset delirium in both groups (control group n = 4 and intervention group n = 4) can additionally be explained by the comprehensive usual care, which included other delirium prevention measures received by participants in both groups. Our study was conducted on an acute geriatric ward with skilled nurses, doctors, and therapists. Usual care included multicomponent interventions such as hydration, regular mobilization, nutritional support, and basic cognitive stimulation. Art therapy was implemented as an additional intervention. Several of the interventions that reported a decrease in delirium incidence were conducted in orthopedic/orthogeriatric settings and only a few in general medical or geriatric hospital environments [17]. Furthermore, most of the interventions were compared to usual care that did not include any evidence-based approach targeted at delirium risk factors. Nevertheless, the overall occurrence rate of delirium among our study group was 23% (n = 25), which corresponds with the existing literature [3]. Among other variables such as comorbidities and severity of the underlying disease, the duration of delirium is associated with adverse consequences. Morandi et al. described a 10% increase in in-hospital mortality among older SARS-CoV-2 patients with each day spent with delirium [21]. Therefore, non-pharmacological delirium interventions play a vital role in delirium management. Only a few interventional studies in delirium have focused on the length of delirium, most of them pharmacological. Among studies investigating multicomponent non-pharmacological interventions, Jeffs et al. were not able to show a positive effect on the incidence and length of delirium after implementing an enhanced exercise and cognitive program [22].
Another non-pharmacological intervention study on delirium that included the provision of clocks, calendars, glasses, hearing aids, familiar objects, and reorientation provided by family members in acute medical wards, did not shorten duration of delirium [7].
In our study, cognitive stimulation, reorientation and assistance with concentration were important elements of the art therapy intervention. The median duration of delirium was 7 days (IQR 5-10) in the control group and 4 days (IQR 2.25-8.75) in the intervention group. The length of delirium varies considerably in the few studies addressing this topic [7]. The assessment method and the frequency of its application influence this parameter. In our study we assessed days with delirium using the Nu-DESC as an evaluation of the 24 h prior to the assessment. This might be the reason that the duration of delirium was longer than in other studies. The median length of stay (LOS) of our study population was 10 days (IQR 7-16) in the intervention group and 10.5 days (IQR 7-16.8) in the control group. This might appear quite long but is in accordance with the mean LOS on the ward, which includes early rehabilitation programs for severely ill patients.
Art therapy focusses on the process and not the finished art product. The underlying emotional experience during the intervention, influenced by the individual patient's background, is at the center of the therapeutic approach. Whilst the provision of therapeutic interventions such as art therapy for hospitalized older people is often a logistical challenge, we have shown that providing art therapy for older inpatients at the bedside is feasible. Art therapy enables patients to expand their communication options and express their experiences during delirium, which is essential for people with delirium [23]. Furthermore, the art therapists' documentation recorded no adverse events caused by the intervention.
Limitations of the study
Our study has several limitations that have to be considered. Due to the smaller-than-planned number of participants recruited and therefore included in the final analysis, we were unable to draw conclusions about the effect of our intervention on the incidence of delirium.
Nonetheless, findings from this study will help to inform a future multicentre study to determine the effectiveness of the intervention and increase the generalizability of the findings. Another limitation of the study was the exclusion of infectious patients for infection control reasons. Infection is one of the major triggers of delirium, so excluding this patient group (n = 79) may have affected the results for both the incidence and the duration of delirium. Only medical geriatric patients were included in the study. Postoperative older patients are also at high risk of developing delirium and could benefit from the intervention.
Diagnoses of dementia were taken from patients' records but were often not verified by a formal assessment. An assessment at the moment of admission to our ward was not possible due to the underlying acute illness. There is therefore a possibility that some patients carried a diagnosis of dementia or a former diagnosis of delirium in their records without actually having had either.
Furthermore, art therapy is a resource that is not widely available and will be limited to places where interprofessional co-management is available.
Conclusion
Although the study did not allow a statement on the preventive effect of art therapy in this acute geriatric setting, its findings suggest that art therapy as part of a multicomponent intervention in delirium management may help to reduce the duration of delirium among hospitalized older adults. The intervention was feasible, showed no adverse events, gave additional insight into delirium experiences, and enabled patients to communicate non-verbally. Future studies evaluating the effectiveness of art therapy in different clinical settings (e.g., postoperatively) are needed.
Author contributions KS, JM study concept and design, literature search, drafting the manuscript. BH: data extraction and synthesis, statistics, JM, SL, BH, MG: study concept, critical revision of manuscript for intellectual content. All authors read and approved the final manuscript.
Declarations
Conflict of interest/competing interests On behalf of all authors, the corresponding author states that there is no conflict of interest.
Technology roadmap for development of SiC sensors at plasma processes laboratory
Recognizing the need to consolidate the research and development (R&D) activities in microelectronics fields in a strategic manner, the Plasma Processes Laboratory of the Technological Institute of Aeronautics (LPP-ITA) has established a technology roadmap to serve as a guide for activities related to the development of sensors based on silicon carbide (SiC) thin films. These sensors are also of potential interest to the aerospace field due to their ability to operate in harsh environments such as high temperatures and intense radiation. In the present paper, this roadmap is described and presented in four main sections: i) introduction, ii) what we have already done in the past, iii) what we are doing at the moment, and iv) our targets up to 2015. The critical technological issues were evaluated for different categories: SiC deposition techniques, SiC processing techniques for sensor fabrication, and sensor characterization. This roadmap also presents a shared vision of how R&D activities in microelectronics should develop over the next five years in our laboratory.
INTRODUCTION
Silicon carbide (SiC) has been widely studied as an electronic material since 1959, when Shockley, the inventor of the bipolar transistor, recognized this material as essential to enable the development of microelectronic devices that can withstand harsh environmental conditions, such as high temperatures and intense radiation, where silicon cannot be used or has limited applications (Shockley, 1959). The potential of SiC for these applications is due to its inherent properties, such as excellent thermal stability, high resistance to chemical attack, high hardness, wide bandgap, high electric field breakdown and high saturation velocity of electrons (Rajab, 2005).
Several techniques for obtaining thin films and bulk crystals of SiC have been developed. Some companies that manufacture crystalline silicon wafers also offer bulk SiC wafers up to 3 inches in diameter. However, a SiC wafer has an average price fifteen times that of a Si wafer with the same dimensions (Muller et al., 2001). Besides the high cost, other problems with the use of SiC substrates are the difficult micromachining process and the high density of defects (Wu et al., 2001). In this context, there is growing interest in techniques for depositing SiC films on Si or SOI (Silicon-On-Insulator) substrates. These films can be produced in crystalline and amorphous forms.
Crystalline SiC films are produced by techniques that use temperatures higher than 1000°C, such as Chemical Vapor Deposition (CVD), Molecular Beam Epitaxy (MBE) and Electron Cyclotron Resonance (ECR) (Sarro, 2000). The high temperatures involved generally make it impracticable to process these films in conjunction with conventional microelectronics processes. Hence, plasma-assisted techniques such as Plasma Enhanced Chemical Vapor Deposition (PECVD) and sputtering, which allow SiC films to be obtained at temperatures below 400°C, are very attractive (Prado, 1997). However, SiC films produced at low temperatures are amorphous and their properties differ from those observed in crystalline structures. In general, amorphous films have a lower elasticity modulus and higher electrical resistivity.
Since the 1970s, many studies have been performed on the doping of amorphous SiC films in order to obtain properties close to those of crystalline material for applications in different types of devices, such as photovoltaic cells, optical sensors, diodes and thin-film transistors (TFTs) (Spear and LeComber, 1975; Kanicki, 1991; Tawada et al., 1982). Nowadays, the processes most used for doping SiC films are in situ doping (during film growth) and ion implantation.
In the 1990s, due to the emerging MEMS (Micro Electro Mechanical Systems) technology and the increasing demand for sensors operating at temperatures above 300ºC for different applications, SiC films and substrates started to be used as alternatives to silicon in the fabrication of sensors to operate in severe environments such as combustion processes or gas turbine control, the oil industry, nuclear power and industrial process control (Cocuzza, 2004). Some sensors and electronic devices based on SiC that are currently commercially available are shown in Fig. 1 (Nowak, 2005).
As there is great interest in the use of SiC in high-temperature devices, especially for applications in the aerospace and aeronautics fields, LPP-ITA has established an R&D line oriented to the development of SiC sensors, as presented in the next sections.

ANTECEDENTS OF R&D ACTIVITIES IN MICROELECTRONICS

Since 1988, LPP-ITA has carried out research projects on plasma technology applications. One of the main research lines in this field is directed to the synthesis and modification of semiconductor thin films through low-temperature plasma processes such as radiofrequency (RF) magnetron sputtering, plasma enhanced chemical vapor deposition (PECVD), reactive ion etching (RIE) and inductively coupled plasma (ICP).

The R&D activities in microelectronics were intensified in 2001, when a clean room environment was implemented through financing from the São Paulo Research Foundation (FAPESP). Specific research on the growth and characterization of SiC thin films started in 2003, leading to a master's thesis on the effect of thermal annealing on the physical and electrical properties of SiC films (Rajab, 2005). This project was supported by a grant from the CNPq/Microelectronics National Program (PNM). The results obtained during this thesis work showed that the SiC films produced in the laboratory by the RF magnetron sputtering technique had appropriate characteristics for applications in electronics and MEMS (Micro Electro Mechanical Systems) devices (Rajab et al., 2006).
In this context, a PhD thesis on the development of piezoresistive sensors based on SiC films was started in 2005 with support from the CNPq/Microelectronics National Program (PNM) (Fraga, 2009). In this thesis, besides RF magnetron sputtering, the PECVD technique was used to produce the SiC films, which allowed a comparison of the properties of SiC films produced by both deposition processes. In addition, the influence of nitrogen doping on SiC film characteristics was also investigated (Fraga et al., 2008a; Fraga et al., 2008b).
The reactive ion etching (RIE) of SiC films using SF6/O2 gas mixtures was another process studied, because this step is very important in the fabrication of devices. The etching rate was investigated as a function of film composition and O2 concentration. The influence of thermal annealing on etching characteristics was also evaluated (Fraga et al., 2007a; Fraga et al., 2007b).
The evolution of R&D activities related to the development of SiC films at the Plasma Processes Laboratory is summarized in Fig. 2.
In 2008, in order to make possible the development of devices based on SiC films, a collaboration project was established with the Microfabrication Laboratory of the Brazilian Synchrotron Light Laboratory (LNLS). The first devices developed through this project were strain gauges based on SiC films. The structure of these strain gauges consists of a SiC thin-film resistor with Ti/Au electrical contacts (Fraga et al., 2010a). Subsequently, a prototype of a piezoresistive sensor based on a SiC film was designed, fabricated and characterized (Fraga et al., 2010b).
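As an aside, the behaviour of such a piezoresistive strain gauge is commonly summarized by the standard gauge-factor relation ΔR/R = GF·ε. The sketch below applies this textbook formula with hypothetical values, since the gauge factors measured in the cited works are not quoted here.

```python
def resistance_change(r0_ohm: float, gauge_factor: float, strain: float) -> float:
    """Resistance change dR of a strain gauge via dR/R0 = GF * strain.

    Semiconductor (e.g., SiC) gauges typically have a much larger GF
    than the ~2 of metal-foil gauges; the values below are hypothetical.
    """
    return r0_ohm * gauge_factor * strain

# Example: 1 kOhm SiC thin-film resistor, hypothetical GF = 20, 500 microstrain
dr = resistance_change(r0_ohm=1000.0, gauge_factor=20.0, strain=500e-6)
print(f"dR = {dr:.1f} Ohm ({dr / 1000.0 * 100:.2f}% change)")
```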
The development cycles of the SiC sensors are shown in Fig. 3. As can be observed, two steps have not yet been performed at LPP-ITA: pattern transfer by photolithography and the wire-bonding process.
CURRENT STAGE OF R&D ACTIVITIES
The current stage of R&D activities at LPP-ITA aims to implement a technology roadmap for the development of SiC sensors (Fig. 4). In this section, the roadmap development process is explained.
The development process is divided into the following stages.
Needs identification
This is the first stage of the process, in which the needs related to the development of SiC sensor technology are identified. These needs are grouped into three main categories: infrastructure, financing and human resources.
Nowadays, the Plasma Processes Laboratory counts on financing from the Brazilian Space Agency (AEB) to assemble a room for the characterization of electronic and MEMS devices. Besides, the clean room facilities have been expanded with the recent acquisition of an oxidation furnace, a KOH etching system and a hot plate through the financial support of the National Council for Scientific and Technological Development (CNPq). Additionally, a dual DC magnetron sputtering system for the growth of SiC films from targets of silicon and carbon is being implemented. The idea of this system is to control the stoichiometry and improve the quality/functionality of the films through the use of pulsed DC power sources. The main needs associated with infrastructure are the enlargement of the clean room area and the acquisition of a mask aligner in order to perform all steps of sensor fabrication in the laboratory.
In relation to human resources, since December 2009 the National Post-Doctoral Program (PNPD)/CAPES has financed two grants for the development of SiC sensors.
Formation of a working group for the development of the roadmap
Due to the interdisciplinary nature of SiC sensor technology, researchers from a wide variety of backgrounds are required to form roadmap working groups. The staff of the Plasma Processes Laboratory consists of 42 members with interdisciplinary backgrounds, holding degrees in physics, materials science, microelectronics and engineering. Five PhDs and one PhD student from this staff are currently working on research related to SiC sensors. This working group discussed the roadmap framework and subsequently adopted a methodology considering the itemization of issues, responses to each critical step, and identification of the key technologies. The determination of a realistic timeline and of a cost range for implementing the processes was also required.
In order to define an action plan for the roadmap, the working group divided the critical technologies into three categories: a) SiC deposition techniques; b) SiC processing techniques for sensor fabrication; and c) SiC sensor characterization. For each category, the working group will define goals, the impact of the technology, the timeframe for development and the execution plan.
Execution action plan
A detailed project plan indicating the roles and responsibilities of each working group member is being finalized. A funding strategy will be developed to overcome critical infrastructure issues. The progress of the roadmap execution action plan will be evaluated by regular review of the project status and deliverables. The expectation is that the implementation of this roadmap raises the level of sharing and integration among the staff, facilities and services of the laboratory. This allows the researchers to quickly define the key services and to focus on the technical challenges.
To help its staff keep pace with changes in science and technology, the laboratory has trained masters and PhDs in plasma physics, materials science and microelectronics.
PERSPECTIVES UP TO 2015
The development of the SiC sensors is based on progress in the following technologies: 1) improved electrical and mechanical properties of SiC films produced (optimization of SiC deposition process), 2) SiC film processing (optimization of etching process and metallization appropriate for high temperature applications), 3) microfabrication technology to fabricate miniaturized sensors and 4) sensors packaging for harsh environments.
The R&D activities of the Technological Institute of Aeronautics have been focused on the aerospace and aeronautical fields. Accordingly, the goal of the Plasma Processes Laboratory is to develop SiC sensors with potential for use in a range of these applications. The sensor types of main interest are those capable of measuring pressure, strain and acceleration at high temperatures and in the presence of corrosive media or intense radiation.
Figure 5 shows the types of sensors that are being developed and the technological evolution that we intend to follow until 2015. The main technologies involved and some possible applications are also shown. In the coming years, our goals will be concentrated on improving the performance of the SiC pressure sensors and strain gauges already developed, besides making possible the development of accelerometers and SAW sensors based on aluminum nitride (AlN) films deposited on SiC.
CONCLUSIONS
The vision expressed in this roadmap is to use the know-how of the Plasma Processes Laboratory staff to develop SiC sensors. We believe that the way to do this is by developing technologies which enable science, engineering and manufacturing. Close cooperation between the laboratory and other research centers will always be necessary, because this cross-disciplinary development will bring broad benefits through the ideas, instruments and techniques that will result from developing and consolidating the required base technology.
Figure 2: Evolution of R&D activities related to development of SiC films at Plasma Processes Laboratory.
Figure 3: Current development cycles of SiC sensors.
Diagnosis and Management of Drug-Induced Interstitial Lung Disease Associated with Amikacin Liposome Inhalation Suspension in Refractory Mycobacterium Avium Complex Pulmonary Disease: A Case Report
Abstract Amikacin liposome inhalation suspension (ALIS) is a key drug for the treatment of refractory Mycobacterium avium complex pulmonary disease (MAC-PD). Although cases of drug-induced interstitial lung disease (DIILD) by ALIS have been reported, its diagnosis is challenging due to overlapping existing pulmonary shadows, airway bleeding, exacerbation of underlying conditions, and the potential for various concurrent infections. A 72-year-old woman started treatment with ALIS for refractory MAC-PD. Three weeks later, she had a fever, cough, and appetite loss. She was hospitalized because multiple infiltrative opacities were observed on chest X-ray and chest computed tomography. Because the opacities worsened after empiric antibiotic therapy with broad-spectrum antibiotics, we initiated corticosteroid therapy, suspecting DIILD caused by ALIS, although drug lymphocyte stimulation tests for ALIS and amikacin were negative. Three days later, we found signs of improvement and quickly tapered the corticosteroids. After obtaining informed consent, we performed a drug provocation test of ALIS. Seven days later, she exhibited fever, an increased peripheral white blood cell count, and elevated serum C-reactive protein level, all of which returned to baseline 4 days after stopping ALIS, leading to a diagnosis of DIILD caused by ALIS in this patient. DIILD caused by ALIS is rare but should be carefully diagnosed to ensure that patients with refractory MAC-PD do not miss the opportunity to receive ALIS treatment.
Introduction
DIILD is caused by dose-dependent toxicity or immune-mediated inflammation [1-7]. The diagnosis of DIILD is based on clinical findings consistent with ILD: a temporal relationship between the onset of symptoms and drug exposure; the exclusion of other possible causes such as infection, pulmonary edema, radiation-induced lung injury, and progression of the underlying disease; improvement upon withdrawal of the suspected causative agent, with or without corticosteroid therapy; and, in some cases, deterioration upon re-challenge [8]. Amikacin liposome inhalation suspension (ALIS) was recently developed for the treatment of refractory Mycobacterium avium complex pulmonary disease (MAC-PD). Adding ALIS to guideline-based therapy improved the rate of culture conversion among patients with refractory MAC-PD in a Phase III randomized controlled trial [9]. In that trial, 3% of participants who received ALIS experienced hypersensitivity pneumonitis [9,10]. However, descriptions of this adverse event have been limited, with only two case reports describing its clinical and imaging course [11,12]. In this case report, we describe ground-glass opacity in a patient with refractory MAC-PD that was discovered 3 weeks after starting ALIS and improved upon withdrawal of ALIS with corticosteroid therapy. ALIS therapy was re-started, and the patient exhibited a deterioration after 1 week of treatment.
Case Report
A 72-year-old woman (height 148 cm, body weight 33 kg) was referred to our hospital for the treatment of refractory Mycobacterium intracellulare pulmonary disease, which had been diagnosed when she was 60 years old. She had been under observation without treatment until 4 years earlier, when a cavitary lung nodule was detected. Over the past 4 years, the patient had repeatedly received macrolide-containing multidrug therapies without success. Chest X-ray and chest computed tomography on the first visit to our hospital showed that the patient had fibrocavitary-type pulmonary disease (Figure 1A-D). She had a history of pulmonary tuberculosis. Although macrolide-resistant M. intracellulare had recently been cultured from her sputum, with minimum inhibitory concentrations of clarithromycin and amikacin of >32 μg/mL and 8 μg/mL, respectively, clarithromycin was continued at an immunomodulatory dose (400 mg/day). Ethambutol (500 mg/day) and moxifloxacin (400 mg/day) were continued, and ALIS (590 mg/day) was added to these drugs.
Three weeks later, she had a fever, cough, and appetite loss. Chest X-ray and chest computed tomography revealed multiple infiltrative opacities in the left upper lobe and bilateral lower lobes (Figure 1E-H). ALIS was stopped, although the cavity wall and bronchial wall thicknesses had decreased, suggesting that ALIS had a high likelihood of being effective against M. intracellulare pulmonary disease in this patient (Figure 1F and G). A nasal swab polymerase chain reaction test for coronavirus disease 2019 (COVID-19) was negative. Serum (1,3)-beta-D-glucan was also negative. Because these opacities worsened even after empiric antibiotic therapy featuring ceftriaxone for 5 days followed by tazobactam/piperacillin for 11 days, we started methylprednisolone (125 mg/day) based on a suspicion of DIILD caused by ALIS; after 3 days, we observed signs of improvement and quickly tapered the steroid dose (Figure 1I-L).
ALIS was the key drug affecting the long-term outcome for this patient; therefore, we performed a drug provocation test of ALIS after obtaining informed consent. Before starting the drug provocation test, the patient's body temperature was normal and her white blood cell count in peripheral blood was 5990/μL. Her serum C-reactive protein (CRP) concentration was 0.76 mg/dL. Seven days after ALIS re-challenge, the patient's fever returned. The white blood cell count and CRP concentration increased to 7910/μL (normal range: 4500-11,000 cells/μL) and 5.08 mg/dL (normal range: 0-0.14 mg/dL), respectively. Furthermore, her body temperature, white blood cell count, and CRP concentration returned to baseline 4 days after stopping ALIS. All these findings supported a diagnosis of DIILD induced by ALIS in this patient.
Over the following two years, she experienced four hospitalizations: one due to COVID-19 and three related to fevers requiring intravenous antibiotic therapy. Currently, the patient exhibits shortness of breath attributed to decreased lower-limb muscle strength and restrictive ventilatory impairment. Nevertheless, she continues to attend outpatient appointments to maintain the multi-drug chemotherapy for her refractory MAC-PD.
Discussion
During the disease course of refractory NTM-PD, patients experience several clinical and radiographic exacerbations. Sometimes the exacerbation is caused by NTM-PD itself; however, it is frequently caused by other complications, including viral, fungal, and bacterial pneumonia; airway bleeding; and DIILD. For physicians treating NTM-PD, DIILD has not been a significant concern, because the frequency of DIILD caused by antimicrobial agents, including anti-tuberculous drugs, was quite low [13]. However, the differential diagnosis has become more challenging since ALIS became available in clinical settings. In this case, 8 days were required to start steroid therapy after carefully excluding COVID-19 and bacterial and fungal pneumonia (Figure 2). For a diagnosis of DIILD caused by ALIS, it is important to avoid ambiguity as much as possible, because patients have very limited alternative treatment options and, once diagnosed with DIILD caused by ALIS, may lose the benefit of ALIS administration in the future. Recently, a case of DIILD caused by ALIS was reported in which a transbronchial lung biopsy sample showed findings of organizing pneumonia [12]. The accumulation of such studies may reveal histological findings specific to DIILD caused by ALIS. However, there is a concern regarding the spread of pathogens through the airways during bronchoscopy, especially in patients with conditions such as cavitary lesions, where a high bacterial burden is anticipated. In our case, the patient presented with cavitary lesions and was additionally receiving oxygen therapy via a nasal cannula. Therefore, bronchoscopy was deemed risky and was not performed.
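To make the diagnostic reasoning concrete, the sketch below encodes the workflow described above and in Figure 2 as a simple triage function. The step names and return strings are illustrative only, not a validated clinical algorithm.

```python
from typing import Optional

def assess_exacerbation(covid_pcr_positive: bool,
                        bacterial_or_fungal_pneumonia: bool,
                        improves_on_steroids_after_alis_stop: bool,
                        provocation_test_positive: Optional[bool] = None) -> str:
    """Triage a radiographic exacerbation during ALIS therapy.

    Mirrors the order of exclusions used in this case: infection first,
    then a steroid trial with ALIS withheld, then (with informed
    consent) a drug provocation test to confirm DIILD.
    """
    if covid_pcr_positive:
        return "treat COVID-19"
    if bacterial_or_fungal_pneumonia:
        return "treat infection; ALIS may be continued"
    if improves_on_steroids_after_alis_stop:
        if provocation_test_positive is None:
            return "suspected DIILD; consider drug provocation test"
        return "DIILD confirmed" if provocation_test_positive else "DIILD unlikely; consider resuming ALIS"
    return "consider exacerbation of MAC-PD itself"

# Example mirroring this case: infections excluded, steroid response, positive re-challenge
print(assess_exacerbation(False, False, True, provocation_test_positive=True))
```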
The possibility of a paradoxical response, which has been described in patients with tuberculosis but never reported in patients with NTM-PD, could not be completely excluded in this case. A paradoxical response is the transient worsening of tuberculosis following anti-tuberculosis treatment; its median onset time has been reported as 26 days after the initiation of anti-tuberculosis treatment [14]. In our case, drug-induced lymphocyte stimulation tests for amikacin and ALIS, performed before steroid administration, were negative, suggesting that the immune response was not directly targeted at amikacin or ALIS. It was speculated that necrotic tissue, sloughed off the thickened cavitary or bronchial wall, might have been aspirated into other parts of the lungs, where it could have stimulated an immune reaction causing the infiltrative opacities in this patient.
Since the approval of ALIS by the Japanese government in July 2021, we have prescribed ALIS to patients with refractory MAC-PD and have encountered seven cases of chest radiographic deterioration during ALIS administration (Table 1). The median age of these patients was 79 years (interquartile range [IQR], 73-80), and the median disease duration was 12 years (IQR, 10.5-14.0). The deterioration was caused by exacerbation of MAC-PD itself in four patients, pneumonia caused by other bacteria in two patients, and DIILD caused by ALIS in one patient. These cases demonstrate that patients with refractory MAC-PD who are prescribed ALIS may experience radiographic exacerbations for a variety of reasons. Because ALIS is a key drug for patients with refractory MAC-PD, an incorrect diagnosis of DIILD can have negative consequences; it is therefore important to make a careful and accurate differential diagnosis. The participants of the Phase III randomized controlled trial were younger (median, 65 years; IQR, 40-87) and had a shorter disease duration (median, 4.6 years; IQR, 0.8-32.4) than our cases with chest radiographic deterioration during ALIS administration [9]. Because the importance of achieving early culture conversion has recently been emphasized [15,16], ALIS should be prescribed to patients with early-stage NTM-PD.
Conclusion
Patients with refractory NTM-PD who are treated with ALIS may experience radiographic exacerbations for various reasons, including exacerbation of NTM-PD itself, pneumonia caused by other bacteria, and DIILD caused by ALIS. It is crucial to accurately differentiate between these conditions to ensure appropriate management and prevent potential harm to patients. It is also important to perform a drug provocation test after obtaining sufficient informed consent.
Figure 1 Chest X-ray and computed tomography of the upper, middle, and lower lungs before the start of amikacin liposome inhalation suspension (ALIS) therapy (A-D), 3 weeks after ALIS initiation (E-H), and 9 weeks after the withdrawal of ALIS (I-L). Cavity wall (arrows) and bronchial wall (arrowhead) thinning was observed 3 weeks after starting ALIS treatment.
Figure 2 Proposed workflow in a case of radiographic exacerbation during amikacin liposome inhalation suspension (ALIS) therapy. Abbreviations: MAC-PD, Mycobacterium avium complex pulmonary disease; DIILD, drug-induced interstitial lung disease; AFB, acid-fast bacillus; DLST, drug lymphocyte stimulation test.
Once daily administration of the SGLT2 inhibitor, empagliflozin, attenuates markers of renal fibrosis without improving albuminuria in diabetic db/db mice
Blood glucose control is the primary strategy to prevent complications in diabetes. At the onset of kidney disease, therapies that inhibit components of the renin angiotensin system (RAS) are also indicated, but these approaches are not wholly effective. Here, we show that once daily administration of the novel glucose lowering agent, empagliflozin, an SGLT2 inhibitor which targets the kidney to block glucose reabsorption, has the potential to improve kidney disease in type 2 diabetes. In male db/db mice, a 10-week treatment with empagliflozin attenuated the diabetes-induced upregulation of profibrotic gene markers, fibronectin and transforming-growth-factor-beta. Other molecular (collagen IV and connective tissue growth factor) and histological (tubulointerstitial total collagen and glomerular collagen IV accumulation) benefits were seen upon dual therapy with metformin. Albuminuria, urinary markers of tubule damage (kidney injury molecule-1, KIM-1 and neutrophil gelatinase-associated lipocalin, NGAL), kidney growth, and glomerulosclerosis, however, were not improved with empagliflozin or metformin, and plasma and intra-renal renin activity was enhanced with empagliflozin. In this model, blood glucose lowering with empagliflozin attenuated some molecular and histological markers of fibrosis but, as per treatment with metformin, did not provide complete renoprotection. Further research to refine the treatment regimen in type 2 diabetes and nephropathy is warranted.
Diabetic nephropathy accounts for 35-40% of new cases of end-stage renal disease in the developed world 1,2. A major risk factor for the vascular complications of diabetes is chronic elevation of blood glucose concentrations (hyperglycemia), but there is no guarantee that glycemic control will prevent the onset and progression of micro- and/or macrovascular disease 3-6. At the first clinical sign of renal impairment (albuminuria), inhibitors of the renin-angiotensin system (RAS) are administered, but they only slow progression of the disease 4. Therefore, anti-diabetic strategies that effectively control blood glucose levels and prevent the onset and progression of diabetic nephropathy are in great demand.

For db/m mice, both vehicle- (P = 0.07) and empagliflozin-treated groups gained 1.0 g between weeks five and 10 of treatment (Table 1). At 10 weeks of treatment (20 weeks of age), db/db mice treated with empagliflozin were 8 to 11 g heavier than vehicle- and metformin-only-treated mice (Table 1). Blood and plasma glucose concentrations were measured 20-24 h after the previous day's therapy. Three days after the commencement of treatment, all mice treated with empagliflozin had lower fasting blood glucose levels compared to db/db vehicle, and all intervention arms had considerable reductions from baseline (Fig. 1a,b). By the study end, SGLT2 inhibitor monotherapy lowered fasting plasma glucose levels to 18 mmol/L compared to 27 mmol/L in db/db vehicle (Fig. 1c). In addition, glycated hemoglobin was restored to non-diabetic levels, achieving 6.6% compared to 9.6% in db/db vehicle (Fig. 1e). Co-therapy with metformin provided modest incremental benefits, lowering fasting plasma glucose to 16 mmol/L, which was also reduced from baseline, and glycated hemoglobin to 5.7% (Fig. 1c-e). Metformin monotherapy tended to reduce fasting plasma glucose (20 mmol/L) and glycated hemoglobin (8.0%) compared to db/db vehicle (P = 0.079 and P = 0.062, respectively; Fig. 1c,e).
All db/db mice consumed more food and water, and produced more urine, compared to db/m when assessed at two and six weeks into the treatment period (Table 1). There were no effects of treatment on food intake but, in the db/m mice, empagliflozin increased water consumption at six weeks (P = 0.066) and urine output at both ages (two weeks P = 0.055; Table 1).

(Figure 1 legend: fasted blood glucose at baseline and three days after treatment start (a,b), fasted plasma glucose at baseline and treatment end (c,d), and glycated hemoglobin at treatment end (e) in db/m and db/db mice; data are means ± SEM (n = 5-11). N.B. some mice exceeded the upper limit of the glucometer (33.3 mmol/L), so differences from baseline may be underestimated.)

(Figure 2 legend: plasma glucose and AUCglucose (a,b), plasma insulin concentrations over time (c), area under the insulin curve, AUCinsulin (d), and insulinogenic index, AUCinsulin:glucose 0-30 mins (e), in response to an oral glucose bolus (2 g/kg body weight), and fasted plasma glucose-to-insulin ratio (t = 0 mins) (f) in db/m and db/db mice; data are means ± SEM (n = 6-11).)

Empagliflozin monotherapy reduced plasma glucose levels at baseline and at 60 and 120 mins after the glucose bolus compared to vehicle db/db mice, resulting in a reduced AUCglucose (Fig. 2a,b). Co-therapy decreased plasma glucose concentrations at baseline and at 15, 60, and 120 mins of the OGTT, and reduced AUCglucose compared to vehicle- and metformin-treated db/db mice (Fig. 2a,b). Metformin monotherapy did not improve glucose tolerance during the OGTT (Fig. 2a,b).
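As an illustration of how such OGTT summary measures are typically derived, the snippet below computes a trapezoidal area under the curve and the 0-30 min insulin-to-glucose AUC ratio from hypothetical sampling times and values; none of these numbers come from the study.

```python
import numpy as np

# Hypothetical OGTT sampling times (min) and plasma values (not study data)
t = np.array([0, 15, 30, 60, 120])
glucose = np.array([8.0, 14.0, 16.5, 13.0, 10.0])      # mmol/L
insulin = np.array([40.0, 160.0, 190.0, 120.0, 70.0])  # pmol/L

auc_glucose = np.trapz(glucose, t)
auc_insulin = np.trapz(insulin, t)

# Insulinogenic index as defined in the text: AUCinsulin:glucose over 0-30 min
mask = t <= 30
insulinogenic = np.trapz(insulin[mask], t[mask]) / np.trapz(glucose[mask], t[mask])

print(f"AUCglucose = {auc_glucose:.0f} mmol/L*min, "
      f"AUCinsulin = {auc_insulin:.0f} pmol/L*min, "
      f"insulinogenic index = {insulinogenic:.1f}")
```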
Plasma insulin concentrations were increased throughout the OGTT in co-treated mice and at specific time points (0, 15, 30, and 60 mins) in empagliflozin mono-treated mice compared to db/m (Fig. 2c). Co-therapy also resulted in elevated plasma insulin levels throughout the OGTT when compared to diabetic vehicle- and metformin-treated mice, whilst empagliflozin increased insulin at baseline and 15 mins only (Fig. 2c). Vehicle-, empagliflozin-, and co-treated diabetic mice had an elevated area under the insulin curve (AUCinsulin) compared to non-diabetic mice; this difference was most pronounced for the co-treated group, which also had an enhanced insulin response compared to all other db/db groups (Fig. 2d). Vehicle- and metformin-treated db/db mice exhibited a 50-60% reduction in the insulinogenic index (AUCinsulin:glucose 0-30 mins), and empagliflozin, as a single and as a dual therapy with metformin, restored and further increased this index compared to db/m levels, respectively (Fig. 2e). The fasting plasma glucose-to-insulin ratio was reduced in the co-treated mice when compared to vehicle- and metformin-treated diabetic mice as well as non-diabetic db/m mice (Fig. 2f), indicative of reduced insulin sensitivity. Empagliflozin monotherapy also tended to reduce this ratio when compared to the metformin-treated diabetic arm (P = 0.089; Fig. 2f). HOMA-IR was elevated in all diabetic mice and exacerbated in the co-treated group (see Supplementary Fig. S1a). Insulin positivity within pancreatic islets was not different between non-diabetic and diabetic vehicle-treated mice (see Supplementary Fig. S1b). Co-therapy, however, increased insulin-positive staining in db/db mice compared to db/m vehicle and other diabetic groups, whilst metformin reduced insulin positivity compared to vehicle-treated counterparts (see Supplementary Fig. S1b). Empagliflozin increased islet insulin content when administered to db/m mice (see Supplementary Fig. S1b).
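HOMA-IR, mentioned above, is conventionally computed from fasting values. A minimal sketch with hypothetical inputs follows; the formula is the standard one, not taken from this paper's methods.

```python
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    """Homeostatic model assessment of insulin resistance.

    Standard formula: (glucose [mmol/L] * insulin [uU/mL]) / 22.5.
    Higher values indicate greater insulin resistance.
    """
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

# Example with hypothetical fasting values
print(f"HOMA-IR = {homa_ir(18.0, 12.0):.1f}")
```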
Renal glucose handling, and expression of glucose transporters and gluconeogenic enzymes. The predicted filtered glucose load, assessed after eight weeks of treatment, was increased in all db/db mice (Fig. 3a,b). Empagliflozin (mono- and co-therapy) reduced the filtered glucose load by 45% compared to db/db vehicle, owing to a similar reduction in fasted plasma glucose levels (Fig. 3a). In db/db mice treated with empagliflozin monotherapy, the reduced filtered glucose load was also mediated by a modest reduction in glomerular filtration rate (GFR; P = 0.059, Fig. 3b). All db/db mice had glucosuria, excreting ~1500 mg glucose into their urine each day, compared with <0.2 mg in db/m vehicle (Fig. 3c). Empagliflozin in non-diabetic mice increased urinary glucose excretion to >80 mg per day (Fig. 3c) without affecting circulating glucose concentrations compared to vehicle counterparts (see above; Fig. 1). In mice with diabetes, empagliflozin monotherapy caused a leftward shift in the relationship between plasma glucose level and urinary glucose, indicating that urinary glucose excretion was greater for any given concentration of plasma glucose (see Supplementary Fig. S2a). This treatment effect was lost when empagliflozin was co-administered with metformin and was absent in metformin-treated mice (see Supplementary Fig. S2b,c). These observations with respect to glucosuria, measured after six weeks of treatment, were also evident after two weeks of treatment (data not shown). Cytosolic glucose concentrations within renal cortices, determined in tissue harvested from fasted mice ~24 h after the last dose, were increased in all db/db mice compared to db/m, and exacerbated by metformin monotherapy (Fig. 3d). Empagliflozin, either as a monotherapy or as a co-therapy with metformin, did not reduce cortical glucose content (Fig. 3d).
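The filtered glucose load discussed above is conventionally the product of GFR and plasma glucose concentration, and fractional glucose excretion relates urinary excretion back to that load. The sketch below applies these standard relations with hypothetical values; the paper's exact calculation is not given in this excerpt.

```python
def filtered_glucose_load_mg_per_day(gfr_ul_min: float, plasma_glucose_mmol_l: float) -> float:
    """Filtered load = GFR x plasma glucose, converted to mg/day.

    Standard relation; units chosen for a mouse (GFR in uL/min).
    Glucose molar mass taken as 180 g/mol.
    """
    glucose_mg_per_ul = plasma_glucose_mmol_l * 180.0 / 1e6  # mg per uL of plasma
    return gfr_ul_min * glucose_mg_per_ul * 60 * 24          # mg per day

# Hypothetical db/db values: GFR 300 uL/min, plasma glucose 27 mmol/L
load = filtered_glucose_load_mg_per_day(300.0, 27.0)
urinary_glucose_mg_day = 1500.0  # approximate daily excretion reported for db/db mice
print(f"Filtered load = {load:.0f} mg/day, "
      f"fractional excretion = {urinary_glucose_mg_day / load:.2f}")
```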
Compared with db/m mice, the expression of the genes encoding SGLT1 (Slc5a1), SGLT2 (Slc5a2), and GLUT2 (Slc2a2) was elevated in all diabetic mice, except for those administered co-therapy (Fig. 4a-c). Co-treatment substantially reduced the diabetes-induced upregulation of Slc2a2, but not Slc5a1 and Slc5a2, mRNA levels when compared to all other diabetic arms (Fig. 4a-c). In non-diabetic db/m mice, empagliflozin tended to increase gene expression of Slc5a1 (P = 0.091), but did not affect Slc5a2 and Slc2a2 expression (Fig. 4a-c). Renal cortical protein concentration of SGLT2, in total cell membranes, was not different between groups (Fig. 4d,e).
Renal function.
All db/db mice had albuminuria at 2, 6 and 10 weeks of treatment, which remained unaffected by any treatment regimen (Fig. 6a-c). Empagliflozin in the non-diabetic db/m mice increased urinary albumin excretion by >2-fold at 6 weeks of treatment (Fig. 6b), due to a similar increase in urine production (see above; Table 1). Urinary excretion of the renal tubule damage markers KIM-1 and NGAL was ~40 times higher in db/db vs db/m mice, without an effect of treatment (Fig. 6d,e). Urinary NGAL in non-diabetic mice treated with empagliflozin was also increased, attributed to increased urine production (see above; Table 1). In all db/db mice, except those treated with the combination therapy, plasma cystatin C was reduced by ~22%, indicative of glomerular hyperfiltration (Fig. 6f). In db/m mice, empagliflozin tended to increase plasma cystatin C levels (+17%, P = 0.064, Fig. 6f), suggesting a small decrease in GFR.
Renal morphology and expression of profibrotic genes. Compared to db/m, kidney weight was ~23% greater in db/db vehicle-, empagliflozin-, and co-treated mice but 43% greater in metformin-treated mice (Table 1). Glomerulosclerosis (PAS-positive staining) was elevated in all db/db mice and unaffected by treatment (Fig. 7a,b). Glomerular collagen IV and fibronectin accumulation tended to be higher in vehicle-treated diabetic vs non-diabetic mice (P = 0.054 and P = 0.095, respectively, Fig. 7c-f). This diabetes-induced deposition of glomerular collagen IV and fibronectin was absent in all treated arms, except for fibronectin, which remained elevated in metformin-treated mice (Fig. 7c-f). Total collagen accumulation within renal cortical/outer medullary regions was increased in all db/db mice compared to db/m when quantified using Masson's trichrome (P < 0.05), but not Sirius Red, staining (Fig. 8a-d). Co-treatment with empagliflozin and metformin tended to reduce Masson's trichrome positivity (P = 0.088) and significantly decreased Sirius Red staining when compared to db/db vehicle (Fig. 8a-d). The degree of tubulointerstitial collagen IV and fibronectin staining, however, was not different among groups (Fig. 8e-h).
Renal cortical expression of ColIVα1, Fn1, Ctgf, Tgfβ1, and the cell surface macrophage marker Cd14 was elevated in vehicle-treated diabetic vs non-diabetic mice (Fig. 9a-e). In mice treated with empagliflozin, the diabetes-induced upregulation of Tgfβ1 and Cd14 was absent and there was a trend for reduced Fn1 expression compared to vehicle-treated db/db mice (P = 0.059, Fig. 9b,d,e). ColIVα1 and Ctgf, however, remained elevated in empagliflozin-treated diabetic vs non-diabetic mice (Fig. 9a,c). Metformin did not reverse the diabetes-induced upregulation of any genes but tended to decrease Fn1 (P = 0.095) and Tgfβ1 (P = 0.068) compared to db/db vehicle (Fig. 9a-e). Co-administration of empagliflozin and metformin provided the greatest benefits, such that the diabetes-induced upregulation of all genes was no longer present in this group, except for Ctgf expression, which remained elevated compared to non-diabetic levels (Fig. 9a-e). Ctgf expression was, however, reduced when compared to other db/db groups (P = 0.096 vs vehicle, P < 0.05 vs empagliflozin, P = 0.070 vs metformin, Fig. 9c). Of note, in non-diabetic mice, empagliflozin increased the renal cortical expression of ColIVα1 but did not affect the expression of any other genes (Fig. 9a-e).
Plasma renin activity, and intra-renal renin activity and angiotensin II content. Plasma renin activity tended to increase in diabetic vs non-diabetic vehicle-treated counterparts (P = 0.058, Fig. 10a). Empagliflozin treatment increased plasma renin activity in non-diabetic mice, and both empagliflozin (P = 0.059) and metformin mono-therapies exacerbated this diabetes-induced increase (Fig. 10a). Renin activity in renal cortices was not different between non-diabetic and diabetic mice administered with vehicle (Fig. 10b). However, empagliflozin increased intra-renal renin activity levels in both non-diabetic and diabetic mice (Fig. 10b). Combination therapy exacerbated this increase in cortical renin activity but metformin mono-therapy had no effect compared to vehicle-treated diabetic mice (Fig. 10b). Renal cortical levels of angiotensin II were not different between db/m and db/db vehicle-treated mice, however, an increasing trend was seen in the metformin-treated group vs db/m vehicle (P = 0.096) and db/db empagliflozin (P = 0.055, Fig. 10c).
Discussion
In the present study, we demonstrate in the db/db mouse model of type 2 diabetes that upregulation of some profibrotic genes in the kidney was ameliorated upon SGLT2 inhibition, parallel to the effects of metformin. When empagliflozin and metformin were co-administered, additional molecular and histological markers of kidney fibrosis were attenuated. Diabetes-induced upregulation of renal Cd14 was also no longer present in the mice treated with empagliflozin (mono- and co-treatment with metformin), but not metformin, suggesting that the aforementioned benefits with SGLT2 inhibition may have been mediated through reduced inflammation. However, empagliflozin did not improve diabetes-induced albuminuria, increased urinary markers of tubule damage (KIM-1 and NGAL), renal hypertrophy, or glomerulosclerosis, when administered alone or in combination with metformin. These partial benefits occurred in line with a modest lowering of blood glucose that remained above non-diabetic levels. Thus, in light of our findings and those of others26, who have failed to observe complete restoration of kidney function, the determinants of renoprotection with SGLT2 inhibition in diabetes warrant further consideration.

Previously, in db/db mice, the administration of an SGLT2 inhibitor (dapagliflozin in males23 and tofogliflozin in females25) prevented progressive albuminuria, parallel to the effect of losartan in the latter study, and lowered plasma glucose levels to <15 mmol/L. In male db/db mice treated with empagliflozin, Lin et al. similarly demonstrated reduced albuminuria and glomerulosclerosis, associated with complete amelioration of hyperglycemia24. Vallon et al. also demonstrated that empagliflozin administered to male Akita/+ mice, a model of type 1 diabetes, reduced albuminuria, renal hypertrophy, and markers of inflammation, in proportion to blood glucose lowering (average ~11 mmol/L)21.

The timing and degree of blood glucose lowering, and the dose of therapy, are the major differences between the current study and previous reports of significant renoprotection. We commenced treatment two to three weeks later than Terami et al.23, Lin et al.24, and Nagata et al.25, and plasma glucose remained >15 mmol/L in our study. The dose of empagliflozin administered to Akita/+ mice in Vallon et al. and db/db mice in Lin et al., provided ad libitum in food, equates to four to six times that of the current study, and likely contributes to the pronounced blood glucose lowering and renoprotection seen in those studies21,24. Despite restoration of glycated hemoglobin levels with empagliflozin in our study, it is likely that plasma glucose levels fluctuated around the single daily administration, reaching spikes of 18 and 16 mmol/L in mono- and co-treated groups, respectively, compared to 6.5 mmol/L in non-diabetic vehicle-treated mice. We determined the circulating glucose level 20-24 h after the previous day's gavage, likely at a peak glucose concentration given that the empagliflozin half-life is ~5.6 h in the male mouse. Indeed, the benefits of early and intensive blood glucose lowering for microvascular complications are well established27,28, and fluctuations in glucose level are known to increase the risk of complications independent of average glucose exposure29. It is therefore likely that renal structures remained exposed to hyperglycemia, at least intermittently, in this study. We did not assess blood glucose variation by continuous monitoring in the present study.
In type 2 diabetic patients, twice-daily treatment with empagliflozin as an add-on to metformin was not superior to once-daily treatment in terms of blood glucose lowering30. However, long-term microvascular outcomes from this treatment regimen are yet to be determined. Future pre-clinical and clinical studies on renal outcomes would benefit from early and multiple daily dosing of empagliflozin, continuous glucose monitoring, and close monitoring of the risk-to-benefit ratio of higher doses.
We observed that SGLT2 inhibition in db/db mice reduced the filtered glucose load which, based on previous in vitro findings, was expected to downregulate markers of proximal tubular damage19,31-33 and translate into functional improvements for the kidney. However, clinical features of diabetic nephropathy, including albuminuria, urinary markers of tubule damage (KIM-1 and NGAL), kidney growth, and glomerulosclerosis, were not improved in this study, even when empagliflozin was co-administered with metformin. Empagliflozin is expected to reduce the tubular maximum (T_max) for glucose reabsorption which, together with the reduced filtered load, would theoretically reduce glucose content within proximal tubule cells. However, in non-diabetic and diabetic mice, empagliflozin did not reduce cortical glucose content when compared to vehicle. Whilst the establishment of an inward glucose gradient at the basolateral surface may occur with SGLT2 inhibition13, the reduction in plasma glucose levels with empagliflozin, as well as the reduced cortical expression of Slc2a2 (encoding GLUT2) with co-therapy, argues against this possibility. Although unlikely to fully account for the unchanged cortical glucose content, an explanation may be enhanced SGLT1-mediated glucose reabsorption, as seen previously under SGLT2 blockade34. In this study, kidney mRNA levels of three key gluconeogenic enzymes, Pck1, Fbp1, and G6pc, were not enhanced by empagliflozin, suggesting there was no compensatory increase in renal gluconeogenesis. This is in agreement with another study in which kidney Pck1 levels were reduced in Akita/+ mice treated with empagliflozin21. Endogenous glucose production (EGP) is enhanced by SGLT2 inhibition in humans35,36 and, given the abovementioned findings in mice, this may primarily be of hepatic origin. We observed that the diabetes-induced increase in renal gluconeogenic gene expression (Pck1 and Fbp1) was exacerbated by metformin. This may be explained by compensatory EGP from renal sources, if metformin specifically suppresses hepatic gluconeogenesis37,38, although this remains to be tested in future studies.

SGLT2 inhibition increased both plasma and intrarenal renin activity in diabetic and non-diabetic mice. In line with this, eight-week treatment with empagliflozin in individuals with type 1 diabetes increased circulating angiotensin II levels9, albeit, in our study, intrarenal angiotensin II content remained unchanged. Increased RAS activity with SGLT2 inhibition is explained by the expected volume depletion with this class of therapy39. Despite a tendency for reduced GFR with SGLT2 inhibition, likely due to increased afferent tone via tubuloglomerular feedback40, we cannot rule out relevant increases in efferent tone via activation of the RAS, which could increase intraglomerular pressure. RAS inhibition may also restore glomerular function independent of increased capillary pressure41, and thus the benefits afforded by dual RAS-SGLT2 inhibition warrant further study. Indeed, in Dahl salt-sensitive diabetic rats, maximal renoprotection from glomerular injury, renal fibrosis, and proteinuria was achieved when luseogliflozin was combined with the ACE inhibitor, lisinopril42.
In our studies, SGLT2 inhibition increased fasting and glucose-stimulated plasma insulin levels, which was most profound in mice that received dual therapy with metformin, and may account for their weight gain over time. This is contrary to the weight loss reported in obese individuals treated with an SGLT2 inhibitor7; however, Lin et al. demonstrated similar increases in plasma insulin and body weight in db/db mice, along with improvements in albuminuria and glomerulosclerosis24. Thus, increased body weight cannot account for the only modest renoprotective effects of SGLT2 inhibition seen in our study. Of note, like empagliflozin, the anti-hyperglycemic agent metformin was unable to prevent albuminuria but attenuated kidney Fn1 and Tgfβ1 expression. Interestingly, treatment with metformin exacerbated diabetic kidney growth, which warrants additional study. We demonstrated efficacy in our studies by the presence of glucosuria in non-diabetic mice that were treated with empagliflozin. As seen previously in STZ-diabetic, Akita/+ and db/db mice20,21,24,25, SGLT2 inhibition did not exacerbate diabetes-induced glucosuria, which is explained by the reduction in filtered glucose load equalling the degree of SGLT2 inhibition. Also, given that GFR remained sufficient, empagliflozin was able to reach the brush border membrane of the proximal tubule and exert its intended effects43. The difference in blood glucose lowering between empagliflozin mono- and co-treated groups was modest, suggesting that other factors beyond glucose lowering may contribute to the superior renal outcomes seen with dual therapy in this study. We observed that fasting and glucose-stimulated insulin secretion was considerably increased in the co-treated mice, along with increased pancreatic insulin content. Further, the leftward shift in the relationship between plasma glucose levels and urinary glucose excretion with empagliflozin was absent in the co-treated mice. The mechanism(s) underlying these findings and their relationship, if any, to kidney fibrosis requires additional study.
SGLT2 inhibitors are a new class of anti-diabetic agent, and human studies in type 1 and 2 diabetes have demonstrated efficacy in blood glucose lowering and acute hemodynamic changes in kidney function9,44. Long-term clinical studies on the incidence of microvascular complications, such as diabetic nephropathy, are ongoing and will determine whether SGLT2 inhibitors exert benefits that are superior to traditional agents13. While we were unable to show considerable improvements in renal function, the expression of some profibrotic genes was reduced with empagliflozin, in line with the effects of the first-line anti-diabetic agent, metformin. Additional molecular and histological benefits were offered when empagliflozin and metformin were co-administered. Persistent hyperglycemia and albuminuria onset prior to commencement of treatment in this model rendered our study an interventional rather than a preventative approach. We suggest that a threshold of blood glucose lowering may be required to achieve renoprotection in diabetes, which may differ for individual parameters, as evidenced by some, but not all, features of kidney disease improving with SGLT2 inhibition in our study. Thus, taken together with previous work, we suggest that early, sufficient, and stable blood glucose lowering, possibly with multiple agents, including higher- and/or multiple daily-dosing of SGLT2 inhibition in combination with RAS blockade, may be required to achieve maximal renoprotection in diabetes.

Methods

Animals. Mice were obtained from The Jackson Laboratory (Bar Harbor, ME, USA). Mice were housed in an environmentally controlled room (constant temperature 22 °C), with a 12:12 h light-dark cycle and access to standard chow and tap water ad libitum. At 10 weeks of age, db/db and db/m mice were randomized to receive empagliflozin (10 mg/kg/day; provided by Boehringer-Ingelheim, Germany) or vehicle (0.5% hydroxyethylcellulose, Sigma-Aldrich, St. Louis, MO, USA) by oral gavage for 10 weeks, between the hours of 14:00 and 16:00. Additional db/db mice were administered the anti-hyperglycemic agent, metformin (250 mg/kg/day; Sigma-Aldrich), or empagliflozin + metformin co-therapy (as per mono-therapy dosages). Body weight and fasting blood glucose were monitored throughout the study. Approximately 24 h after the last treatment (20 weeks of age), mice were fasted for ~4 h and anesthetized with sodium pentobarbital (150 mg/kg ip; Virbac, Milperra, NSW, Australia). Kidneys and pancreata were excised, snap-frozen in liquid nitrogen, or fixed in 10% neutral buffered formalin.
Food and water intake, and blood and urine collection. At two and six weeks of the treatment period, mice were weighed and placed individually into metabolic cages for 24 h measurements of food and water intake, and urine collection45,46. Blood samples were collected via tail tipping immediately upon removal from metabolic cages. Mice were acclimatized to the metabolic cages by placing them in the cages for short daylight periods on two separate occasions prior to the 24 h collection.
Glomerular filtration rate. At week eight, GFR was estimated in conscious mice using the transcutaneous decay of retro-orbitally injected FITC-sinistrin (10 mg/100 g body weight dissolved in 0.9% NaCl), as previously described47. Background signal was recorded for one minute, mice were injected under brief inhaled isoflurane anesthesia, and the signal was recorded for 60 min. GFR was calculated using the half-life derived from the rate constant (α₂) of the single-exponential excretion phase of the curve and a semi-empirical factor. Plasma cystatin C was also measured at the study end, after 10 weeks of treatment (ELISA, BioVendor, Brno, Czech Republic).
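The decay-curve fit and half-life conversion can be sketched as follows (the mouse conversion factor shown is the commonly used literature value for transcutaneous FITC-sinistrin and is an assumption here, not a constant quoted in this paper):

```python
# Sketch of GFR estimation from transcutaneous FITC-sinistrin decay.
# The semi-empirical factor is an assumption from the transcutaneous-GFR
# literature for mice, not taken from this paper.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, A, alpha2, offset):
    """Single-exponential excretion phase of the fluorescence curve."""
    return A * np.exp(-alpha2 * t) + offset

def gfr_ul_per_min(t_min, signal, body_weight_g, factor=14616.8):
    """GFR [uL/min] = factor / t_half [min] x (body weight / 100 g)."""
    (A, alpha2, offset), _ = curve_fit(single_exp, t_min, signal,
                                       p0=(signal.max(), 0.05, signal.min()))
    t_half = np.log(2) / alpha2          # half-life from the rate constant
    return factor / t_half * body_weight_g / 100.0
```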
Oral glucose tolerance test. At week eight, an oral glucose tolerance test (OGTT) was performed following a 6 h fast between 08:00-14:00 h48. Blood samples were taken via tail tipping prior to (0 min) and following an oral glucose bolus (2 g/kg body wt of 50% w/v D-glucose solution) at 5, 15, 30, 60, and 120 min for determination of plasma glucose and insulin concentrations. The efficacy of insulin secretion was calculated as the insulinogenic index ((Insulin30 − Insulin0)/(Glucose30 − Glucose0))49. The fasting plasma glucose-to-insulin ratio and HOMA-IR were also calculated as surrogate indices of insulin sensitivity.
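A minimal sketch of these OGTT-derived indices (array layout is illustrative; the HOMA-IR constant 22.5 is the standard human formulation and its use in mice is an assumption of this illustration):

```python
# Sketch of the OGTT-derived indices described above. Sampling times follow
# the protocol; everything else is illustrative.
import numpy as np

t = np.array([0, 5, 15, 30, 60, 120])        # min, OGTT sampling times

def auc(y, times=t):
    """Trapezoidal area under the curve over the OGTT."""
    return np.trapz(y, times)

def insulinogenic_index(glucose, insulin, times=t):
    """(Insulin_30 - Insulin_0) / (Glucose_30 - Glucose_0)."""
    i0, i30 = insulin[times == 0][0], insulin[times == 30][0]
    g0, g30 = glucose[times == 0][0], glucose[times == 30][0]
    return (i30 - i0) / (g30 - g0)

def homa_ir(fasting_glucose_mmol_l, fasting_insulin_mU_l):
    return fasting_glucose_mmol_l * fasting_insulin_mU_l / 22.5

def glucose_to_insulin_ratio(fasting_glucose, fasting_insulin):
    return fasting_glucose / fasting_insulin
```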
Flaviviruses as a Cause of Undifferentiated Fever in Sindh Province, Pakistan: A Preliminary Report
Arboviral diseases are expanding worldwide, yet global surveillance is often limited due to diplomatic and cultural barriers between nations. With human encroachment into new habitats, mosquito-borne viruses are also invading new areas. The actual prevalence of expanding arboviruses is unknown in Pakistan due to inappropriate diagnosis and poor testing for arboviral diseases. The primary objective of this study was to document evidence of flavivirus infections as the cause of undifferentiated fever in Pakistan. Through a cooperative effort between the USA and Pakistan, patient exposure to dengue virus (DENV), West Nile virus (WNV), and Japanese encephalitis virus (JEV) was examined in Sindh Province for the first time in decades. Initial results from the 2015 arbovirus season, a cross-sectional study of 467 patients at 5 sites, identified DENV NS1 antigen in 63 of the screened subjects, WNV IgM antibodies in 16 patients, and JEV IgM antibodies in 32 patients. In addition, a number of practical findings were made, including (1) in silico optimization of RT-PCR primers for flavivirus strains circulating in the Middle East, (2) shipping and storage of RT-PCR master mix and other reagents at ambient temperature, (3) smart phone applications for the collection of data in areas with limited infrastructure, and (4) fast and reliable shipping for transport of reagents and specimens to and from the Middle East. Furthermore, this work is producing a group of highly trained local scientists and medical professionals disseminating modern scientific methods and more accurate diagnostic procedures to the community.
Scientific engagement of this kind occurs in a politically neutral atmosphere that can have rapid positive and sustainable impacts on human and animal health and the control of emerging diseases (2, 3). The growth in scientific personnel and infrastructure is essential to decrease the movement of diseases that threaten public health (1, 4-7). Many outbreaks since 1990, like prion disease in the UK, West Nile virus (WNV) in the Americas, and avian H5N2 in Canada and the US, have been economically destabilizing and highlight the need for transboundary collaboration (8).
In the past decade, mosquito-borne viral diseases have emerged in many new locales, rapidly attaining endemic status. In Pakistan, arboviral diseases are frequently overlooked or misdiagnosed because of the vague symptoms and extensive differential diagnoses, which overlap with many other pathologies, such as Crimean-Congo Hemorrhagic fever (CCHF), malaria, hepatitis C virus infection, Alkhurma virus, Kyasanur forest virus disease, rickettsiosis, ehrlichiosis, leptospirosis, typhoid fever, meningococcemia, borreliosis, Q fever, and influenza. Furthermore, manifestations of arboviral disease mimic other febrile diseases and severe disease can present as a hemorrhagic illness [dengue virus (DENV), yellow fever virus, Zika virus, and Lassa fever virus], neurological disease (WNV, DENV), or arthritis (chikungunya virus, Zika virus, and DENV) (9,10). Because vaccines or antivirals do not exist for most of these viruses, surveillance becomes an essential part of control via detection and communication. The cornerstone of active and passive surveillance is accurate diagnostic assessment (9, 10).
There has been limited published data for arbovirus surveillance in Pakistan. Historically, only the presence of DENV subtypes 1 and 2 was detected, in isolated outbreaks in Pakistan in the twentieth century (11, 12). Since 2005, all four subtypes of DENV have spread throughout the country (12-15). In neighboring Punjab province, the seroprevalence of DENV in patients was 42.63% in 2013 (16). The WHO lists Japanese encephalitis virus (JEV) as active in Pakistan (17), although most reports indicate JEV activity mainly along the northern Pakistan-India border (10, 18-20). JEV is likely circulating in Pakistan at this time, but limited information exists regarding the actual disease burden JEV contributes to human health in Pakistan. In the early 2000s, 25% of the Pakistani military personnel who tested seropositive for JEV demonstrated cross-reactivity with WNV, and thus a true determination of infection could not be verified (21). WNV has been detected in Pakistan since the 1980s. Epidemiological work performed 20 years ago indicated that WNV antibodies were present in over 40% of the human population in Punjab province (21). Recently, a 55% seropositivity rate was detected in horses in Punjab province (18, 20-24). This high seroprevalence in horses suggests that WNV is also circulating in humans.
Described here are the initial data of a biological engagement program (BEP) implemented between Pakistan (Aga Khan University, Karachi, Pakistan) and the US (University of Florida, Gainesville, FL, USA) to perform a multisite study examining possible arboviral causes of febrile disease in citizens of Sindh province. Via a "train the trainer" format, this project aimed to provide Pakistani collaborators with training for virus surveillance and diagnostics in order to assess the prevalence of flaviviruses (DENV, WNV, and JEV) in Pakistan. The primary objective of this study was to document evidence of the above mentioned viral infections as causes of undifferentiated fever in order to build capacity for laboratory diagnosis and surveillance within Pakistan.
Materials and Methods
A cross-sectional, observational study was performed to identify which arboviruses (DENV, WNV, and JEV) were the cause of acute undifferentiated febrile illness in selected basic health units and/or district hospitals of the Sindh region of Pakistan. A total of 1,000 patients (250/year) were targeted for enrollment under informed consent procedures that were reviewed and approved by the Ethics Review Committee, Aga Khan University (#3183-PAT-ERC-14) and the Institutional Review Board, University of Florida (#201500908). All enrolled subjects gave written informed consent in accordance with the Declaration of Helsinki. Patients were recruited with a case definition developed by the WHO and modified by the Pakistan Ministry of Health to incorporate syndromic findings of acute hemorrhagic fever, acute flaccid paralysis, and unexplained fever (25). Patient enrollment was performed during the monsoon season (May-October) during 2015. All patients, males and females between 10 and 50 years of age meeting the case definition on the day of enrollment, were eligible for the study. Patients younger than 10 and older than 50 years of age and patients who tested positive for CCHF, influenza, malaria, tuberculosis, or bacterial septicemia during routine hospital admittance procedures were excluded (Figure 1). Briefly, all patients were tested for DENV antigen unless affected primarily by neurological abnormalities. If positive, serum was tested for DENV subtype by RT-PCR. All negative sera were tested via IgM capture ELISA for JEV and WNV.
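The screening algorithm can be restated in code as follows (a sketch only; field and result names are illustrative, not from the study protocol):

```python
# Sketch of the Figure 1 testing algorithm: DENV NS1 antigen first (unless the
# presentation is primarily neurological); NS1-negative or neurological samples
# proceed to JEV/WNV IgM capture ELISA. Names are hypothetical.
from dataclasses import dataclass
from typing import Dict

@dataclass
class Sample:
    primarily_neurological: bool
    denv_ns1_positive: bool     # assay outcomes injected for illustration
    jev_igm_positive: bool
    wnv_igm_positive: bool

def screen(sample: Sample) -> Dict[str, bool]:
    if not sample.primarily_neurological and sample.denv_ns1_positive:
        # positives would then go to RT-PCR for DENV subtyping
        return {"DENV": True, "JEV_IgM": False, "WNV_IgM": False}
    return {"DENV": False,
            "JEV_IgM": sample.jev_igm_positive,
            "WNV_IgM": sample.wnv_igm_positive}

print(screen(Sample(False, True, False, False)))   # DENV-positive example
```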
Study Sites
Five study sites were established, and personnel were trained, throughout the Sindh province in Pakistan (Figure 2). These sites included four medical colleges, among them Ghulam Mohammad Mahar Medical College (Sukkur, Pakistan), CMC Teaching Hospital (Larkana, Pakistan), and Muhammed Medical College Hospital (Mirpurkhas, Pakistan). Enrollment of study subjects was also established at a civil hospital in Hyderabad, Pakistan.
Data Collection and Processing Procedures
Originally, for communication within Pakistan between sites, networked computers were planned as the primary mode of reporting of test results. Connectivity was found to be a major issue; even if access was available, there were frequent interruptions and limited technological support. Android mobile phones provided an alternative for surveillance data collection and transmission (Epicollect®, http://www.epicollect.net/, Wellcome Trust, Imperial College London). At the study sites, patient information was collected on hard-copy forms and de-identified.
Real-Time PCR
For detection of DENV nucleic acids, primer sequences were constructed for strains circulating in Pakistan via the addition of degenerate nucleotides (Table 1) (26); a short sketch of how such degenerate bases expand into concrete sequences is given after the ELISA results below. Primers, standards, and controls were developed using synthetic DNA targets of various portions of the viral genomes (Table 3).

Results

ELISA

A high signal-to-noise ratio and cross-reactivity were factors that prevented adequate interpretation. In 16 of the 414 patients screened for JEV, the ELISA results fell above background noise and just below the IgM-positive threshold. The WNV assays resulted in 14 of 241 samples with similarly inconclusive readings. Cross-reactivity between the WNV and JEV ELISA assays was also an issue, with 32 (up to 13%) samples testing positive for both WNV and JEV exposure.
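As referenced above, degenerate (IUPAC) primer bases can be expanded into the set of concrete sequences they cover; the following generic sketch illustrates this (the primer shown is hypothetical, not one of the Table 1 primers):

```python
# Generic expansion of a degenerate primer into all matching sequences.
from itertools import product

# IUPAC degenerate nucleotide codes
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT",
         "K": "GT", "M": "AC", "B": "CGT", "D": "AGT",
         "H": "ACT", "V": "ACG", "N": "ACGT"}

def expand_degenerate(primer: str):
    """Enumerate every concrete sequence matched by a degenerate primer."""
    return ["".join(bases) for bases in product(*(IUPAC[b] for b in primer))]

print(expand_degenerate("ACYTGR"))  # hypothetical primer -> 4 sequences
```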
Real-Time PCR
Synthetic targets were developed for use as standards and controls for the RT-PCR platform (Table 2). Targets were optimized to perform as well as or better than conventional plasmids (Figure 3), and the difference in percent efficiency was 10% or less for DENV1 and DENV3 and <20% for DENV2 and DENV4. In addition, we found that our plasmid controls were frequently >100% in efficiency (slope > −3.32).
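For reference, the standard-curve arithmetic behind these efficiency figures follows the usual qPCR relation (a generic illustration, not the study's own code or data):

```python
# Amplification efficiency from a qPCR standard curve:
# slope = -3.322 corresponds to 100% (perfect doubling per cycle).
import numpy as np

def pcr_efficiency(log10_copies, ct_values):
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0
    return slope, 100 * efficiency

# Hypothetical ten-fold dilution series of a synthetic DNA target
dilutions = np.log10([1e6, 1e5, 1e4, 1e3, 1e2])
cts = np.array([15.1, 18.5, 21.8, 25.2, 28.6])
slope, eff = pcr_efficiency(dilutions, cts)
print(f"slope = {slope:.2f}, efficiency = {eff:.0f}%")   # ~98% here
```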
Discussion
Arboviral infections have a global distribution; however, the burden of viral agents varies in different geographical regions. The true burden and epidemiology of arboviruses in Pakistan are not known, as many of these infections, which present initially as a vague febrile illness, are often misdiagnosed. The expansion of DENV in Pakistan has been notable in its intensity. Recently, DENV emerged in Karachi in Sindh, Pakistan, affecting 3,640 patients with an estimated 40 deaths (12-14). WNV is an arbovirus undergoing expansion throughout the world. While it is similar to JEV in terms of syndromes, most human JEV exposure and illness is described in children (27). For WNV, people over the age of 55 years are at the highest risk of neurological syndromes during the virus's recent expansion throughout Europe (28). Comparative analysis of risk for concomitantly circulating JEV and WNV has not been performed. Recent evidence demonstrates a high amount of WNV activity in horses in Pakistan (20); however, there is limited information regarding recent human exposure and, to the best of our knowledge, the most commonly reported cause of neurological arboviral disease would likely be JEV (20).
Dengue virus was the most frequently identified and widespread flavivirus detected in enrolled patients. These data show that DENV was detected in nearly one-third of all patients in Karachi, while it was found at much lower rates in other locations. This is most likely due to the fact that Karachi is an expansive urban environment with an ideal climate for the DENV vector Aedes aegypti. The other four sites displayed much lower human exposure to arboviral diseases than Karachi, most likely because suitable conditions for the vector are absent. Flavivirus exposure was detected in only one patient living in Sukkur. This may be a function of vector biology or the low number of screened patients. Sukkur is a small, sparsely populated district where the climate is very hot, windy, and dry, which is not suited to most mosquitoes.
Commercial assays make field work in limited-resource areas feasible. They are easy to use and results are easy to interpret. However, especially for WNV, the assay is expensive. As expected, there was significant cross-reactivity between WNV and JEV when using IgM ELISA kits. If JEV and WNV ELISA data are grouped as "flavivirus exposed," roughly 13% of screened patients were positive. Consistent with the algorithm, all of the WNV and JEV samples will be tested by plaque reduction neutralization test (PRNT) to confirm which disease agent was present. This cooperative BEP with Pakistan shows that scientific and medical projects can be successfully and rapidly established between academic professionals of countries that have limited political relations. Relationships were enhanced through early discussions between partners that addressed cultural and scientific barriers to health, in order to optimize scientific training and develop methods for clinical surveillance. Barriers to recruitment of patients included lack of understanding of basic clinical signs of arboviral diseases on the part of health-care professionals. This was remedied via focused training of study personnel who were employed locally.
Supply and communication lines also needed to be addressed, by investing research time in questions of data transfer and cold-chain supply without the need to develop entirely new technologies. For assay development, reagents and assays were designed to obviate the need for a cold chain. In particular, it was determined that the BioRad chemistries could be shipped at ambient temperature; this was confirmed by sending the reagents from the UF laboratory to the Aga Khan laboratory. When using plasmid technology, problems arose with reagent stability if shipping was interrupted (>3 weeks); thus, commercially manufactured real-time targets were used. These targets proved to be highly stable and exceptionally cost-effective. This also decreased the need to send supplies on dry ice, which offered a substantial decrease in shipping costs and increased the availability of other forms of freight.
One of the most common challenges faced in the establishment of this project centered on the limited communication available to establish the needs of the US-Pakistan collaborators themselves. Most communications were written and did not include face-to-face fact finding before embarking on training and teaching sessions. In addition, problems with computer-based communication and travel (both local and abroad) delayed development of laboratory expertise. At the study sites, basic mobile phone technology was relied upon to share patient cases with infectious disease experts, and the availability of a freely hosted website greatly facilitated this. Differences in compliance requirements at both university and government levels posed significant obstacles for transfer of medical technology.
Despite these issues, many goals were attained within the first year of this project: a collaborative environment between a US-based university and a Pakistan-based university for the purposes of research and training exchange, and a multisite network for arbovirus surveillance in humans across one of the largest and most populous provinces of Pakistan, were established. US partners gained an understanding of the climatic, geographic, and cultural landscape of Pakistan and how this may contribute to arbovirus expansion. Finally, for the first time, a preliminary assessment of several important arboviruses was made, indicating the need for continued surveillance and testing.
Author Contributions
The following authors contributed to this manuscript in the following ways: contribution to the conception and design of the work: ML, EK, and KB; acquisition, analysis, and interpretation of data: ML, EK, DP, KB, AK, AN, JF, SS, FM, RH, and JL; drafting, editing, revising, and approving drafts: ML, KB, EK, DP, AK, AN, JF, SS, FM, RH, and JL. All authors agreed to be accountable for all aspects of the work.
Acknowledgments
We are grateful to Ms. Sally Beachboard, who has spent many hours determining supply routes for private vendors and in negotiating costs of supplies for our work in both the USA and Pakistan. We also thank Greg Gray for his initial work with Aga Khan University and epidemiological training.
Funding
This work was supported by the Defense Threat Reduction Agency, Basic Research Award # HDTRA1-14-1-0022, to the University of Florida. The contents do not necessarily reflect the position or the policy of the federal government, and no official endorsement should be inferred.
Leverage on small-scale primordial non-Gaussianity through cross-correlations between CMB $E$-mode and $\mu$-distortion anisotropies
Multi-field inflation models and non-Bunch-Davies vacuum initial conditions both predict sizeable non-Gaussian primordial perturbations and anisotropic $\mu$-type spectral distortions of the cosmic microwave background (CMB) blackbody. While CMB anisotropies allow us to probe non-Gaussianity at wavenumbers $k\simeq 0.05\,{\rm Mpc^{-1}}$, $\mu$-distortion anisotropies are related to non-Gaussianity of primordial perturbation modes with much larger wavenumbers, $k\simeq 740\,{\rm Mpc^{-1}}$. Through cross-correlations between CMB and $\mu$-distortion anisotropies, one can therefore shed light on the aforementioned inflation models. We investigate the ability of a future CMB satellite imager like LiteBIRD to measure $\mu T$ and $\mu E$ cross-power spectra between anisotropic $\mu$-distortions and CMB temperature and $E$-mode polarization anisotropies in the presence of foregrounds, and derive LiteBIRD forecasts on ${f_{\rm NL}^\mu(k\simeq 740\,{\rm Mpc^{-1}})}$. We show that $\mu E$ cross-correlations with CMB polarization provide more constraining power on $f_{\rm NL}^\mu$ than $\mu T$ cross-correlations in the presence of foregrounds, and the joint combination of $\mu T$ and $\mu E$ observables adds further leverage to the detection of small-scale primordial non-Gaussianity. We find that LiteBIRD would detect ${f_{\rm NL}^\mu}=4500$ at $5\sigma$ significance after foreground removal, and achieve a minimum error of ${\sigma(f_{\rm NL}^\mu=0) \simeq 800}$ at 68\% CL by combining CMB temperature and polarization. Due to the huge dynamic range of wavenumbers between CMB and $\mu$-distortion anisotropies, such large $f^\mu_{\rm NL}$ values would still be consistent with current CMB constraints in the case of very mild scale-dependence of primordial non-Gaussianity. Anisotropic spectral distortions thus provide a new path, complementary to CMB $B$-modes, to probe inflation with LiteBIRD.
INTRODUCTION
Observing the Cosmic Microwave Background (CMB) has provided us with precise information about the primordial perturbation field. Its non-Gaussianity is tightly constrained by measurements of the 3- and 4-point correlation functions of the CMB Primary Anisotropies (PA), i.e. temperature and $E$-mode polarization fluctuations in the sky generated at the time of recombination (Planck Collaboration et al. 2020c). However, another property of the CMB also tracks primordial perturbations: its energy spectrum. Damping of acoustic modes in the pre-recombination era introduces heat into the photon-baryon plasma (Sunyaev & Zeldovich 1970; Hu et al. 1994a; Daly 1991; Chluba et al. 2012a). After double Compton scattering and Bremsstrahlung become ineffective at $z \lesssim 2 \times 10^6$, the plasma lacks the ability to equilibrate the number density of photons. This leads to a distortion of the CMB spectrum that can be, in principle, observed today.
While other physical processes such as recombination (Hart et al. 2020) and photon injection (Bolliet et al. 2021) imprint the CMB spectrum with peculiar spectral signatures, the effect of heat dissipation (after $z \simeq 2 \times 10^6$) can be successfully described by the amplitude of two specific spectral shapes: $\mu$- and $y$-distortions. The first is generated solely in the primordial universe, before $z \simeq 5 \times 10^4$, when the photons, having acquired an effective chemical potential $\mu$, distribute like a Bose-Einstein rather than a Planck distribution (e.g., Sunyaev & Zeldovich 1970; Burigana et al. 1991; Hu & Silk 1993; Chluba & Sunyaev 2012; Sunyaev & Khatri 2013; Lucca et al. 2020). Compton $y$-distortions, produced when energy is not effectively redistributed, are generated throughout the rest of the cosmic history, both pre- and post-recombination. While in the pre-recombination era $y$-distortions can be produced by the same mechanism generating $\mu$-distortions, they also receive important contributions in the late universe from the Sunyaev-Zeldovich effect (Mroczkowski et al. 2019) and from reionization (Hu et al. 1994b; Pitrou et al. 2010).
Here we only consider $\mu$-distortions, both because they are a more powerful probe of primordial non-Gaussianity and because one does not need to account for biases due to late-time secondary sources.
If the primordial perturbation field is non-Gaussian, spectral distortions can be spatially modulated, which leads, after projection on the sphere, to Spectral Distortion Anisotropies (SDA). If we consider the local model of non-Gaussianity, which peaks in the squeezed limit, the (small-scale) modulation in power is imprinted by a long-wavelength mode. CMB PA are a high signal-to-noise tracer of said long-wavelength mode; thus, by cross-correlating PA and SDA, one can hope to achieve strong constraints on all models of primordial non-Gaussianity with a large squeezed contribution. This avenue was opened by Pajer & Zaldarriaga (2012) and Ganc & Komatsu (2012) and further extended and quantitatively refined in Emami et al. (2015); Ota (2016); Chluba et al. (2017); Ravenni et al. (2017); Cabass et al. (2018). Recently it has been shown how the same idea can be applied to models involving primordial black holes (Özsoy & Tasinato 2021; Zegeye et al. 2021) and to models in which scalar and tensor perturbations are correlated (Orlando et al. 2021). While fainter than the monopole signal, spectral distortion anisotropies do not require (Ganc & Komatsu 2012) the use of a purposefully-built absolutely-calibrated spectrometer, but can be extracted from differential measurements carried out by an imager (e.g. Hazumi et al. 2019). Moreover, cross-correlating them with much more intense signals such as the CMB temperature and polarization drastically increases the signal-to-noise ratio, making a detection possible even with the next generation of satellites (Remazeilles & Chluba 2018). As is well known (Pajer & Zaldarriaga 2012; Ganc & Komatsu 2012), the bulk of the information is contained in the lowest multipoles, with the signal-to-noise ratio decaying quickly towards smaller angular scales. Thus, for a fixed sensitivity, ample sky coverage is more important than high angular resolution, motivating our focus on satellite rather than ground-based telescope observations.
In Remazeilles & Chluba (2018), for the first time, we performed realistic forecasts by analysing synthetic datasets including foregrounds and the effect of detectors, focusing the analysis on CMB temperature and $\mu$ maps. As shown in Ravenni et al. (2017), $E$-mode polarization adds valuable information, and in fact its use provides tighter constraints than the temperature. The reasons for this are twofold.
$E$-modes do not receive sizeable contributions in the late universe besides the low-$\ell$ bump due to reionization, whereas the integrated Sachs-Wolfe effect does not correlate with primordial signals but still adds to the temperature anisotropy. Moreover, temperature foregrounds are more complex than polarized ones. While this is not a challenge if we wish to recover just temperature or polarization anisotropies, it makes a difference when targeting a non-trivial spectral component (i.e., $\mu$-distortions). In this paper we investigate these claims, showing how $\mu E$ cross-correlations offer tighter and more reliable estimates of non-Gaussianity compared to $\mu T$, and forecast the constraint given by the combination of both probes. This paper is organised as follows. In Sect. 2, we introduce the theoretical model that we wish to test and we model the observables. In Sect. 3 we describe the sky simulations that we analyse in this work. In Sect. 4 we discuss in detail the expected noise curves for $\mu$-distortion anisotropies. In Sect. 5 we describe our pipeline and component separation methodology and show our results, which we further discuss in Sect. 6, where we conclude.
THEORETICAL MODELLING
The primordial perturbation field $\zeta$ can be described in terms of its $n$-point correlation functions in Fourier space. The two- and three-point functions, the power spectrum $P(k)$ and bispectrum $B(k_1, k_2, k_3)$, are defined as
$$\langle \zeta_{\mathbf{k}_1} \zeta_{\mathbf{k}_2} \rangle = (2\pi)^3\, \delta^{(3)}(\mathbf{k}_1 + \mathbf{k}_2)\, P(k_1),$$
$$\langle \zeta_{\mathbf{k}_1} \zeta_{\mathbf{k}_2} \zeta_{\mathbf{k}_3} \rangle = (2\pi)^3\, \delta^{(3)}(\mathbf{k}_1 + \mathbf{k}_2 + \mathbf{k}_3)\, B(k_1, k_2, k_3).$$
The bispectrum shape varies depending on the class of models one considers. Here we are interested in shapes that peak on squeezed configurations ($k_1 \ll k_2 \approx k_3$), as those are the ones PA-SDA correlations are most sensitive to. The most studied model peaking in this limit is the local bispectrum (e.g., Gangui et al. 1994; Verde et al. 2000; Komatsu & Spergel 2001), which naturally arises from non-linearity of the perturbations in real space. It is especially important because it could allow us to discern multi-field models (which predict $f_{\rm NL} \gtrsim 1$) from single-field inflation (see Planck Collaboration et al. 2020c, for an extended list of references).
The local non-Gaussianity parameter constraint set by Planck Collaboration et al. (2020c), $f_{\rm NL} = -0.9 \pm 5.1$, has been achieved by measuring the bispectrum of CMB PA, which probe primordial perturbation modes with typical wavenumbers $k_0 \simeq 0.05\,{\rm Mpc^{-1}}$. Therefore, the Planck constraint can be thought of as being valid on those scales. In contrast, $\mu$-distortion anisotropies are generated by non-Gaussian perturbation modes with much larger wavenumbers, $k \simeq 740\,{\rm Mpc^{-1}}$, therefore they would lead to $f_{\rm NL}$ constraints in a vastly different regime (Emami et al. 2015). In fact, $f_{\rm NL}$ does not need to be constant in general; various models predict some scale dependence (Dimastrogiovanni & Emami 2016; Byrnes et al. 2010; Shandera et al. 2011; Chen 2005). While this is well established, explicit expressions are generally provided only through expansions in powers of $\ln(k/k_{\rm p})$ about some pivot scale $k_{\rm p}$. The coefficients of this expansion are related to the hierarchy of slow-roll parameters, and thus are small. However, when we consider cross-correlations of PA and SDA, we are effectively integrating over an extremely large range of scales ($k \in [10^{-4}\,{\rm Mpc^{-1}}, 10^{4}\,{\rm Mpc^{-1}}]$), so that expansions in $\ln(k/k_{\rm p}) = \mathcal{O}(10)$ might not converge in general. Bearing in mind this caveat (see also the discussion in Planck Collaboration et al. 2020c), it is still instructive to notice that assuming even a mild scale-dependence of primordial non-Gaussianity, e.g. $f_{\rm NL}(k) = f_{\rm NL}(k_0)\,(k/k_0)^{n_{\rm NL}}$ with $n_{\rm NL} \simeq 0.7$, leads to $f_{\rm NL}(k \simeq 740\,{\rm Mpc^{-1}}) > 4000$ at $\mu$-distortion scales, while still being consistent with $f_{\rm NL}(k_0 \simeq 0.05\,{\rm Mpc^{-1}}) \simeq 5$ at CMB scales. As customary (Pajer & Zaldarriaga 2012; Ganc & Komatsu 2012; Emami et al. 2015), here we use a phenomenological approach and assume $f_{\rm NL}(k_1, k_2, k_3) = f_{\rm NL}^{\mu} = {\rm constant}$ when $k_1$ spans the range of PA scales and $k_2$, $k_3$ span the SD scales.
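A quick numerical check of this lever arm (values mirror those quoted above):

```python
# Numerical illustration of the scale-dependent f_NL amplification.
import numpy as np

k_cmb, k_mu = 0.05, 740.0     # Mpc^-1, CMB and mu-distortion pivot scales
f_nl_cmb = 5.0                # roughly the Planck-allowed amplitude
n_nl = 0.7                    # mild running of f_NL

f_nl_mu = f_nl_cmb * (k_mu / k_cmb) ** n_nl
print(f"f_NL(k = 740/Mpc) ~ {f_nl_mu:.0f}")   # ~4x10^3, consistent with >4000 above
```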
The recipe to calculate power spectra and cross-correlations taking properly into account transfer effects of both PA and SDA is well established (Pajer & Zaldarriaga 2012; Ganc & Komatsu 2012; Chluba et al. 2017; Shiraishi et al. 2015; Ravenni et al. 2017). To fix the notation, let us define the harmonic coefficients of the PA field $X = T, E$ as
$$a^{X}_{\ell m} = 4\pi (-i)^\ell \int \frac{{\rm d}^3 k}{(2\pi)^3}\, \zeta_{\mathbf{k}}\, \mathcal{T}^{X}_{\ell}(k)\, Y^{*}_{\ell m}(\hat{\mathbf{k}}),$$
and similarly the harmonic coefficients of the $\mu$ field as
$$a^{\mu}_{\ell m} = 4\pi (-i)^\ell \int \frac{{\rm d}^3 k_1}{(2\pi)^3} \frac{{\rm d}^3 k_2}{(2\pi)^3}\, \zeta_{\mathbf{k}_1} \zeta_{\mathbf{k}_2}\, W(k_1, k_2, k_3)\, j_\ell(k_3 r_{\rm LS})\, Y^{*}_{\ell m}(\hat{\mathbf{k}}_3), \qquad \mathbf{k}_3 = \mathbf{k}_1 + \mathbf{k}_2.$$
In the last two equations $\mathcal{T}^{T/E}_{\ell}(k)$ and $W(k_1, k_2, k_3)$ are respectively the PA transfer function and the SDA window function that relate observables to primordial perturbations. We calculated the PA transfer functions using CLASS (Blas et al. 2011). The spherical Bessel functions $j_\ell(k r_{\rm LS})$ account for the angular projection of SDA's on the last scattering surface, at a comoving distance $r_{\rm LS}$. Close to exact expressions of the window function were provided in Chluba et al. (2017), and a reasonable approximation is (Pajer & Zaldarriaga 2012; Ganc & Komatsu 2012; Chluba et al. 2017)
$$W(k_1, k_2, k_3) \simeq 2.27 \left( e^{-(k_1^2 + k_2^2)/k_{\rm D}^2(z_i)} - e^{-(k_1^2 + k_2^2)/k_{\rm D}^2(z_f)} \right),$$
where $k_{\rm D}(z_i) = 12\,000\,{\rm Mpc^{-1}}$ and $k_{\rm D}(z_f) = 46\,{\rm Mpc^{-1}}$ are the diffusion damping scales at the beginning and end of the $\mu$-distortion era. It is then straightforward to calculate the cross-correlation
$$C_\ell^{\mu X} \simeq \frac{12}{5}\, f_{\rm NL}^{\mu} \left[ 4\pi \int \frac{{\rm d}k}{k}\, \Delta^2_\zeta(k)\, \mathcal{T}^{X}_{\ell}(k)\, j_\ell(k\, r_{\rm LS}) \right] \\ \times \left[ \int \frac{{\rm d}k_2}{k_2}\, \Delta^2_\zeta(k_2)\, 2.27 \left( e^{-2 k_2^2/k_{\rm D}^2(z_i)} - e^{-2 k_2^2/k_{\rm D}^2(z_f)} \right) \right]. \qquad (7)$$
Notice that the first integral would match the temperature power spectrum in the Sachs-Wolfe limit if one were to take $X = T$ and approximate the transfer function $\mathcal{T}^{T}_{\ell}(k) \approx j_\ell(k r_{\rm LS})/5$. The integral in the second line is approximately equivalent to the sky-averaged $\mu$ distortion sourced by dissipation of acoustic modes. It is then obvious that, for fixed $f_{\rm NL}^{\mu}$, comparatively higher values of the $\mu$ monopole strengthen the detection of this cross-correlation.
The Gaussian contribution to the auto-power spectrum of $\mu$-distortion anisotropies is extremely small (Pajer & Zaldarriaga 2012; Ganc & Komatsu 2012), and thus negligible when compared to any realistic instrumental noise term. As such, from the theoretical side, we only need to consider the non-Gaussian contribution to $C_\ell^{\mu\mu}$ (see Emami et al. 2015) which, being quadratic in $f_{\rm NL}^{\mu}$, can be important for large values of $f_{\rm NL}^{\mu}$ (we stress that very large values of this parameter might break the perturbative expansions; knowing this limitation, we still investigate these limits as a proof of concept, and the tightest constraints we will show it is possible to set are in any case safe from this caveat):
$$C_\ell^{\mu\mu} \simeq \left( \frac{12}{5}\, f_{\rm NL}^{\mu}\, \langle\mu\rangle \right)^2 \frac{2\pi A_{\rm s}}{\ell(\ell+1)},$$
where $A_{\rm s} = 2.4 \times 10^{-9}$ is the power spectrum amplitude of the primordial curvature perturbation, assuming scale-invariance (i.e. $n_{\rm s} \simeq 1$), and $\langle\mu\rangle = 2.3 \times 10^{-8}$ is the $\Lambda$CDM prediction for the average $\mu$-distortion (Chluba 2016).
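As a numerical sanity check, the window-function approximation above can be integrated directly to recover the sky-averaged distortion (a sketch; the integration limits are illustrative):

```python
# Sky-averaged mu-distortion from the window approximation, for a
# scale-invariant spectrum Delta^2_zeta = A_s. Numerical check only.
import numpy as np

A_s = 2.4e-9
kD_i, kD_f = 12_000.0, 46.0     # Mpc^-1, damping scales at start/end of mu era

lnk = np.linspace(np.log(1.0), np.log(1e5), 4000)   # illustrative k range
k = np.exp(lnk)
window = 2.27 * (np.exp(-2 * k**2 / kD_i**2) - np.exp(-2 * k**2 / kD_f**2))
mu_bar = np.trapz(A_s * window, lnk)
print(f"<mu> ~ {mu_bar:.1e}")
# ~3e-8, same order as the quoted LCDM value 2.3e-8, which uses n_s < 1
# and the full thermal history rather than this simple approximation.
```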
Cosmological signals
The covariance matrix of the correlated cosmological fields reads as
$$\mathbf{C}_\ell = \begin{pmatrix} C_\ell^{TT} & C_\ell^{TE} & C_\ell^{T\mu} \\ C_\ell^{TE} & C_\ell^{EE} & C_\ell^{E\mu} \\ C_\ell^{T\mu} & C_\ell^{E\mu} & C_\ell^{\mu\mu} \end{pmatrix}, \qquad (9)$$
where $C_\ell^{TT}$, $C_\ell^{EE}$, $C_\ell^{TE}$ are theoretical CMB power spectra calculated from the Planck 2018 $\Lambda$CDM best-fit model (Planck Collaboration et al. 2020b), while the theoretical cross-power spectra $C_\ell^{\mu T}$, $C_\ell^{\mu E}$ between CMB and $\mu$-distortion anisotropies are given by Eq. (7) (Ravenni et al. 2017).
Following Remazeilles & Chluba (2018), we perform a Cholesky decomposition of the covariance matrix Eq. (9) in order to simulate correlated maps of $\mu$-distortion anisotropies, CMB temperature anisotropies, and CMB $E$-mode anisotropies based on the theoretical auto- and cross-power spectra described above. Our simulated maps are in HEALPix format (Górski et al. 2005) using $N_{\rm side} = 512$ pixelisation.
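A minimal sketch of this simulation step (assuming healpy is available; the per-multipole covariance array layout and helper name are illustrative choices, not the paper's code):

```python
# Draw correlated (T, E, mu) harmonic coefficients by Cholesky-decomposing
# the 3x3 covariance of Eq. (9) at each multipole. Illustrative sketch.
import numpy as np
import healpy as hp

def correlated_alms(cov_ell, lmax, seed=0):
    """cov_ell: (lmax+1, 3, 3) array, rows/cols ordered (T, E, mu).
    Returns alms of shape (3, n_alm) carrying the requested correlations."""
    rng = np.random.default_rng(seed)
    alms = np.zeros((3, hp.Alm.getsize(lmax)), dtype=complex)
    for ell in range(2, lmax + 1):
        L = np.linalg.cholesky(cov_ell[ell] + 1e-30 * np.eye(3))  # jitter for PD
        for m in range(ell + 1):
            idx = hp.Alm.getidx(lmax, ell, m)
            g = (rng.standard_normal(3) if m == 0 else
                 (rng.standard_normal(3) + 1j * rng.standard_normal(3)) / np.sqrt(2))
            alms[:, idx] = L @ g   # imprint the covariance: <(Lg)(Lg)^dag> = C
    return alms   # map each row with hp.alm2map to obtain T, E and mu maps
```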
Through non-zero and correlations, our analysis focuses on non-Gaussianity of the primordial field at very small scales with NL ≡ NL ( 740 Mpc −1 ) ≠ 0, while we neglect non-Gaussian fluctuations in the maps at scales probed by CMB anisotropies, so NL is consistent with zero in lower ranges. Since we do not attempt at computing any bispectrum from the maps but only cross-power spectra and between maps, we do not inject NL in the primary anisotropy maps, but only use it to introduce and correlations. . Cross-correlation coefficient between the anisotropic -distortion field and either CMB or fields. The intrinsic correlation betweendistortion and CMB -modes is actually more significant than the intrinsic correlation between -distortion and CMB temperature anisotropies.
In the following, we will consider a set of three sky simulations, in which $f_{\rm NL}^{\mu} = 4500$, $f_{\rm NL}^{\mu} = 10^4$, and $f_{\rm NL}^{\mu} = 10^5$, respectively. In addition, we consider a sky simulation in which $f_{\rm NL}^{\mu} = 0$, i.e. without anisotropic $\mu$-distortion signal, for our null tests. The CMB-map realisation of each simulation is the same for all fiducial $f_{\rm NL}^{\mu}$ values considered.
Our simulated maps of anisotropic $\mu$-type distortions, CMB temperature, and CMB $E$-mode polarization are shown in Fig. 1 for $\langle\mu\rangle = 2.3 \times 10^{-8}$ and $f_{\rm NL}^{\mu} = 4500$, at an angular resolution of 60 arcmin. The anticorrelation between CMB temperature and $\mu$-distortion anisotropies at large angular scales, as expected from theory, is clearly visible on the simulated maps. The auto- and cross-power spectra of the simulated maps are plotted in Fig. 2, where they are shown to match the theoretical spectra for $f_{\rm NL}^{\mu} = 4500$. We can also see from Fig. 2 that, for $f_{\rm NL}^{\mu}(k \simeq 740\,{\rm Mpc^{-1}}) \simeq 4500$, the correlated signal (grey line) is comparable in magnitude to the CMB-related signal at low multipoles $\ell < 60$ (red line) and at higher multipoles $60 < \ell < 300$ (brown line). Thus, if the amplitude of primordial non-Gaussianity is as large at high wavenumber $k \simeq 740\,{\rm Mpc^{-1}}$, then $\mu$-distortion anisotropies become an accessible signal for future CMB imagers like the LiteBIRD satellite (Hazumi et al. 2019), thanks to amplification by cross-correlation with the more intense signal of CMB anisotropies.
While the absolute amplitude of the $\mu E$ correlated signal is significantly lower than that of the $\mu T$ signal (Fig. 2), interestingly the degree of correlation between $\mu$-distortion and CMB $E$-mode anisotropies is significantly larger compared to the degree of correlation between $\mu$-distortion and CMB temperature anisotropies across the multipoles. This is evident from Fig. 3, showing the Pearson correlation coefficients across multipoles for both $\mu T$ and $\mu E$. Because of better correlation with CMB polarization, the observable $C_\ell^{\mu E}$ should actually provide more constraining power than $C_\ell^{\mu T}$ on $f_{\rm NL}^{\mu}(k \simeq 740\,{\rm Mpc^{-1}})$ in the presence of foregrounds, as we are going to demonstrate in this work. In addition, the foreground signal is much weaker in polarization and somewhat less complex than in intensity, since only a few of the foregrounds are actually polarized. This should further facilitate the recovery of $C_\ell^{\mu E}$ as compared to $C_\ell^{\mu T}$ and further enhance the constraining power of $C_\ell^{\mu E}$. CMB temperature and polarization anisotropies are achromatic in thermodynamic temperature (${\rm K_{CMB}}$) units, while $\mu$-type spectral distortions have a peculiar spectral signature given by (see Sunyaev & Zeldovich 1970; Chluba 2018; Lucca et al. 2020):
$$f_\mu(x) = T_{\rm CMB} \left( \frac{1}{\tilde{\beta}} - \frac{1}{x} \right), \qquad \tilde{\beta} = \frac{3\zeta(3)}{\zeta(2)} \simeq 2.1923, \qquad (10)$$
where $T_{\rm CMB} = 2.7255\,$K is the CMB blackbody temperature, $x = h\nu/(k_{\rm B} T_{\rm CMB})$, and $\zeta(3)$ is the value of the Riemann zeta function at integer argument 3. We thus scale our simulated map of $\mu$-distortion anisotropies, $\mu(\hat{n})$ (top panel of Fig. 1), across the frequency bands of our sky simulation by using the emission law Eq. (10) as
$$\Delta T_\mu(\nu, \hat{n}) = f_\mu(x)\, \mu(\hat{n}). \qquad (11)$$
Since the anisotropic $\mu$-distortion signal is unpolarised, it is absent from the $Q$, $U$ polarization channels of our sky simulations. We ignore primordial $y$-distortion anisotropies in the current analysis, as the amplitude of $yT$ and $yE$ correlations is anyway about an order of magnitude lower than the amplitude of $\mu T$ and $\mu E$ correlations (Ravenni et al. 2017). Moreover, the full spectral degeneracy between primordial $y$-distortions and the thermal SZ effect from galaxy clusters in the late universe makes it challenging to disentangle the primordial signal from the SZ foreground in temperature channels, and thus this deserves further investigation in a future work.
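A sketch of this frequency scaling (the band list is illustrative; the SED normalisation follows Eq. (10) above):

```python
# Scale a mu-distortion map across frequency channels using the mu SED
# in thermodynamic-temperature units, per Eq. (10) above. Sketch only.
import numpy as np

H_K_RATIO = 4.799e-11   # h/k_B in K*s, so x = H_K_RATIO * nu / T
T_CMB = 2.7255          # K
BETA = 2.1923           # 3*zeta(3)/zeta(2)

def f_mu(nu_ghz):
    """mu-distortion transfer to Delta T_CMB units at frequency nu [GHz]."""
    x = H_K_RATIO * nu_ghz * 1e9 / T_CMB
    return T_CMB * (1.0 / BETA - 1.0 / x)

bands = np.array([40.0, 140.0, 280.0, 402.0])   # GHz, illustrative channels
# mu_map = ...  (the simulated mu anisotropy map)
# maps_per_band = [f_mu(nu) * mu_map for nu in bands]
print({nu: round(f_mu(nu), 3) for nu in bands})  # note the null near ~124 GHz
```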
Galactic and extragalactic foregrounds
We use the Planck Sky Model (PSM; Delabrouille et al. 2013) to simulate maps of Galactic and extragalactic foreground emissions. For observations in temperature, the Galactic foreground components of our sky simulation include thermal dust, synchrotron, free-free, and anomalous microwave emission (AME), while the simulated extragalactic foregrounds include cosmic infrared background (CIB) anisotropies and the thermal and kinetic Sunyaev-Zeldovich effects. For observations in polarization, our sky simulations include only thermal dust and synchrotron as the main polarized Galactic foregrounds.
Since we are interested in large angular scales, where the bulk of the $\mu T$ and $\mu E$ correlated signals lies, we do not include unresolved radio and infrared sources in our simulations. Figure 4 displays the simulated foreground components as observed in temperature at 280 GHz, except for the AME component, which is shown as observed at 40 GHz, while Fig. 5 shows the polarized foreground components of the simulation as observed in the Stokes $Q$, $U$ fields at 280 GHz.
Thermal dust
Thermal dust emission from our Galaxy originates from silicate and carbonaceous grains of nanometre size in the interstellar medium which, by absorbing the UV light from stars, are heated and thus re-emit light at submillimetre and infrared wavelengths. This is the dominant foreground emission in the sky at frequencies $\gtrsim 100$ GHz. Thermal dust emission is also polarised, because dust grains are aspherical and spin around Galactic magnetic field lines, while radiative torques from stellar radiation force the dust grains to align with their long axis perpendicular to the magnetic field lines. Since the cross-section is proportional to the size of an object, dust grains emit more radiation parallel to their long axis, which induces linear polarization of thermal dust emission that is orthogonal to the magnetic field.
We use the Planck GNILC $I$, $Q$, $U$ maps at $\nu_0 = 353$ GHz (Planck Collaboration et al. 2016d, 2020d) as the dust templates for our simulations. The dust template maps in MJy sr$^{-1}$ units are scaled across frequencies through a modified blackbody function:
$$I_\nu(\hat{n}) = I_{\nu_0}(\hat{n}) \left( \frac{\nu}{\nu_0} \right)^{\beta_{\rm d}(\hat{n})} \frac{B_\nu\!\left(T_{\rm d}(\hat{n})\right)}{B_{\nu_0}\!\left(T_{\rm d}(\hat{n})\right)},$$
where the spectral index $\beta_{\rm d}(\hat{n})$ and temperature $T_{\rm d}(\hat{n})$ vary over the sky depending on the line of sight $\hat{n}$, and $B_\nu$ is the Planck law for blackbody radiation. The same Planck GNILC templates of dust spectral index and dust temperature (Planck Collaboration et al. 2016d) are used for the $I$, $Q$, and $U$ fields.
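A sketch of this scaling step (array inputs are illustrative placeholders for the GNILC templates):

```python
# Modified-blackbody scaling of a dust template from nu0 = 353 GHz, with
# per-pixel spectral index and temperature maps. Illustrative sketch.
import numpy as np

H = 6.62607015e-34   # Planck constant, SI
KB = 1.380649e-23    # Boltzmann constant, SI

def planck_B(nu_hz, T):
    """Planck law B_nu(T); overall units cancel in the ratio below."""
    return nu_hz**3 / np.expm1(H * nu_hz / (KB * T))

def scale_dust(template_353, beta_map, T_map, nu_ghz, nu0_ghz=353.0):
    """I_nu = I_nu0 * (nu/nu0)^beta * B_nu(T)/B_nu0(T), pixel by pixel."""
    nu, nu0 = nu_ghz * 1e9, nu0_ghz * 1e9
    return (template_353 * (nu / nu0) ** beta_map
            * planck_B(nu, T_map) / planck_B(nu0, T_map))
```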
Synchrotron
Synchrotron emission is due to the relativistic cosmic-ray electrons from supernova explosions which are accelerated by the magnetic field of our Galaxy. This is the dominant foreground emission in the sky at the lowest frequencies.
Synchrotron emission is also polarised. When the electrons are spiralling around the magnetic fields, their orbit projected on the sky plane of observation is seen as an oscillation orthogonal to the magnetic fields, which induces linear polarization of the synchrotron emission in the orbit plane orthogonal to the magnetic field lines.
We use the reprocessed Haslam 408 MHz map (Remazeilles et al. 2015) as the synchrotron template for intensity channels, while for polarization channels we use the Planck Commander template of synchrotron polarization Q, U maps at 30 GHz (Planck Collaboration et al. 2020a). Both templates in brightness Rayleigh-Jeans temperature (T_RJ) units are scaled across the frequency channels with the same power law, with a spectral index varying over the sky and given by the synchrotron index template map β_s(n̂) from Miville-Deschênes et al. (2008):
T_s(ν, n̂) = T_s(ν₀, n̂) (ν/ν₀)^{β_s(n̂)}.    (14)
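A minimal sketch of both template scalings follows, with randomly generated stand-ins for the GNILC and Miville-Deschênes parameter maps; the template values and spectral parameters below are illustrative assumptions, not the actual products.

```python
# Sketch: dust modified-blackbody scaling (Eq. 12) and synchrotron power-law
# scaling (Eq. 14). Template amplitudes and spectral-parameter maps are fakes.
import numpy as np
from scipy.constants import h, k as k_B, c

def planck_bnu(nu, T):
    """Blackbody spectral radiance B_nu(T)."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

def dust_mbb(I_nu0, beta_d, T_d, nu, nu0=353e9):
    """I(nu) = I(nu0) (nu/nu0)^beta_d * B_nu(T_d) / B_nu0(T_d)."""
    return I_nu0 * (nu / nu0)**beta_d * planck_bnu(nu, T_d) / planck_bnu(nu0, T_d)

def sync_powerlaw(T_nu0, beta_s, nu, nu0=408e6):
    """Rayleigh-Jeans temperature power law: T(nu) = T(nu0) (nu/nu0)^beta_s."""
    return T_nu0 * (nu / nu0)**beta_s

npix = 12 * 64**2
I_353 = np.abs(np.random.randn(npix))           # fake dust template [MJy/sr]
beta_d = 1.55 + 0.05 * np.random.randn(npix)    # per-pixel dust index
T_d = 19.7 + 1.0 * np.random.randn(npix)        # per-pixel dust temperature [K]
dust_100 = dust_mbb(I_353, beta_d, T_d, nu=100e9)

T_408 = np.abs(np.random.randn(npix))           # fake synchrotron template [K_RJ]
beta_s = -3.0 + 0.1 * np.random.randn(npix)     # per-pixel synchrotron index
sync_40 = sync_powerlaw(T_408, beta_s, nu=40e9)
```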
Free-free
Free electrons in ionised star-forming HII regions of our Galaxy are braked by Coulomb interactions with heavy ions, thus losing part of their kinetic energy, which is converted into so-called free-free emission (or thermal bremsstrahlung). Free-free emission is an important foreground to CMB temperature observations at low frequencies, although the bulk of free-free emission is mainly concentrated in the Galactic disk. We use the Planck Commander free-free template maps for the emission measure EM(n̂) and electronic temperature T_e(n̂) (Planck Collaboration et al. 2016a), and we adopt the prescription in the aforementioned reference to scale free-free emission in brightness temperature units across the frequencies as follows:
T_ff(ν) = T_e (1 − e^{−τ_ff(ν)}),    (15)
where the free-free optical depth is given by
τ_ff(ν) = 0.05468 T_e^{−3/2} ν_9^{−2} EM g_ff(ν),    (16)
with ν_9 the frequency in GHz, and the Gaunt correction factor is (Draine 2003)
g_ff(ν) = log{ exp[ 5.960 − (√3/π) ln( ν_9 (T_e/10^4 K)^{−3/2} ) ] + e }.    (17)
Galactic free-free emission is unpolarised because of the randomness of Coulomb interactions, and thus absent from the Q, U polarization channels of our sky simulation.
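The free-free prescription can be coded directly, as in the sketch below; note that the numerical coefficients in the optical-depth and Gaunt-factor fits follow the standard Draine-type fitting formula used by Planck Commander and are our assumption here, since the original coefficients are not reproduced in this text.

```python
# Sketch: free-free brightness temperature T_ff = T_e (1 - exp(-tau_ff)) across
# frequency (Eqs. 15-17). Coefficients in tau_ff and g_ff are assumed values
# following the standard Draine-type fit.
import numpy as np

def gaunt_ff(nu_ghz, T_e):
    """Gaunt correction factor g_ff(nu)."""
    T4 = T_e / 1.0e4
    return np.log(np.exp(5.960 - (np.sqrt(3.0) / np.pi)
                         * np.log(nu_ghz * T4**-1.5)) + np.e)

def tau_ff(nu_ghz, EM, T_e):
    """Free-free optical depth; EM in cm^-6 pc, T_e in K, nu in GHz."""
    return 0.05468 * T_e**-1.5 * nu_ghz**-2.0 * EM * gaunt_ff(nu_ghz, T_e)

def T_ff(nu_ghz, EM, T_e):
    """Free-free brightness (Rayleigh-Jeans) temperature [K]."""
    return T_e * (1.0 - np.exp(-tau_ff(nu_ghz, EM, T_e)))

# e.g. a diffuse line of sight at 40 GHz:
print(T_ff(40.0, EM=10.0, T_e=7000.0))
```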
AME
Anomalous microwave emission (AME) from our Galaxy is now routinely detected at low frequencies, ν ∼ 10-60 GHz (e.g. Planck Collaboration et al. 2016a), with a peculiar spectral signature that is inconsistent with that of synchrotron or free-free emission. AME is strongly correlated with far-infrared thermal dust emission at 100 microns (Davies et al. 2006), so the best explanation for AME so far is electric dipole radiation from spinning dust grains in our Galaxy (Draine & Lazarian 1998). Since spinning dust polarization has been shown to be negligibly small (Draine & Hensley 2016), we consider unpolarized AME in our simulations.
We use the Planck GNILC dust optical depth map at 353 GHz (Planck Collaboration et al. 2016d), which we rescaled as a spinning dust map at 22.8 GHz using the thermal dust-AME correlation factor of 0.91 measured by Planck Collaboration et al. (2016c). The spinning dust template is then scaled across frequencies using the model of Draine & Lazarian (1998) for the spinning dust emission law, with 96.2% warm neutral medium and 3.8% reflection nebulae.
CIB
Cosmic infrared background (CIB) temperature anisotropies arise from the cumulative diffuse emission of early dusty star-forming galaxies at redshifts 1 ≲ z ≲ 3 (e.g. Planck Collaboration et al. 2014). Therefore, CIB anisotropies form a diffuse extragalactic foreground to observations of μ-distortion anisotropies, in particular at high frequencies.
The CIB emission is simulated by the PSM assuming three populations of spiral, starburst, and proto-spheroid galaxies, which are distributed across redshift shells according to dark matter distribution (Delabrouille et al. 2013;Planck Collaboration et al. 2016b). Each population of infrared galaxies has its own spectral energy distribution (SED), which is redshifted accordingly depending on the frequency channel of observation. Maps of each population and each redshift shell are coadded to form CIB maps across frequencies, which have been shown by Planck Collaboration et al. (2016b) to reproduce the auto-and cross power spectra of CIB anisotropies as measured by Planck (Planck Collaboration et al. 2014).
SZ effects
In our sky simulation we include the thermal and kinetic Sunyaev-Zeldovich (SZ) effects as extragalactic foregrounds to primordial spectral distortion anisotropies.
The kinetic SZ (kSZ) effect is the Doppler boost of CMB photons that is caused by the proper radial velocities of galaxy clusters. In the non-relativistic limit, the spectral signature of the kSZ effect is identical to that of CMB anisotropies, i.e. the derivative of the blackbody with respect to temperature, and thus kSZ emission is achromatic across frequencies in thermodynamic temperature units (Sunyaev & Zeldovich 1980). Inverse Compton scattering of CMB photons off the hot gas of free electrons residing in galaxy clusters also causes y-type spectral distortions of the CMB blackbody spectrum. This is known as the thermal SZ (tSZ) effect from galaxy clusters, whose peculiar spectral signature in thermodynamic temperature units is given in the non-relativistic limit by (Zeldovich & Sunyaev 1969)
ΔT_tSZ(x) = y T_CMB [ x coth(x/2) − 4 ].    (18)
It is worth noticing that there is a full spectral degeneracy between this type of extragalactic foreground emission in the low-redshift universe and the primordial y-distortion signal in the early universe. It thus makes it challenging to deal with the tSZ foreground in the search for primordial y-distortions without some sort of spatial filtering, e.g. by masking most galaxy clusters in the maps. Galaxy clusters are simulated by the PSM by using both real and random cluster catalogues. A first catalogue of halos randomly distributed over the sky is generated using a Poisson distribution of the mass function from Tinker et al. (2008). The Compton y-parameter of the tSZ effect is modelled using the universal pressure profile from Arnaud et al. (2010), while the modelling of the kSZ effect is done by assigning peculiar velocities to each galaxy cluster depending on their redshift, using the continuity equation for linear growth of structures. Real clusters are also injected in the map using the Planck, ACT, SPT, and ROSAT catalogues. In addition, large-scale diffuse tSZ emission is simulated and added to the SZ maps as a Gaussian realisation based on the theoretical tSZ power spectrum for the same aforementioned mass function, pressure profile and cosmological parameters.
The tSZ maps across the frequency bands are obtained by scaling the simulated Compton y-map using the SED of Eq. (18):
ΔT_tSZ(ν, n̂) = y(n̂) T_CMB [ x coth(x/2) − 4 ].    (19)
We neglected relativistic corrections to the SZ effect (e.g., Itoh et al. 1998; Chluba et al. 2012b) in our simulations, although these may become important for future spectral distortion science (Hill et al. 2015; Abitbol et al. 2017; Chluba et al. 2019) and SZ analyses (Remazeilles et al. 2019; Remazeilles & Chluba 2020).
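A short sketch of the y-map scaling of Eq. (19), using a random stand-in for the simulated Compton-parameter map:

```python
# Sketch: non-relativistic tSZ SED g(x) = x*coth(x/2) - 4 (Eq. 18) and the
# scaling of a Compton y-map into DeltaT maps (Eq. 19). The y-map is a fake.
import numpy as np
from scipy.constants import h, k as k_B

T_CMB = 2.7255

def tsz_sed(nu_ghz):
    """g(x): negative below the ~217 GHz null, positive above."""
    x = h * nu_ghz * 1e9 / (k_B * T_CMB)
    return x / np.tanh(x / 2.0) - 4.0

y_map = 1e-6 * np.abs(np.random.randn(12 * 64**2))   # placeholder Compton y-map
dT_143 = T_CMB * tsz_sed(143.0) * y_map              # [K_CMB], negative at 143 GHz
```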
Experiment specifications
LiteBIRD
The main instrumental specifications of LiteBIRD that we use for our sky simulations are summarised in Table 1. We assumed that the sensitivities per channel for temperature are a factor of √2 better than the sensitivities per channel for polarization given by Hazumi et al. (2020), thus providing a combined sensitivity from all frequency bands of about 1.53 μK·arcmin in temperature.
The simulated maps of the cosmological signal and astrophysical foreground components are coadded in each LiteBIRD frequency band, assuming δ-function bandpasses, and convolved with a Gaussian beam with the full width at half maximum (FWHM) values listed in Table 1. Using the noise RMS values listed in Table 1 (Hazumi et al. 2020; the sensitivity on intensity channels is assumed to be a factor of √2 better than the sensitivity on polarization channels), we also simulate Gaussian white noise maps for each frequency channel, which we add to the convolved sky maps, thus obtaining 15 LiteBIRD observation maps for each of the T, Q, and U fields.
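The per-band observation model just described amounts to a beam convolution plus white noise; a minimal healpy sketch is given below, with a single invented channel whose FWHM and depth values are placeholders standing in for Table 1.

```python
# Sketch: build one simulated observation map: coadd sky, smooth with a
# Gaussian beam, add white noise from a uK.arcmin depth. Values are placeholders.
import numpy as np
import healpy as hp

nside = 128
npix = hp.nside2npix(nside)
sky = 1e-6 * np.random.randn(npix)                # placeholder coadded sky [K_CMB]

fwhm_arcmin = 70.5                                # e.g. a 40 GHz-like channel
depth_uk_arcmin = 26.5                            # placeholder sensitivity

smoothed = hp.smoothing(sky, fwhm=np.radians(fwhm_arcmin / 60.0))

pix_arcmin = np.degrees(hp.nside2resol(nside)) * 60.0
sigma_pix = depth_uk_arcmin * 1e-6 / pix_arcmin   # white-noise RMS per pixel [K]
observed = smoothed + sigma_pix * np.random.randn(npix)
```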
EXPECTED ILC NOISE CURVES FOR μ-DISTORTION ANISOTROPIES
Here we address the exact analytical derivation of the expected noise curve, N_ℓ^{μμ,noise}, for the recovered anisotropic μ-distortion signal of a given experiment, and compare it with the actual noise curve as obtained from sky map simulations. We will show how the estimates of the μ-distortion noise, and subsequent forecasts on σ(f_NL), can differ from earlier theoretical estimates in the literature depending on (i) which effective weighting of the frequency channels is implemented for signal reconstruction and (ii) which frequency bands are used, given the effective weighting of the channel sensitivities by the SED of the μ distortion.
In the absence of foregrounds, the resulting noise RMS, σ_μ, in the reconstructed μ-map obtained from an internal linear combination (ILC) of frequency channels is given by the analytic expression:
σ_μ = [ Σ_{i=1}^{N_ch} ( f_μ(ν_i) / σ_i )² ]^{−1/2},    (20)
where {σ_i}_{1≤i≤N_ch} are the LiteBIRD noise RMS values across the frequency channels, as listed in Table 1, and f_μ(ν) is the unitless version of the peculiar SED of the μ-distortion signal, Eq. (10), i.e.
f_μ(x) = 1/β̃ − 1/x,    (21)
evaluated at the frequencies of each channel. The weighting of the noise by the μ-distortion SED across the channels is an important factor which makes the effective noise different from the simple inverse-variance weighted mean of the channel sensitivities, the latter being only relevant to achromatic CMB signals.
We emphasize that alternative techniques of μ-distortion signal reconstruction, e.g. an ILC with extra constraints (Remazeilles & Chluba 2018; Remazeilles et al. 2021) to deproject some of the foregrounds, or taking the difference of a pair of frequency channels (Ganc & Komatsu 2012; Mukherjee et al. 2018) to filter out CMB temperature anisotropies, would result in yet another effective weighting of the channel sensitivities, different from that of Eq. (20).
Figure 6. Expected noise power spectra for the reconstructed μ-distortion map: the LiteBIRD estimate N_ℓ^{μμ,noise} ∼ 10^{−17} from Ganc & Komatsu (2012) based on 2 channels (dotted black), our analytic estimate for 2 channels using LiteBIRD noise RMS values and μ-distortion SED weighting (dash-dotted green), the same analytic estimate for the 15 LiteBIRD channels (dashed blue), and the actual projected noise power spectrum in the ILC μ-map obtained from foreground-free LiteBIRD simulations (solid red). Our ILC noise is consistent with our analytical estimate for 15 channels. The projected ILC noise power spectrum in the presence of foregrounds in the sky simulation is also plotted for comparison (solid orange). The sum of projected noise and foreground power spectra is also plotted for completeness (solid pink).
The analytic expression for the projected noise power spectrum in the μ-map is then given by (e.g., Knox 1995)
N_ℓ^{μμ,noise} = (4π / N_pix) [ σ_μ / (10^6 T_CMB) ]² exp[ ℓ(ℓ+1) θ_rad² / (8 ln 2) ],    (22)
where σ_μ is given by Eq. (20), θ_rad is the beam FWHM in radians of the reconstructed ILC μ-map, and N_pix is the number of effective pixels (or beams) in the map, i.e. N_pix = 4π/θ_rad². The normalisation factor 1/(10^6 T_CMB)² in Eq. (22) converts the noise from μK² units into dimensionless μ-distortion units. The gain in sensitivity from combining all 15 channels is significant for the μ-distortion signal because of the modulation by the μ-distortion SED. In contrast, the gain on sensitivity for the achromatic CMB temperature signal, which is given by the inverse-variance weighted mean of the channel sensitivities,
σ_CMB = [ Σ_{i=1}^{N_ch} σ_i^{−2} ]^{−1/2},    (23)
is much less significant. As a reference, we added to Fig. 6 the noise estimate from Ganc & Komatsu (2012) for LiteBIRD, N_ℓ^{μμ,noise} ∼ 10^{−17} (black dotted line), which results from taking the difference between 2 consecutive channel maps to cancel out CMB temperature anisotropies while conserving the μ-distortion signal. Although the noise estimate from Ganc & Komatsu (2012) relies on a different version of LiteBIRD dating back to 2012, and on a weighting of two channels that is different from that of the ILC, it is of the same order of magnitude as our analytic ILC noise estimate for two channels (119, 140 GHz), while it is about an order of magnitude larger than the ILC noise derived from all 15 channels in the absence of foregrounds.
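To make Eqs. (20)-(23) concrete, the sketch below computes the SED-weighted ILC noise for the μ map, the inverse-variance CMB noise, and the Knox-style noise spectrum; the channel list and sensitivities are placeholders, not the actual Table 1 values.

```python
# Sketch: analytic ILC noise for the mu map (Eq. 20) versus the achromatic CMB
# (Eq. 23), plus the Knox-style noise spectrum of Eq. (22). Inputs are fakes.
import numpy as np
from scipy.constants import h, k as k_B
from scipy.special import zeta

T_CMB, BETA = 2.7255, 3 * zeta(3) / zeta(2)

def f_mu(nu_ghz):
    x = h * nu_ghz * 1e9 / (k_B * T_CMB)
    return 1.0 / BETA - 1.0 / x                 # unitless mu SED, Eq. (21)

nu = np.array([40.0, 68.0, 100.0, 140.0, 235.0, 402.0])   # placeholder bands [GHz]
sigma = np.array([37.0, 16.0, 9.0, 7.0, 15.0, 54.0])      # placeholder RMS [uK]

sigma_mu = 1.0 / np.sqrt(np.sum((f_mu(nu) / sigma)**2))   # Eq. (20), SED-weighted
sigma_cmb = 1.0 / np.sqrt(np.sum(sigma**-2.0))            # Eq. (23), inverse-variance

theta_rad = np.radians(70.5 / 60.0)          # effective FWHM of the ILC mu-map
ell = np.arange(2, 1000)
N_mu = (sigma_mu * theta_rad / (1e6 * T_CMB))**2 \
       * np.exp(ell * (ell + 1) * theta_rad**2 / (8.0 * np.log(2.0)))
```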
In the presence of foregrounds, the effective noise in the recovered μ-distortion map is expected to increase, since some of the low- and high-frequency channels are requisitioned by the ILC to subtract the foreground contamination, thus reducing the effective number of channels used to mitigate the noise. This is shown by the solid orange line in Fig. 6, where the effective ILC noise curve from 15 channels in the presence of foregrounds is found to be of a similar order of magnitude as the ILC noise for two channels in the absence of foregrounds. The addition of the power spectrum of projected foregrounds to the noise, i.e. N_ℓ^{μμ,noise} + C_ℓ^{μμ,fg}, is also shown through the solid pink line, which corresponds to the actual contribution to the total uncertainty on anisotropic μ-distortion measurements.
In Fig. 7, the overall noise curves N_ℓ^{T,noise} (solid brown line) and N_ℓ^{E,noise} (solid olive green line), accounting for residual foreground and noise contamination in the recovered ILC maps of CMB temperature and CMB E-mode polarization, respectively, are shown along with the overall noise curve of the μ-distortion discussed before (solid pink line). On top of the noise curves, the auto-power spectra of the cosmological signals are plotted as dotted lines. This highlights that the overall uncertainty on CMB temperature and E-mode measurements by LiteBIRD is mostly driven by the cosmic variance of the CMB signal, while in contrast the uncertainty on μ-distortion anisotropies is largely dominated by the residual foreground contamination. Therefore, the main limiting factor in μT and μE cross-power spectrum measurements is not the residual foreground contamination in CMB maps but the residual foreground contamination in the μ-distortion map. In Fig. 8, we investigate which foregrounds prevail in the total error budget of the recovered μ-distortion anisotropies. The thick solid pink line in Fig. 8 is just a replicate of the one in Fig. 6 and Fig. 7, showing the power spectrum of the total foreground and noise contamination in the recovered μ-distortion map for the baseline sky simulation which includes all the foregrounds. In contrast, the solid purple line shows the sole contribution from the extragalactic foregrounds (SZ, CIB, CMB) to the error budget, i.e. ignoring Galactic foregrounds in the sky. The significant drop-off of the noise curve in this case at low multipoles, where the bulk of the μT/μE correlation and constraining power on f_NL lie, highlights that Galactic foregrounds largely prevail over extragalactic foregrounds in the total error budget. The green dotted line, cyan dash-dotted line and red dashed line show the resulting noise curve when removing from the baseline sky simulation either the synchrotron, free-free or thermal dust component, respectively. The resulting gain in sensitivity across multipoles shows that thermal dust (dashed red) was accounting for the largest contribution to the μ-distortion noise at low multipoles ℓ < 20, while Galactic free-free emission (dash-dotted cyan) is the most damaging foreground at multipoles ℓ > 30.
Finally, while we were investigating the actual noise curves for anisotropic μ-distortions, we noticed that in our earlier paper (Remazeilles & Chluba 2018) we omitted the binning factor √Δℓ in our estimates of the error on the binned power spectra. While this omission does not change the relevance of the component separation method that was developed in that paper, nor the conclusion that foregrounds degrade the recovery of the μ-distortion anisotropies by about one order of magnitude, our earlier forecasts on σ(f_NL) were more pessimistic than they should have been, since the σ(f_NL) values computed in Remazeilles & Chluba (2018) should actually be divided by a factor of √30 ≈ 5. By including CMB polarization in the current analysis to leverage the detection of anisotropic μ-distortions, we will see that the expected constraints on σ(f_NL) for LiteBIRD are even more promising than any of the earlier forecasts in the literature.
Component separation
For any line-of-sight n̂ and any frequency ν, the observed data in either temperature or E-mode polarization can be decomposed in thermodynamic temperature units as follows:
d^T_ν(n̂) = f_μ(ν) μ(n̂) + T_CMB(n̂) + n^T(ν, n̂),    (25a)
d^E_ν(n̂) = E_CMB(n̂) + n^E(ν, n̂),    (25b)
where μ(n̂) is the anisotropic μ-distortion signal that we aim at recovering and f_μ(ν) is the peculiar SED of the μ distortion (Eq. 10), while T_CMB(n̂) and E_CMB(n̂) are, respectively, the CMB temperature anisotropies and CMB E-mode polarization anisotropies, which both are achromatic, i.e. independent of frequency, in thermodynamic temperature units. Astrophysical foregrounds and instrumental noise are collected together into the unparameterized terms n^T(ν, n̂) and n^E(ν, n̂) for the temperature and polarization channels, respectively.
Equations (25a)-(25b) highlight two important aspects: (i) CMB temperature anisotropies are a peculiar foreground to μ-distortion anisotropies because they are also correlated with the μ-distortion signal. Hence, residual CMB temperature anisotropies in the reconstructed μ-map, μ̂(n̂), from temperature channels may bias the measurement of the cross-power spectrum C_ℓ^{μT} because of residual correlations (Remazeilles & Chluba 2018):
C_ℓ^{μ̂T} = C_ℓ^{μT} + ε C_ℓ^{TT},    (26)
where ε < 1 is an arbitrary percentage of residual CMB contamination in the reconstructed μ-map. (ii) In contrast to temperature channels, polarization channels are free from μ-distortion signals, so that any CMB E-mode map obtained from the data can be safely cross-correlated with the recovered μ-map to measure C_ℓ^{μE} without suffering from spurious TT and TE correlations.
Following the methodology of Remazeilles & Chluba (2018), we perform foreground cleaning and component separation by means of Constrained needlet ILC (CILC) methods (Remazeilles et al. 2011, 2021) instead of standard needlet ILC (NILC; Delabrouille et al. 2009) methods, in order to get rid of spurious correlations between the CMB foreground and the μ-distortion signal (see Eq. 26).
CMB-free μ-map
The estimate of the μ-map is obtained from the constrained linear combination of the temperature channel maps,
μ̂(n̂) = Σ_ν w(ν) d^T_ν(n̂) = w^T d(n̂),    (27)
in which the CILC weights w ≡ {w(ν)} fulfil three conditions:
minimise w^T C w,   w^T f_μ = 1,   w^T a_CMB = 0,    (28)
where the CMB SED a_CMB(ν) = 1 in thermodynamic units because of achromaticity, and C = ⟨d(n̂) d^T(n̂)⟩ is the covariance matrix of the temperature data. The first condition of Eq. (28) guarantees the mitigation of astrophysical foreground and noise contamination through minimisation of the variance ⟨μ̂(n̂)²⟩ = w^T C w of the estimate. The second condition of Eq. (28) guarantees the full conservation of the signal of interest, here μ(n̂), despite variance minimization. The third constraint of Eq. (28) guarantees the full cancellation of residual CMB temperature anisotropies in the recovered μ-map, i.e. ε = 0 in Eq. (26).
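A compact numerical sketch of these constrained weights follows: the closed-form Lagrange-multiplier solution w = C⁻¹F(FᵀC⁻¹F)⁻¹e, with F = [f_μ, a_CMB] and e = (1, 0)ᵀ, satisfies both constraints by construction. The covariance and SEDs below are random stand-ins.

```python
# Sketch: CILC weights minimising w^T C w subject to w^T f_mu = 1 (preserve mu)
# and w^T a_cmb = 0 (null the CMB), via the standard Lagrange-multiplier solution.
import numpy as np

def cilc_weights(C, f_mu, a_cmb):
    F = np.column_stack([f_mu, a_cmb])       # N_channels x 2 constraint matrix
    Cinv_F = np.linalg.solve(C, F)
    e = np.array([1.0, 0.0])                 # conserve mu, cancel CMB
    return Cinv_F @ np.linalg.solve(F.T @ Cinv_F, e)

# Toy usage with 15 channels and a random (well-conditioned) covariance:
nch = 15
A = np.random.randn(300, nch)
C = A.T @ A / 300.0 + 1e-3 * np.eye(nch)
f_mu = np.random.randn(nch)                  # stand-in for the mu SED
a_cmb = np.ones(nch)                         # CMB SED = 1 in K_CMB units
w = cilc_weights(C, f_mu, a_cmb)
assert np.isclose(w @ f_mu, 1.0) and np.isclose(w @ a_cmb, 0.0, atol=1e-12)
```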
We emphasize that without the extra nulling constraint of the CILC in Eq. (28), the solution would reduce to the standard NILC solution, for which the recovered μ-map would suffer from residual CMB temperature anisotropies, since the NILC weights would no longer be orthogonal to the CMB SED.
μ-free CMB temperature map
By interchanging the second and third conditions in Eq. (28), i.e. ensuring w^T f_μ = 0 and w^T a_CMB = 1, we similarly guarantee full cancellation of residual μ-distortions in the recovered CMB temperature map. With this prescription, the CILC solution gives the reconstructed CMB temperature map T̂_CMB(n̂) of Eq. (30). Like the CMB-free μ-map (Eq. 29), which allows us to get rid of spurious TT correlations in the recovered cross-power spectrum C_ℓ^{μT} (Eq. 26), the μ-free CMB temperature map (Eq. 30) will allow us to get rid of any residual μμ correlations.
CMB E-mode map
Since the polarization channels are naturally immune from μ-distortions and CMB temperature anisotropies, we do not need to impose nulling constraints in this case. Hence, the CMB E-mode map is obtained from the standard NILC method:
Ê_CMB(n̂) = w^T d^E(n̂),  with  w = C_E^{−1} a_CMB / (a_CMB^T C_E^{−1} a_CMB),    (31)
where C_E = ⟨d^E(n̂) d^E(n̂)^T⟩ is the covariance matrix of the E-mode polarization data.
Cross-correlating the NILC CMB E-mode map (Eq. 31) with the CILC μ-map (Eq. 29), which is free from CMB temperature anisotropies, will allow us to get rid of residual TE correlations in the recovered cross-power spectrum C_ℓ^{μE}. Additional nulling constraints can in principle be added to the CILC in order to deproject e.g. thermal SZ effects and primordial y-distortions from either the μ-distortion map or the CMB temperature map 3 , but also to remove the bulk of Galactic foreground contamination by nulling its spectral moments (see Remazeilles et al. 2021). With additional nulling constraints, w^T f_i = 0, against the SEDs {f_i}_{1≤i≤n} of specific foregrounds, the expression for e.g. the recovered μ-map (Eq. 29) would simply be generalised as
μ̂(n̂) = w^T d(n̂),  with  w = C^{−1} F^T (F C^{−1} F^T)^{−1} e,    (32)
where F is the (n + 2) × N_ch SED matrix for n + 2 constraints and N_ch channels, and e is an (n + 2)-dimensional vector selecting the μ-distortion component. Finally, both CILC and NILC methods are implemented on spherical wavelets called needlets (Narcowich et al. 2006; Guilloux et al. 2009), whose excellent properties of localization in both direct pixel space and conjugate harmonic space allow the ILC filters to adjust to the local variations of the foreground and noise contamination, both across the sky and across the scales. For details on the needlet implementation of the NILC and CILC filters, we refer to e.g. Delabrouille et al. (2009); Basak & Delabrouille (2012); Remazeilles et al. (2021).
Reconstructed μT and μE cross-power spectra
The recovered μ-distortion, CMB temperature and CMB E-mode all-sky maps after foreground cleaning and component separation are then masked prior to computing their cross-power spectra. We use a Galactic mask leaving f_sky = 65% of observed sky in order to mitigate residual Galactic foreground emission in the Galactic plane. The recovered power spectra are then deconvolved from the Galactic mask, using MASTER (Hivon et al. 2002), and binned in multipoles with a bin size of Δℓ = 30. Error bars in each multipole bin are computed analytically from the recovered power spectra after component separation:
σ(C_ℓ^{μX}) = { [ (C_ℓ^{μX})² + C_ℓ^{μμ} C_ℓ^{XX} ] / [ (2ℓ + 1) Δℓ f_sky ] }^{1/2},    (35)
where X stands for either the T or E field, ℓ is the central multipole of the bin, Δℓ is the bin size, and f_sky is the fraction of sky outside the Galactic mask. Since the recovered maps include, on top of the signal, any residual foregrounds and noise that projected along with the signal, the uncertainty based on recovered map power spectra thus receives contributions from both signal cosmic variance and residual foreground and noise sample variance. Figures 9 and 10 show the cross-power spectra between the recovered map of μ-distortion anisotropies, μ̂(n̂), and the recovered maps of CMB temperature and E-mode anisotropies, T̂_CMB(n̂) and Ê_CMB(n̂), after foreground cleaning and component separation, for different fiducial f_NL values. In all figures, the left panels show C_ℓ^{μT}, while the right panels show C_ℓ^{μE}. The blue solid lines show the fiducial μT and μE cross-power spectra as expected from the theory, while the orange solid lines show the cross-power spectra from the input signal map realisations of the simulation. The binned μT and μE cross-power spectra of the recovered signal maps after component separation are plotted as green dots.
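The analytic error bars of Eq. (35) are straightforward to evaluate from the recovered spectra; a minimal sketch with placeholder input spectra:

```python
# Sketch: per-bin 1-sigma error on the binned cross-spectra C_ell^{mu X},
# X in {T, E}, from Eq. (35). All input spectra below are placeholders.
import numpy as np

def sigma_cross(cl_muX, cl_mumu, cl_XX, ell, delta_ell=30, f_sky=0.65):
    return np.sqrt((cl_muX**2 + cl_mumu * cl_XX)
                   / ((2.0 * ell + 1.0) * delta_ell * f_sky))

ell = np.arange(15, 600, 30)                      # bin centres, delta_ell = 30
cl_muT = 1e-16 * np.exp(-ell / 300.0)             # placeholder recovered spectra
cl_mumu = 1e-15 * np.ones(ell.size)
cl_TT = 1e-10 * np.exp(-ell / 500.0)
err_muT = sigma_cross(cl_muT, cl_mumu, cl_TT, ell)
```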
Overall, we see that LiteBIRD allows us to recover with good accuracy both the μT and μE correlation signals for each of the fiducial f_NL values considered, as long as the CILC component separation approach is used to get rid of spurious correlation biases. In the top panel of Fig. 9, for f_NL = 4500 in the absence of foregrounds, we also display the recovered signals if the standard NILC method (red dots) had been used in place of the constrained CILC method (green dots). Clearly, the recovered C_ℓ^{μT} signal from NILC is highly biased across the multipoles because of strong spurious TT correlations propagated by residual CMB temperature anisotropies in the NILC μ-map, thus confirming our theoretical expectations from Sect. 5.1.
The recovered C_ℓ^{μE} signal from NILC is less affected by biases because the E-mode channels are immune from μ-distortion and CMB temperature anisotropies, but residual CMB temperature anisotropies in the NILC μ-map still project residual TE correlations into the cross-power spectrum C_ℓ^{μE}. In contrast to NILC (red dots), CILC (green dots) allows us to fully recover both the correlated μT and μE signals without bias.
By comparing the top panels (without foregrounds) and bottom panels (with foregrounds) of Fig. 9, we can appreciate the impact of foregrounds on the recovery of the signals. The increase of uncertainty on the recovered μT and μE cross-power spectra is clearly driven by residual foreground contamination, not by instrumental noise. This goes in the same direction as our earlier results in Remazeilles & Chluba (2018), in which we showed that extended frequency coverage typically provides more leverage on anisotropic μ-distortions than increased detector sensitivities. Finally, comparing Fig. 9 and Fig. 10, we can see how the recovery of the μT and μE cross-power spectra improves with increasing values of f_NL.
In Fig. 11, we performed a null test by running our component separation method on sky simulations in which f_NL = 0, i.e. in the absence of any anisotropic μ-distortion signal in the sky. The upper row shows results in the absence of foregrounds (i.e. only CMB and noise), while the bottom row shows results with foregrounds. In all cases, the CILC reconstructions of C_ℓ^{μT} and C_ℓ^{μE} are consistent with zero, meaning that CILC would not lead to false detections of anisotropic μ-distortions.
The most important result coming out of Figs. 9, 10, and 11 is the better recovery of C_ℓ^{μE}, with lower uncertainty, compared to C_ℓ^{μT}. Although the μE correlation signal is weaker than μT, we clearly get better sensitivity to μE than to μT after foreground cleaning and component separation. As a result, C_ℓ^{μE} will provide more constraining power on f_NL than C_ℓ^{μT}, as we will show in Sect. 5.3. There are several reasons for C_ℓ^{μE} to be a more sensitive observable than C_ℓ^{μT}: (i) While the CMB E-mode polarization signal is weaker than the CMB temperature signal, it has a higher degree of correlation with μ-distortion anisotropies, as we showed in Fig. 3. (ii) Polarization foregrounds are fewer and weaker than temperature foregrounds. (iii) Polarization channels are immune from μ-distortion and CMB temperature anisotropies, so that extra nulling constraints in the ILC are no longer needed for polarization, and we do not pay the noise penalty that temperature maps get from these additional constraints. (iv) The instrumental noise in polarization channels is uncorrelated with the noise in temperature channels, hence C_ℓ^{μE} does not suffer from noise auto-correlation bias, unlike C_ℓ^{μT}. 4
Constraints on small-scale primordial non-Gaussianity
For each observable X ∈ {T, E}, the estimator of f_NL built from the recovered cross-power spectra is
f̂_NL = [ Σ_ℓ C_ℓ^{μX,obs} C_ℓ^{μX, f_NL=1} / σ²(C_ℓ^{μX}) ] / [ Σ_ℓ ( C_ℓ^{μX, f_NL=1} )² / σ²(C_ℓ^{μX}) ],    (36)
where σ(C_ℓ^{μX}) is given in Eq. (35), while the corresponding 1σ uncertainty on f_NL is given by
σ(f_NL) = [ Σ_ℓ ( C_ℓ^{μX, f_NL=1} )² / σ²(C_ℓ^{μX}) ]^{−1/2},    (37)
where X stands for either T or E, and C_ℓ^{μX, f_NL=1} is the theoretical cross-power spectrum for f_NL = 1. When jointly exploiting both μT and μE observables, the estimator and uncertainty are computed with the vector of observables (C_ℓ^{μT}, C_ℓ^{μE}) and its covariance matrix; these are the two-dimensional generalisations of Eqs. (36) and (37). Our Fisher forecasts on f_NL(k ≃ 740 Mpc^{-1}) derived from the recovered C_ℓ^{μT} and C_ℓ^{μE} after foreground cleaning are listed in Table 2, from which we can draw several conclusions. First, as was already found in Remazeilles & Chluba (2018), astrophysical foregrounds degrade the sensitivity to f_NL by about one order of magnitude for any combination of observables. Second, the μE observable provides more constraining power on f_NL than the μT observable, effectively increasing the detection significance of f_NL = 4500 by at least 14% in the presence of foregrounds. Third, the joint combination of temperature and polarization adds even more leverage to the detection of anisotropic μ-distortions and small-scale primordial non-Gaussianity, with a further increase of the detection significance of f_NL = 4500 by about 40% as compared to μT in the presence of foregrounds. Fourth, with a joint analysis of μT and μE correlations, LiteBIRD would detect f_NL(k ≃ 740 Mpc^{-1}) = 4500 at 5σ significance after foreground cleaning. Finally, the smallest uncertainty on f_NL that LiteBIRD would achieve from the joint combination of μT and μE observables is about σ(f_NL = 0) ≃ 800 after foreground removal.

4 While we could perform a jackknife on C_ℓ^{μT} by using different data splits for the μ-distortion and CMB temperature maps to get rid of the noise auto-correlation bias, the noise sample variance would still be increased by a factor of 2 because of the data splitting. In contrast, data splitting is not needed for μE cross-correlations since the noise is uncorrelated between temperature and polarization channels.

Figure 9. Cross-power spectrum between the recovered anisotropic μ-distortion map (f_NL = 4500) and the recovered CMB temperature and E-mode maps after foreground cleaning and component separation: C_ℓ^{μT} (left column) and C_ℓ^{μE} (right column), either without foregrounds (upper row) or with foregrounds (lower row). The upper row also displays NILC results (red dots) versus CILC results (green dots), highlighting that NILC results suffer from spurious TT and μμ correlations due to CMB residuals in the μ-map and μ residuals in the CMB map, which biases C_ℓ^{μT}. In contrast, CILC allows us to fully recover C_ℓ^{μT} and C_ℓ^{μE} without bias. C_ℓ^{μE} is also less affected by spurious correlations since the CMB E-mode polarization is free from μ-distortions.

Aside from the baseline results in Table 2, we explored two directions of possible optimization of the analysis. Moment expansion of the foregrounds: We investigated adding extra nulling constraints to the CILC filter, Eqs. (28)-(29), in order to deproject moments of the foreground emission in addition to the CMB temperature, i.e. by implementing the cMILC methodology (Remazeilles et al. 2021) outlined at the end of Sect. 5.1. Extra constraints to null out foreground moments effectively reduce residual foreground biases in the recovered μ-map, at the expense of an overall variance increase because of the larger parameter space for the same set of frequency channels.
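Since C_ℓ^{μX} scales linearly with f_NL, the single-observable estimator of Eqs. (36)-(37) is a one-parameter inverse-variance fit of the observed spectrum against the f_NL = 1 template; a sketch with placeholder spectra:

```python
# Sketch: single-observable f_NL estimator and Fisher uncertainty (Eqs. 36-37).
# The template and error bars are placeholders; fiducial f_NL = 4500.
import numpy as np

def fnl_fit(cl_obs, cl_template_fnl1, sigma_cl):
    """Return (fnl_hat, sigma_fnl) for one observable X in {T, E}."""
    w = cl_template_fnl1 / sigma_cl**2
    fisher = np.sum(cl_template_fnl1 * w)   # sum_ell (C_ell^{fNL=1})^2 / sigma^2
    return np.sum(cl_obs * w) / fisher, 1.0 / np.sqrt(fisher)

ell = np.arange(15, 600, 30)
cl_fnl1 = 1e-20 * np.exp(-ell / 300.0)              # f_NL = 1 template
sigma_cl = 5e-17 * np.ones(ell.size)                # per-bin errors, Eq. (35)
cl_obs = 4500.0 * cl_fnl1 + sigma_cl * np.random.randn(ell.size)
print(fnl_fit(cl_obs, cl_fnl1, sigma_cl))
```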
Since our current results on the recovered μT, μE and f_NL obtained with the baseline CILC approach are actually unbiased within one standard deviation, there is no gain in adding extra nulling constraints on foreground moments in our case, as it only increases the current uncertainties on f_NL. Extra frequency channels: We also explored the impact on f_NL forecasts of adding external low- or high-frequency channels to the baseline LiteBIRD configuration, in order to assess which part of the frequency spectrum adds more leverage. We first considered three extra low-frequency channels at 10, 20 and 30 GHz from a futuristic ground-based full-sky survey. The sensitivity for these extra low-frequency channels was scaled as σ_100 × (ν/100 GHz)^{−3}, where σ_100 is the LiteBIRD sensitivity at 100 GHz (Table 1), thus following the frequency scaling of the synchrotron emission in order to keep a constant signal-to-noise ratio across the channels. The beam FWHM for these extra low-frequency channels was computed assuming a scaling of 20 × (ν/10 GHz)^{−1}. As an alternative, we also considered having three extra high-frequency channels at 500, 650 and 800 GHz, for which beam FWHMs and sensitivities were obtained by linear extrapolation of the current LiteBIRD beam and sensitivity curves.
Adding extra high-frequency channels to LiteBIRD was found to reduce the uncertainty σ(f_NL) by 6% for μT and 7% for μE, while adding extra low-frequency channels reduces the uncertainties by 9% for μT and 10% for μE. Therefore, low-frequency channels actually add a bit more leverage than high-frequency channels for μ-distortion anisotropies. This important outcome is somewhat consistent with our finding in Fig. 8 that low-frequency Galactic free-free emission is more damaging as a foreground than dust at multipoles ℓ > 30. Since ℓ ∼ 200 (resp. ℓ ∼ 400) is where the dominant peak of the μT (resp. μE) cross-correlation signal lies (see e.g. Fig. 10), this multipole range provides more constraining power on f_NL than the low multipoles ℓ < 30, while it also coincides with the range where free-free is the most limiting factor. Using external low-frequency channels in conjunction with LiteBIRD to further clean low-frequency foregrounds is thus more helpful for μ-distortion anisotropies.
CONCLUSIONS
We investigated the capability of a future CMB satellite imager like LiteBIRD to detect μ-type spectral distortion anisotropies in the presence of foregrounds through cross-correlations with CMB temperature and E-mode polarization, thereby testing the ability to constrain primordial non-Gaussianity at small scales k ≃ 740 Mpc^{-1}. First, in the ideal case of an absence of foregrounds, or perfect foreground cleaning, LiteBIRD would allow the detection of f_NL(k ≃ 740 Mpc^{-1}) = 4500 at about 50σ significance and achieve a minimum uncertainty of about σ(f_NL = 0) ≃ 60 at 68% CL by combining μT and μE observables (Table 2). However, using comprehensive sky simulations (Sect. 3), LiteBIRD instrument specifications (Table 1) and a tailored component separation method (Sect. 5.1), we performed the reconstruction of both μT and μE cross-power spectra in the presence of foregrounds (Figs. 9-10), showing that astrophysical foregrounds degrade the sensitivity to the inferred value of f_NL(k ≃ 740 Mpc^{-1}) by about a factor of 10 (Table 2). We found that the main degradation to measuring μT and μE cross-power spectra arises from residual Galactic foreground contamination not in the CMB fields but in the reconstructed μ-distortion map (Fig. 7), in particular from thermal dust at ℓ < 20 and free-free emission at ℓ > 30 (Fig. 8), while the CMB temperature and CMB E-mode polarization maps are signal-dominated at all multipoles for LiteBIRD. The effective noise curve for μ-distortion anisotropies that we computed in Sect. 4 for LiteBIRD in the presence of foregrounds can be used as a benchmark for future studies.
We emphasized the importance of constrained ILC approaches (Sect. 5.1) for component separation to simultaneously null CMB temperature anisotropies in the reconstructed μ-distortion map and μ-distortion anisotropies in the CMB temperature map, and thereby get rid of spurious correlation biases on C_ℓ^{μT} and C_ℓ^{μE} (Fig. 9). We showed that μE cross-correlations provide slightly more constraining power than μT cross-correlations on f_NL in the presence of foregrounds (Fig. 3, Figs. 9-11, and Table 2), while the joint combination of μT and μE observables adds even further leverage to the detection of f_NL. By combining both temperature and polarization, LiteBIRD will be able to detect f_NL(k ≃ 740 Mpc^{-1}) = 4500 at 5σ significance, and achieve a minimum uncertainty of about σ(f_NL = 0) ≃ 800 at 68% CL after foreground removal (Table 2), a large value which is allowed by multi-field inflation models at large wavenumbers whilst still being consistent with Planck CMB constraints at k ≃ 0.05 Mpc^{-1} in the case of very mild scale-dependence of f_NL.
We anticipate even higher detection significance for non-Bunch-Davies (NBD) initial condition models of inflation (Ganc & Komatsu 2012), since the signal-to-noise ratio in such models is expected to be larger than that of multi-field inflation models which we considered here. The investigation of NBD models is left for future work.
Given the dependence of the μT and μE cross-correlation signals on the value of the monopole distortion ⟨μ⟩, differential measurements of μ-distortion anisotropies from an imager would still benefit from an absolute measurement of the monopole distortion by a spectrophotometer like PIXIE (Kogut et al. 2011) or the ESA Voyage 2050 vision, instead of relying on theoretical ΛCDM estimates (e.g., Chluba 2016). We also emphasize that precise inter-channel cross-calibration is needed for future imagers to avoid biasing the reconstruction of the μT and μE cross-correlation signals (see Ganc & Komatsu 2012; Remazeilles & Chluba 2018). Nevertheless, studies of spectral distortion anisotropies and their correlations with primary CMB anisotropies provide a powerful new window into the physics of the early Universe.
On a property of special groups
Let G be an algebraic group defined over an algebraically closed field k of characteristic zero. We give a simple proof of the following result: if H^1(L, G) = {1} for some finitely generated field extension L/k of transcendence degree \ge 3 then H^1(K, G) = {1} for every field extension K/k.
Introduction
Let G be an algebraic group. J.-P. Serre stated the following conjectures in [Se 2 ] (see also [Se 3 , Chapter III]).
Conjecture I: If G is connected then H 1 (K, G) = {1} for every field K of cohomological dimension ≤ 1.
Conjecture II: If G is semisimple, connected and simply connected then H 1 (K, G) = {1} for every field K of cohomological dimension ≤ 2.
Conjecture I was proved by Steinberg [St 1 ]. Conjecture II remains open, though significant progress has been made in recent years; see [BP] and [Gi].
Our main result is a partial converse of Conjectures I and II. Recall that an algebraic group G is called special if H 1 (K, G) = {1} for every field K. Special groups were introduced by Serre [Se 1 ] and classified by Grothendieck [Gr,Section 5]; cf. [PV,Section 2.6].
Theorem 1. Let G be an algebraic group defined over an algebraically closed field k of characteristic zero. Suppose H 1 (K, G) = {1} for some finitely generated field extension K of k of transcendence degree d. Then: (a) if d ≥ 1, G is connected; (b) if d ≥ 2, G is, in addition, connected with simply connected commutator subgroup of its Levi subgroup; (c) if d ≥ 3, G is special, i.e., H 1 (K ′ , G) = {1} for every field extension K ′ /k.
Note that the cohomological dimension of K equals d; see [Se 3 , Section II.4]. Thus, informally speaking, the theorem may be interpreted as saying that Conjectures I and II cannot be extended or strengthened in a meaningful way.
Our proof of Theorem 1 is rather simple: the idea is to use nontoral finite abelian subgroups of G as obstructions to the vanishing of H 1 . We remark that our argument (and, in particular, the proof of Lemma 3) does not rely on canonical resolution of singularities; cf. [RY,Remark 4.4]. Ph. Gille recently showed us an alternative proof of Theorem 1, based on case by case analysis and properties of the Rost invariant. We would like to thank him, J.-L. Colliot-Thélène and R. Parimala for informative discussions.
Preliminaries
Throughout this note k will denote an algebraically closed base field of characteristic zero. All fields, varieties, morphisms, algebraic groups, etc., will be assumed to be defined over k.
Let G be an algebraic group. An abelian subgroup A of G is called toral if A is contained in a torus of G and nontoral otherwise.
Lemma 2. Let G be an algebraic group, L be a Levi subgroup of G and A be a finite abelian subgroup of L. If A is nontoral in L then A is nontoral in G.
Proof. Assume the contrary: A ⊂ T for some torus T of G. Since T is reductive, it lies in a Levi subgroup L 1 of G; see [OV,Theorem 6.5]. Denote the unipotent radical of G by U ; then L and L 1 project isomorphically onto G/U . Since A is toral in L 1 , it is toral in G/U , and hence, in L, as claimed.
Recall that a G-variety X is an algebraic variety with a G-action; X is generically free if G acts freely on a dense open subset of X and primitive if k(X) G is a field (note that X is allowed to be reducible). Elements of H 1 (K, G) are in 1-1 correspondence with G-torsors over K, i.e., birational classes of primitive generically free G-varieties X such that k(X) G = K; see e.g., [Po,Section 1.3]. If X is a primitive generically free G-variety, we shall write cl(X) for the class in H 1 (k(X) G , G) given by X.
Our proof of Theorem 1 is based on the following result.
Lemma 3. ( [RY, Lemma 4.3])
Let G be an algebraic group, A be a nontoral finite abelian subgroup of G and X be a generically free primitive G-variety. Suppose A fixes a smooth point of X. Then cl(X) ≠ 1 in H 1 (k(X) G , G).
Construction of a nontrivial torsor
Lemma 4. Let A be an abelian group of rank r and let K be a finitely generated field extension of k of transcendence degree d ≥ r. Then there exists an A-variety Y such that (i) k(Y ) A = K and (ii) Y has a smooth A-fixed point.
Proof. Since k is algebraically closed, A ≃ Z/n 1 × · · · × Z/n r has a faithful r-dimensional representation W in which the i-th cyclic factor acts on the i-th coordinate by multiplication by a primitive n i -th root of unity; extend it to a d-dimensional representation V = W ⊕ k d−r by letting A act trivially on the second summand. Then the (geometric) quotient V /A is isomorphic to the affine space A d . Denote the origin of V by 0, and its image in V /A by 0. Let Y 0 be an affine variety over k such that k(Y 0 ) = K, together with a morphism f : Y 0 −→ V /A ≃ A d which is étale at some point y 0 ∈ Y 0 with f (y 0 ) = 0. Set Y = Y 0 × V /A V , with A acting on the second factor. The natural projection Y −→ Y 0 is a rational quotient map for this action; see, e.g., [R,Lemma 2.16(a)]. Thus Y satisfies (i). To prove (ii), set y = (y 0 , 0); y is fixed by A. The morphism Y −→ V is obtained from f by a base change, and hence, is étale at y; the smoothness of V implies then that Y is smooth at y. Thus, y ∈ Y is a smooth point fixed by A.
Proposition 5. Let G be an algebraic group, A be a nontoral finite abelian subgroup of G of rank r, and K/k be a finitely generated field extension of transcendence degree d. If d ≥ r then H 1 (K, G) ≠ {1}. Proof. Choose an A-variety Y and a smooth A-fixed point y ∈ Y , as in Lemma 4. We claim that the image of cl(Y ) under the natural map H 1 (K, A) −→ H 1 (K, G) is nontrivial. This image is represented by X = G * A Y , which is the (geometric) quotient for the A-action on G × Y given by a(g, y ′ ) = (ga −1 , ay ′ ); see [PV,Section 4.8]. By [PV,Proposition 4.22], G * A Y is smooth at x = (1 G , y) since Y is smooth at y. Moreover, x is an A-fixed point of X; thus Lemma 3 tells us that cl(X) ≠ 1 in H 1 (K, G), as claimed.
Proof of Theorem 1
In view of Proposition 5 it is sufficient to show that G contains a nontoral finite abelian subgroup A, where (a ′ ) rank(A) = 1, if G is not connected, (b ′ ) rank(A) ≤ 2, if G is connected but the commutator subgroup of its Levi subgroup is not simply connected, and (c ′ ) rank(A) ≤ 3, if G is not special. Moreover, in view of Lemma 2 we only need to prove (a ′ ), (b ′ ), and (c ′ ) under the assumption that G is reductive (otherwise we may replace G by its Levi subgroup).
Proof of (a ′ ): Write G = F G 0 , where G 0 is the identity component of G and F is a finite group; see [V,Proposition 7]. Since G is disconnected, F is not contained in G 0 . Choose a ∈ F − G 0 and set A = ⟨a⟩. Then A is cyclic, finite (because a ∈ F ) and nontoral (because every torus of G is contained in G 0 ), as desired.
Proof of (b ′ ): In view of (a ′ ) we may assume without loss of generality that G is connected. Now the desired conclusion follows from [St 2 , Theorem 2.27].
Proof of (c ′ ): Suppose G is not special. By [Se 4 , 1.5.1], G has a torsion prime p, and by [St 2 , Theorem 2.28] G has a nontoral elementary p-abelian subgroup A of rank ≤ 3; see also [Se 4 , 1.3].
The α7-nicotinic receptor is upregulated in immune cells from HIV-seropositive women: consequences to the cholinergic anti-inflammatory response
Antiretroviral therapy partially restores the immune system and markedly increases life expectancy of HIV-infected patients. However, antiretroviral therapy does not restore full health. These patients suffer from poorly understood chronic inflammation that causes a number of AIDS and non-AIDS complications. Here we show that chronic inflammation in HIV+ patients may be due to the disruption of the cholinergic anti-inflammatory pathway by HIV envelope protein gp120IIIB. Our results demonstrate that HIV gp120IIIB induces α7 nicotinic acetylcholine receptor (α7) upregulation and a paradoxical proinflammatory phenotype in macrophages, as activation of the upregulated α7 is no longer capable of inhibiting the release of proinflammatory cytokines. Our results demonstrate that disruption of the cholinergic-mediated anti-inflammatory response can result from an HIV protein. Collectively, these findings suggest that HIV tampering with a natural strategy to control inflammation could contribute to a crucial, unresolved problem of HIV infection: chronic inflammation.
Inflammation is a formidable response against pathogens; however, HIV-infected subjects suffer from chronic and persistent inflammatory processes 1,2,3 that promote 'immunosenescence' 4 and aging, and trigger AIDS- and non-AIDS-related complications such as neurocognitive deterioration, cardiovascular disease, thromboembolic disease, type 2 diabetes, cancer, osteoporosis, multiple end-organ disease, and frailty. 1,5,6 Inflammation persists indefinitely in HIV+ subjects despite combined antiretroviral treatment, undetectable levels of viremia and even the absence of symptoms. 7,8 It has been shown that soluble gp120 contributes to HIV-1 replication and dissemination via the activation of multiple cell signaling pathways, and its presence is associated with higher levels of proinflammatory cytokines in patients. 9 The latter highlights the need for a better understanding of gp120 effects on immune cells to develop new intervention strategies to reduce inflammation and decrease morbidity and mortality in HIV+ individuals. 1,2 The cholinergic anti-inflammatory pathway (CAP) modulates the immune response and the progression of inflammatory diseases, avoiding organ and systemic damage by inhibiting the release of cytokines. 10 Although the importance of the CAP in several disease states has been recently established, 11-13 the CAP has not been investigated in the inflammatory scenario of HIV infection. Several lines of evidence suggest that the cholinergic anti-inflammatory response (dependent on vagus nerve integrity) could be compromised by HIV infection because infected subjects exhibit hyperactivity of the sympathetic autonomic nervous system or reduction in parasympathetic activity, both at rest and during postexercise recovery, 14 and autonomic dysfunction is also common in HIV-infected patients, being associated with serious comorbid illnesses known to increase mortality risk. 15,16 The α7 nicotinic acetylcholine receptor (α7) is a homo-oligomeric nicotinic acetylcholine (ACh) receptor that is abundantly expressed in the central nervous system. The α7 is characterized by its fast desensitization and high calcium permeability. It is involved in learning and memory, and implicated in neurological disorders such as Parkinson's disease, Alzheimer's disease and schizophrenia. The α7 is also expressed in cells from the immune system such as lymphocytes, monocytes and macrophages. 17,18,19 This transmembrane pentameric ion channel has a pivotal role in the CAP operation because activation of α7 expressed by macrophages inhibits the production of proinflammatory cytokines. 18 Under basal conditions, the α7 responds to its endogenous agonist ACh by undergoing a conformational change that opens its highly selective calcium-permeable pore. The mechanism by which activation of α7 in macrophages regulates proinflammatory responses is the subject of intense research, and important insights have thus been made. The available results suggest that activation of the macrophage α7 controls inflammation by inhibiting nuclear factor-κB nuclear translocation and activating the JAK2/STAT3 (Janus kinase 2/signal transducer and activator of transcription-3) pathway, 20 among other suggested pathways. 21 For a comprehensive review of CAP signaling, refer to Báez-Pagán et al. 21 Considering the anti-inflammatory role of α7 activation in macrophages and because HIV+ patients are chronically inflamed, we set out to study this receptor and the cholinergic anti-inflammatory response in the HIV scenario.
Reports suggest that gp120 binds acetylcholine receptors and interferes with cholinergic neurotransmission; 22,23 therefore, we rationalized that gp120 could also affect the cholinergic anti-inflammatory response, as inflammatory mediators correlate with gp120 levels despite viral suppression by antiretroviral therapy. 24 Moreover, during chronic HIV-1 infection, a long-term persistence of disproportionately high levels of gp120 has been detected in the absence of virus replication in patients under antiretroviral therapy. 24,25 In addition, the presence of anti-gp120 antibodies during chronic, but not acute, HIV infection 26 demonstrates the importance of studying the cholinergic anti-inflammatory response in chronic HIV-infected patients and the usefulness of studying gp120. Particularly, we focused our efforts on determining the role of clinically relevant doses of gp120 IIIB (a CXCR4 tropic-specific gp120) in the chronic inflammation suffered by HIV-infected subjects.

Figure 1 gp120 IIIB upregulates the α7 in MDMs isolated from control subjects. (a) Confocal imaging revealed that gp120 IIIB , at various concentrations, increases the α-BuTX binding in MDMs. Scale bar: 50 μm. The total of MDMs analyzed is within parenthesis. ***P < 0.0001. Error bars represent s.e.m. (b) A frequency histogram analysis shows that a pathophysiological concentration of gp120 IIIB produces a right shift toward high mean fluorescence intensity values (n = 1253 MDMs for control and 1082 for gp120 IIIB ). ***P < 0.0001 (inset). Error bars in the inset are box and whisker ranges. (c) Nicotine outcompetes α-BuTX binding in MDMs. Nicotine pretreatment followed by α-BuTX addition demonstrates the α-BuTX selectivity for α7 in MDMs. Student's t-test, n = 4 subjects. **P = 0.0042. (d) Twelve donors were evaluated for α7 levels after gp120 IIIB (0.15 nM) exposure. Events were recorded for each donor before and after gp120 IIIB treatment. Median fluorescence intensity (MFI) measurements show a significant (**P = 0.0034) increase in α7 expression. An n-fold representation of the α7 upregulation in these donors shows a homogenous population (n = 12). Open circles represent the only two donors that exhibited a reduction in α7 expression when exposed to gp120 IIIB . (e) MDMs treated with a pathophysiological concentration of gp120 (0.15 nM) exhibit higher α7 protein levels relative to their untreated counterparts (n = 6). *P = 0.0313. (f) The endogenous agonist of CXCR4, stromal-derived factor 1α (SDF-1α) (0.3 μg ml −1 ), induces the upregulation of α7. ***P = 0.0006. Error bars represent s.e.m. (n = 3 donors). Statistical analysis for panels a, b and f consists of a paired Student's t-test, and for panels d and e, it consists of a Wilcoxon's signed-rank test; n = 4 donors for panels a-c. GAPDH, glyceraldehyde 3-phosphate dehydrogenase.
Here we report that α7 is upregulated in a variety of immune cells from HIV-infected subjects, a phenomenon recapitulated by gp120 IIIB exposure in monocyte-derived macrophages. Moreover, our results indicate that gp120 IIIB disrupts the cholinergic anti-inflammatory response in macrophages because the activation of α7 does not inhibit the production of proinflammatory cytokines (interleukins (ILs) and chemokines). Our findings position α7 as an attractive therapeutic target for the development of novel anti-inflammatory strategies to counteract the chronic inflammation suffered by HIV-infected patients.
RESULTS
HIV-1 gp120 IIIB induces the upregulation of α7 in monocyte-derived macrophages
We performed binding assays using the selective antagonist α-bungarotoxin (α-BuTX) to measure surface α7 protein levels in gp120-treated monocyte-derived macrophages (MDMs) from control subjects (Supplementary Table 1). α-BuTX irreversibly binds the α7 with high affinity (94 pM) 27 and it is particularly selective for α7. 28 Figure 1a shows that, following exposure to gp120 IIIB , there was a significant increase in bound α-BuTX in MDMs from healthy donors, as demonstrated by the shift towards higher fluorescence values in the frequency distribution histogram (Figure 1b). Similar results were obtained using gp120 IIIB from the NIH AIDS Reagent Program (data not shown). Consistent with previous studies, 18 α-BuTX selectively binds α7 in MDMs, as demonstrated by the reduced fluorescence intensity upon nicotine pretreatment (Figure 1c). Furthermore, these results were confirmed in a greater number of cells by flow cytometric analysis (Supplementary Figure S1), showing a significant increase in α-BuTX binding in 85% of the examined donors (Figure 1d). This increase was homogeneous among donors, with no evidence of donor sub-populations. Similarly, immunoblot assays showed increased levels of α7 in gp120 IIIB -treated MDMs (Figure 1e). Furthermore, application of the CXCR4 endogenous agonist stromal-derived factor 1α also induced the upregulation of α7 in MDMs (Figure 1f). Taken together, these results suggest that gp120 IIIB induces the upregulation of α7 in human MDMs.
α7 Nicotinic ACh receptor is upregulated in immune cells from HIV-infected subjects
In view of these findings, we asked whether α7 upregulation would also be present in HIV+ individuals. To this end, we measured the α7 levels in samples obtained from HIV+ donors (Supplementary Table 2). Consistent with the aforementioned imaging and flow cytometry results of MDMs exposed to gp120 IIIB , higher levels of α-BuTX binding were detected by confocal microscopy in the MDMs from HIV+ patients, demonstrating that the α7 indeed is upregulated in these subjects (Figures 2a and b). Interestingly, detailed observation of MDMs from controls (Figure 2a) shows discrete clusters of α-BuTX binding on the surface, consistent with previous work. 18 To confirm and expand these imaging observations, we analyzed MDMs and other α7-containing immune cells from HIV+ subjects using flow cytometry. We found that the α7 is upregulated in MDMs (Figure 2c), monocytes (Figures 3a and b) and T-lymphocytes (Figure 4) from HIV+ subjects. Interestingly, this approach revealed two distinct populations within monocytes that express low (α7-low) and high (α7-high) levels of α7 (Figures 3a and b), and a substantial increase of α7 in the α7-high cells in HIV+ subjects (Figure 3b). A marginal decrease in α7 expression within the α7-low cells was also observed (Figure 3b). These data indicate that HIV+ subjects exhibit elevated levels of α7 in MDMs, monocytes and T-lymphocytes.
HIV-1 gp120 IIIB disrupts the cholinergic anti-inflammatory response
Given the essential role of α7 in regulating inflammation, we initially hypothesized that high levels of α7 should potentiate the anti-inflammatory response. To test our hypothesis, we measured the secretion of cytokines (ILs and chemokines) in MDMs challenged with lipopolysaccharide (LPS). As expected, ACh reduced the production of proinflammatory cytokines in LPS-treated MDMs. 18 However, paradoxically, ACh did not reduce the production of cytokines in LPS-treated MDMs previously exposed to gp120 IIIB , despite the upregulation of the α7 (Figures 5a and b). Furthermore, gp120 IIIB did not potentiate LPS-induced release of ILs or chemokines (data not shown). Taken together, these data suggest that gp120 IIIB , in MDMs, affects the cholinergic anti-inflammatory response, thus disrupting an innate immune response mechanism that controls inflammation.

Figure 3 The α7 is upregulated in monocytes from HIV+ subjects. (a) Whole-blood analysis from HIV+ and HIV − subjects generated typical scatter plots. Anti-CD14-labeled monocytes were identified and gated (R1) in side scatter (SSC-H)/forward scatter (FSC-H) dot plots. Expression of high α7 levels in CD14 + monocytes was analyzed by two-color immunofluorescence as FL1 (CD14 + )/FL4 (α7 + ) dot plots and histograms. Two populations of monocytes expressing α7 were identified based on Alexa-647-α-BuTX-binding capacity and gated as α7-low and α7-high in the FL1/FL4 dot plots, and analyzed by their corresponding histograms. Open green histograms represent the unstained control monocytes; orange and blue-filled histograms represent HIV − and HIV+ subjects, respectively. (b) A significant increase (*P = 0.0171; 32%) of α7 expression was detected in the α7-high population (median fluorescence intensity (MFI) = 304.8 ± 78.0 for HIV − vs 403.0 ± 110.6 for HIV+; mean ± s.d.). In addition, we detected a marginal decrease in α7 expression in the α7-low HIV+ cell population that was statistically significant according to the Mann-Whitney test; n = 10 for HIV − subjects and n = 17 for HIV+ subjects. nAChR, nicotinic acetylcholine receptor.

An α7 antagonist, bupropion, selectively restores the chemokine-dependent cholinergic anti-inflammatory response
We tested whether the medication bupropion (Bup), based on its α7 antagonism together with its well-established clinical use and safety in the HIV field, 29-31 could restore the anti-inflammatory response in gp120 IIIB -treated MDMs. We found that Bup selectively restores the CAP in terms of the chemokines GRO-α (growth-related oncogene-α), MCP-1 (monocyte chemoattractant protein-1) and RANTES (regulated on activation, normal T-cell expressed and secreted), but did not have a significant effect on IL-8 and I-309, nor on ILs (Figure 6). These results highlight the potential of α7 targeting to mitigate inflammation in HIV scenarios.
DISCUSSION
HIV infection is associated with chronic and persistent inflammation. In this context, inflammation leads to the emergence of a wide spectrum of complications that further compromise patients' health.
Moreover, it appears that innate immune responses such as the CAP (dependent on vagus nerve integrity) are also compromised in HIV+ subjects, as evidenced by hyperactivity of the sympathetic autonomic nervous system or reduction in parasympathetic activity, both at rest and during postexercise recovery. 14 Also, combined antiretroviral treatment-independent 32 alterations in autonomic function have been reported. 33,34 Two recent studies began to shed light on the possible role of α7 in different scenarios of HIV pathogenesis. The first report shows that gp120 IIIB is able to upregulate α7 in human neuronal cells and the brains of mice expressing gp120 IIIB in the central nervous system. 35 The second report presents evidence that HIV-1 gp120 induces mucus formation in normal human bronchial epithelial cells through a CXCR4-dependent pathway that involves α7 (α7-GABA A Rα2). 36 In our study, we found that a soluble constituent of the HIV-1, gp120 IIIB , induces the upregulation of α7 in macrophages, as in neuronal cells, 35 demonstrating the ability of gp120 IIIB to upregulate α7 not only in the central nervous system but also in the immune system (Figures 1a, b, d and e). Interestingly, we also identified variations in basal α7 expression levels among donors, consistent with previous observations. 37 Moreover, the extent of the α7 upregulation in MDMs was directly proportional to the basal α7 expression levels (Figure 7). This variation in α7 expression in MDMs is in line with the functional and biochemical heterogeneity of macrophages among subjects 38,39 and the differences in their response 40 that have been proposed to arise from genetic variations. Furthermore, the activation of elevated levels of this highly calcium-permeable channel (α7) did not result in a significant increase in macrophage apoptosis (data not shown), which is consistent with the antiapoptotic signature expressed by monocytes recovered from HIV-infected patients 41 and macrophages infected with HIV-1. 42 Interestingly, in HIV+ and HIV − donors we identified two distinctive sub-populations of CD14 + monocytes expressing different levels of α7.

Figure 5 gp120 IIIB disrupts the cholinergic anti-inflammatory response of MDMs from uninfected donors. (a) IL quantification reveals that, consistent with the CAP operation, ACh addition significantly decreased proinflammatory cytokines (green bar). 18 However, gp120 IIIB pre-exposure (α7 upregulation) abolished the ACh-mediated anti-inflammatory response (orange bar). In agreement with early HIV studies, IL-10 levels increased in MDMs pre-exposed to gp120 IIIB , as occurs in patients. 63
The origin of these subsets is uncertain. However, we speculate that these differences in α7 expression in monocytes arise from monocytes' intrinsic genetic heterogeneity ('classical' and 'nonclassical' monocytes), 43 or from changes in α7 expression levels during the monocyte/macrophage conversion phase, as occurs with other cholinergic receptors. 18 In the case of HIV-infected patients, particularly, another possibility is that the inflammatory environment present in these patients selectively alters the appropriate assembly of α7 in one population of monocytes over the other, as demonstrated for other cholinergic receptors under proinflammatory settings, 44 thus disrupting the α-BuTX-binding capacity. Moreover, there is evidence demonstrating that HIV-1 modifies the monocyte plasma membrane proteome, 45 which could also selectively affect the expression of α7 in a specific sub-population of monocytes. From the experimental side, we asked where these cells ('classical' and 'nonclassical' monocytes) sit in the gating strategy. Our gating strategy only shows the two subsets of monocytes based on CD14/α7 expression. However, we were able to analyze the distinctive pattern distribution of monocyte subsets based on CD14/CD16 expression in a separate experiment. Two sub-populations were identified: the so-called 'classical' CD14 + /CD16 − monocytes and the 'nonclassical' CD14low/CD16 + monocytes. The position of these two monocyte subsets on the CD14 axis (FL1 channel) was the same as the position of the CD14/α7 monocyte subsets. Based on this observation, we hypothesize that α7-high monocytes are the nonclassical CD16 + sub-population, whereas α7-low monocytes are the classical CD16 − subset. This observation is important as both monocyte subsets differ in migration and functionality in HIV infection. 46 However, these experiments were carried out separately and further studies using a three-color staining protocol for CD14/CD16/α7 are needed to validate this hypothesis.
With lymphocytes as well, we observed an increase in α7 levels in CD3 + cells from HIV-seropositive patients. We understand that this change may reflect an increase in the receptor expression of both helper (CD4 + ) and cytotoxic (CD8 + ) T-lymphocytes or only in one of these two sub-populations. Thus, the change may reflect different T-lymphocyte sub-populations that become more prevalent in HIV-seropositive patients. Further studies should attempt to quantify α7 levels on different T lymphocytic sub-populations using a four-color panel design of CD3/CD4/CD8/α7 for flow cytometry.
Although we cannot rule out the possibility that other viral proteins play an important role in the α7 upregulation, we observed that HIV-infected subjects express elevated levels of α7 in their immune cells, a phenomenon recapitulated by gp120 IIIB addition to MDMs in vitro. Paradoxically, the activation of α7-upregulated macrophages did not inhibit the production of inflammatory cytokines and chemokines (Figures 5a and b). These findings highlight a possible viral strategy to disrupt an important innate immune response that neutralizes exaggerated inflammation and thus shed light on the chronic inflammation observed in HIV+ patients.
Our upregulation findings can be discussed from both the viral and the host point of view. From the viral point of view, whether the α7 upregulation is beneficial or detrimental to HIV remains unknown. However, the fact that the α7 is highly selective for calcium and the role that calcium has in the transcription, 47 replication and pathogenesis of HIV 48 invite the possibility that the virus could modulate α7 expression to allow the necessary calcium influx for its own benefit. 47 Alternatively, from the host's perspective, it is possible that α7 upregulation represents a frustrated attempt to control inflammation. The observed increase in α7 levels is in accordance with the α7 upregulation reported under inflammatory settings in T-lymphocytes, alveolar macrophages and neutrophils. [49][50][51] However, whether this new pool of α7s retains its ligand-gated ion channel activity or participates in the anti-inflammatory response remains unknown. Another possibility that we cannot exclude is that the increased expression of α7 in MDMs exposed to gp120 IIIB results from endocytosis of neighboring α7s.
With the advent of combined antiretroviral treatment, the nature of HIV disease has largely shifted from one of immunodeficiency to one of chronic and persistent inflammation. In the current study, we conclude that gp120 interferes with the cholinergic anti-inflammatory response because we found that α7 activation is no longer able to reduce the production of proinflammatory ILs and some chemokines (Figures 6 and 8). This result was puzzling because the activation of high levels of α7 was hypothesized to accentuate the anti-inflammatory response in MDMs. Remarkably, however, these results are actually in agreement with the elevated levels of cytokines reported in HIV-infected patients despite the α7 upregulation in MDMs (Figures 2a and b). The ILs that are commonly elevated in patients include tumor necrosis factor-α, IL-6 and IL-17, as well as the chemokines MCP-1, RANTES, IL-8, GRO-α and I-309. Interestingly, although the α7 antagonist tested here, Bup, tends to reduce chemokine production in upregulated macrophages (Figure 6), it has also been shown to reduce proinflammatory ILs in uninfected humans 52 and experimental animals, 53 suggesting that, in vitro, gp120 IIIB interferes with the anti-inflammatory properties of Bup, underscoring the complexity of the problem and the need for anti-inflammatory medication tailored to HIV+ patients. Our observations are significant because they reveal a previously unrecognized alteration in the cells that actively participate in immune response and inflammation. In HIV infection, deregulation of the cytokine networks promotes persistent and chronic inflammation, generating AIDS- and non-AIDS-related complications. 1,7 The elucidation of the processes by which HIV/gp120 disrupts the CAP is critical to the development of effective therapeutic strategies aimed at reducing HIV-related chronic inflammation. For instance, the underlying mechanism of the CAP has been suggested to involve the JAK2/STAT3 pathway, which comprises recruitment of JAK2 to the α7, autophosphorylation of JAK2, phosphorylation of STAT3 by JAK2, dimerization of phosphorylated STAT3 and nuclear translocation of dimerized STAT3, where it exerts its anti-inflammatory role. 20 The HIV gp120 IIIB protein could thus disrupt the CAP by indirectly interfering with JAK2 recruitment to the α7, JAK2 autophosphorylation, the phosphorylation of STAT3 or the nuclear translocation of dimerized STAT3, among other possibilities. In fact, a recent work proposed that gp120 signaling through STAT3 may explain the impairment of dendritic cells upon HIV exposure. 54 The current study is limited in that the cohort of HIV-infected patients consists exclusively of women. Nevertheless, the reason for using females in our study is that we have access to an extraordinarily well-characterized cohort of HIV+ female patients established by Dr Valerie Wojna. Several publications attest to the scrupulous characterization of this cohort for the past 14 years. 55,56 Overall, our findings demonstrate that gp120 IIIB can alter the normal function of an innate immune mechanism that controls inflammation and that Bup was able to partially rescue it (Figure 6). Moreover, these findings pave the way to study R5-tropic gp120 and determine whether CCR5 stimulation also influences α7 expression levels in MDMs.
The present results position the α7 as an attractive therapeutic target that could be exploited as adjunctive therapy to counteract the chronic inflammation that causes a number of AIDS and non-AIDS complications in HIV-infected individuals.
METHODS

Reagents
All reagents were purchased from Sigma-Aldrich (St Louis, MO, USA), unless otherwise specified.
Study subjects
All donors enrolled in this study signed the informed consent approved by the Institutional Committee for the Protection of Human Participants in Research (IRB number: 00000944). All experiments were performed in accordance with University of Puerto Rico guidelines and regulations. Phlebotomy to obtain peripheral blood mononuclear cells was performed on uninfected volunteer donors bled at the University of Puerto Rico, Río Piedras for the studies depicted in Figures 1, 5, 6 and 7. Donors were bled at the Puerto Rico Clinical and Translational Research Consortium. All HIV-infected donors were recruited as part of the Hispanic-Latino Longitudinal Cohort of HIV-seropositive women established at the NeuroAIDS Program of the University of Puerto Rico, Medical Sciences Campus. Inclusion criteria included HIV-infected individuals who presented with a CD4 nadir of ⩽ 500 cells per mm 3 and/or ⩾ 1000 copies of plasma viral load while using antiretroviral therapy upon study entry. Women with a history of neuropsychiatric disorders, active infectious process or active drug abuse were excluded. Evaluation consisted of history, neurological exam and neuropsychological test.
Smoking history was obtained using the Fagerström Test for Nicotine Dependency Questionnaire. 57
Cell culture
Whole blood from all subjects was processed as described elsewhere. 58 Peripheral blood mononuclear cells were counted by hemocytometer or with a Countess automated cell counter (Invitrogen, Eugene, OR, USA), adjusted to 1 × 10 6 cells per ml and seeded into 75 cm 2 flasks (Nunc, Rochester, NY, USA) for flow cytometry assays. For confocal imaging of HIV+ and HIV − subjects, 1-2 × 10 6 cells per ml were cultured in four-well Lab-Tek II Chambered Coverglass (Nalgene, Rochester, NY, USA) as described previously. 58 For western blot, MDMs were cultured in cell culture Petri dishes (Fisher Scientific, Pittsburgh, PA, USA). After separation of monocytes from lymphocytes by adherence, cells were differentiated for 7-8 days in RPMI-1640 supplemented with 20% inactivated fetal bovine serum, 10% inactivated human serum, 2 μg ml − 1 macrophage colony-stimulating factor (Invitrogen) and 1% PenStrep. All cultures were maintained at 37°C with 5% CO 2 . All experiments were performed with cells cultured from a single donor; blood or cells from different donors were not pooled. Cultures, buffers and reagents were endotoxin-free and experiments were performed under aseptic technique, which included routine biological monitoring of incubators and safety cabinets for microbial growth. Also, the gp120 IIIB manufacturer certified that endotoxin levels were ⩽ 100 EU mg − 1 .
Western blot
For the western blot, MDM lysates were obtained with lysis buffer (mercaptoethanol diluted in phosphate-buffered saline (PBS, 1 ×) to a final concentration of 2.5% and supplemented with a protease inhibitor cocktail (Thermo Scientific, Waltham, MA, USA); pH 7.4). Protein sample quantification was performed using a Nanodrop (Thermo Scientific). Total homogenate samples, 50 μg, were loaded onto a 10% polyacrylamide gel and run for ~1 h at 30 V, and then at 90 V until completion. After electrophoresis, gels were transferred to a PVDF membrane (Bio-Rad, Hercules, CA, USA) using a wet system (Bio-Rad) for 1 h at 100 V. After this, membranes were incubated in a blocking solution (5% non-fat dry milk in Tris-buffered saline with Tween-20 (TBS-T, 1 ×)) for 1 h at room temperature. Subsequently, primary antibody incubation for α7, diluted 1:200 (cat. no.: H-302; Santa Cruz Biotechnology, Santa Cruz, CA, USA), was performed overnight at 4°C. After three consecutive washes (5 min each) with TBS-T 1 ×, a horseradish peroxidase-conjugated anti-goat secondary antibody diluted 1:2000 (cat. no.: AP307P; Millipore) was added and incubated for 1 h at room temperature. Membranes were processed using a chemiluminescence assay (Super Signal West Dura Extended Duration Substrate; Thermo Scientific) following the manufacturer's instructions. Blots' relative intensities were evaluated using UVP Vision Works software (UVP, LLC, Upland, CA, USA). The band quantifications are presented as the α7/GAPDH (glyceraldehyde 3-phosphate dehydrogenase) ratio for each experimental condition.
Confocal imaging
After differentiation, MDMs were incubated and maintained in media supplemented with full-length monomeric glycosylated gp120 IIIB expressed in baculovirus (⩾95% purity by sodium dodecyl sulfate-polyacrylamide gel electrophoresis; Fitzgerald Industries International, Acton, MA, USA) for 72 h, or with stromal-derived factor 1α (EMD Chemicals Inc., Gibbstown, NJ, USA) at 0.3 μg ml − 1 . Monomeric gp120 was used for the following reasons: (i) monomeric gp120 interacts with macrophages in vivo, 59 (ii) monomeric and trimeric gp120 induce similar inflammatory responses 59 and (iii) monomeric gp120 triggers signaling in macrophages similar to that observed with the whole virus. 60 After incubation, the media were removed and MDMs were washed with PBS 1 × (pH 7.4), followed by fixation with 4% formaldehyde for 15 min at room temperature, washed once with PBS 1 × and labeled with Alexa-488-α-BuTX (Invitrogen) for 1 h at a 70 μg ml − 1 final concentration in buffer (NaCl, 120 mM; KCl, 4 mM; KH 2 PO 4 , 1.2 mM; MgSO 4 , 1 mM; HEPES, 15 mM (pH 7.4); CaCl 2 , 1 mM; bovine serum albumin, 2% and glucose, 1%). After α-BuTX labeling, MDMs were washed with PBS 1 × to remove unbound α-BuTX, followed by the addition of a 90% glycerol/PBS 1 × solution, and finally examined under a confocal microscope (Zeiss LSM Meta 510, Carl Zeiss, Pleasanton, CA, USA) at the Confocal Imaging Facility, University of Puerto Rico (http://www.cifupr.org). The remaining bound α-BuTX was excited at a wavelength of 488 nm (0.2%) using an Argon/2 laser and its emission was acquired at 520 nm using a BP 505-550 filter and a 64 μm pinhole with a Plan-Apochromat × 20/0.8 M27 objective. Images were acquired by random snapshots at 2048 × 2048 dpi followed by background subtraction. Relative fluorescence intensity analyses of each MDM were performed using the LSM 510 software. Relative intensities were averaged and plotted. Samples recovered from HIV+ and HIV − subjects were prepared as described above, but the incubation time was 30 min. The magnification used for patient samples was × 20 with a 2.0 μm pinhole. The competitive binding assay was performed by adding nicotine to a final concentration of 500 mM before addition of Alexa-488-α-BuTx (2 μg ml − 1 ). MDMs were incubated for 15 min at 4°C in the dark and washed with non-supplemented RPMI-1640 base. Cells were then fixed with 4% formaldehyde-PBS solution (pH 7.2) for 15 min at room temperature. After fixation, MDMs were washed with PBS 1 × (pH 7.2). Finally, Vectashield with DAPI (Vector Labs, Burlingame, CA, USA) was added for visualization and examination by confocal microscopy. Images were collected in Z-stacks at a magnification of × 100 and analyzed. Random snapshots were taken and individual MDMs were analyzed for mean intensity and averaged.
Flow cytometry
To determine α7 expression levels in monocytes and T-lymphocytes from HIV+ and HIV − donors, freshly drawn blood samples (100 μl) were incubated with the α7 antagonist Alexa-647-α-BuTx (1 h, 4 °C, 2 μg ml − 1 ), CD14-FITC monoclonal antibody (BD Biosciences, San Jose, CA, USA) and CD3-PerCP monoclonal antibody (BD Biosciences), following the manufacturer's instructions. α-BuTx is an α7 antagonist that binds with high affinity (K d = 94 pM) 27 and is widely used in α7 expression studies of immune and other cells. 61,62 Erythrocytes were lysed by adding 1 × FACS lysis solution (Becton Dickinson, San Jose, CA, USA) for 10 min at 4°C. Cells were then washed two times with PBS 1 ×/fetal bovine serum (3%) by centrifugation at 1100 r.p.m. for 5 min at room temperature. Later, cells were fixed with 0.5% paraformaldehyde and analyzed using flow cytometry. Monocytes were gated in a forward scatter (FSC) vs side scatter (SSC) dot plot by size and granularity, and the CD14 + and Alexa-647-α-BuTx-labeled cells were identified in an FL1 vs FL4 dot plot. T-lymphocytes were also gated in an FSC vs SSC dot plot, and the CD3 + and Alexa-647-α-BuTx-labeled cells were identified in an FL3 vs FL4 dot plot. In the case of MDMs, after differentiation, these were labeled with CD14-FITC antibody and Alexa-647-α-BuTx (1 h, 4 °C, 2 μg ml − 1 ). MDMs were gated in an FSC vs SSC dot plot, and the CD14 + and Alexa-647-α-BuTx-labeled cells were identified in an FL1 vs FL4 dot plot. For all experiments, FITC, PerCP and Alexa-647-α-BuTx emissions were measured in the FL1 (bandpass filter 530/30 nm), FL3 (585/40 nm) and FL4 (bandpass filter 661/16 nm) channels, respectively. Twenty thousand events were analyzed for each sample and the α7 fluorescence intensity of cells was analyzed from the median peak channel of the histograms. Data on scatter parameters and histograms were acquired in log mode. Viability assays for control (97.6%) and gp120 IIIB -treated MDMs (98.5%) were performed using 7-aminoactinomycin D (BD Biosciences) following the manufacturer's indications and measured in the FL3 channel (585/40 nm). The autofluorescence of monocytes, MDMs and T-lymphocytes was subtracted from the fluorescence intensity values of the stained samples. All samples were assayed using a FACSCalibur (Becton Dickinson) cytometer and analyzed using Cell Quest software (BD Biosciences). This software was used for data acquisition and multivariate analysis.
IL and chemokine quantification
Peripheral blood mononuclear cells from control subjects were cultured (7-8 days) in 24-well plates, differentiated into MDMs (Supplementary Figure 3A) and assayed for IL and chemokine production after treatments (Supplementary Figure 3B). After differentiation, media were changed for fresh media and gp120 IIIB was added for 72 h (to induce α7 upregulation), followed by three consecutive fresh media washes to remove gp120 IIIB . Later, to test the cholinergic anti-inflammatory response, an inflammation inducer, LPS, was added according to the experimental condition tested. The cholinergic anti-inflammatory response experimental treatments consisted of LPS (100 ng ml − 1 ) challenges using Escherichia coli O111:B4 (Sigma, St Louis, MO, USA), followed by the addition of ACh (30 μM). The acetylcholinesterase inhibitor pyridostigmine (1 mM) was added 10 min before ACh application to avoid ACh hydrolysis. In the case of Bup (70 ng ml − 1 )-containing assays, Bup was added 10 min before LPS or ACh application to partially antagonize α7. Supernatants were collected 20 h post-treatment and stored at − 80°C for further analysis. For further details about the experimental design and procedures refer to Supplementary Figures 2 and 3. All supernatants were sent to a contract laboratory (Quansys Biosciences, Logan, UT, USA) for quantification using multiplex ELISA technology. Samples were analyzed in triplicate.
Statistical analysis
Nonparametric statistics were used because of the small sample sizes. Comparisons between independent groups were made using the Mann-Whitney U-test, and paired analysis was performed using the Wilcoxon signed-rank test. A one-sample t-test was used to compare the gp120 treatment means with the LPS condition (normalized to 1). A P-value of < 0.05 was considered to be significant. The Spearman's test was used to determine associations between two variables, with correlations considered to be significant when r > 0.3 and P < 0.05. All statistical analyses were performed with GraphPad (GraphPad, San Diego, CA, USA).
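As an illustration of how this nonparametric workflow could be reproduced, the sketch below uses Python with SciPy as an assumed stand-in for GraphPad; the sample values, group sizes and variable names are hypothetical and serve only to show how each reported test is invoked.

```python
# Hedged sketch of the nonparametric tests described above, with SciPy standing in
# for GraphPad. All data below are simulated and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Independent groups (e.g., alpha7 MFI in monocytes from HIV- vs HIV+ donors): Mann-Whitney U
mfi_hiv_neg = rng.normal(305, 78, size=10)
mfi_hiv_pos = rng.normal(403, 110, size=17)
u_stat, p_mw = stats.mannwhitneyu(mfi_hiv_neg, mfi_hiv_pos, alternative="two-sided")

# Paired conditions (e.g., LPS vs LPS + ACh in the same MDM cultures): Wilcoxon signed-rank
il6_lps = rng.normal(1.0, 0.2, size=8)
il6_lps_ach = il6_lps * rng.normal(0.6, 0.1, size=8)
w_stat, p_wx = stats.wilcoxon(il6_lps, il6_lps_ach)

# Association between two variables (e.g., basal alpha7 level vs fold upregulation): Spearman
basal = rng.normal(1.0, 0.3, size=12)
fold_up = basal * 1.5 + rng.normal(0, 0.1, size=12)
rho, p_sp = stats.spearmanr(basal, fold_up)

print(f"Mann-Whitney U: U={u_stat:.1f}, P={p_mw:.3f}")
print(f"Wilcoxon signed-rank: W={w_stat:.1f}, P={p_wx:.3f}")
print(f"Spearman: rho={rho:.2f}, P={p_sp:.3f} (association if rho > 0.3 and P < 0.05)")
```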
The Supply Chain Process Management Maturity Model – SCPM3
In recent years, a growing amount of research, much of which is still preliminary, has been dedicated to investigating maturity model development for the strategic management of supply chains (Chan and Qi, 2003; Gunasekaran et al., 2001; Coyle et al., 2003). The concept of process maturity derives from the understanding that processes have life cycles or developmental stages that can be clearly defined, managed, measured and controlled throughout time. A higher level of maturity, in any business process, results in: (1) better control of the results; (2) more accurate forecasts of goals, costs and performance; (3) higher effectiveness in reaching defined goals and the management ability to propose new and higher targets for performance (Lockamy and McCormack, 2004; Poirier and Quinn, 2004; McCormack et al., 2008). In order to meet the performance levels desired by customers in terms of quantitative and qualitative flexibility of service in demand fulfillment, deadline consistency and reduction of lead times related to fulfilling orders, firms have developed repertoires of abilities and knowledge that are used in their organizational processes (Day, 1994, cited in Lockamy and McCormack, 2004; Trkman, 2010). In the past two decades, management of supply chain processes has evolved, also because of these new demands, from a departmental perspective, extremely functional and vertical, to an organic arrangement of integrated processes, horizontal and definitely oriented to providing value to intermediate and final customers (Mentzer et al., 2001). This new pattern of logistical process management has led to the development and application of different maturity models and performance metrics useful as support tools to help define a strategy and face trade-offs, as well as to identify items that are considered critical to quality improvement of logistical services rendered to the client. The purpose of this article is to explore the concept of maturity models and to answer an important question specifically directed to the management of supply chain processes: What best practices are fully matured and in use at what maturity level? This paper will more fully define the maturity levels based upon the capabilities of the company using statistical analysis of a global data set.
Introduction
In recent years, a growing amount of research, much of which is still preliminary, has been dedicated to investigating maturity model development for the strategic management of supply chains (Chan and Qi, 2003; Gunasekaran et al., 2001; Coyle et al., 2003). The concept of process maturity derives from the understanding that processes have life cycles or developmental stages that can be clearly defined, managed, measured and controlled throughout time. A higher level of maturity, in any business process, results in: (1) better control of the results; (2) more accurate forecasts of goals, costs and performance; (3) higher effectiveness in reaching defined goals and the management ability to propose new and higher targets for performance (Lockamy and McCormack, 2004; Poirier and Quinn, 2004; McCormack et al., 2008). In order to meet the performance levels desired by customers in terms of quantitative and qualitative flexibility of service in demand fulfillment, deadline consistency and reduction of lead times related to fulfilling orders, firms have developed repertoires of abilities and knowledge that are used in their organizational processes (Day, 1994, cited in Lockamy and McCormack, 2004; Trkman, 2010). In the past two decades, management of supply chain processes has evolved, also because of these new demands, from a departmental perspective, extremely functional and vertical, to an organic arrangement of integrated processes, horizontal and definitely oriented to providing value to intermediate and final customers (Mentzer et al., 2001). This new pattern of logistical process management has led to the development and application of different maturity models and performance metrics useful as support tools to help define a strategy and face trade-offs, as well as to identify items that are considered critical to quality improvement of logistical services rendered to the client. The purpose of this article is to explore the concept of maturity models and to answer an important question specifically directed to the management of supply chain processes: What best practices are fully matured and in use at what maturity level? This paper will more fully define the maturity levels based upon the capabilities of the company using statistical analysis of a global data set.

…team is not doing its job; (iii) if external information can be used, we will do so, but we will not share it with anybody. The company can only expand its efficiency levels when its leadership, especially the leadership linked to the operations areas, decides to break with these premises and dissipate the restrictions that they impose. At the third level, the company develops or redesigns its inter-organizational processes and starts to create a business network with few and carefully selected allies. During this stage, important suppliers are invited to participate in planning, operations and sales sessions (S&OP - Sales and Operations Planning), bringing supply and demand closer to each other. Global relationships are established with logistical service suppliers, qualified in relation to transport, logistics and storage functions, and clients are encouraged to give feedback regarding current and desired products. Business allies, at this level, work together, using various tools and collaborative techniques to reduce, through mutual initiatives and shared results, cycle times, especially time-to-market, using their assets more efficiently. The fourth level is characterized by collaborative initiatives.
Companies start using methodologies such as Activity Based Costing (ABC) and the Balanced Scorecard to transform the supply chain into a value network of partners who work towards the same strategic goals. Information is shared electronically, and inter-company teams are formed to find solutions for specific client problems. E-commerce technologies are considered crucial for this level, guaranteeing real-time sharing of all relevant information at each point of the value chain. Developing and using models and methodologies for collaborative design, planning and replenishment is crucial at this stage of the evolution of inter-organizational relationships. The fifth and most advanced stage of the supply chain is the most difficult goal to achieve. It is a developmental stage characterized by complete integration among agents throughout the whole supply chain. According to Poirier and Quinn (2003), only a few organizations in a few sectors reach this stage. It is a stage of complete collaboration throughout the network and of strategic use of information technology to achieve position and status in the market. At this stage, companies usually reach extraordinary order prediction levels as well as a reduction in cycle times throughout networks connected completely electronically.
The business process orientation maturity model
The concept of Business Process Orientation suggests that companies may increase their overall performance by adopting a strategic view of their processes. According to Lockamy and McCormack (2004), companies with a strong orientation towards their business processes reach greater levels of organizational performance and have a better work environment, based on much more cooperation and fewer conflicts. A very important aspect of this model is the use of SCOR to identify the processes' maturity (Lockamy and McCormack, 2004; SCC, 2003). The SCOR measurements were adopted for their process orientation characteristics and their growing use among professionals and academics who are directly involved in logistics matters. The five stages of the maturity model show the progression of activities towards efficiently managed supply chains. Each level contains characteristics associated with factors such as predictability, capability, control, effectiveness and efficiency. Ad Hoc, the model's first level, is characterized by poorly defined and badly structured practices. Process measurements are not applied and work and organizational structures are not based on the horizontal processes of the supply chain. Performance is unpredictable and costs are high. Cross-functional cooperation and client satisfaction levels are low.
At the second level, defined, SCM's basic processes are defined and documented. There is no alteration of jobs or organizational structures. However, performance is more predictable. Considerable effort is required to overcome company problems, and costs remain high. Client satisfaction levels improve but still remain low compared to levels reached by competitors. At the third level, linked, SCM (supply chain management) principles are applied. Organizational structures become more horizontal through the creation of authorities that cut across functional units. Cooperation among intra-organizational functions, supply managers and clients takes the form of teams that share common SCM measures and objectives with a horizontal scope in the supply chain. Continuous improvement efforts are made, aiming to stop problems early and thus achieve better performance. Cost efficiency grows and clients start to get directly involved in the improvement of intra-organizational processes. At the fourth level, integrated, the company, its suppliers and clients cooperate strategically at the process level. Organizational structures and activities are based on SCM principles and traditional tasks, related to the expanded value chain processes, start to disappear. Performance measurements for the supply chain are used, together with advanced practices based on collaboration. Process improvement objectives are team-oriented and generally achieved. Costs are drastically reduced, and client satisfaction, as well as team spirit, becomes a competitive advantage. At the final level, extended, competition is based on multi-organizational supply chains. Multi-organizational SCM teams appear, with expanded processes, recognized authority and objectives throughout the supply chain. Trust and interdependence form the support base of the extended supply chain. Process performance and trust in the extended system are measured. The supply chain is characterized by a client-focused horizontal culture. Investments in the system's improvement are shared, as are the returns on those investments.
Building the Supply Chain Process Management Maturity Model - SCPM3
However, while previously developed maturity models outline the general path towards achieving greater maturity, the idea of our paper is to identify more clearly which particular areas are important, and at which level, in the quest for greater maturity. We answer the question: What best practices are fully matured and in use at what maturity level? This will more fully define the maturity levels based upon the capabilities present within the assessed company. From a database containing 90 indicators of capabilities in supply chain management processes, composed of responses from 788 companies located in the USA, Canada, the United Kingdom, China and Brazil, an exploratory factor analysis (EFA) was conducted. EFA using maximum likelihood aims to find models that can represent the dataset by organizing the variables into constructs, i.e. groupings. The dataset was composed of respondents whose functions were directly related to supply chain management processes. The sample deliberately included companies from different industries in order to get a cross-industry perspective. The study participants were selected from two major sources. Set 1 - the membership list of the Supply Chain Council. The "user" or practitioner portion of the list was used as the final selection, representing members whose firms supplied goods rather than services, and who were thought to be generally representative of supply chain practitioners rather than consultants. An email solicitation recruiting participants for a global research project on supply chain maturity was sent out to companies located in the USA, Canada, the United Kingdom and China. The responses represent 39.3% of the sample composition, with 310 cases. Set 2 - in Brazil, the companies were selected from a list of an important educational institution for logistics and supply chain management in the country. An electronic survey was conducted. From a total of 2,500 companies contacted, 534 surveys were received, yielding a response rate of 21.4 percent. After data preparation, 478 respondents were included in the sample, representing 60.7% of the total sample. From the results, considering a cut-off of eigenvalues greater than 1.0, 16 constructs were retained, which were able to represent 64.3% of the overall data variance. The Kaiser-Meyer-Olkin measure of sampling adequacy, representing the proportion of the variables' variance that could be caused by the factors, yielded a very high value of 0.958, indicating that the results of the EFA can be useful for the dataset. Moreover, Bartlett's Test of Sphericity was conducted, resulting in a significance value lower than 0.0001 and demonstrating a good relationship between the variables that would be considered to detect a possible structure or model. Additionally, the goodness-of-fit test also demonstrated that those 16 groupings have an excellent adjustment to the dataset, with a significance also lower than 0.0001. Further, the 16 constructs previously detected by EFA were submitted to a content analysis, considering the meaning of each question used to compose the data collection questionnaire. This procedure enabled a refinement resulting in a new list of 13 groupings, leaner and more objectively composed, that were used to support the first version of the Supply Chain Process Management Maturity Model (SCPM3). Cronbach's Alpha was calculated for each of the 13 groupings and all groupings obtained values above 0.6, showing good scale reliability.
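For readers who want to reproduce this kind of analysis, the sketch below outlines the EFA workflow described above (KMO, Bartlett's sphericity, maximum-likelihood extraction with an eigenvalue-greater-than-one cut-off and Cronbach's alpha) in Python. The factor_analyzer package, the file name and the item column names are illustrative assumptions; they are not the tools or data of the original study.

```python
# Hedged sketch of the EFA steps described above. The CSV file and the item names
# are hypothetical; `items` is assumed to hold the 90 capability indicators as
# columns, one respondent per row.
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Cronbach's alpha for the items (columns) of one grouping."""
    k = df.shape[1]
    item_var = df.var(axis=0, ddof=1).sum()
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

items = pd.read_csv("capability_indicators.csv")          # hypothetical data set (788 x 90)

# Sampling adequacy and sphericity
_, kmo_overall = calculate_kmo(items)
chi2, p_bartlett = calculate_bartlett_sphericity(items)

# Number of factors from eigenvalues > 1, then maximum-likelihood extraction
probe = FactorAnalyzer(rotation=None)
probe.fit(items)
eigenvalues, _ = probe.get_eigenvalues()
n_factors = int((eigenvalues > 1.0).sum())

fa = FactorAnalyzer(n_factors=n_factors, method="ml", rotation="varimax")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

print(f"KMO = {kmo_overall:.3f}, Bartlett p = {p_bartlett:.4g}, factors retained = {n_factors}")
# Scale reliability of one hypothetical grouping of items
print("Cronbach's alpha:", round(cronbach_alpha(items[["dm_q1", "dm_q2", "dm_q3"]]), 2))
```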
Additionally, through a collaborative effort with a group of specialists in process management and supply chains, the 13 groupings were labeled considering the variables comprising them. A complete list of groupings and their respective variables can be found in the appendix of this paper. In order to identify the hierarchical relationships between the groupings and also the key turning points (McCormack et al., 2009) that could be used to classify them into different maturity levels, together with the respective cut-off points triggering a level change, a set of cluster analysis procedures was conducted. Cluster analysis, also denominated "segmentation analysis" or "taxonomic analysis", aims to identify subgroups of homogeneous cases in a population. In this sense, cluster analysis can identify a set of groups that minimizes the internal variation and maximizes the variation between groups (Garson, 2009). To prepare the dataset for the cluster analysis, a new variable was generated for each grouping based on the sum of the scores of all variables in that grouping. Later, a Maturity Score variable was generated by summing the new indicators of all groupings, representing the maturity score for each of the 788 cases in the sample. Further, a TwoStep cluster analysis was then conducted, considering the maturity score as a continuous variable and taking a fixed number of 5 clusters - each representing one maturity level - aligned with the traditional classification of existing maturity models, which are composed of five evolution levels. The TwoStep cluster analysis groups cases into pre-clusters that are treated as single cases. As a second step, hierarchical clustering is applied to the pre-clusters. The 788 cases in the sample were then classified according to their positions in each of the five clusters, i.e. in each of the five maturity levels, identifying the respective turning points.
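A rough sketch of this aggregation step is shown below: each grouping receives a score equal to the sum of its indicator scores, and the grouping scores are then summed into one maturity score per respondent. The grouping-to-item mapping and the file name are hypothetical; the real model uses the 13 groupings listed in the appendix.

```python
# Hedged sketch of the score aggregation described above. The mapping below is
# hypothetical and abbreviated; the SCPM3 uses 13 groupings of the 90 indicators.
import pandas as pd

items = pd.read_csv("capability_indicators.csv")          # hypothetical data set (788 x 90)

groupings = {
    "demand_management_and_forecasting": ["dm_q1", "dm_q2", "dm_q3"],
    "order_management": ["om_q1", "om_q2"],
    "process_governance": ["pg_q1", "pg_q2"],
    # ... remaining groupings identified by the factor analysis
}

grouping_scores = pd.DataFrame(
    {name: items[cols].sum(axis=1) for name, cols in groupings.items()}
)
grouping_scores["maturity_score"] = grouping_scores.sum(axis=1)
print(grouping_scores["maturity_score"].describe())
```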
Considering each cluster as a distinct maturity level and taking the centroids identified for each cluster, the turning points for each level were established based on the minimum score for level 1 and the average between two consecutive centroids for the other levels, as illustrated in Figure 1. Taking the key turning points, all 788 cases were then reclassified according to their maturity level and identified in a new variable, "LMaturity". In this sense, companies with maturity scores between 90 and 202 points were positioned at maturity level 1; between 203 and 256 points at level 2; between 257 and 302 at level 3; between 303 and 353 at level 4; and at 354 points or above at maturity level 5. This classification was based on a previous definition of the maturity levels as discussed by McCormack, Johnson and Walker (2003), with the turning points identified from the data of the present research. The internal turning points in each process grouping - i.e., the points that can be used to define a change in maturity level for each group - were further identified by means of cluster analysis with the K-means algorithm. This method, using the Euclidean distance, initially defines the centroids for each cluster at random and then initiates the iteration cycle. In each iteration the method assigns the observed values to the cluster whose centroid is closest in Euclidean distance. In this sense, the algorithm aims to minimize the internal variance of each cluster and maximize the variance between clusters. The cluster centroids change in each iteration according to their new composition. The process continues until saturation is reached - with no more changes in centroids - or until the maximum number of iterations is reached. As conducted previously, the definition of the key turning points (McCormack et al., 2009) was based on the centroid scores. For the first level the minimum score for each construct was taken and, for the others, the average of the centroids of the previous level and the level itself was considered for each group.
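The level derivation can be sketched as follows. SPSS's TwoStep algorithm is not available in most open-source stacks, so plain k-means with five clusters is used here as an assumed stand-in; the maturity scores are simulated, and the published cut-offs (90, 203, 257, 303 and 354) came from the original data set, not from this sketch.

```python
# Hedged sketch: five maturity levels and their turning points from the overall
# maturity score, using k-means as a stand-in for SPSS's TwoStep procedure.
# The scores below are simulated for illustration.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
scores = rng.normal(270, 60, size=788).clip(90, 450).reshape(-1, 1)   # hypothetical maturity scores

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(scores)
centroids = np.sort(km.cluster_centers_.ravel())

# Turning points: the minimum score opens level 1; midpoints between consecutive
# centroids open levels 2-5.
turning_points = np.concatenate(([scores.min()], (centroids[:-1] + centroids[1:]) / 2.0))

def maturity_level(score: float) -> int:
    """Map a maturity score to a level 1-5 using the turning points."""
    return int(np.searchsorted(turning_points, score, side="right"))

levels = np.array([maturity_level(s) for s in scores.ravel()])
print("turning points:", np.round(turning_points, 1))
print("companies per level:", np.bincount(levels)[1:])
```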
Aiming to find evidence about the precedence relationships between groups, the Euclidean distance correlation matrix was used as a reference. This matrix was calculated based on a dissimilarity measure - i.e. the distance between the variables - based on the square root of the sum of the squared differences between the items. As discussed by Székely, Rizzo and Bakirov (2007), the correlation of Euclidean distances can be considered a new alternative for measuring the dependence between variables. In this sense, taking the scores from the proximity matrix as reference, the hierarchical analysis of the groups was conducted based on the Euclidean measure and the average linkage between groups. As a result of this procedure a dendrogram was generated (Figure 2) representing the precedence between each group of indicators of capabilities in supply chain management processes. To test the hierarchical relationships between groupings and the model composition, and aiming to identify potential adjustments, path modeling and structural equation analysis were conducted. The tests were conducted relating the constructs of the maturity model to a performance variable (PSCOR), generated by summing the scores given by the respondents for overall performance in the SCOR areas of Plan, Source, Make and Deliver. As a result, a new list of relationships between variables was generated, indicating which changes could improve the model fit by reducing the Chi-square score. Using a cut-off of 200 points to determine which relationships could generate a significant improvement in model fit, the constructs of Strategic Behavior and Strategic Planning Team were considered to improve the model fit if related. Understanding that strategic behavior is conditioned by firms developing teams to strategically plan their supply chain processes, the relationship was considered valid. Additionally, looking at the composition of the construct Strategic Behavior, it is possible to notice that its indicators of process capability refer, in general, to evidence of the existence of a strategic planning team working with a wide view of the chain, considering the profitability of each customer and each product, working on the relationships with business partners, defining business priorities and evaluating the impact of the strategies on the business based on previously defined performance measures.
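The dendrogram step described above can be sketched with standard tools: average-linkage hierarchical clustering over the Euclidean distances between the grouping score profiles. SciPy and matplotlib are assumed stand-ins for the software used in the original study, and the grouping scores below are simulated.

```python
# Hedged sketch of the hierarchical (average-linkage, Euclidean) analysis that
# produced the dendrogram described above. Grouping names are from the appendix;
# the score profiles are simulated for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

group_names = ["Demand Management and Forecasting", "Distribution Network Management",
               "Order Management", "Process Governance", "Foundation Building",
               "Responsiveness", "Collaboratively Integrated Practices"]

rng = np.random.default_rng(1)
profiles = rng.normal(20, 5, size=(788, len(group_names)))   # simulated grouping scores

# Pairwise Euclidean distances between groupings (columns), then average linkage
dist = pdist(profiles.T, metric="euclidean")
tree = linkage(dist, method="average")

dendrogram(tree, labels=group_names, orientation="right")
plt.tight_layout()
plt.savefig("scpm3_dendrogram.png")
```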
In addition, the relationships between groups were tested and all weights were calculated and validated considering a p-value < 0.001, except for the group Strategic Planning Team. This group, when considered as a reflective variable for Responsiveness and Collaboratively Integrated Practices, was rejected by the significance test. This result shows that it is not possible to assert that the estimated regression weight is different from zero and, therefore, a direct relationship between those constructs cannot be assumed. Considering those results, the construct of Strategic Behavior was repositioned in the model, inverting the precedence relationship previously identified and positioning it as a successor of Strategic Planning Team. After adjusting the model to the new structure, it was resubmitted to structural equation modeling and path analysis and a new table with the new regression weights was generated. All estimated regression weights for the new model, considering the relationships between groups, were considered significantly valid. Thus, the visual representation of the model was readjusted considering the new precedence relationships, as well as the previously identified turning points that can be used to determine the change of levels on a maturity scale for supply chain management processes. Finally, the model and the nature of the relationships between the variables were discussed by specialists from the BPM Team, and some final adjustments were suggested to be implemented in the model and further validated by empirical research, connecting the construct of Foundation Building as a direct antecedent of Demand Management and Forecasting, Production Planning and Scheduling and Supply Network Management. Such suggestions were considered valid and adopted to be tested in future research, considering that the background generated by Foundation Building is a necessary condition for companies to develop capabilities that enable effective demand forecasting and demand management, generating important inputs for the production planning and scheduling processes and also for the management of the supplier network. The final SCPM3 model emerging from the statistical analysis is presented in Figure 3 and discussed below. The best practices present at each maturity level are shown at the level where they become fully mature (the practices are additive as the company progresses). Level 1 - Foundation - is characterized by building a basic structure, aiming to create a foundation for the processes, avoiding ad hoc procedures and unorganized reactions and looking to stabilize and document processes. At this level, the critical business partners are identified and order management best practices are implemented considering capacity restrictions and customer alignment. Companies positioned at the Foundation level have the following characteristics: Process changes are hard to implement. Changes are usually energy consuming and hurt the relationships between the professionals involved. Changes are slow and need big planning efforts. There is a constant perception that customers are not satisfied with the company's delivery-time performance. Commitments to customers cannot be considered reliable and the company does not have adequate control over what has been ordered and not yet delivered. Companies are not prepared to fulfil deliveries to customers when special treatment is requested.
Processes are not flexible and, therefore, many alternative resources are used to try to meet customer expectations, generating unnecessary expenses for the organization.
Inadequate demand forecasts and a lack of internal process integration generate problems caused by sellers promising more than the company's productive capacity can deliver and its inventory levels can support. Additionally, the company does not control and does not properly document shortfall situations. Processes of order placement, distribution and procurement are not properly documented. Companies' information systems do not fully support all supply chain processes. Companies have not yet identified strategic suppliers for products and services. Service levels with suppliers are not appropriately agreed, understood and documented. At Level 2 - Structure - processes start to be structured in order to be further integrated. Control items are implemented in demand management processes, in production planning and scheduling and for distribution network management. Downstream, distribution network management practices are structured and the processes are defined. Demand starts to be evaluated in more detail. In the other direction, the processes of production planning and scheduling are structured, taking demand management and forecasting as inputs. Companies positioned at the Structure level have the following characteristics: Investments are made to document the planning and scheduling flows and to develop metrics to verify the adherence of production scheduling to the plans and to business needs. Plans start to be developed in more detail, considering each item or service to be produced. Production plans start to be integrated across the company's divisions and the applied methodologies consider capacity constraints. Information systems start to support the operations and integrate with organizational processes. Demand is evaluated for each item/service considering historical order data, and a process of demand management and forecasting is implemented and formalized.
Mathematical and statistical methods, together with customer information, are used as the baseline for distribution planning and demand forecasting. Forecasts are frequently updated and reliable. Forecasts are measured for accuracy and become the baseline for the development of plans and commitments with customers. The impact of future process changes is evaluated in detail before they are implemented. Each node in the distribution chain has measures and controls implemented. Automatic replenishment practices are in place in the distribution network. Distribution processes are measured and controlled and participants are rewarded based on those measures. When organizations reach Level 3 - Vision - process owners are established and become responsible for process management and performance results. Procurement processes are evaluated by a team that looks strategically at acquisitions in order to align the interests of the marketing and operations departments. At this level, the organization can be assumed to start developing a strategic behavior considering a broader perspective of the supply chain. Companies positioned at the Vision level have the following characteristics: There is a formally designated procurement team meeting periodically with other organizational functions such as marketing and operations. The order commitment process has an owner who guarantees that commitments to customers are fulfilled. Similarly, the key processes of distribution, supply chain network planning, demand planning, procurement and operations have formal owners. Companies have a formally designated team responsible for the development of the operations strategic plan. The functions of sales, marketing, operations and logistics are represented on this team. The operations strategic planning team meets regularly and uses adequate analysis tools to identify the impact of changes before they are made. There is a documented operations strategy planning process. When the team meets and makes adjustments to the strategies, such adjustments are properly updated in the documents. At Level 4 - Integration - companies seek to build a collaborative environment with their supply chain business partners. The organizational processes integrate with the processes of suppliers and customers in a collaborative platform. Forecasts are developed in detail, considering the demands of each customer individually. The relationship with upstream partners becomes more solid and integrated. The company, based on a set of concrete metrics and health data about the process flow, starts to use analytics and becomes more strategically driven with its supply chain partners. Companies at the Integration level have the following characteristics: They start to develop, with their partners, the capability to respond to demand signals, working in a "pull" way. The company aligns with its suppliers in developing plans. Measures and controls are implemented to appraise suppliers' performance.
Suppliers have access to the company's inventory levels and information about production planning and scheduling is shared with them. Critical suppliers are considered partners and have broad access to the company's production information. The strategic planning team, established at the previous level, now continuously assesses the impact of its strategies based on supply chain performance measures. The strategic planning team is involved in the process of selecting new members and partners for the supply chain and actively participates in the relationships with suppliers and customers. The strategic planning team appraises the profits generated by each customer and each product individually and, based on such appraisal, defines specific priorities for each customer and product. Level 5 - Dynamics - is characterized by a strategic integration of the chain, in which processes support collaborative practices between partners and generate a baseline enabling the chain to be responsive to market changes. The chain starts, therefore, to behave dynamically, continually improving its processes considering its key performance indicators and reacting quickly and in a synchronized way to changes in the competitive environment. Companies positioned at the Dynamics level have the following characteristics: The functions of sales, marketing, distribution and planning collaborate with each other in the order commitment process and in developing forecasts. The order commitment process is integrated with the other supply chain processes. The demand management process and production planning and scheduling are completely integrated. Companies establish a close relationship with customers and have control over demand and capacity constraints. Companies attend to the short-term demands of customers and act in a responsive way. Supply times are considered critical for production planning and are continuously revised and updated. Companies track their orders and measure the percentage of orders delivered on time.
Using the SCPM3 - A DRK methodological proposal
The following set of steps can be used by managers and consultants as a roadmap for process improvement to maximize the return on investment in supply chain management.
The basis of the application can be defined in three inter-related macro stages, as follows: the Discovery stage involves defining the scope to be evaluated - i.e. the focus of the analysis - and aims to identify possible adjustments necessary to the basic indicators.
…(Appendix A), in order to collect information about specific points related to the defined scope and to proceed with data collection for the indicators of capabilities in supply chain management processes. The Knowledge stage addresses the communication of the results obtained in the previous stage: the contextualization of the results and the communication of the recommendations for improvement. At this stage, knowledge is also unified in the organization about: a) What is a maturity model for supply chain process management?; b) Why assess the indicators of capabilities in supply chain management processes?; c) How can maturity models be applied?; and d) What can the organization learn from using the model? Figure 6 illustrates the stages of a maturity cycle, which are presented in more detail below, aiming to provide guidelines to organizations looking to reach continuous improvement in their supply chain management processes.

At the Discovery phase, the initial step in applying the SCPM3, the scope of the analysis is defined considering the breadth of the vision under different perspectives of the supply chain (internal, dyad or external). After the scope definition, it is necessary to identify the adjustments that may be necessary to the questionnaire (Appendix A), adding new complementary questions aiming to gather information specific to the previously delimited scope. Such adjustments should be made with caution and followed by key professionals in the organization who have a strategic view of the supply chain processes. The next step comprises data collection with 20 to 30 professionals with a broad view of the organization and its processes. After the data collection and the preliminary data analysis, it is recommended to conduct in-depth interviews with some professionals in order to capture business specificities within the scope.

The next step, Knowledge, aims to present the results of the research and the recommendations to the supply chain. It consists of four sequentially defined steps: 1. Align the concepts about the SCPM3. 2. Generate the preliminary evaluation of the results, based on the scores obtained on the indicators: what is the maturity level of the organization and which are the critical points to be developed and improved in order to reach a superior level. 3. Based on the data gathered, evaluate each group and identify the points that must be improved in each group of the model. 4. Compare each indicator with a benchmarking database for reference and present the results with recommendations for process improvement and effort prioritization.

At the next step, an implementation plan for the recommendations must be elaborated and implemented. In the end, the organization must be prepared to restart a new cycle and revise its processes to continuously improve. As a result of each cycle, the following deliverables are expected to be generated: a visual representation of the positioning of the organization on the SCPM3; scores for each group of the model; scores in each SCOR area (Plan, Source, Make and Deliver); benchmarking of each score against the reference database, identifying the major gaps, weaknesses and strengths; a list of recommendations and the potential benefits of each recommendation, prioritizing each action and considering cost reduction, inventory reduction, faster cycles and improvement of the service levels delivered to the company's customers; and an executive report summarizing each cycle.
Conclusions and recommendations
In recent years, a growing amount of research has been dedicated to investigating ways to provide the right information for the right people in order to develop supply chain capabilities and resources to competitively bring products and services to the market. Key literature on the concept of business process management suggests both that organizations can enhance their overall performance by adopting a process view of business and that business-process orientation (BPO) has a positive impact on business performance.
The concept of process maturity derives from the understanding that processes have life cycles or developmental stages that can be clearly defined, managed, measured and controlled throughout time. A higher level of maturity, in any business process, results in: (1) better control of the results; (2) more accurate forecasts of goals, costs and performance; (3) higher effectiveness in reaching defined goals and the management ability to propose new and higher targets for performance. In order to meet the performance levels desired by customers in terms of quantitative and qualitative flexibility of service in demand fulfillment, deadline consistency and reduction of lead times related to fulfilling orders, firms have developed repertoires of abilities and knowledge that are used in their organizational processes. In the past two decades, management of supply chain processes has evolved, also because of these new demands, from a departmental perspective, extremely functional and vertical, to an organic arrangement of integrated processes oriented to providing value to intermediate and final customers. This new pattern of logistical process management has led to the development and application of different maturity models and performance metrics useful as support tools to help define a strategy and face trade-offs, as well as to identify items that are considered critical to quality improvement of logistical services rendered to the client. The SCPM3 model is the first SCM process maturity model that uses rigorous statistical analysis to define maturity levels and the best practices present at each level. This model is based upon a global data set of hundreds of companies across many industries. Therefore, the model will more closely represent what is really occurring, rather than a preferred path to maturity represented by anecdotally developed models. This makes the SCPM3 broadly applicable as a benchmarking instrument. A company can complete the assessment using the indicators in Appendix A and use the resulting score to place itself on the maturity model. In this way, it can develop an action plan to improve process maturity, incorporating best practices only as they are relevant to reaching the next maturity level, thus avoiding getting ahead of itself and trying to implement best practices that do not have the precedence components in place. This will make the improvement efforts more effective and sustainable, leading to less time needed to achieve each maturity level.
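As a small worked example of the benchmarking use suggested above, the sketch below sums a company's answers to the Appendix A indicators and places it on the model using the published turning points. The assumption that each indicator is answered on a 1-5 scale (giving the 90-450 range implied by the level boundaries) and the example answers are illustrative, not part of the original instrument.

```python
# Hedged sketch of a self-assessment: sum the indicator responses and map the total
# to an SCPM3 level using the published turning points. The 1-5 response scale and
# the example answers are assumptions made for illustration.
LEVEL_BOUNDS = [(90, 202, 1), (203, 256, 2), (257, 302, 3), (303, 353, 4), (354, float("inf"), 5)]

def scpm3_level(total_score: float) -> int:
    """Return the SCPM3 maturity level for an overall assessment score."""
    for low, high, level in LEVEL_BOUNDS:
        if low <= total_score <= high:
            return level
    raise ValueError("score is below the minimum observed in the reference data set")

# Hypothetical answers: 90 indicators each rated 1 (never/no) to 5 (always/yes)
answers = [3] * 90
total = sum(answers)
print(f"total score = {total}, SCPM3 maturity level = {scpm3_level(total)}")
```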
Appendix A

Demand Management and Forecasting
Do your information systems currently support the Demand Management process? Do you analyze the variability of demand for your products? Do you have a documented demand forecasting process? Does this process use historical data in developing the forecast? Do you use mathematical methods (statistics) for demand forecasting? Does this process occur on a regular (scheduled) basis? Is a forecast developed for each product? Does your demand management process make use of customer information? Is the forecast updated weekly? Is the forecast credible or believable? Is the forecast used to develop plans and make commitments? Is forecast accuracy measured?
Distribution Network Management

Does your information system support Distribution Management? Are the network inter-relationships (variability, metrics) understood and documented? Are impacts of changes examined in enough detail before the changes are made? Do you use a mathematical "tool" to assist in distribution planning? Is the Distribution Management process integrated with the other supply chain decision processes (production planning and scheduling, demand management, etc.)? Does each node in the distribution network have inventory measures and controls? Do you use automatic replenishment in the distribution network? Are Distribution Management process measures in place? Are they used to recognize and reward the process participants?
Order Management
Do you maintain the capability to respond to unplanned, drop-in orders? Do your information systems currently support the order commitment process? Do you measure "out of stock" situations? Can rapid re-planning be done to respond to changes? Are the customers satisfied with the current on-time delivery performance? Do you measure customer "requests" versus actual delivery? Given a potential customer order, can you commit to a firm quantity and delivery date (based on actual conditions) on request? Are the projected delivery commitments given to customers credible (from the customer's view)?
Process Governance
Do you have a Promise Delivery (order commitment) "process owner"? Is a Distribution Management process owner identified? Do you have someone who "owns" the process? Is there an owner for the supply chain planning process? Is there an owner for the demand management process? Is a "process owner" identified?
Foundation Building
Are changes made in response to the loudest "screams"? Are deliveries expedited (manually "bypassing" the normal process)? Do you promise orders beyond what can be satisfied by current inventory levels? Is your order commitment process documented (written description, flow charts)?
Is your Distribution Management process documented (written description, flow charts)? Is your Procurement process documented (written description, flow charts)? Does your information system support this process? Are the supplier inter-relationships (variability, metrics) understood and documented? Do you have strategic suppliers for all products and services?
Responsiveness
Do you meet short-term customer demands from finished goods inventory? Are supplier lead times a major consideration in the planning process? Are supplier lead times updated monthly? Do you track the percentage of completed customer orders delivered on time?
Collaboratively Integrated Practices

Do the sales, manufacturing, distribution and planning organizations collaborate in the order commitment process? Are your demand management and production planning processes integrated? Do sales, manufacturing and distribution organizations collaborate in developing the forecast? Is your order commitment process integrated with your other supply chain decision processes? Do you automatically replenish a customer's inventory?
Customer Integration
Do you "build to order"? Do the sales, manufacturing and distribution organizations collaborate in the planning and scheduling process? Is your customer's planning and scheduling information included in yours? Are changes approved through a formal, documented approval process? Is a forecast developed for each customer?
The AraC-type Regulator RipA Represses Aconitase and Other Iron Proteins from Corynebacterium under Iron Limitation and Is Itself Repressed by DtxR
The mRNA level of the aconitase gene acn of Corynebacterium glutamicum is reduced under iron limitation. Here we show that an AraC-type regulator, termed RipA for “regulator of iron proteins A,” is involved in this type of regulation. A C. glutamicum ΔripA mutant has a 2-fold higher aconitase activity than the wild type under iron limitation, but not under iron excess. Comparison of the mRNA profiles of the ΔripA mutant and the wild type revealed that the acn mRNA level was increased in the ΔripA mutant under iron limitation, but not under iron excess, indicating a repressor function of RipA. Besides acn, some other genes showed increased mRNA levels in the ΔripA mutant under iron starvation (i.e. those encoding succinate dehydrogenase (sdhCAB), nitrate/nitrite transporter and nitrate reductase (narKGHJI), isopropylmalate dehydratase (leuCD), catechol 1,2-dioxygenase (catA), and phosphotransacetylase (pta)). Most of these proteins contain iron. Purified RipA binds to the upstream regions of all operons mentioned above and in addition to that of the catalase gene (katA). From 13 identified binding sites, the RipA consensus binding motif RRGCGN4RYGAC was deduced. Expression of ripA itself is repressed under iron excess by DtxR, since purified DtxR binds to a well conserved binding site upstream of ripA. Thus, repression of acn and the other target genes indicated above under iron limitation involves a regulatory cascade of two repressors, DtxR and its target RipA. The modulation of the intracellular iron usage by RipA supplements mechanisms for iron acquisition that are directly regulated by DtxR.
activity of aconitase (EC 4.2.1.3), which catalyzes the stereospecific and reversible isomerization of citrate to isocitrate via cis-aconitate, varies depending on the carbon source and that this is caused by transcriptional regulation (2). A repressor of the TetR family, called AcnR, was identified, which represses aconitase by binding to an imperfect inverted repeat within the acn promoter region and interfering with the binding of RNA polymerase (2). The factors that control binding of AcnR to its operator are not yet known. DNA microarray experiments revealed that acn expression is not only influenced by the carbon source but also by the iron concentration of the medium (2). Under iron limitation, the acn mRNA level in the wild type was 3-fold lower than under iron excess. In the ΔacnR mutant, this decrease was even larger (4.8-fold), presumably because the increased expression of aconitase, which contains a 4Fe-4S cluster, leads to an enhanced iron starvation.
We now have identified a new transcriptional regulator, designated RipA, which is responsible for iron-dependent regulation of aconitase and several other iron-containing proteins. Evidence is provided that RipA represses acn and six other target operons under iron limitation and is itself repressed under iron excess by the global iron repressor DtxR.
EXPERIMENTAL PROCEDURES
Bacterial Strains, Media, and Growth Conditions-All strains and plasmids used in this work are listed in supplemental Table S1. The C. glutamicum type strain ATCC13032 (3) was used as wild type. Strain ΔripA is a derivative containing an in-frame deletion of the ripA gene. For growth experiments, 5 ml of brain heart infusion medium (Difco) was inoculated with colonies from a fresh LB agar plate (4) and incubated for 6 h at 30°C. After washing, the cells of this first preculture were used to inoculate a 500-ml shake flask containing 50 ml of CGXII minimal medium (5) with 4% (w/v) glucose and either 1 μM FeSO4 (iron starvation) or 100 μM FeSO4 (iron excess). This second preculture was cultivated overnight at 30°C and then used to inoculate the main culture to an A600 of ~1. The main culture contained the same iron concentration as the second preculture. The trace element solution with iron salts omitted and the FeSO4 solution were always added after autoclaving. For growth of C. glutamicum strains carrying plasmid pJC1 or pJC1-ripA, the medium was supplemented with 25 μg/ml kanamycin. For all cloning purposes, Escherichia coli DH5α (Invitrogen) was used as host; for overproduction of RipA and DtxR, E. coli BL21(DE3) (6). The E. coli strains were cultivated aerobically in LB medium at 37°C (DH5α) or at 30°C (BL21(DE3)). When appropriate, kanamycin was added to a concentration of 50 μg/ml.
Recombinant DNA Work-The enzymes for recombinant DNA work were obtained from Roche Applied Science or New England Biolabs (Frankfurt, Germany). The oligonucleotides used in this study were obtained from Operon (Cologne, Germany) and are listed in supplemental Table S2. Routine methods like PCR, restriction, or ligation were carried out according to standard protocols (4). Chromosomal DNA from C. glutamicum was prepared as described (7). Plasmids from E. coli were isolated with the QIAprep spin miniprep kit (Qiagen, Hilden, Germany). E. coli was transformed by the RbCl method (8), and C. glutamicum was transformed by electroporation (9). DNA sequencing was performed with a Genetic Analyzer 3100-Avant (Applied Biosystems, Darmstadt, Germany). Sequencing reactions were carried out with the Thermo Sequenase primer cycle sequencing kit (Amersham Biosciences).
An in-frame ripA deletion mutant of C. glutamicum was constructed via a two-step homologous recombination procedure as described previously (10). The ripA up- and downstream regions (~500 bp each) were amplified using the oligonucleotide pairs orf1558-A-for/orf1558-B-rev and orf1558-C-for/orf1558-D-rev, respectively, and the products served as template for cross-over PCR with oligonucleotides orf1558-A-for and orf1558-D-rev. The resulting PCR product of ~1 kb was digested with EcoRI and HindIII and cloned into pK19mobsacB (11). DNA sequence analysis confirmed that the cloned PCR product did not contain spurious mutations. Transfer of the resulting plasmid pK19mobsacB-ΔripA into C. glutamicum and screening for the first and second recombination event were performed as described previously (10). Kanamycin-sensitive and saccharose-resistant clones were tested by PCR analysis of chromosomal DNA with the primer pair orf1558-amp-for/orf1558-amp-rev (supplemental Table S2). Of 10 clones tested, five showed the wild-type situation (2.0-kb fragment) and five had the desired in-frame deletion of the ripA gene (1.1-kb fragment), in which all nucleotides except for the first six codons and the last 12 codons were replaced by a 21-bp tag.
In order to complement the ΔripA mutant, the ripA coding region and 250 bp of upstream DNA containing the promoter region were amplified using oligonucleotides (ripA+250-for (2) and ripA+250-rev) introducing a SalI and a PstI restriction site, respectively. The resulting 1245-bp PCR product was cloned into the vector pJC1 (12). The resulting plasmid pJC1-ripA and pJC1 were used to transform C. glutamicum wild type and the ΔripA strain.
For overproduction and purification of RipA with an N-terminal StrepTag-II (13), the ripA coding region was amplified using oligonucleotides that introduce an NdeI restriction site, including the start codon (ripA-2-for) and an XhoI restriction site after the stop codon (ripA-2-rev). The purified PCR product was cloned in the modified expression vector pET28b-Streptag (14), resulting in plasmid pET28b-Streptag-ripA. The RipA protein encoded by this plasmid contains 14 additional amino acids (MASWSHPQFEKGAH) at the amino terminus. For overproduction and purification of DtxR, the dtxR coding region (equivalent to NCgl1845) was amplified using oligonucleotides that introduced an NdeI restriction site at the translation initiation codon (dtxR-for-1) and four histidine codons plus an XhoI restriction site before the stop codon (dtxR-rev-1). The PCR product was cloned into the pET24b vector, resulting in plasmid pET24b-dtxR-C. The DtxR protein encoded by this plasmid contains 12 additional amino acids at the carboxyl terminus (HHHHLEHHHHHH). The PCR-derived portions of pET28b-Streptag-ripA and pET24b-dtxR-C were analyzed by DNA sequence analysis and found to contain no spurious mutations. For overproduction of RipA and DtxR, the two plasmids were transferred into E. coli BL21(DE3).
Preparation of Total RNA-Cultures of the wild type and the ΔripA mutant were grown in CGXII minimal medium containing 4% (w/v) glucose under iron limitation (1 μM FeSO4) or iron excess (100 μM FeSO4). In the exponential growth phase at an A600 of 4-6, 25 ml of the cultures were used for the preparation of total RNA as described previously (15). Isolated RNA samples were analyzed for quantity and quality by UV spectrophotometry and denaturing formaldehyde-agarose gel electrophoresis (4), respectively, and stored at −70°C until use.
DNA Microarray Analyses-The generation of whole-genome DNA microarrays (16), synthesis of fluorescently labeled cDNA from total RNA, microarray hybridization, washing, and data analysis were performed as described previously (2, 17-19). Genes that exhibited significantly changed mRNA levels (p < 0.05 in a Student's t test) by at least a factor of 1.7 were determined in two series of DNA microarray experiments: (i) five comparisons of the wild type and the ΔripA mutant cultivated in CGXII minimal medium with 4% (w/v) glucose under iron limitation (1 μM FeSO4); (ii) two comparisons of the wild type and the ΔripA mutant cultivated in CGXII-glucose medium under iron excess (100 μM FeSO4).
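For readers who want to reproduce the cut-off logic on their own data, the sketch below flags genes whose mean mutant/wild-type mRNA ratio changes by at least a factor of 1.7 with p < 0.05 across replicate arrays, applying a one-sample t test to the log-ratios. The gene names and ratios are invented; the authors' actual analysis pipeline is the one cited in the text.

```python
import math
from statistics import mean
from scipy import stats

# Invented per-gene mRNA ratios (ΔripA mutant / wild type), five replicates.
ratios = {
    "acn":      [2.1, 1.9, 2.4, 2.0, 2.2],
    "catA":     [3.0, 2.6, 3.4, 2.9, 3.1],
    "NCgl0018": [1.1, 0.9, 1.2, 1.0, 1.1],
}

def significantly_changed(ratios, fold=1.7, alpha=0.05):
    """Genes with mean ratio >= fold (or <= 1/fold) and p < alpha.

    The t test is run on log-ratios against 0, which corresponds to the
    null hypothesis of an unchanged ratio of 1.
    """
    hits = {}
    for gene, values in ratios.items():
        m = mean(values)
        _, p = stats.ttest_1samp([math.log(v) for v in values], 0.0)
        if (m >= fold or m <= 1 / fold) and p < alpha:
            hits[gene] = (round(m, 2), float(p))
    return hits

print(sorted(significantly_changed(ratios)))  # -> ['acn', 'catA']
```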
Aconitase Assay-Aconitase activity was determined as the rate of cis-aconitate formation from isocitrate (20), as described previously (2), except that the assay was performed at 30°C. Cells of the 20-ml main culture were harvested by centrifugation at 5,000 × g for 10 min and 4°C. The cell pellet was resuspended in 90 mM Tris/HCl, pH 8.0, and used for the preparation of cell extract. The assay mixture contained 950-995 μl of 90 mM Tris/HCl, pH 8.0, and 20 mM DL-trisodium isocitrate. The reaction was started with the addition of 5-50 μl of cell extract, and the formation of cis-aconitate was followed by measuring the absorbance increase at 240 nm using a Jasco V560 spectrophotometer. An extinction coefficient for cis-aconitate of 3.6 mM⁻¹ cm⁻¹ at 240 nm was used. One unit of activity corresponds to 1 μmol of isocitrate converted to cis-aconitate per min.
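As a sanity check on the unit definition above, the short function below converts an observed absorbance slope at 240 nm into aconitase units using the stated extinction coefficient. The slope, path length, and volumes in the example are illustrative values, not data from the paper.

```python
def aconitase_activity(dA240_per_min, assay_volume_ml=1.0,
                       extract_volume_ml=0.02, path_cm=1.0,
                       epsilon_per_mM_cm=3.6):
    """Aconitase activity in U per ml of extract (1 U = 1 µmol/min).

    dA/min divided by (epsilon * path) gives mM cis-aconitate formed per
    minute; multiplied by the assay volume in ml this is µmol per minute.
    """
    umol_per_min = dA240_per_min / (epsilon_per_mM_cm * path_cm) * assay_volume_ml
    return umol_per_min / extract_volume_ml

# Example: ΔA240 = 0.036/min with 20 µl extract in a 1-ml assay -> 0.5 U/ml.
print(aconitase_activity(0.036))
```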
Overproduction and Purification of RipA-E. coli BL21(DE3) carrying the plasmid pET28b-strep-ripA was grown at 30°C in 200 ml of LB medium with 50 μg/ml kanamycin to an A600 of ~1.2 before adding 1 mM isopropyl β-D-thiogalactoside. After cultivation for another 4 h, cells were harvested by centrifugation, washed once, and stored at −20°C. For cell extract preparation, thawed cells were resuspended in 10 ml of buffer W (100 mM Tris/HCl, pH 8.0, 150 mM NaCl). After the addition of 1 mM diisopropylfluorophosphate and 1 mM phenylmethylsulfonyl fluoride, the cell suspension was passed three times through a French pressure cell (SLM Aminco, Spectronic Instruments, Rochester, NY) at 207 megapascals. Intact cells and cell debris were removed by centrifugation (15 min, 5,000 × g, 4°C), and the cell-free extract was subjected to ultracentrifugation (1 h, 150,000 × g, 4°C). The supernatant obtained after ultracentrifugation was applied to a StrepTactin-Sepharose column with a bed volume of 1 ml (IBA, Göttingen, Germany). The column was washed with 6 ml of buffer W, and RipA tagged with StrepTag-II was eluted with 8 × 0.5 ml of buffer W containing 7.5 mM desthiobiotin (Sigma). Fractions containing RipA were pooled, and the buffer was exchanged against TG buffer (30 mM Tris/HCl, pH 7.5, 10% (v/v) glycerin) using Vivaspin concentrators with a cut-off of 10 kDa. Protein concentrations were determined with the BCA protein assay kit (Pierce) using bovine serum albumin as a standard. The purity of the protein preparations was assessed by SDS-PAGE and subsequent protein detection with Gel Code blue stain reagent (Pierce). Using this protocol, ~0.2 mg of RipA protein was purified to apparent homogeneity from 200 ml of culture.
Overproduction and Purification of DtxR-E. coli BL21(DE3) carrying the plasmid pET24b-dtxR was grown at 30°C in 100 ml of LB with 50 μg/ml kanamycin. Expression was induced at an A600 of ~0.3 with 1 mM isopropyl β-D-thiogalactoside. Four hours after induction, cells were harvested by centrifugation and stored at −20°C. For cell extract preparation, thawed cells were washed once and resuspended in 10 ml of TNGI5 buffer (20 mM Tris/HCl, pH 7.9, 300 mM NaCl, 5% (v/v) glycerol, 5 mM imidazole) containing 1 mM diisopropylfluorophosphate and 1 mM phenylmethylsulfonyl fluoride. Disruption of the cells and fractionation by centrifugation were performed as described above for RipA purification. DtxR present in the supernatant of the ultracentrifugation step was purified by nickel affinity chromatography using nickel-activated nitrilotriacetic acid-agarose (Novagen). After washing the column with TNGI50 buffer (which contains 50 mM imidazole), DtxR protein was eluted with TNGI100 buffer (which contains 100 mM imidazole). Fractions containing DtxR were pooled, and the elution buffer was exchanged against TG buffer (30 mM Tris/HCl, pH 7.5, 10% (v/v) glycerin). From 100 ml of culture, ~3 mg of DtxR was purified to apparent homogeneity.
Gel Shift Assays-For band shift assays of RipA with putative target promoters, purified RipA protein was mixed with DNA fragments (200-630 bp, final concentration 8-13 nM) in a total volume of 20 μl. The binding buffer contained 20 mM Tris/HCl, pH 7.5, 0.5 mM EDTA, 5% (v/v) glycerol, 1 mM dithiothreitol, 0.005% (v/v) Triton X-100, 50 mM NaCl, 5 mM MgCl2, and 2.5 mM CaCl2. Approximately 20 nM of different nontarget promoter fragments (clpC, clpP, ripA, and porB) were added as a negative control. After incubation for 30 min at room temperature, the samples were separated on a 10% native polyacrylamide gel at room temperature and 170 V using 1× TBE (89 mM Tris base, 89 mM boric acid, 2 mM EDTA) as electrophoresis buffer. The gels were subsequently stained with Sybr Green I according to the instructions of the supplier (Sigma) and photographed.
Binding of DtxR to the ripA promoter was carried out in a 20-μl reaction mixture containing 100 mM Tris/HCl (pH 7.5), 5 mM MgCl2, 40 mM KCl, 10% (v/v) glycerol, 1 mM dithiothreitol, 150 μM MnCl2, an 18 nM concentration of a 300-bp ripA promoter DNA fragment, and DtxR in concentrations ranging from 0 to 3.6 μM. The ripA fragment covered the region from position −230 to +70 relative to the translation start and was obtained by PCR with primers ripA-Prom-for and ripA-Prom-rev. As a negative control, a 23 nM concentration of a 200-bp acn promoter fragment extending from position +190 to −50 relative to the acn transcription start site (2) was added. This fragment was amplified with primers acn-Prom4-for and acn-Prom4-rev. The reaction mixture was incubated at room temperature for 30 min and then loaded onto a 10% native polyacrylamide gel containing 1 mM dithiothreitol and 150 μM MnCl2. Electrophoresis was performed at room temperature and 170 V using 1× TB (89 mM Tris base, 89 mM boric acid) supplemented with 1 mM dithiothreitol and 150 μM MnCl2 as electrophoresis buffer. All PCR products used in the gel shift assays were purified with the PCR purification kit (Qiagen, Hilden, Germany) and eluted in EB buffer (10 mM Tris/HCl, pH 8.5).
RESULTS
Identification of RipA as a Potential Iron-dependent Regulator of the Aconitase Gene-In a previous study, we showed that expression of the aconitase gene acn of C. glutamicum is influenced by the iron availability, being reduced under iron limitation (2). This regulation also occurred in a mutant lacking AcnR, a repressor of the acn gene, and thus must be mediated by a different regulator or regulatory mechanism. A candidate gene that might be responsible for iron-dependent regulation of acn was identified in the DNA microarray experiments used to compare the gene expression profile of C. glutamicum under iron excess and iron limitation. Expression of the gene NCgl0943 was strongly influenced by the iron availability (2). Its mRNA level was always found to be increased under iron-limiting conditions, and it thus behaved like typical iron starvation genes. The protein derived from NCgl0943 is composed of 331 amino acid residues (36.044 kDa) and contains a DNA binding domain of the AraC family (PF00165 in the PFAM data base (21), PS01124 in the PROSITE data base (22)) with two helix-turn-helix motifs extending from position 113 to 159 and from position 165 to 208. It is flanked by amino- and carboxyl-terminal domains of 112 and 123 residues, respectively, which show no significant sequence similarity to other proteins. Based on the results described below, the NCgl0943 gene was designated ripA (repressor of iron proteins A).
In order to test an involvement of the RipA protein in acn regulation, a ripA deletion mutant of C. glutamicum was constructed. In a first set of experiments, the growth behavior of the ΔripA mutant was tested. As shown in Fig. 1A, no differences were observed between wild type and mutant cultivated in glucose minimal medium containing excess iron (100 μM). However, under iron-limiting conditions (1 μM), the ripA mutant grew initially like the wild type, but after an A600 of about 5, the growth rate of the mutant decreased more strongly than that of the wild type. The final cell density of the mutant (A600 of 20) was only half that of the wild type (A600 = 40). Thus, the ΔripA mutant has a growth defect under iron limitation but not under iron excess. As shown in Fig. 1C, this growth defect could be reversed by transformation with a plasmid carrying the ripA gene with its native promoter region (pJC1-ripA), but not with pJC1 alone (Fig. 1B).
In a second set of experiments, aconitase activity was determined in wild-type and ΔripA cells from cultures grown under iron excess and iron limitation. As shown in Fig. 2, the aconitase activity of the two strains was nearly identical under iron excess, whereas under iron limitation, the ΔripA mutant had a 1.5-2-fold higher activity than the wild type at four different time points. Thus, the absence of ripA might result in an increased expression of the acn gene under iron limitation, but not under iron excess.
Comparison of the Expression Profiles of ΔripA Mutant and Wild Type with DNA Chips-In order to determine the effects of RipA on acn expression as well as on global gene expression, whole genome DNA microarrays of C. glutamicum (16) were used to compare the mRNA ratios of the ΔripA mutant and the wild type under iron limitation and iron excess. Under iron starvation (1 μM iron), nine genes showed a >1.7-fold higher mRNA level in the ΔripA mutant (TABLE ONE). This group included the aconitase gene acn, supporting the assumption made above that increased acn expression is responsible for the elevated aconitase activity in the ΔripA mutant under iron limitation. Besides acn, catA (catechol 1,2-dioxygenase), leuCD (isopropylmalate dehydratase), narKGHJI (nitrate/nitrite transporter and nitrate reductase), sdhCAB (succinate dehydrogenase), and pta (phosphotransacetylase) showed higher mRNA levels in the ΔripA mutant compared with the wild type. The mRNA level of the ackA gene for acetate kinase, which is co-transcribed with pta (23), was slightly increased in the ΔripA mutant but below the cut-off used. Except for the transporter NarK, phosphotransacetylase, and acetate kinase, the enzymes encoded by these genes are known to contain iron, mostly in the form of iron-sulfur clusters (aconitase, isopropylmalate dehydratase, nitrate reductase, succinate dehydrogenase) and/or heme (nitrate reductase, succinate dehydrogenase) (24). Remarkably, the mRNA level of the genes mentioned above was changed only under iron limitation but not under iron excess (TABLE ONE).
Besides ripA, seven other genes showed a >1.7-fold decreased mRNA level in the ΔripA mutant under iron limitation (TABLE ONE). There is no obvious common property of these genes; however, dps (starvation-induced DNA protection protein) and ftn (ferritin) are critically involved in iron homoeostasis (25, 26). In contrast to the genes with an increased mRNA level in the ΔripA mutant, dps and ftn had decreased mRNA levels not only under iron limitation but also under iron excess.
[TABLE ONE footnotes, partially recovered: a … (1 μM FeSO4). The genes leuC, sdhB, narK, and narH show an average mRNA ratio below 1.7 but were included, since they are organized in operons with genes (leuD, sdhCA, or narKGJI) having an mRNA ratio above 1.7. b This column provides the mRNA ratio (ΔripA mutant/wild type) of the genes under iron excess conditions. It represents the average of two DNA microarray experiments performed with RNA isolated from two independent cultivations in CGXII minimal medium under iron excess (100 μM FeSO4).]

Binding of Purified RipA Protein to the acn Promoter-In order to test whether the influence of RipA on acn expression is direct, binding of RipA to the acn promoter was analyzed. For that purpose, the RipA protein containing an amino-terminal StrepTag-II was overproduced in E. coli and purified to apparent homogeneity by affinity chromatography (Fig. 3). Gel shift assays showed that the RipA protein bound with high affinity to fragment 1 covering the entire acn promoter region, whereas a control fragment covering the promoter of the porB gene encoding an anion channel (27) was not shifted (Fig. 4). A RipA-fragment 1 complex was already observed at a 5-fold molar excess of RipA. At a 10-fold excess, two RipA-fragment 1 complexes were observed, and at a 30-fold excess, only the second RipA-DNA complex was observed, suggesting the presence of two binding sites. Gel shift assays with 10 different subfragments (Fig. 4) clearly confirmed the presence of two distinct binding sites, extending from position −212 to −194 (binding site A) and from −155 to −137 (binding site B) relative to the transcription start site of acn determined previously (2). Fragments lacking these regions (e.g. fragment 2) were not shifted, fragments containing one of the two regions formed a single RipA-DNA complex (e.g. fragment 7), and fragments containing both regions (e.g. fragment 8) formed two RipA-DNA complexes (Fig. 4). Inspection of the two regions revealed that they contained a similar sequence motif but in opposite orientation (Fig. 5). The relevance of this motif was tested by mutational analysis, in which three or four nucleotides were exchanged simultaneously. As shown in Fig. 5, all mutations within the proposed motif prevented RipA binding, whereas the mutations outside did not inhibit binding. These results confirmed the importance of the sequence G(A/T)GCGN6GAC for RipA binding.
Binding of Purified RipA Protein to Additional Target Promoters-As a result of the DNA microarray experiments, the operons catA, leuCD, narKGHJI, sdhCAB, and pta-ack were identified as further putative target genes of RipA, since their mRNA level was also increased in the ΔripA mutant. We therefore tested the binding of RipA to the corresponding promoter regions. As shown in Fig. 6, all five promoter fragments were shifted by RipA at a molar excess (protein/DNA) of 5-10, and in all cases, two RipA-DNA complexes were formed. This indicates that there are two RipA binding sites in the corresponding promoter regions, as shown above for the acn promoter. Since expression of the katA gene encoding the hemoprotein catalase was also altered in some of the DNA microarray experiments, the katA promoter region was also tested for RipA binding and shown to contain two RipA binding sites with affinities comparable with those described above. In addition, a third binding site of lower affinity was detected (Fig. 6). Binding of RipA was also tested with the promoter regions of ripA, dps, and ftn. In the case of ripA and dps, no shift was observed, suggesting that there is no autoregulation of ripA and no direct control of dps expression by RipA. In the case of ftn, a weak binding was observed, with about 30% of the ftn fragment shifted at a 100-fold molar excess of RipA (data not shown). For the other RipA targets, a complete shift was observed at a 10-30-fold molar RipA excess. Thus, the affinity of RipA to the ftn promoter appears to be much lower. If RipA directly influences ftn expression, it should act as an activator, since the ftn mRNA level was decreased in the ΔripA mutant. Considering that induction of an iron storage protein under iron limitation appears counterproductive, the role of RipA in ftn expression is not yet clear.
As described above for acn, the RipA binding sites within the sdhCAB promoter were narrowed down with six different subfragments (data not shown). In this way, binding site A was shown to be located in the region between −180 and −90 relative to the sdhC start codon and binding site B between −90 and +12. Inspection of these regions revealed sequence motifs similar to the ones identified in the acn promoter. The relevance of these sites for RipA binding was again confirmed by mutational analysis (supplemental Fig. S1). Based on the four RipA binding sites identified upstream of acn and sdhC, the other RipA target promoters were searched for motifs similar to G(A/T)GCGN5(T/C)GAC, and the relevance of putative motifs was subsequently tested by changing three adjacent nucleotides within the motif. In this way, two binding sites were identified upstream of narK and pta, three upstream of katA, and one upstream of leuC and catA. Fig. 7 gives an overview of all identified RipA binding sites, their position relative to the respective start codon, and their orientation. From the alignment of the 13 binding sites, the RipA consensus motif RRGCGN4RYGAC was derived. Of the 13 motifs, two were present in inverted orientation (acn-B and pta-A). The distance between neighboring RipA binding sites varied between 57 and 339 bp.

[Fig. 4 legend, partially recovered: … Table S2 (acn-for-3 to acn-Prom10-for). At the right, it is indicated whether the fragment was shifted once (+), twice (++), or not at all (−). The boxes labeled A and B indicate the regions that were identified to contain RipA binding sites. B, gel showing binding of purified RipA (5-50-fold molar excess) to fragment 1 (8.5 nM of a 632-bp fragment). A 485-bp porB promoter fragment (11 nM) served as negative control. The DNA-protein mixture was incubated for 30 min at room temperature before separation by native polyacrylamide gel electrophoresis (10%) and staining with SybrGreen I. C, gel showing binding of RipA (5- and 10-fold molar excess) to fragment 2 (no binding site), fragment 7 (one binding site), and fragment 8 (two binding sites).]
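Because the consensus motif uses IUPAC ambiguity codes (R = A or G, Y = C or T, N = any base), RRGCGN4RYGAC translates directly into a regular expression. The sketch below scans a sequence and its reverse complement for consensus hits, mirroring the fact that two of the 13 motifs occur in inverted orientation; the short input sequence is invented for illustration.

```python
import re

# RipA consensus RRGCGN4RYGAC with IUPAC codes expanded.
MOTIF = re.compile(r"[AG][AG]GCG[ACGT]{4}[AG][CT]GAC")
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq):
    return seq.translate(COMPLEMENT)[::-1]

def find_ripa_sites(seq):
    """Yield (top-strand position, strand, sequence) for hits on both strands."""
    seq = seq.upper()
    for m in MOTIF.finditer(seq):
        yield (m.start(), "+", m.group())
    for m in MOTIF.finditer(revcomp(seq)):
        yield (len(seq) - m.end(), "-", m.group())  # map back to top strand

# Invented promoter fragment with one forward-strand consensus site.
promoter = "TTATCAGGCGTTTAGCGACGGATTC"
print(list(find_ripa_sites(promoter)))  # -> [(5, '+', 'AGGCGTTTAGCGAC')]
```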
Regulation of ripA Expression by DtxR-As shown previously (2), expression of ripA followed the same pattern as that of typical iron acquisition genes (i.e. its mRNA level was always increased under iron-limiting conditions). In Corynebacterium diphtheriae, DtxR in complex with iron represses expression of the iron starvation proteins under iron excess but is inactivated under iron limitation (28). C. glutamicum contains a protein with 72% sequence identity to C. diphtheriae DtxR (encoded by NCgl1845), and the C. glutamicum homolog was previously shown to repress the tox promoter from C. diphtheriae in an iron-dependent manner (29). It was therefore tempting to speculate that expression of ripA is repressed under iron excess by DtxR and derepressed under iron starvation. A 19-bp consensus operator of DtxR from C. diphtheriae has been defined as TWAGGTTAGSCTAACCTWA (30). Inspection of the C. glutamicum ripA promoter region revealed a sequence motif (i.e. TGAGGTTAGCGTAACCTAC) that differs in only three positions from the consensus binding site and ends 32 bp upstream of the ripA start codon. In order to test whether this motif is a DtxR binding site, the DtxR protein from C. glutamicum was overproduced in E. coli and isolated by means of a carboxyl-terminal histidine tag (Fig. 2). Gel shift analysis revealed that the purified DtxR protein bound to the ripA promoter region (Fig. 8). A partial shift was observed at a 20-fold molar excess (protein/DNA), whereas a 100-fold molar excess was required for a complete shift. Binding of DtxR to the ripA promoter was strictly dependent on the presence of divalent cations (e.g. Mn2+) (data not shown). As a negative control, the promoter region of acn was used, which was not shifted. These results clearly support a regulation of ripA expression by DtxR.

[Fig. 5 legend, partially recovered: … (1, 2, and 3) and outside (4 and 5) the proposed RipA binding sites A and B are listed below the wild-type sequence. Fragments containing these mutations were obtained with the primer pairs acn-A.1/acn-for-3 to acn-A.5/acn-for-3 and acn-B.1/acn-Prom4-rev to acn-B.5/acn-Prom4-rev (see supplemental Table S2). C, gel showing binding of RipA to the mutated DNA fragments. Approximately 30 nM of fragments A1-A5 and B1-B5 were incubated for 30 min at room temperature either without RipA (lanes labeled with a minus sign) or with 1.2 μM of purified RipA protein (lanes labeled with a plus sign). Subsequently, the samples were separated on a 10% nondenaturing polyacrylamide gel, and the gels were stained with SybrGreen I.]
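The claim that the putative site upstream of ripA "differs in only three positions" from the consensus is a mismatch count against the degenerate 19-bp operator. The sketch below makes that comparison explicit, treating the IUPAC codes W (A/T) and S (G/C) as matching either allowed base; it is a convenience reimplementation, not the authors' software.

```python
# DtxR consensus operator from C. diphtheriae (30), with IUPAC codes.
CONSENSUS = "TWAGGTTAGSCTAACCTWA"
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T", "W": "AT", "S": "GC"}

def mismatches(site, consensus=CONSENSUS):
    """Count positions where the site violates the degenerate consensus."""
    if len(site) != len(consensus):
        raise ValueError("site and consensus must be the same length")
    return sum(base not in IUPAC[c] for base, c in zip(site.upper(), consensus))

# Putative DtxR binding site upstream of C. glutamicum ripA (from the text).
print(mismatches("TGAGGTTAGCGTAACCTAC"))  # -> 3
```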
DISCUSSION
Iron is a critical element for bacteria, being essential as a co-factor in a multitude of enzymes, poorly soluble, and dangerous by catalyzing the formation of reactive oxygen species (25). Therefore, most cells have sophisticated regulatory systems to ensure a sufficient supply of iron but to avoid high levels of free Fe2+, the form responsible for hydroxyl radical production via the Fenton reaction (31). In many Gram-negative and low-GC Gram-positive bacteria, the Fur protein is the central regulator of iron regulation (32, 33), whereas in many high-GC Gram-positive genera (e.g. Corynebacterium, Mycobacterium, Rhodococcus, or Streptomyces), DtxR and homologous proteins are the key regulators in iron metabolism (28, 34). Under iron excess, DtxR in complex with its co-repressor Fe2+ represses its target genes, in particular uptake systems for iron siderophores, heme, or other iron sources. When iron becomes limiting, Fe2+ dissociates from DtxR, and apo-DtxR dissociates from its target promoters. The DtxR protein was first identified in C. diphtheriae, where it regulates the expression of the diphtheria toxin gene carried by corynebacteriophage β. In this work, we have unraveled a completely new aspect of DtxR (i.e. its influence on the expression of several prominent iron-containing proteins via the AraC-type regulator RipA).
The involvement of RipA in iron-dependent regulation was suggested by recent microarray experiments in which the ripA mRNA level was always found to be increased under iron limitation, similar to a multitude of iron acquisition genes (2). In our present study, transcriptome comparisons of a ΔripA mutant and the wild type revealed seven operons whose mRNA level was increased in the ΔripA mutant under iron limitation, but not under iron excess (i.e. those encoding aconitase (acn), isopropylmalate dehydratase (leuCD), succinate dehydrogenase (sdhCAB), nitrate/nitrite transporter and nitrate reductase (narKGHJI), catechol 1,2-dioxygenase (catA), phosphotransacetylase (pta), and catalase (katA)). The hypothesis that RipA functions as a repressor of these operons was supported by gel shift assays showing that purified RipA binds to the seven corresponding promoter upstream regions. In all cases, at least two RipA-DNA complexes of distinct mobility were identified in the gel shift experiments, suggesting the presence of at least two RipA binding sites. Using subfragments and mutational analysis, the binding sites upstream of acn and sdhCAB were identified and used to search for similar sequences in the other target promoters. Subsequently, mutational analysis led to the identification of three binding sites upstream of katA and of two binding sites upstream of narKGHJI and pta, whereas in the case of catA and leuCD only one of the binding sites could be identified up to now. Alignment of the corresponding sequences revealed a minimal consensus sequence of the type RRGCGN4RYGAC. AraC-type regulators (35, 36) contain two adjacent helix-turn-helix (HTH) motifs, which in the case of MarA insert in two adjacent segments of the major groove of the mar promoter (37). Thus, one might speculate that one HTH motif of RipA interacts with the conserved RRGCG motif and the adjacent HTH with the RYGAC motif.
Whereas the vast majority of AraC-type regulators investigated to date function as transcriptional activators (35, 36), the results presented here indicate that RipA predominantly acts as a transcriptional repressor. Repression is usually accomplished by binding of the regulator between the −35 and −10 regions of the promoter and blocking access of RNA polymerase. From the RipA target operons identified in this work, transcriptional start sites have been determined for acn (located 113 bp (TS2) and 110 bp (TS1) upstream of the start codon (2)) and for pta (located 158 bp (TS2) and 46 bp (TS1) upstream of the initiation codon (38)). In the case of acn, the two identified RipA binding sites are centered at −203.5 and −146.5 with respect to TS2. Since these sites are far upstream of the RNA polymerase binding site, the question arises of how RipA represses acn expression. One possibility is the presence of one or more additional binding sites that we have not yet identified. A weak third RipA-acn complex that was observed at high RipA concentrations (Fig. 4B) supports this suggestion. A promising RipA binding motif is located immediately downstream of the acn start codon (GAGCTCACTGTGAC). However, fragment 4 in Fig. 4A, which contains this motif, was not shifted by RipA, at least under the conditions used in the experiment. Possibly, this site can only be occupied after previous binding to one of the other sites or under different conditions. Another possibility could be that an additional protein is involved whose binding is influenced by the presence of RipA. In the case of pta, the identified RipA binding sites are centered at −111.5 and +156.5 with respect to TS2, with the second binding site overlapping the pta start codon. In this case, direct inhibition of transcription by RipA can be envisaged. For narKGHJI, the RipA binding sites are centered at −149.5 and −2.5 with respect to the narK start codon. As in the case of pta, the second site very likely interferes with transcription of the nar operon. In the case of sdhCAB, katA, catA, and leuCD, no predictions can be made on the mechanism of repression yet. The presence of at least two binding sites in each RipA target promoter and the large and varying distances between these binding sites might suggest that DNA looping is involved in the mechanism of action of RipA, as reported, for example, for the AraC-type regulators AraC (39) and MelR (40).

[Footnote 3: The abbreviation used is: HTH, helix-turn-helix.]

[Fig. 7 legend, partially recovered: … plus and minus signs. The designations A, B, and C of the binding sites were assigned according to the distance to the translation start site, with the A sites located most distantly. In the derived consensus sequence, single residues are indicated when they occur in at least 10 binding sites. The first two and the last two bases shown are probably not essential for binding, since mutation of these sites did not inhibit RipA binding in the case of acn-A, acn-B, sdhC-A and sdh-B. The bases interfering with RipA binding in the case of the acn and sdhC regions are shown in Figs. 5 and S1, respectively. The relevance of the other binding sites was confirmed by showing that mutation of three consecutive bases inhibited binding of RipA to the fragment containing the proposed binding site. The bases mutated were GAC for narK-A, GGC for narK-B, GCG for pta-A, GTC for pta-B, GCG for katA-A, GAG for katA-B, GCG for katA-C, GCG for leuC-A, and GCG for catA-A.]
Except for the nitrate/nitrite transporter NarK and presumably phosphotransacetylase, all of the enzymes repressed by RipA contain iron; aconitase and isopropylmalate dehydratase possess one iron-sulfur cluster, succinate dehydrogenase probably harbors three iron-sulfur clusters and two hemes (24), nitrate reductase presumably contains four iron-sulfur clusters and two hemes (24), catalase contains heme, and catechol 1,2-dioxygenase contains one non-heme iron (41). Therefore, it is natural to assume that a major function of RipA is to reduce the synthesis of iron proteins under iron-limiting conditions, thus reducing the cell's iron demand and preventing the formation of inactive apoenzymes lacking iron. In agreement with such a function, the mRNA levels of acn, leuCD, sdhCAB, narKGHJI, catA, pta, and katA were decreased under iron limitation compared with iron excess, both in the wild type and in the ΔacnR mutant (2). Whereas repression of the non-iron protein NarK can be explained by co-transcription with the nitrate reductase structural genes, the inclusion of phosphotransacetylase in the RipA regulon must have other reasons. A dependence on iron has only been described for the enzyme of Clostridium acidiurici (42), whereas phosphotransacetylase from other species apparently does not require iron. Phosphotransacetylase catalyzes the reversible conversion of acetylphosphate and acetyl-CoA and, in concert with acetate kinase, is involved in the catabolism of acetate (38) as well as in the formation of acetate from acetyl-CoA. C. glutamicum, in contrast to E. coli, usually does not form acetate as a product of aerobic overflow metabolism, and therefore the primary function of phosphotransacetylase in this species appears to be in acetate utilization. Since acetate catabolism involves a 2-3-fold higher carbon flux through the citric acid cycle compared with growth on glucose (43), repression of pta by RipA may serve to reduce acetate utilization under iron limitation and thus to prevent an increased citric acid cycle flux, which cannot be maintained if aconitase and succinate dehydrogenase are repressed at the same time.
A regulatory cascade with an analogous function to that of DtxR and RipA in Corynebacterium is found in E. coli. Here the role of DtxR is fulfilled by Fur, whereas the small RNA RyhB plays a function similar to that of RipA (44). Expression of ryhB is repressed by Fur under iron excess and increases under iron limitation. RyhB acts as an antisense RNA and inhibits translation of the mRNAs encoding succinate dehydrogenase (sdhCDAB), aconitase A (acnA), fumarase A (fumA), ferritin (ftnA), bacterioferritin (bfr), and superoxide dismutase B (sodB). Although the spectrum of target genes regulated by RipA and RyhB only partially overlaps, it is remarkable that both involve the iron-containing proteins of the citric acid cycle (the only fumarase of C. glutamicum belongs to the type II fumarases and does not contain iron).
A search for the distribution of RipA revealed that homologous proteins are only present in Corynebacterium efficiens (CE1047; 70.1% sequence identity) and C. diphtheriae (DIP0922; 51.5% sequence identity), but not in Corynebacterium jeikeium (45) and other high-GC Gram-positives (e.g. the genera Mycobacterium or Streptomyces). The C. efficiens ripA gene, as annotated in the genome sequence (46), encodes a protein of 400 amino acids. We prefer an ATG start codon that is located 68 codons downstream of the annotated GTG start codon, because the derived protein has a length comparable with the RipA proteins from C. glutamicum and C. diphtheriae (supplemental Fig. S2) and because a well conserved DtxR binding site (TGAGGTTAGCGTAACCTAC) deviating in only two positions from the consensus sequence (30) ends 40 bp upstream of the ripA ATG start codon proposed here (164 bp downstream of the annotated GTG start codon). In C. diphtheriae, the annotated genome sequence from strain NCTC13129 predicts that the ripA homologous gene DIP0922 encodes a protein of 335 amino acid residues (47). Inspection of the corresponding upstream sequence revealed a putative DtxR binding site (CGAGCAAGGAGTAACCTTA) ending 87 bp upstream of the proposed start codon, which, however, differed in eight positions from the consensus sequence and thus is quite speculative. Interestingly, Lee et al. (48) previously identified a DtxR-regulated gene region designated IRP3 from C. diphtheriae strain C7, which is equivalent to the one described above. The DtxR binding site they identified experimentally by DNase I footprinting […] DIP0922. Further studies are required to determine the relevance of the strain differences and the functionality of the putative DtxR binding site upstream of DIP0922.

[Fig. 9 legend, partially recovered: Under iron limitation, DtxR repression is relieved, and RipA protein is synthesized and partially represses expression of its target genes, which encode iron-containing proteins, except for narK, pta, and ackA. In this way, intracellular iron usage is modulated and supplements mechanisms for iron uptake that are directly regulated by DtxR.]
The discovery of RipA as a repressor of iron proteins and its own repression by DtxR have unraveled a new aspect of the regulatory network controlling iron metabolism in Corynebacterium (Fig. 9). Aspects that have to be addressed in future work are the mechanism(s) of repression by RipA and the mechanism of RipA inactivation after a shift from iron limitation to iron excess. This will probably require an understanding of the function of the N- and C-terminal domains that show no homology to other proteins.
Recent advances in the development of protein–protein interactions modulators: mechanisms and clinical trials
Protein–protein interactions (PPIs) have pivotal roles in life processes. Studies have shown that aberrant PPIs are associated with various diseases, including cancer, infectious diseases, and neurodegenerative diseases. Therefore, targeting PPIs is a direction in treating diseases and an essential strategy for the development of new drugs. In the past few decades, the modulation of PPIs has been recognized as one of the most challenging drug discovery tasks. In recent years, some PPIs modulators have entered clinical studies, and some of them have been approved for marketing, indicating that modulators targeting PPIs have broad prospects. Here, we summarize the recent advances in PPIs modulators, including small molecules, peptides, and antibodies, hoping to provide some guidance for the design of novel drugs targeting PPIs in the future.
INTRODUCTION
PPIs and diseases
Proteins are the basic building blocks of life and are made of amino acids. Amino acids are encoded by genes and form peptides, peptides further form various proteins, and proteins form living tissues. Proteins also have a central role in biological processes such as catalyzing reactions, transporting molecules, mounting immune reactions to various pathogens, and transducing signals between cells. What is more, the critical biological processes in cells that are directly associated with our health, like DNA replication, transcription, translation, and transmembrane signal transduction, all rely on specific functional proteins. The aforementioned biological activities are regulated through protein complexes, which are typically controlled via protein-protein interactions (PPIs). 1-3 PPIs in cells form a complicated network termed the "interactome". 4,5 The interactome has a significant role in physiological and pathological processes, including signal transduction, cell proliferation, growth, differentiation, and apoptosis. 6-8 Therefore, aberrant PPIs are associated with many human diseases such as cancer, infectious diseases, and neurodegenerative diseases. 9-11 Since the classic drug targets are usually enzymes, ion channels, or receptors, PPIs represent new potential therapeutic targets. 12 In recent years, PPIs have received increasing attention and become attractive targets. 13,14 Recent studies indicate that PPIs have great potential as intervention targets for novel treatments of refractory diseases, and their regulation is widely regarded as a promising strategy in drug discovery 8,15,16 (Table 1).
Challenges in discovering PPIs modulators
The classic small molecule drug discovery approach mainly focuses on protein-ligand interactions, such as enzymes, ion channels, or receptors, because these proteins typically contain a well-defined ligand-binding site that small molecules can interact with. 17 The modulation of PPIs through small molecules is generally considered difficult, and PPIs were long regarded as "undruggable" targets. 18,19 It is estimated that there are about 130,000-650,000 types of PPIs in the human interactome. 4,8,20 Although the number of protein complexes exceeds that of enzymes and receptors, designing a small molecule to bind to a PPI interface is challenging for the reasons below. First, PPIs occur on the interface of a specific domain where two identical or different proteins are in contact. The interface area of the interaction usually reaches 1500-3000 Å², 21 which is larger than the receptor-ligand contact area (300-1000 Å²), 22 and the interface is highly hydrophobic. 21 Second, the PPI interface tends to be flat and contains few grooves or pockets, thus making it difficult for designed small molecule compounds to bind. 23-25 Third, the amino acid residues involved in PPIs are either continuous or discontinuous in their respective protein structures, which results in high-affinity binding between the proteins, making it difficult for small molecular compounds to inhibit such high-affinity interactions. 26 Fourth, compared with traditional drug target enzymes or receptors, PPIs lack endogenous small molecular ligands for reference. 26 Besides, compared to traditional small molecule drugs, drugs acting on PPIs have a higher molecular weight (>400 Da), which makes it challenging to meet criteria like Lipinski's "rule of 5". 23,27

Hot-spots
Theoretically, large binding interfaces are not regarded as ideal drug targets because it is difficult to find a matching molecule. However, the emergence of "hot-spots" makes designing drugs for PPIs possible. 28 Usually, PPIs happen on several amino acid residues in the interaction regions, which have critical roles in the interaction. The regions of the amino acid residues on the PPI interface that contribute to the binding-free energy are called "hot-spots". 29-31 As the area of a PPI expands, the number of hot-spots increases. The area of all hot-spots is about 600 Å², usually located at or near the PPI interface. The hot-spots in PPIs are identified through point mutation experiments. Specifically, the amino acid residues on the PPI are mutated into alanine, and the change in the binding-free energy is measured to determine the residues that contribute significantly to the binding-free energy. Hot-spots have been defined as those sites where alanine mutations cause a significant increase in the binding-free energy of at least 2.0 kcal/mol. 32 Tryptophan, arginine, and tyrosine are more likely to appear in hot-spots than other amino acids. 15,30 Because of the important role of these "hot-spot" amino acids, they are often used to design PPI drugs. Therefore, although the interface of PPIs is relatively large, small molecule drugs only need to act on "hot-spots" to intervene in the PPIs.
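Computationally, the alanine-scanning definition above reduces to a simple threshold filter on ΔΔG values. The sketch below applies it to an invented scan result; in practice the ΔΔG values would come from alanine-scanning mutagenesis experiments or prediction software.

```python
# Invented alanine-scanning results: residue -> ΔΔG of binding (kcal/mol)
# upon mutation to alanine; positive values destabilize the complex.
ddG = {
    "Trp23": 3.1,
    "Arg97": 2.4,
    "Tyr102": 2.0,
    "Ser45": 0.3,
    "Glu88": 1.1,
}

def hot_spots(ddG, threshold=2.0):
    """Residues whose mutation to Ala raises binding free energy >= threshold."""
    return sorted((res for res, g in ddG.items() if g >= threshold),
                  key=lambda res: -ddG[res])

print(hot_spots(ddG))  # -> ['Trp23', 'Arg97', 'Tyr102']
```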
Current approaches for the discovery of PPI modulators
Targeting PPIs is challenging because of their unique interfaces. Compared to the binding pockets of conventional protein targets, the interface of PPIs tends to be flat. Therefore, classic medicinal chemistry methods are less effective for designing and identifying PPIs modulators. Thus, it is necessary to develop more effective approaches for screening PPI modulators. A wide variety of strategies have been developed to identify hits and leads of PPI modulators in recent years.
High-throughput screening. High-throughput screening (HTS) is a well-established method for discovering drugs against classic targets. It has been used to identify compounds that target the hot-spots of PPI interfaces. 16 Because of the particularity of the PPI interface, the compound libraries used for screening conventional targets may not be suitable for screening PPI modulators. It is crucial to have a broad compound library with enough chemical diversity to match the PPI target. However, HTS has proved to be useful in the identification of molecules at the initial stage. For example, it successfully screened out inhibitors against the MDM2/p53 interaction. 33-35

Fragment-based drug discovery. Fragment-based drug discovery (FBDD) aims to identify molecular fragments from fragment libraries. 36 Compared to HTS, FBDD is a better approach for designing PPI modulators because the PPI interface often consists of discontinuous hot-spots. Surface plasmon resonance (SPR), nuclear magnetic resonance (NMR), X-ray crystallography, and mass spectroscopy (MS) can be utilized for the discovery and validation of the fragment hits. 37,38 Once the fragment hits are identified, fragment linking, fragment optimization, and fragment self-assembly can be used to obtain the hits. 39 Because the molecular weight of fragments is low and the contact interface is limited, the affinity is relatively low. 40 X-ray crystallography and NMR can provide structural information for hit optimization. As a result, FBDD is not suitable for targets with unknown structures. Examples of the successful application of FBDD in PPI modulator discovery include XIAP/caspase-9, 41 Bcl-2/Bax, 42 and bromodomains, 43 etc.
Structure-based design. Since most PPIs lack endogenous small molecule ligands, it is challenging to rationally design the associated PPI modulators. However, the hot-spots provide important structural information and a basis for the rational design of PPI modulators. At present, there are two strategies for the structure-based design of PPI modulators. The first is based on the hot-spots structure. Through bioisosterism and de novo design, novel small molecule modulators can be obtained. 44 For example, during the development of VHL/HIF1α PPI inhibitors, Hyp564 was identified as a crucial amino acid. Through de novo design targeting Hyp564, the inhibitors were obtained. 45,46 The second is peptidomimetic design, which mainly relies on computer modeling and phage display to simulate the secondary structure of the key peptides in PPIs. Furthermore, small molecules are designed or binding peptides are synthesized based on the stable α-helix structure formed by the key peptides. 47 The α-helix is the most commonly identified secondary structure in PPIs. 48 At present, many PPI modulators have been successfully developed based on the α-helix structure, including c-Myc/Max, 49 Bcl-2/Bax, 50 and MDM2/p53. 51

Virtual screening. Virtual screening is based on professional application software to screen out hits from compound libraries.
One big challenge in developing PPI modulators is to identify the disease-related and druggable PPIs among the thousands of available ones. Virtual screening may be useful to locate the binding sites by analyzing the protein surface. It can be classified into a structure-based approach and a ligand-based approach. The ligand-based approach aims to screen compounds that satisfy a built pharmacophore model. In contrast, the structure-based approach relies on the structural information of the target protein.
Virtual screening has been successfully applied in the development of PPI modulators, including those targeting Ubc13/Uev1, 52 MDM2/p53, 53 and TCF/β-catenin. 54

Mechanism of PPI modulators
Small-molecule PPI modulators can interact not only with the protein-protein interface but also with allosteric sites 55,56 (Fig. 1).
Studies show that small-molecule modulators can either bind to a non-interface region of the protein, which is named allosteric inhibition, or bind to the PPI interface, which is named orthosteric inhibition. Besides inhibition, some modulators can stabilize or even enhance a PPI. Two models explain the stabilizing effect: when the modulator binds to an allosteric regulatory site of the protein, it triggers a conformational change in the target protein that enhances its affinity for the partner protein; when the modulator binds at the PPI interface, it provides additional contact sites for the two proteins, and their binding force is enhanced. 57

Fig. 1 Orthosteric and allosteric mechanisms for PPI inhibition and stabilization

For a PPI with hot-spots, ligands can be designed to affect the interaction directly; for a PPI without hot-spots, the interaction can be regulated indirectly through an allosteric mode. 58 Specifically, if the hot-spot residues cluster together and form suitable pockets, orthosteric modulators can be designed from the pocket structure to influence the PPI directly. If the hot-spots cannot form suitable binding sites, developing allosteric modulators is the better choice. 59 Most of the small molecules identified to date as PPI modulators are inhibitors. PPI stabilization nonetheless represents a promising modulation approach, since binding to a pre-existing complex is energetically more favorable than preventing complex formation. [60][61][62] However, the development of PPI stabilizers has not received sufficient attention compared with the development of PPI inhibitors. 63

Three types of PPI modulators
To date, PPI modulators can be classified into three categories (Table 2). The first category is small-molecule modulators. Compared with classic drug targets such as enzymes or ion channels, a PPI interface is large, flat, and lacks a suitably sized pocket for small molecules to bind. Moreover, the interface is usually hydrophobic, so a potent PPI modulator must cover a large surface area and make numerous hydrophobic contacts. Such a modulator may face pharmacokinetic problems owing to its large molecular weight and poor solubility. 8 Small-molecule modulators are therefore better suited to tight, narrow PPI interfaces. 44 The second category is antibodies. When targeting a large PPI interface, an alternative to small molecules is needed to cover the surface. Although monoclonal antibodies can compete with PPIs, their large molecular weight restricts their application to extracellular targets. Monoclonal antibodies have been used successfully in the clinic, although they may trigger immune-related adverse reactions. The third category is peptides. Peptides are designed from the structural information of the hot-spots 64 and retain the key contacts when binding to the target protein, thereby achieving strong affinity. The molecular weight of a peptide lies between those of small molecules and monoclonal antibodies; with higher target specificity and affinity, peptides are promising PPI modulators. However, peptides are susceptible to hydrolysis by various hydrolases in the body, which gives them short half-lives.
In this review, we summarize the latest advances in the development of PPI modulators, including small molecules, peptides, and antibodies. We also summarize the PPI modulators currently in clinical trials, hoping to provide guidance for the design of novel drugs targeting PPIs.
INHIBITORS OF PPIS
Inhibitors of MDM2/p53 interaction (small molecules, peptides)
p53 is an important protein that regulates the cell cycle and functions as a tumor suppressor. 65 Studies show that ~50% of human cancers carry alterations in the p53 gene that inactivate p53 function or abolish p53 expression. 66 Mouse double minute 2 (MDM2) is a proto-oncogene product and a key negative regulator of p53. A negative feedback loop between MDM2 and p53 underlies how the two regulate each other's levels in the cell (Fig. 2a). 67 MDM2 binds directly to p53 and forms a complex that inhibits p53 transactivation. Restoring the impaired function of p53 by disrupting the MDM2/p53 interaction therefore offers a potential approach to cancer treatment. 68,69 X-ray crystallography disclosed the details of the MDM2/p53 interaction: it involves four key hydrophobic residues (Phe19, Leu22, Trp23, Leu26) in an α-helix of p53 and a small but deep hydrophobic pocket in MDM2 28 (Fig. 2b). An effective strategy to block the interaction is to design a small molecule that mimics the hot-spot residues of p53 and competes with p53 for binding to MDM2, thereby preventing p53 inactivation. Peptide-like design, HTS, and structure-based design have all been adopted to identify MDM2/p53 inhibitors with good drug-like properties. [70][71][72] The imidazoline compounds Nutlins, discovered by Vassilev et al. 33 through HTS, showed strong inhibitory effects against the MDM2/p53 interaction (Fig. 2c). As a group of small-molecule MDM2 inhibitors, the Nutlins mimic the p53 peptide segment: they bind to the deep hydrophobic pocket of MDM2 and thereby block the MDM2/p53 interaction. The in vitro IC50 values of Nutlin-1, Nutlin-2, and Nutlin-3 against the MDM2/p53 interaction were 260, 140, and 90 nM, respectively. 33 On the basis of these values, Nutlin-3 was selected as the lead compound. Roche restructured Nutlin-3 by substituting methyl groups for the hydrogen atoms at the 4- and 5-positions of its imidazole ring and replacing the cyclomethoxy group at the para position of the benzene ring with a tert-butyl group, which prevented metabolic inactivation of the imidazole and benzene rings. 73 Meanwhile, the isopropoxy group was replaced by an ethoxy group to reduce molecular weight, and the hydrophilic carbonyl piperazine side chain was replaced by a methylsulfonyl propyl piperazine, yielding compound RG7112 (Fig. 2c). In a homogeneous time-resolved fluorescence (HTRF) assay, the optimized RG7112 (IC50 = 18 nM) proved approximately five-fold more potent than Nutlin-3. 73 RG7112 is the first MDM2 inhibitor to enter clinical trials for the treatment of advanced solid tumors.
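As a back-of-the-envelope check on the potencies quoted above, the snippet below converts the cited IC50 values to pIC50 and computes the fold-improvement of RG7112 over Nutlin-3; all numbers are those given in the text.

```python
# Potency bookkeeping for the MDM2/p53 inhibitors cited in the text.
import math

ic50_nM = {"Nutlin-1": 260, "Nutlin-2": 140, "Nutlin-3": 90, "RG7112": 18}

for name, ic50 in ic50_nM.items():
    pic50 = -math.log10(ic50 * 1e-9)   # convert nM to molar before taking -log10
    print(f"{name}: IC50 = {ic50} nM, pIC50 = {pic50:.2f}")

fold = ic50_nM["Nutlin-3"] / ic50_nM["RG7112"]
print(f"RG7112 vs Nutlin-3: {fold:.1f}-fold more potent")
```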
Using peptides to inhibit PPIs has become a promising way to discover active compounds. Chang et al. 74 reported a class of potent MDM2 peptide inhibitors, the ATSP series (Table 3), among which ATSP-7041 reached an IC50 of 0.9 nM, with reported Ki values for the series in the nanomolar range. The key feature is that the peptides mimic the critical α-helical structure of the p53/MDM2 interaction, thereby binding MDM2 in competition with p53. The ATSP inhibitors showed biological activity in vivo, which may be related to the good cell membrane permeability conferred by the stabilized α-helix. Western blot analysis also showed that the ATSP inhibitors inhibit MDM2 in cells, thereby reactivating the tumor suppressor protein p53. 74

Table 3. Peptide inhibitors of MDM2/p53 interaction reported by Chang et al.
Name       Sequence                                                                Activity
ATSP-1800  Ac-Gln-Ser-Gln-Gln-Thr-Phe-R8-Asn-Leu-Trp-Arg-Leu-Leu-S5-Gln-Asn-NH2    25.9
ATSP-3848  Ac-Leu-Thr-Phe-Glu-His-Tyr-Trp-Ala-Gln-Leu-Thr-Ser-NH2                  14.6
ATSP-3900  Ac-Leu-Thr-Phe-R8-His-Tyr-Trp-Ala-Gln-Leu-S5-Ser-NH2                    —

Inhibitors of Bcl-2/Bax interaction (small molecules)
The Bcl-2 family is a key regulator of apoptosis with more than twenty members. According to their roles in apoptosis, the family members fall into two categories: anti-apoptotic proteins and pro-apoptotic proteins (Fig. 3a). The anti-apoptotic proteins include Bcl-2, Bcl-w, Mcl-1, and Bcl-A1. The pro-apoptotic proteins include Bax, Bok, Bak, Bid, Bad, Bmf, Noxa, Puma, and Hrk (of these, Bid, Bad, Bmf, Noxa, Puma, and Hrk are BH3-only proteins). 75,76 Anti-apoptotic and pro-apoptotic members usually act together in the form of dimers, serving as an apoptotic switch. 77,78 Pro-apoptotic proteins such as Bax and Bad have critical roles in apoptosis, and their functions are blocked when they bind to anti-apoptotic proteins such as Bcl-2. Inhibiting the interaction between the pro- and anti-apoptotic proteins therefore prevents tumor cells from escaping apoptosis. The Bcl-2 family members have low sequence homology, but each contains between one and four conserved Bcl-2 homology (BH) motifs, named BH1, BH2, BH3, and BH4. 76 Bcl-2 contains two hydrophobic α-helices surrounded by six to seven amphiphilic α-helices, four of which form a hydrophobic BH3-binding "pocket" that interacts with Bax (Fig. 3b). 79 Compared with the Bax/Bak homodimers, the Bcl-2/Bax heterodimer is more stable, which weakens the ability of Bax/Bak to induce apoptosis and thereby prevents cell death. Lead compounds should therefore mimic the function of the pro-apoptotic protein domain: the ideal compound binds to the hydrophobic pocket on the surface of the anti-apoptotic protein, blocking it from engaging the BH3 domain and thereby inducing cancer cell apoptosis. 80,81 Abbott researchers studied the Bcl-XL hydrophobic groove and found that it consists of two relatively independent small pockets. 82 Using the "SAR by NMR" approach, they screened fragments against the BH3-binding site of Bcl-XL and obtained compound 1 (Kd = 0.30 ± 0.03 mM) and compound 2 (Kd = 4.3 ± 1.6 mM) from the library (Fig. 3d). The researchers used a fragment-based drug design strategy and screened the compounds on the basis of the NMR data. Guided by the position and spatial orientation data obtained from the complexes of the Bcl-XL hydrophobic groove-binding pockets with compounds 1 and 2, they modified the structure of compound 2 by adding a linking group, thereby constructing a highly active new lead, compound 3 (IC50 = 36 nM). However, compound 3 exhibited poor water solubility and high affinity for human serum albumin (HSA). In subsequent structural optimization, the researchers reduced the compound's affinity for HSA by substituting polar groups at specific sites. Introducing a 2-dimethylaminoethyl substituent at the second ligand site of compound 3 and substituting piperazine at the first ligand site improved its affinity for the Bcl-2 protein, yielding compound ABT-737 (Fig. 3d). ABT-737 binds to Bcl-XL (Ki < 0.5 nM) and Bcl-2 (Ki < 1 nM), and its IC50 reaches 35 nM in 10% human serum. ABT-737 is not only widely used in biological studies of apoptosis but has also been evaluated in preclinical studies in lymphoma, small cell lung cancer, and chronic lymphocytic leukemia. 83,84 However, the poor oral absorption of ABT-737 significantly limits its clinical application. ABT-263 (Navitoclax) (Fig. 3d) is a second-generation Bcl-2 anti-apoptotic protein inhibitor based on the structure of ABT-737. 85,86 It binds Bcl-2 (Ki < 1 nM), Bcl-XL (Ki < 0.5 nM), Bcl-w (Ki < 1 nM), and MCL-1 (Ki = 550 nM).

Fig. 3 The Bcl-2/Bax interactions and inhibitors. a The Bcl-2 family can be classified into two categories: the anti-apoptosis proteins and pro-apoptosis proteins. The pro-apoptosis proteins can be divided into multi-BH proteins and BH3-only proteins. b The crystal structure of Bcl-2 in complex with Bax BH3 peptide (PDB:2XA0). c The binding mode of ABT-199 bound to Bcl-2 (PDB:6GL8). d The chemical structures of inhibitors of Bcl-2/Bax
Preclinical studies showed that ABT-263 alone effectively inhibited the growth of small cell lung cancer xenograft tumors in mouse models, and ABT-263 also showed synergistic effects against solid and hematological tumors in combination with other antineoplastic agents. 86 However, studies showed that ABT-263 can transiently decrease the platelet count. 87 ABT-199 (Venetoclax) (Fig. 3d) is the first small-molecule PPI inhibitor approved for marketing. It is a Bcl-2-selective inhibitor designed from the structure of the lead compound ABT-263 88 and was approved in 2016 for the treatment of chronic lymphocytic leukemia. 89 Study of the complex between the Bcl-2 protein and small-molecule acylsulfonamide compounds revealed that introducing an indole group enhanced drug binding to the P4 pocket through hydrophobic interactions and formed an electrostatic interaction with an aspartic acid residue specific to Bcl-2 88 (Fig. 3c). Researchers at AbbVie introduced indole and azaindole groups into the ABT-263 skeleton and studied the structure-activity relationship, showing that ABT-199 has good activity against Bcl-2-dependent hematological cancers. 88 ABT-199 shows high affinity for Bcl-2 (Ki < 0.01 nM) and weak affinity for Bcl-XL (Ki = 48 nM), and it exhibits an excellent inhibitory effect on acute lymphoblastic leukemia cells with high Bcl-2 expression (EC50 = 8 nM). Compared with the second-generation drug ABT-263, ABT-199 markedly reduced platelet damage in both in vitro and in vivo studies.
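The selectivity of ABT-199 can be made concrete with a short calculation. The sketch below converts the Ki values quoted above into approximate binding free energies (ΔG = RT·ln Ki at 298 K) and a Bcl-2/Bcl-XL selectivity ratio; note that the Ki for Bcl-2 is an upper bound ("< 0.01 nM"), so the numbers are illustrative rather than exact.

```python
# Binding free energy and selectivity for ABT-199 from the cited Ki values.
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.0      # temperature, K

ki_molar = {"Bcl-2": 0.01e-9,   # upper bound from the text ("< 0.01 nM")
            "Bcl-XL": 48e-9}

for target, ki in ki_molar.items():
    dg = R * T * math.log(ki)   # more negative = tighter binding
    print(f"{target}: Ki = {ki * 1e9:g} nM, dG ~ {dg:.1f} kcal/mol")

print(f"Selectivity (Bcl-XL/Bcl-2): {ki_molar['Bcl-XL'] / ki_molar['Bcl-2']:.0f}-fold")
```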
Inhibitors of XIAP/caspase-9 interaction (small molecules)
Inhibitors of apoptosis proteins (IAPs) are an important class of endogenous anti-apoptotic proteins. 90 They bind to caspases or other pro-apoptotic proteins, inhibiting the functions of the pro-apoptotic proteins and promoting their degradation, thereby regulating apoptosis. 91,92 The IAP family has eight members: XIAP, c-IAP1, c-IAP2, ML-IAP/Livin, ILP2, NAIP, Bruce/Apollon, and Survivin. 93 The caspases, cysteine proteases that cleave after aspartate residues, are the main executioners of apoptosis, which they induce through two pathways: the death receptor (extrinsic) pathway mediated by caspase-8, and the mitochondrial (intrinsic) pathway mediated by cytochrome C/caspase-9 (Fig. 4a). 94 The BIR3 domain of XIAP binds to and inhibits pro-apoptotic caspase-9, thus suspending apoptosis. 95 Interestingly, an endogenous protein inhibitor of the XIAP/caspase-9 interaction exists in the form of Smac (second mitochondria-derived activator of caspases). When Smac is released from the mitochondria, its N-terminal amino acids, alanine-valine-proline-isoleucine (AVPI), bind to the BIR3 domain of XIAP, which deprives XIAP of the ability to bind caspases and so promotes apoptosis. 96,97

Fig. 4 The XIAP/caspase-9 interactions and inhibitors. a The apoptotic pathway. There are two apoptotic pathways: extrinsic and intrinsic. The extrinsic pathway (also known as the death receptor pathway) involves the binding of a death receptor ligand to a member of the death receptor family. Active caspase-8 cleaves and activates the executioner caspases-3 and -7, leading to cell death. The intrinsic pathway (also known as the mitochondrial pathway) is mediated by caspase-9. When the mitochondrial membrane is stimulated by apoptotic signals, it releases cytochrome c and Smac proteins into the cytoplasm. Smac is a pro-apoptotic protein. Cytochrome c combines with Apaf-1 to form a polymer and promotes procaspase-9 to form the apoptosome, which then activates caspase-9. Activated caspase-9 can activate other caspases, such as caspase-3, so as to induce apoptosis. b The chemical structures of inhibitors of XIAP/caspase-9

The four N-terminal amino acid residues (AVPI) of the Smac protein have a very important role in the binding of XIAP to caspase-9, competing with caspase-9 for binding to XIAP. 98,99 The XIAP/caspase-9 interaction can therefore be inhibited by Smac mimetics that exhibit similar affinity for XIAP. 100 The crystal structure of Smac with the XIAP-BIR3 domain revealed that the Val at the P2 position and the Ile at the P4 position of Smac form three hydrogen bonds with Gly306 and Thr308 of the XIAP-BIR3 domain. 99 The Pro ring at the 3-position binds to the hydrophobic region formed by Trp323 and Tyr324 of the XIAP-BIR3 domain through van der Waals forces. Moreover, the Pro ring is essential for maintaining the conformation of the AVPI peptide chain, so the proline is relatively stable and usually not replaced by other amino acids. Flygare et al. 101 discovered the first Smac mimetic, GDC-0152, through a combination of peptide-like design strategies and high-throughput screening (Fig. 4b). GDC-0152 binds the XIAP-BIR domain with high affinity by mimicking the structure of the Smac AVPI peptide. Another Smac mimetic, GDC-0917 (CUDC-427) (Fig. 4b),
has entered phase I clinical trials for safety evaluation in patients with advanced solid tumors and lymphomas. 102 Novartis's LCL-161 (Fig. 4b), which is progressing rapidly, has entered phase II clinical trials for triple-negative breast cancer. 103

Inhibitors of Hsp90/Cdc37 interaction (small molecules)
Heat shock protein 90 (Hsp90), discovered in 1962, is a ubiquitous, highly conserved molecular chaperone and one of the most abundant proteins in cells. Its expression in tumor cells is two to ten times higher than in normal cells, indicating a very important role in tumor cell growth and survival. 104 Hsp90 participates in the maturation of protein kinases and transcription factors such as Her2, VEGF, mutant p53, CDK4, HIF-1α, Raf-1, and Akt, which regulate the growth and apoptosis signaling pathways of cancer cells. 105,106 Hsp90 stabilizes the conformation of these client proteins and protects them from ubiquitination-mediated degradation, thereby keeping them in their active forms and promoting tumor growth and metastasis (Fig. 5a). 107 Inhibiting the interaction between Hsp90 and its client proteins may therefore promote client degradation and thus inhibit tumor growth. [108][109][110] Previous studies showed that Hsp90 is a homodimer and that each monomer comprises three highly conserved domains: an N-terminal ATP-binding domain, a middle domain, and a C-terminal dimerization domain. 111 The N-terminal domain contains the ATP/ADP-binding site that hydrolyzes ATP in the ATP-binding pocket; this site acts as a conformational switch that regulates the assembly of Hsp90-containing multi-chaperone complexes. 112 The middle domain serves both as a nuclear localization sequence and as the client protein-binding site; it distinguishes different substrate proteins and regulates the activity of specific chaperone substrates. 113 The C-terminal domain is the self-dimerization site of Hsp90, which reinforces the interaction between the two Hsp90 N-terminal domains. 114 Current Hsp90 inhibitors fall into three categories: N-terminal ATP-pocket inhibitors, C-terminal nucleotide-site inhibitors, and inhibitors of Hsp90-chaperone complexes. One critical function of Hsp90 is to enable its client proteins to utilize ATP; inhibiting this crucial function affects many normal proteins and results in high toxicity. 115 The Hsp90 inhibitor SNX-5422, developed by Pfizer, was terminated in a phase I clinical trial in 2011 owing to ocular toxicity. 116 Researchers therefore believe that targeting Hsp90 together with its molecular chaperones is a new direction for cancer treatment studies. 117,118 Among the numerous molecular chaperones of Hsp90, cell division cycle protein 37 (Cdc37) has attracted particular attention. 110 Many protein kinases (such as EGFR, CDK, and Akt) rely on Cdc37 to be loaded onto Hsp90 and complete the correct folding of their spatial conformations. 110,119 Inhibiting the Hsp90/Cdc37 interaction may therefore deactivate the kinase client proteins and inhibit the proliferation and growth of tumor cells. In addition, targeting the Hsp90/Cdc37 PPI specifically affects the kinase clients of Hsp90, improving selectivity and avoiding a series of adverse reactions.
In 2004, researchers resolved the first crystal structure of Hsp90N-Cdc37M, providing a solid structural basis for the design of Hsp90/Cdc37 interaction inhibitors. 120 NMR analysis of the Hsp90N-Cdc37M complex indicated that hydrophobic interactions are the major binding force between the two proteins. 121 The key interface residues include Met164, Trp193, Ala204, and Leu205 of Cdc37M and Ala117, Ala121, Ala124, Ala126, Met130, and Phe134 of Hsp90N. Leu205 of Cdc37 is particularly important for formation of the Hsp90N-Cdc37M complex: experiments show that mutating Leu205 abolishes or reduces the binding between Hsp90N and Cdc37M.
In 2018, Xie et al. 107 first reported that the small-molecule inhibitor DCZ3112 inhibits the Hsp90/Cdc37 interaction (Fig. 5c). DCZ3112 binds directly to the N-terminal domain of Hsp90 and inhibits the Hsp90/Cdc37 interaction without affecting the ATPase activity of Hsp90 (Fig. 5b). DCZ3112 mainly inhibits the proliferation of HER2-positive breast cancer cells, with IC50 values of 7.9 and 4.6 μM in SK-BR-3 and BT-474 cells, respectively. Experiments in SK-BR-3 and BT-474 cells showed that DCZ3112 downregulated the Hsp90 client proteins HER2, Akt, RAF-1, CDK4, and CDK6 in a concentration-dependent manner. In vitro results showed that DCZ3112 has synergistic effects in inhibiting cell proliferation, inducing G1 arrest, inducing apoptosis, and reducing the phosphorylation of Akt and Erk. 107

Inhibitors of c-Myc/Max interaction (small molecules)
c-Myc is a transcription regulator encoded by the proto-oncogene Myc. It is a highly conserved protein with a helical structure and has critical roles in promoting tumorigenesis; maintaining the growth, proliferation, and differentiation of tumor cells; angiogenesis; and apoptosis. [122][123][124] Aberrant expression of c-Myc has been confirmed in most malignant tumors, 125 making c-Myc a research hot spot. c-Myc contains a bHLH-ZIP domain, and its function depends on formation of the Myc-Max dimer. 126 The Myc-Max dimer recognizes the CACGTG E-box sequence in its target DNA and binds to it to activate or enhance transcription of the regulated genes. 126 Inhibiting the PPI between c-Myc and Max may therefore block the activation or transcription of oncogenes, suggesting an antitumor effect. 127,128 Castell et al. 129 used a cell-based bimolecular fluorescence complementation (BiFC) assay to screen for small molecules that interfere with the c-Myc/Max interaction; three promising compounds were identified from a library of 1990 compounds: MYCMI-6, MYCMI-11, and MYCMI-14 (Fig. 6). Chauhan et al. 130 discovered that the compound 10074-G5 inhibits heterodimer formation between c-Myc and Max (Fig. 6). The nitro group and furan ring of 10074-G5 interact with Arg366, Arg367, and Arg372 in the HLH domain, thereby inhibiting heterodimer formation between c-Myc and Max. 128 Optimization of 10074-G5 led to the discovery of JY-3-094 (Fig. 6). 131 Electrophoretic mobility shift assays (EMSAs) showed that JY-3-094 inhibited c-Myc/Max heterodimer formation roughly five times more potently than 10074-G5 (IC50 = 33 μM vs 146 μM). However, unlike 10074-G5, JY-3-094 does not inhibit the proliferation of human promyelocytic leukemia (HL60) or Daudi Burkitt lymphoma cell lines, because the charged carboxylic acid groups in the molecule impede cell entry. Esterifying the carboxylic acid of JY-3-094 into a series of ester prodrugs lowered the IC50 values in both HL60 and Daudi Burkitt lymphoma cell lines. However, the activity of ester prodrugs is always limited by the activity of their carboxylic acid metabolites, so structural optimization of JY-3-094 continued. Studies showed that the phenyl ring adjacent to the aniline in 10074-G5 enhances the inhibitory effect; introducing a phenyl ring into JY-3-094 led to 3JC48-3, with an IC50 of 34.8 μM for c-Myc/Max inhibition (Fig. 6). Further studies showed that 3JC48-3 inhibits tumor cell proliferation by inducing arrest in the G0/G1 phase.
This significant increase in the c-Myc/Max inhibitory effect may arise from interactions of the phenyl ring with Phe375, Ile381, and Arg378 of c-Myc/Max. 130

Inhibitors of KRAS/PDEδ interaction (small molecules)
Oncogenic RAS is an important antitumor target; RAS genes are mutated in about 20-30% of human cancers. 132 The RAS family has three members: HRAS, KRAS, and NRAS. The KRAS protein is often mutated in various cancers; in particular, KRAS mutations have been observed in a large proportion of pancreatic cancers. 133 RAS mutations lock cells in a hyperactive state of unrestrained proliferation. As a molecular switch, RAS activates downstream signaling pathways such as MAPK and PI3K-Akt through binding to GTP, thus regulating the growth, proliferation, differentiation, and apoptosis of cells. If RAS proteins are continuously activated, they bind downstream effector proteins and transmit signals to them, causing aberrant cell proliferation or tumorigenesis. 134 RAS proteins can therefore be developed as an important target for cancer treatment. At present, there are two main strategies for inhibiting KRAS. The first is to target the KRAS signaling pathway directly. The second is to inhibit KRAS membrane association, which impairs KRAS localization and the signal transduction that drives tumor proliferation. To carry out their signaling functions, RAS proteins must be recruited to the inner leaflet of the plasma membrane after expression. 135 During the relocation of KRAS to the cell membrane, PDEδ promotes the recruitment of KRAS to the Golgi apparatus 135,136 (Fig. 7a). By interfering with the PDEδ/KRAS interaction, the localization of KRAS at the plasma membrane can be inhibited and the signal transduction of oncogenic RAS blocked. 137 However, the degree to which KRAS depends on PDEδ is not yet clear: PDEδ-knockout mice are fertile, 138 whereas KRAS knockout in mice is embryonic lethal, 139 indicating that KRAS remains functional in the absence of PDEδ. Although the relationship between KRAS and PDEδ is still vague, blocking KRAS membrane association is a promising route to inhibiting KRAS activity. 140 Many small-molecule compounds inhibit the interaction between PDEδ and KRAS. 137,[141][142][143][144][145][146] In 2018, Chen et al. 147 discovered novel KRAS/PDEδ inhibitors through fragment-based drug design. Using molecular docking, they found that compounds 4 and 5 exhibited inhibitory effects on the PDEδ/KRAS interaction when the two molecules bind in a specific arrangement (Fig. 7b). The docking model showed that the distance between the benzene ring of compound 5 and the amide nitrogen atom of compound 4 was 5.3 Å, and the distance between the benzene ring of compound 4 and the imidazole nitrogen atom of compound 5 was 5.0 Å. Both distances are suitable for connecting the two methylene groups through an ether linker, which yielded the two series of compounds 6 and 7 (Fig. 7b). Further optimization of the structures of compounds 6 and 7 led to the synthesis of compound 8, which exhibited good affinity for PDEδ (Kd = 38 ± 17 nM) (Fig. 7b). The docking data showed that the cyclopropyl group of compound 8 forms hydrophobic interactions with residues Ile129, Val145, and Leu147 of PDEδ.
Compound 8 also exhibited inhibitory effects in Capan-1 cells (IC50 = 8.8 ± 2.4 μM). The RAS family regulates the MAPK and PI3K-Akt-mTOR signaling pathways, and studies showed that compound 8 downregulates the phosphorylation levels of Akt and Erk. In sum, compound 8 induces apoptosis in Capan-1 cells.
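The geometric reasoning behind the fragment-linking step above can be illustrated with a short calculation: measure the distance between the two anchor atoms of co-docked fragments and compare it with the span of a candidate linker. The coordinates below are invented placeholders, not taken from the actual KRAS/PDEδ docking pose.

```python
# Fragment-linking feasibility check (sketch; placeholder coordinates).
import numpy as np

frag4_amide_N  = np.array([12.1, 5.4, -3.2])   # hypothetical anchor atom, fragment 4
frag5_phenyl_C = np.array([14.8, 9.6, -4.1])   # hypothetical anchor atom, fragment 5

dist = np.linalg.norm(frag4_amide_N - frag5_phenyl_C)
print(f"inter-fragment distance: {dist:.1f} Å")

# A -CH2-O-CH2- ether linker spans roughly 4-6 Å between its anchor atoms,
# so distances near 5 Å (as reported in the text) are compatible with it.
if 4.0 <= dist <= 6.0:
    print("ether linker geometrically feasible")
```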
Inhibitors of CD40/CD40L interaction (small molecules)
T cells have an important role in the immune system. Their activation requires not only direct stimulation by foreign antigens but also co-stimulatory signals transmitted through the interaction of surface molecules. 148 The CD40/CD40L pathway is one of the most important co-stimulatory pathways in T-cell activation, and because of this critical role, aberrant CD40/CD40L signaling is responsible for various pathological conditions. CD40 is a membrane surface molecule with a key role in B-cell development and activation; it is a surface antigen associated with T-cell and B-cell function. 149 CD40L, a T-cell/B-cell-activating molecule, is widely expressed on activated T cells, especially CD4+ T cells. 150 CD40 and CD40L are a pair of complementary protein molecules: CD40 is a member of the tumor necrosis factor receptor superfamily, and its ligand CD40L (also known as CD154) belongs to the tumor necrosis factor family. 151 Both CD40 and CD40L are mainly expressed by T and B cells. As a pair of membrane proteins, CD40/CD40L participates in various vital physiological processes, including B-cell activation, proliferation, differentiation, antibody production, and apoptosis, as well as T-cell activation, cytokine production, humoral immunity, cellular immunity, and the inflammatory response 152 (Fig. 8a). Abnormal expression of CD40/CD40L is closely related to the occurrence and development of inflammatory reactions, autoimmune diseases, and immunodeficiency diseases. [152][153][154][155] Blocking the interaction between CD40 and CD40L may therefore have great potential for treating the associated diseases.
Multiple antibodies that block the interaction of CD40 and CD40L have been tested in preclinical or clinical trials, including bleselumab, lucatumumab, and dacetuzumab. 156 Dacetuzumab is an IgG1 humanized anti-CD40 monoclonal antibody. In the absence of IL-4 and CD40L, dacetuzumab activates the proliferation of B cells but inhibits the proliferation of highly differentiated B cells. In addition, dacetuzumab transmits apoptotic signals through caspase-3 and mediates antibody-dependent cell-mediated cytotoxicity (ADCC) and antibody-dependent cellular phagocytosis (ADCP). 157 However, most of these antibody trials were terminated because of severe thromboembolic side effects. [158][159][160] Previous studies suggested that the thromboembolic side effect may be a general feature of antibody treatment, 158 and recent studies found that antibody aggregation induced by the mAb Fc domain is also associated with thromboembolism. 161 To avoid these severe side effects, alternative approaches, such as small-molecule compounds that block the CD40/CD40L interaction, need to be developed. Buchwald's group reported small-molecule organic dyes that blocked the interaction between CD40 and CD40L 162,163 and, based on these dye compounds, synthesized a series of small molecules that block the interaction. 164 Among them, the IC50 values of DRI-C21041, DRI-C21045, and DRI-C25441 were 0.31, 0.17, and 0.36 μM, respectively (Fig. 8b). These compounds also inhibited CD40L-induced B-cell activation and proliferation and the activation of NF-κB, and they inhibited the immune response induced by alloantigen.
Inhibitors of Skp2/Skp1 interaction (small molecules)
The ubiquitin-proteasome system (UPS) is composed of more than 1000 proteins. As the main pathway of protein degradation in cells, the UPS has a key role in cell cycle regulation, intracellular signal transduction, gene transcription, metabolic regulation, immune surveillance, and other basic processes of cell life, and an aberrant UPS is responsible for various diseases. 165,166 The UPS consists of ubiquitin-activating enzymes (E1), ubiquitin-conjugating enzymes (E2), ubiquitin-protein ligases (E3), and the proteasome. 167 At present, the E3 ligases have been studied the most. The Skp1-Cullin 1-F-box (SCF) ubiquitin ligase, which contains an F-box protein, is one of the most important ubiquitin ligases and has attracted wide attention. 168 SCF is a multi-subunit assembly consisting of four parts: Cul1, Skp1, Rbx1, and an F-box protein 169 (Fig. 9a). As a member of the F-box protein family, S phase kinase-associated protein 2 (Skp2), together with Skp1, Cul1, and Rbx1, constitutes an E3 ligase that catalyzes the transition of cells from G1 to S phase. 170 Skp2 overexpression is extremely common in human cancer cells and promotes cancer invasion and metastasis. 171 The interaction between Skp2 and Skp1 is a precondition for the integrity of the Skp2-SCF complex and the key to its E3 ligase activity. Blocking the interaction between Skp2 and Skp1 therefore prevents formation of the Skp2-SCF complex and may inhibit the occurrence and development of tumors.
The crystal structure of the Skp2-SCF complex shows that Skp2 interacts directly with Skp1 through its F-box domain and binds Cul1 and Rbx1 indirectly. 172 Along the Skp2-Skp1 interface, Chan et al. reported 19 hot-spot amino acids of Skp2 in contact with Skp1 and classified these key Skp2-Skp1-binding sites into two pocket regions. 173 The first region (pocket 1) lies near the N terminus of Skp2 within the F-box motif and includes residues Trp97, Phe109, Glu116, Lys119, and Trp127. The second region (pocket 2) lies close to the C terminus of Skp2 and is formed by a Leu-rich repeat sequence together with some residues from the F-box domain (Fig. 9b). Inhibitors that bind to one or both of these pockets prevent formation of the Skp2-Skp1 complex.
Chan et al. 173 identified seven compounds that inhibit formation of the Skp2-Skp1 complex through HTS. Among them, SZL-P1-41 exhibits strong inhibition of Skp2-Skp1 complex formation (Fig. 9c). The molecular docking model shows that SZL-P1-41 binds to pocket 1 rather than pocket 2, suggesting that pocket 1 in the F-box sequence of Skp2 may have the leading role in the Skp2-Skp1 interaction 173 (Fig. 9b). The docking model also suggests that the benzothiazole moiety of SZL-P1-41 interacts with the Trp97 residue of Skp2 through aromatic stacking and a polar contact; the flavone groups of SZL-P1-41 interact with Asp98 and Trp127 of Skp2 via hydrogen bonding or hydrophobic interactions; the ethyl group on the phenol ring extends into the Skp1 region; and the piperidine interacts with both Asp98 and Trp127. Both in vitro and in vivo experiments showed that the Skp2 inhibitors could inhibit Skp2-mediated p27 ubiquitination. The in vivo data also showed that SZL-P1-41 effectively inhibits tumor growth. In addition, the Skp2 inhibitors not only block formation of the Skp2-Skp1 complex but also reduce Skp2 E3 ligase activity, and higher doses of SZL-P1-41 also reduce Skp2 protein expression.

Fig. 9 The Skp2/Skp1 interactions and inhibitors. a The composition of the Skp2-SCF complex. Cullin 1 (Cul1) forms the backbone of the ubiquitin ligase complex. Cul1 is activated by covalent conjugation with NEDD8. The SCF complex consists of the invariable components Rbx1 (RING-finger protein), Cul1 (scaffold protein), and Skp1 (adaptor protein) as well as a variable F-box-protein component, which is responsible for substrate recognition. Skp2 is an F-box protein and the substrate-recognition subunit of the SCF complex; it specifically recognizes the substrate and mediates its ubiquitination and degradation. b The potential binding pockets on the interface of the Skp2-Skp1 complex (PDB:1FQV). c The chemical structures of inhibitors of Skp2/Skp1
Inhibitors of Keap1/Nrf2 interaction (small molecules, peptides)
The Keap1-Nrf2-ARE signaling pathway is the most important antioxidant stress pathway and is associated with a variety of oxidative stress-related diseases, including cancer, Alzheimer's disease, Parkinson's disease, diabetes, and arthritis. 174 Under physiological conditions, Keap1 targets Nrf2 for ubiquitin-dependent proteasomal degradation. When cells are under electrophilic or oxidative stress, Nrf2 escapes Keap1-mediated degradation and enters the nucleus, where it mediates the activation of antioxidant and cytoprotective genes 175,176 (Fig. 10a). Activators of the Nrf2 signaling pathway should therefore have therapeutic effects in oxidative stress-induced diseases. To date, most "Nrf2 activators" are Keap1/Nrf2 interaction inhibitors that bind covalently to the sulfhydryl groups of cysteines in Keap1 through oxidation or alkylation; the covalent adduct changes the Keap1 conformation and prevents Nrf2 from interacting with Keap1. 177 However, covalent binding is irreversible, so long-term application of such inhibitors results in accumulation of active Nrf2, which may trigger other problems such as cancer. 178 Finding non-covalent small molecules that directly interfere with the Keap1-Nrf2 interaction, dissociating the two proteins and exerting antioxidant defense effects, has therefore become a new therapeutic strategy. 179 In 2006, Hannink's group analyzed the structure of a complex between the Kelch domain of Keap1 and an Nrf2-derived peptide, revealing the binding interface between Nrf2 and Keap1 and identifying the key residues of Keap1, including Arg380, Arg415, Arg483, Ser363, Ser508, Ser555, and Ser602, which laid the foundation for the design of Keap1/Nrf2 interaction inhibitors. 180 The study of Keap1/Nrf2 inhibitors began with the investigation of polypeptides that inhibit the interaction, and a number of inhibitory polypeptides have been reported [181][182][183][184] (Table 4). Hu et al. 185 developed a series of fluorescent probes and verified that a peptide chain length of nine amino acids gives the best activity for inhibiting the Keap1/Nrf2 interaction. The peptide inhibitor P1, designed on the basis of these fluorescent probes, has moderate inhibitory activity (IC50 = 3480 ± 920 nM), and activity increases as the polypeptide chain is elongated (7-16 amino acids); for instance, the hexadecapeptide P2 (IC50 = 163 ± 11 nM) exhibits the highest activity. 181 Acetylation of the N terminus of such a peptide neutralizes the positively charged N-terminal group, greatly changing its electrical properties; the nonapeptide P3 obtained via these modifications exhibits strong activity (IC50 = 194 ± 49 nM). 186 Subsequent structure-activity relationship studies demonstrated that heptapeptides are also active, such as P4 (IC50 = 8230 ± 262 nM) and P5 (IC50 = 558 ± 53 nM), which exhibit moderate inhibitory activity. Follow-up work focused on acetylated heptapeptides conjugated with C18 stearic fatty acid, and compound P6 showed excellent Keap1 inhibitory activity (IC50 = 22 ± 3 nM). 182 However, peptide inhibitors have large molecular weights and poor ability to penetrate the cell membrane, so finding a class of small peptide inhibitors with strong membrane permeability is of great significance. Steel et al. designed and synthesized a number of highly membrane-permeable peptides; among them, compound P7 induces the expression of heme oxygenase-1 (HO-1) in cells and inhibits expression of the proinflammatory cytokine TNF. 187 Most high-affinity peptides have poor cell permeability, and consequently their cellular activity is not ideal. Screening for small-molecule inhibitors has therefore become a hot spot in the study of Keap1/Nrf2 interaction inhibition. [188][189][190][191][192][193][194][195] The milestone in the small-molecule inhibitor studies is GSK's development of benzothiazepine heterocyclic Keap1-Nrf2 small-molecule inhibitors through fragment-based drug design. 189 After screening 330 fragments by X-ray crystallography, compounds 9-11 (Fig. 10b) were identified, which interact with Arg483, Tyr525, and Ser602, respectively. The binding of these compounds to the aforementioned amino acids mimics the binding of the Nrf2 peptide segment to Keap1, but the binding activity of the three compounds was low (Kd > 1 mM). Various structural modifications were made to improve Keap1-binding activity. Introducing a methanesulfonamide onto the benzene ring of compound 12 increased the compound's Keap1-binding activity 20-fold (IC50 = 61 μM), and further elaboration into compound 13 brought the IC50 to a striking 0.27 μM. A series of structural optimizations was then performed with compound 13 (Fig. 10b) as the lead: substituting a methyl group for the chlorine atom on the benzene ring releases its potential binding to the sulfonamide center; introducing an electron-donating methoxy group at the 7-position of the benzotriazole improves hydrogen bonding and enhances surface contact; and converting the benzenesulfonamide ring to a seven-membered benzothiazepine heterocycle allows the sulfonamide and benzotriazole sites to occupy more space. The activity of the resulting compound 14 (Fig. 10b) is significantly increased (IC50 = 0.015 μM). These compounds can induce expression of the Nrf2 downstream target protein NQO1 in BEAS-2B cells and reduce ozone-induced lung inflammation in animal experiments.
Inhibitors of PD-1/PD-L1 interaction (small molecules, peptides, antibodies)
Studies indicate that the PD-1/PD-L1 signaling pathway has a critical role in tumor immune escape and tumor development. 196 PD-1 (also known as CD279) is an immunosuppressive receptor that belongs to the CD28 superfamily of T-cell regulatory receptors, and its natural ligand is PD-L1. Under physiological conditions, PD-1 is mainly expressed on activated immune cells, where it promotes the maturation of T lymphocytes, restrains unnecessary or excessive immune responses through negative regulation, and maintains immune tolerance. Overactivation of the PD-1/PD-L1 signaling pathway negatively regulates the function of T cells, abolishing immune surveillance and promoting the escape of tumor cells. 197 Blocking the PD-1/PD-L1 interaction to maintain T-cell immune function may therefore be a potential strategy for tumor treatment (Fig. 11a). PD-1/PD-L1 pathway inhibitors include monoclonal antibodies, peptides, and small-molecule inhibitors.
To date, five monoclonal antibody drugs, Pembrolizumab (Keytruda), Nivolumab (Opdivo), Atezolizumab (Tecentriq), Avelumab (Bavencio), and Durvalumab (Imfinzi), have been approved as PD-1/PD-L1 pathway inhibitors for the treatment of melanoma, non-small cell lung cancer, and other diseases [198][199][200][201][202] (Table 5). Pembrolizumab was the first PD-1 inhibitor approved by the FDA, for the treatment of advanced or unresectable melanoma that does not respond to other drugs. 203 Pembrolizumab is a highly selective humanized IgG4-κ anti-PD-1 monoclonal antibody that activates tumor-infiltrating lymphocytes (TILs). The engagement of PD-1, highly expressed on TILs, by PD-L1 expressed on tumor cells is an important factor in tumor immune escape; pembrolizumab binds PD-1 on the surface of TILs and inhibits its interaction with PD-L1/2, thereby activating the TILs. Although immunotherapy against PD-1/PD-L1 has been applied in the clinic, monoclonal antibodies may affect the proliferation and activation of T cells and thereby trigger severe immune-related adverse reactions, including tissue damage and Fc-mediated effector functions that kill immune cells. 204,205 Compared with monoclonal antibodies, peptides and small-molecule drugs do not share these limitations. 206 Chang et al. 207 developed the first hydrolysis-resistant D-peptide antagonists targeting the PD-1/PD-L1 pathway using mirror-image phage display (Table 6). The optimized compound DPPA-1 binds PD-L1 with an affinity of 0.51 μM in vitro, and cellular blockade assays and tumor-bearing mouse experiments all indicate that DPPA-1 disrupts the PD-1/PD-L1 interaction in vivo. 207 Aurigene developed a small peptide, AUNP-12, as an anti-PD-1 targeted cancer immunotherapy (the structure of the compound has not been disclosed). 208 AUNP-12 inhibits the binding of PD-1 and PD-L1 in vitro (IC50 = 0.72 nM), but its metabolic lifetime is short. Animal trial data demonstrated that AUNP-12 has good anti-PD-L1 activity and effectively inhibits the growth and metastasis of tumor cells.
Because high-resolution structures of PD-1 and PD-L1 were long unavailable, the development of small-molecule PD-1/PD-L1 inhibitors lags far behind that of antibody drugs. By analyzing the PD-1/PD-L1 complex structure, Zak et al. 209 reported that there are three main binding pockets at the PD-1/PD-L1 contact interface, providing a rational basis for drug development. Following the success of PD-1 monoclonal antibodies and macromolecular biomedical drugs, Bristol Myers Squibb (BMS) conducted an in-depth investigation of small-molecule inhibitors of the PD-1/PD-L1 pathway. In 2015, the company disclosed its first patent on biphenyl immunomodulators. Homogeneous time-resolved fluorescence (HTRF) tests demonstrated that these compounds blocked the interaction between PD-1 and PD-L1, with some even reaching nanomolar activity; the IC50 values of representative compounds 15 and 16 (Fig. 11b) were 18 and 22 nM, respectively. 210 In another patent disclosed by BMS in the same year, additional structural modifications were made: the benzene ring in part A of the compound was replaced by 1,4-benzodioxane, and m-cyanobenzene was introduced onto the benzene ring of part C through an ether bond. These optimizations significantly improved the compounds' PD-1/PD-L1 inhibitory activity, with IC50 values reaching the 0.6-10 nM range; the IC50 values of representative compounds 17 and 18 (Fig. 11b) were 2.25 and 1.4 nM, respectively. 211 To improve inhibitory activity further, the researchers continued to optimize the structures of this class of compounds, introducing different hydrophilic groups onto part of the hydrophobic biphenyl through a carbon chain; the representative compounds 19 and 20 had IC50 values of 0.48 and 0.88 nM (Fig. 11b), respectively. 212 In 2018, BMS disclosed new compounds with symmetric structures, in which the original groups on one side were replaced with groups identical or similar in structure to those on the other side, producing compounds characterized by "central symmetry". The activity of this type of compound is generally below 1 nM, and the IC50 of representative compound 21 (Fig. 11b) reaches 0.04 nM. 213
STABILIZERS OF PPIS
Stabilizers of 14-3-3/H + -ATPase (small molecules)
The tyrosine 3-monooxygenase/tryptophan 5-monooxygenase activator protein family (the 14-3-3 proteins) has an important role in PPIs. It is a highly conserved, ubiquitous protein family encoded by multiple genes in most organisms. 214 In mammals, there are at least seven highly conserved subtypes of 14-3-3 proteins encoded by different genes. The 14-3-3 proteins bind various ligand proteins, including kinases, phosphatases, and transmembrane receptors, 215 and they regulate the activity of more than 500 endogenous molecules by binding to them. 216 Since these endogenous molecules have critical roles in cell metabolism, cell cycle modulation, apoptosis, cell differentiation, transcription, signal transduction, and other vital biological events, interfering with their activities can have severe consequences in cells. 217,218 The 14-3-3 proteins are also called the "bridge proteins" of protein-protein interactions, as they bind transcription factors to form complexes that regulate the expression of associated genes. Owing to their important functions in cells, the 14-3-3 proteins have critical roles in various diseases, including nervous system diseases, arthritis, malignant tumors, and infectious diseases. 218-220 All 14-3-3 proteins have similar tertiary structures that can be divided into three parts: the N terminus, a conserved core region, and the C terminus. Each monomer consists of nine α-helices (αA-αI) located between the N and C termini, arranged in an antiparallel fashion into an L-shaped structure and separated by short loops. 221 Under certain conditions, the 14-3-3 proteins can assemble into stable homo- or heterodimers that can bind two ligands simultaneously. 222 Dimer formation is a necessary regulatory step for binding to the ligand protein. The dimer interface consists of αA from one monomer combined with αC and αD from the other monomer, forming highly conserved amphipathic grooves. The grooves contain both polar and nonpolar amino acid residues and carry a strong negative charge. In all 14-3-3 subtypes, the nonpolar residues are distributed mainly along the inner grooves, while the polar residues are located on the outer surface of the grooves. This distinctive distribution of nonpolar and polar residues enables the grooves to recognize target proteins with common characteristics. Fusicoccin A, a natural product, is the first reported stabilizer that regulates the interaction between the 14-3-3 protein and its ligand (Fig. 12). Fusicoccin A is a diterpenoid glycoside with a 5-8-5 ring system that binds to 14-3-3 receptors and stabilizes the complex formed by the 14-3-3 protein and plasma membrane ATPase (PMA). 223 Crystal structure studies showed that the 14-3-3 dimer forms a complex with 52 amino acids at the C terminus of the H + -ATPase, and Fusicoccin A fills the gap at the protein-protein interface of the complex. 223 The hydrophobic 5-8-5 ring is inserted into the binding channel of the 14-3-3 protein.
The bottom of the hydrophobic cavity contains Val153, Phe126, and Met130; a methyl or methoxy substituent is an important requirement for contacting the hydrophobic bottom. The 5-8-5 ring makes extensive hydrophobic contacts with Pro174, Ile174, Gly178, Leu225, Ile226, and Ile956 of the H + -ATPase. In addition, many water-mediated polar interactions form between Fusicoccin A and the 14-3-3 proteins.
Richter et al. 224 reported that pyrrolidone derivatives can stabilize the interaction between 14-3-3 and PMA; among these, compound 22 exhibited the highest activity (Fig. 12). The crystal structure of the pyrazole derivative in the 14-3-3/PMA complex showed that the rigid pyrazole moiety penetrates deeply into the protein-protein interaction interface, thereby enlarging the interface with PMA. Compound 22 (EC50 = 33 ± 4 μM) was less potent than the natural product Fusicoccin A (EC50 = 498 ± 65 nM), but it showed good selectivity, with no effect on the 14-3-3/C-Raf or 14-3-3/p53 interactions.
Stabilizers of S100 pentamer (small molecules)
The S100 protein was given its name because it is highly soluble in 100% saturated ammonium sulfate under neutral conditions. 225 To date, at least 20 members of the S100 protein family have been identified, including S100A1-A15, S100B, and S100P. 226 The S100 proteins mainly exist as homodimers, heterodimers, trimers, and tetramers in cells. 227 Previous studies have shown that the S100 proteins act as calcium sensors, regulating many intracellular and extracellular activities in a calcium-dependent manner. 228 The binding of calcium ions changes the S100 protein conformation and exposes its binding sites for target proteins, so the various biological functions of S100 proteins can be exerted by regulating calcium ions in vivo. 229 For example, S100 regulates protein phosphorylation, enzyme activity, cell proliferation, cell differentiation, and the induction of inflammatory reactions, and it protects cells from oxidative damage. 225,229 Studies have shown that high expression of S100A4 is associated with rheumatoid arthritis, kidney fibrosis, and cardiac hypertrophy. 230,231 Garrett et al. 232 reported several phenothiazines that block the activity of S100A4. One of these compounds, trifluoperazine, inhibits S100A4 function by stabilizing its inactive pentamer (Fig. 13b). A study of the complex structure discovered that trifluoperazine forms a pentameric complex with S100A4, with the two molecules in contact at the interface. Further analysis of the complex structure found that trifluoperazine binds to a hydrophobic patch that includes the side chains of Ile82, Met85, and Cys86 from one protomer and Phe89 as well as Phe93 from the other (Fig. 13a). The methylated piperazine ring of trifluoperazine interacts with Ser44, Phe45, Leu46, and Gly47 of the protomer. In addition, the carbonyl oxygen atom of Phe45 of the protomer forms a hydrogen bond with the nitrogen atom on the piperazine ring.

Fig. 13 Proteins and small molecule inhibitors of S100 pentamer. a The binding mode of trifluoperazine bound to S100 (PDB:3KO0). For clarity, only two adjacent S100A4 monomers and their contact interface are shown. b The chemical structure of a stabilizer of S100 pentamer
Stabilizers of influenza nucleoprotein protomers (small molecules)
Influenza virus is the pathogen that causes the acute infectious disease influenza. The influenza virus nucleoprotein (NP) is its main structural protein and the principal component of the nucleocapsid. 233 The ribonucleoprotein complex is composed of nucleoprotein, viral RNA fragments, and the three RNA-dependent RNA polymerase subunits PA, PB1, and PB2, and it participates in the transcription, replication, and assembly of the virus. As the main structural protein of the virus, the nucleoprotein contains many functional domains, such as nuclear localization sequences, RNA-binding domains, NP-NP-binding domains, and PB2-binding domains. All of these domains have vital functions that are indispensable for viral replication. Inhibiting nucleoprotein function may therefore have antiviral effects.
Gerritz et al. 234 reported a triazole compound that induces the formation of higher-order nucleoprotein oligomers, which prevents the nucleoproteins from entering the nucleus and thereby inhibits viral replication. Previous studies indicated that the binding sites of the triazole compound might be located in two regions of NP: one in the NP Y289/N309 region and the other in the NP Y52 region. Six molecules of compound 23 (Fig. 14) bridge two NP trimers (NP_A, NP_A′, NP_A″ and NP_B, NP_B′, NP_B″) to form a hexamer. Structural analysis of the compound 23-NP complex showed that compound 23 sits between the interfaces of the two trimers and stabilizes the complex. 234 Other distinctive features of the triazole-NP complex include a hydrophobic pocket formed between two NP monomers by residues Tyr289, Phe291, Tyr296, Tyr52, and Tyr313 of each monomer. The nitro moiety on the aromatic ring of compound 23 forms a π-π interaction with Tyr289 of NP_A.
The piperazine moiety of compound 23 forms a hydrophobic interaction with Tyr254 of NP_B. Further, the hydroxyl group of the NP_B Ser forms a hydrogen bond with the carbonyl group of compound 23.

Fig. 14 The chemical structures of stabilizers of influenza nucleoprotein
Stabilizers of microtubules (small molecules)
The microtubule is the main component of the cytoskeleton and is composed of α-tubulin and β-tubulin. Microtubules have a vital role in maintaining cell morphology, cell division, signal transmission, and material transport. 235 In living cells, microtubules assemble into spindles in the early stages of cell division; during mitosis, the spindle pulls the chromosomes toward the two poles of the two daughter cells, completing cell proliferation. Under physiological conditions, there is a dynamic equilibrium between microtubules and tubulin dimers. Microtubule stabilizers promote tubulin polymerization and block microtubule depolymerization, thereby destroying the dynamic instability of tubulin. This disrupts rapidly dividing tumor cells during mitosis, arrests the cell cycle, and in turn induces the tumor cells to undergo apoptosis. 236 Paclitaxel (Fig. 15) is the first approved microtubule stabilizer. Studies showed that paclitaxel binds β-tubulin, promotes tubulin polymerization, stabilizes the microtubule structure, hinders spindle formation, and leads to cell cycle arrest in the G2/M phase. 237,238 Zampanolide (Fig. 15) is a 20-membered macrolide isolated from the Tongan marine sponge Fasciospongia rimosa. 239 It arrests cells in mitosis and inhibits cell proliferation by stabilizing microtubules. Structural analysis shows that zampanolide converts the disordered, curled M-loop into an ordered helical structure through its side chain. 240 The M-loop is composed of eight amino acid residues in the middle region of the tubulin subunits and maintains the interaction between microtubule protofilaments. The change in the M-loop facilitates lateral contacts between the protofilaments and thus stabilizes the microtubules. In 1A9 cells, zampanolide exhibited an IC50 of 14.3 ± 2.4 nM, demonstrating its potential as a microtubule stabilizer. 241

Fig. 15 The chemical structures of stabilizers of microtubules
CONCLUSION
In recent years, the development of new PPI modulators has been an attractive goal in preclinical studies. 242,243 However, the design of modulators targeting PPIs still faces tremendous challenges. Beyond the challenges mentioned previously, namely PPI interfaces that are difficult to target, the lack of reference ligands, the ineffectiveness of classic medicinal chemistry approaches for PPI drug development, and the lack of guiding rules for PPI modulator design, the biggest obstacle is the scarcity of high-resolution structures of PPI complexes. Because structure-based PPI drug design depends on such structures, more resources should be devoted to the structural biology of the identified PPIs.
Inhibitors and stabilizers are two ways to modulate PPIs. Some of these modulators have been applied in the clinic, some have entered clinical trials, and some exist as lead compounds that require further structural optimization. Although compounds such as trifluoperazine and zampanolide exhibit PPI-stabilizing activity, the development of PPI stabilizers has not received as much attention as that of PPI inhibitors. 63 The difficulties in developing PPI stabilizers include an insufficient understanding of PPI mechanisms, the poor coverage of PPI-stabilizer chemical space in existing small-molecule libraries, and the extreme structural diversity of PPI stabilizers, which makes it difficult to establish criteria to guide the design of new stabilizers. Most of the identified PPI stabilizers are natural products; only a few have been obtained through rational design. High-throughput screening of natural products with PPI-stabilizing activity may therefore be the most promising route to lead compounds for PPI stabilizers.

[Fig. 13 Proteins and small-molecule inhibitors of the S100 pentamer. a The binding mode of trifluoperazine bound to S100 (PDB: 3KO0); for clarity, only two adjacent S100A4 monomers and their contact interface are shown. b The chemical structure of a stabilizer of the S100 pentamer.]
[Fig. 14 The chemical structures of stabilizers of influenza nucleoprotein.]
[Fig. 15 The chemical structures of stabilizers of microtubules.]
Compared with traditional small-molecule inhibitors, peptides exhibit higher affinity and specificity, making it easier for them to bind their target proteins. However, peptides face two major problems when used as drugs: instability under in vivo conditions and poor membrane permeability. Fortunately, new technologies are now available to counter both problems. To prevent the rapid degradation of peptides after they enter the body, chemical modifications can be applied to improve their stability. Regarding the poor membrane permeability, a class of short peptides has been identified in recent years that can penetrate biomembranes and mediate the transmembrane transport of macromolecules. 244 This has brought significant progress to the development of intracellularly acting peptides.
In recent years, remarkable progress has been made in the development of antibodies that modulate PPIs, especially monoclonal antibodies targeting the PD-1/PD-L1 interaction. However, owing to the high research costs, instability, and potentially severe immunogenic side effects of antibodies, increasing attention has been drawn to peptides and, in particular, small-molecule inhibitors. Compared with antibodies, classic small-molecule drugs have advantages such as lower research costs, diverse formulations, oral administration, and better penetration of the tumor microenvironment.
Decades ago, owing to the limited understanding of PPI properties and the very limited screening techniques available at the time, the modulation of PPIs was long recognized as one of the most challenging tasks in drug discovery. However, the rapid development of structural biology and its associated methodologies has advanced our understanding of PPI properties to a level we could not have imagined before. In addition, the rapid development of various high-throughput screening approaches makes the quick screening of PPI modulators possible. As a result, great progress has been made in the development of PPI modulators lately. In summary, opportunities and challenges coexist in the discovery of modulators targeting PPIs. In the future, with the emergence of new and better approaches to reveal the structures of protein complexes and the continued development of structural biology, it is believed that more small-molecule PPI modulators will be developed and enter the clinic to benefit patients.
Effect of thermal annealing Super Yellow emissive layer on efficiency of OLEDs
Thermal annealing of the emissive layer of an organic light emitting diode (OLED) is a common practice for solution-processable emissive layers, and reported annealing temperatures vary across a wide range. We have investigated the influence of annealing the emissive layer at different temperatures on the performance of OLEDs. Solution-processed polymer Super Yellow emissive layers were annealed at different temperatures and their performances were compared against OLEDs with a non-annealed emissive layer. We found a significant difference in the efficiency of OLEDs with different annealing temperatures. The external quantum efficiency (EQE) reached a maximum of 4.09% with the emissive layer annealed at 50 °C. The EQE dropped by ~35% (to 2.72%) for OLEDs with the emissive layers annealed at 200 °C. The observed performances of the OLEDs were found to be closely related to the thermal properties of polymer Super Yellow. The results reported here provide an important guideline for processing emissive layers and are significant for the OLED and broader organic electronics research communities.
importance to optimize device performance. The effect of processing conditions on efficiency and performance is also seen in other organic electronic devices such as organic photovoltaics and organic field effect transistors 14,15 .
Among solution-processable conjugated polymer emissive layers, the copolymer Super Yellow is one of the most widely used in organic light emitting devices including OLEDs 3, [16][17][18][19] . It has been shown that different solution processing techniques for fabricating Super Yellow thin films have limited influence on its photophysical and charge transporting properties 20 . However, the effect of thermally annealing Super Yellow thin films during OLED fabrication has not been systematically documented. As such, research reports on organic light emitting devices using polymer Super Yellow as the emissive layer show a wide range of thermal treatments, from non-annealed films to annealing at temperatures up to 200 °C 18,[21][22][23][24] . A brief summary of different annealing temperatures is listed in Table S1 (Supplementary Information). Furthermore, there is limited knowledge of the thermal properties of conjugated polymer Super Yellow.
In this work, we have systematically investigated the effect of annealing polymer Super Yellow films on the efficiency of OLEDs. Non-annealed Super Yellow films and films annealed at 50 °C, 100 °C, 150 °C and 200 °C were investigated. The results show an optimum annealing temperature of 50 °C, which produced OLEDs that reached a maximum EQE of 4.09%. This is significantly higher than the EQE of OLEDs with Super Yellow emissive layers annealed at 200 °C, which had a maximum efficiency of only 2.72%. The thermal properties of polymer Super Yellow and the optical and morphological thin film properties were investigated to explain the trend observed in OLED performance with respect to annealing temperature.
Results
OLED Performance. To investigate the effect of annealing Super Yellow films on efficiency of OLEDs, we fabricated five sets of OLEDs, sets A, B, C, D and E, with different annealing temperatures. A schematic of the device structure is shown in Fig. 1(a). The thermal annealing step was carried out after deposition of the Super Yellow layer. Set A has non-annealed Super Yellow films while sets B, C, D and E have Super Yellow films annealed at 50 °C, 100 °C, 150 °C and 200 °C, respectively. The chemical structure of polymer Super Yellow is shown in Fig. 1(b). In our devices, poly (3,4-ethylenedioxythiophene):polystyrene sulfonate (PEDOT:PSS) was used as a hole transport layer and Ba was used as an electron injection layer.
Normalized electroluminescence (EL) spectra of all sets of OLEDs are shown in Fig. 1(c). There is no visible influence on the EL spectra from annealing Super Yellow films at different temperatures. CIE co-ordinates of all sets of OLEDs at different brightness are shown in the inset of Fig. 1(c), which shows little or no variation in emitted colour among the OLEDs. Broadening of the EL spectrum and/or shifts in the emission peak upon annealing the emissive layer have been reported for other polymer systems 10,12,13 .
Current density and luminance with respect to voltage for the best device in each set are shown in Fig. 2(a). As seen in the figure, there are variations in both current density and luminance when the Super Yellow films are annealed at different temperatures. The current density increases monotonically at all voltages from set A to set D. This indicates increased bulk conductivity arising from a denser film of Super Yellow. Gather et al. have reported a similar observation of increasing current density with increasing annealing temperature for white OLEDs 25 . The trend of increasing current density with higher annealing temperature changes slightly with set E, which has almost the same current density as set D at lower voltages but higher current density at higher voltages. A similar trend to current density is seen for luminance with respect to voltage. Luminance increases from sets A to D, but for set E the luminance starts decreasing. The turn on voltages are given in Table S1 (Supplementary Information). The average turn on voltage of all sets varies between 2.10 V and 2.28 V. Current efficiency and EQE plots against luminance for the best devices are shown in Fig. 2(b). The trend in current efficiency and EQE is the same for all sets of OLEDs. The difference between efficiencies is more pronounced at lower brightness. Set B has the highest efficiency, reaching a maximum current efficiency of 12.03 cd/A and a maximum EQE of 4.09%, while set E has the lowest efficiency, with a maximum current efficiency and EQE of 8.07 cd/A and 2.72%, respectively. For sets A and B, the efficiency reaches a maximum at lower brightness and starts rolling off, while for sets C, D and E the efficiency rises at a slower rate and holds steady or has a lower roll-off. Averages of the maximum efficiencies, and averages of the current efficiencies and EQEs at 100 cd/m², 1000 cd/m² and 10,000 cd/m², are given in Table S1.
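Current efficiency is, by definition, luminance divided by current density; a minimal sketch of how the cd/A values quoted above follow from IVL data (the arrays below are hypothetical illustrations, not measured values from this work):

```python
import numpy as np

# Hypothetical IVL data for one OLED pixel (area 10 mm^2, as in Methods)
current = np.array([0.02, 0.12, 0.45, 1.10, 3.20]) * 1e-3     # A
luminance = np.array([40.0, 310.0, 1450.0, 4200.0, 11800.0])  # cd/m^2

area_m2 = 10e-6                          # 10 mm^2 pixel area in m^2
current_density = current / area_m2      # A/m^2
# Current efficiency in cd/A: luminance per unit current density
current_efficiency = luminance / current_density
print(np.round(current_efficiency, 2))
```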
From Fig. 2(a) and (b), we see that the current density is highest for set E; however, set D has the highest brightness, and the efficiency is maximum for set B. This implies that the OLED with the highest current density is not necessarily the brightest, and the brightest OLED does not necessarily have the highest efficiency. The trend in efficiency is easier to see from Fig. 2(c), which is a plot of brightness against current density. Higher luminance at the same current density implies higher efficiency. The inset of Fig. 2(c) is a magnification of the plot at lower current density. From the difference in luminance at the same current density between the different sets of OLEDs, we can infer that there are intrinsic properties of polymer Super Yellow that are affected by thermal annealing, and that these play a role in the overall performance of the device.
Super Yellow Properties. Photoluminescence (PL) intensities were measured for non-annealed and
annealed Super Yellow thin films to investigate the effect of annealing temperature on photophysical properties. As shown in Fig. 3(a), the non-annealed film has higher intensity than all annealed films. The intensities gradually decrease with increasing annealing temperature up to 150 °C, beyond which there is no change in intensity. There is no variation in the shape of the spectra with annealing temperature. A previous report on PPV based polymers 12 demonstrated that the shape of the PL spectra of PPV derivatives with large side groups is not affected by annealing temperature. Since Super Yellow is a PPV based co-polymer with large side groups, our results are in agreement with this report. Therefore, even though there is a difference in the intensity of emission, the emitted colour will remain stable. This is reflected in the EL of all sets of OLEDs, as well as in the constancy of the observed colour co-ordinates. For a deeper understanding of the thermal properties of polymer Super Yellow, we performed differential scanning calorimetry (DSC) and also carried out thermogravimetric analysis (TGA). Results of DSC and TGA are shown in Fig. 3(b) and (c), respectively. From the DSC scan we see that the glass transition temperature (T g ) of polymer Super Yellow is around 83 °C. The T g obtained from our DSC scan is significantly below the previously reported T g of Super Yellow (~150 °C) 26 . Given that the molecular weight of a polymer plays a critical role in determining T g , it is highly likely that there will be variations in T g between different batches of the polymer. For the Super Yellow used in this study, we determined the molecular weight by gel permeation chromatography (GPC), which showed a number average molecular weight (Mn) of 184.3 kDa with a polydispersity index (PDI) of 1.38. The gel permeation chromatogram is shown in Fig. S1 (Supplementary Information).
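The Mn and PDI quoted above follow from the standard moments of the molar-mass distribution recovered from the chromatogram; a minimal sketch of that bookkeeping (a hypothetical helper, not the vendor software used for the actual analysis):

```python
import numpy as np

def molecular_weight_averages(molar_mass, number_fraction):
    """Number-average (Mn), weight-average (Mw) and PDI of a distribution:
    Mn = sum(N*M)/sum(N), Mw = sum(N*M^2)/sum(N*M), PDI = Mw/Mn.
    molar_mass: slice molar masses (g/mol); number_fraction: relative moles
    per slice (e.g. detector signal converted via the calibration curve)."""
    M = np.asarray(molar_mass, dtype=float)
    N = np.asarray(number_fraction, dtype=float)
    Mn = np.sum(N * M) / np.sum(N)
    Mw = np.sum(N * M**2) / np.sum(N * M)
    return Mn, Mw, Mw / Mn
```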
On the other hand, TGA showed that polymer Super Yellow starts degrading at low temperatures and has lost almost 4% of its weight by 50 °C. From Fig. 3(c), we see three distinct regions of weight loss before 350 °C. The first region, which extends approximately up to ~50 °C, has a steeper slope than the second region, which extends up to ~175 °C. The third region extends up to ~350 °C and again has a shallower slope compared to the second region. By 200 °C, which is the highest temperature in our study, the polymer has lost ~9% of its weight. This weight loss might be due to the breaking of the long alkoxy chains substituted on the benzene rings and of the vinylene bonds between the phenylene units. Beyond 350 °C the polymer degrades rapidly on a path to complete degradation.
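The weight-loss figures quoted above can be read directly off a digitized TGA trace; a minimal sketch, using a hypothetical trace that roughly reproduces the reported ~4% and ~9% losses:

```python
import numpy as np

# Hypothetical digitized TGA trace: temperature (deg C), residual weight (%)
T = np.array([25, 50, 100, 150, 175, 200, 300, 350, 400])
w = np.array([100.0, 96.0, 94.7, 93.0, 92.2, 91.0, 88.5, 87.0, 50.0])

def weight_loss_at(temp_c):
    """Percent weight lost up to a given temperature (linear interpolation)."""
    return 100.0 - np.interp(temp_c, T, w)

print(weight_loss_at(50), weight_loss_at(200))   # ~4% and ~9%, as in the text
# The derivative trace (dTG) locates the boundaries between the loss regions:
dtg = np.gradient(w, T)
```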
To determine the influence of annealing temperature on the surface morphology of Super Yellow thin films in the device, atomic force microscopy (AFM) was performed on glass/ITO/PEDOT:PSS/Super Yellow films that were annealed at different temperatures. Typical AFM images of all films are shown in Fig. 4. We see little or no difference in the features of non-annealed and annealed films. Since the long decyloxy side chains substituted on the phenylene-vinylene conjugated backbone of polymer Super Yellow prevent aggregation, no significant changes in morphology are expected upon thermally annealing the films. AFM scans at different locations on the films showed little or no difference in the roughness of the films. The measured RMS roughnesses of the films are all in the range from 0.5 to 1.0 nm (Fig. S2, Supplementary Information). The smooth, featureless AFM morphology clearly reveals that the polymer is amorphous in nature and shows no sign of any ordering.
Discussion
From the results obtained, we can infer that both the electrical and optical properties of polymer Super Yellow are affected by thermal treatment, which consequently affects the overall performance of the OLEDs. The changes in electrical properties are evident from the difference in current density of the OLEDs at the same voltage. An increase in current density for devices annealed at higher temperatures has been observed in other polymer systems 10,12,25 . Even though the current density of our Super Yellow OLEDs at the same voltage increases with increasing annealing temperature, the efficiency starts to fall after annealing at 50 °C. This is related to the changes observed in the thermal and optical properties upon annealing Super Yellow. The efficiency of an OLED is dependent on the PL efficiency 3 , and our results show that the PL intensities of Super Yellow films decreased for annealed films. The reduction in PL intensity is directly related to the degradation of the polymer with annealing temperature. We see from Fig. 3(a) that the biggest drop in PL intensity is between the non-annealed film and the film annealed at 50 °C. This corresponds to the first weight loss region in the TGA of Fig. 3(c), which has the steepest slope. The PL intensity continues to drop for films annealed up to 150 °C. This corresponds to the second region identified in the TGA, which has a slower rate of weight loss compared to the first region. There is no noticeable difference in the PL intensity of films annealed at 150 °C and 200 °C. TGA suggests that the polymer degradation is much slower in this temperature range, consistent with the negligible difference in the PL intensities.
The DSC scan revealed a glass transition for Super Yellow at ~83 °C. Though this T g was determined for the bulk polymer, we expect the T g of the thin films in this study to be similar. The films used in this study had a thickness of 90 nm, which is in a film thickness range where the T g of polymer films approaches bulk properties 27,28 . Once a polymer film is heated beyond its T g , the polymer chains have increased mobility and may aggregate. However, from the shapes of the PL spectra of Super Yellow films, we see no signature of any aggregation even when the films are annealed at 200 °C. Normalised PL spectra are shown in Fig. S3 of the Supplementary Information. The long side chains of polymer Super Yellow restrict the polymer chains from aggregating. Aggregation of polymer chains is observed in PPV polymers with shorter side chains 12 . The lack of aggregation of the polymer chains in Super Yellow films is also evident from the AFM images of the polymer films, which reveal no discernible surface morphological changes with annealing temperature. This property makes Super Yellow OLEDs highly colour stable, as is seen in the shape of the EL spectra and the CIE colour co-ordinates.
In conclusion, we have demonstrated that the annealing temperature of Super Yellow films directly affects the overall performance of OLEDs. OLEDs with Super Yellow films annealed at 50 °C showed the highest efficiency. However, the maximum EQE and current efficiency dropped from 4.09% to 2.72% and from 12.03 cd/A to 8.07 cd/A, respectively, when the annealing temperature was increased from 50 °C to 200 °C. The difference in efficiencies observed is related to the thermal degradation and glass transition of the emissive material and not to the morphology of the emissive layer. We believe that the difference in efficiencies will be more pronounced in optically enhanced OLEDs such as microcavity OLEDs, which is a subject of further studies in our group.
Methods
OLED devices were fabricated on pre-patterned ITO substrates (Kintec). The ITO substrates were cleaned using Alconox and de-ionized water. The substrates were rinsed several times with de-ionised water before being ultra-sonicated in acetone and isopropanol consecutively for 10 minutes each. The substrates were dried by blowing with compressed air. PEDOT:PSS (Heraeus), filtered using a 0.45 μm PVDF filter, was spin coated on the ITO substrates at 5000 rpm for 30 seconds using a Laurell Technologies spin coater. After removing PEDOT:PSS from the contact pads, the films were annealed at 125 °C for 20 minutes to completely dry the film. The films were then transferred to a glove box system with low moisture and oxygen (O 2 < 0.1 ppm, H 2 O < 0.1 ppm). The Super Yellow solution in anhydrous toluene was prepared a day earlier and kept stirring at room temperature to ensure the polymer dissolved completely. The PEDOT:PSS films were spin coated with the Super Yellow solution using a Specialty Coating Systems spin coater at 1500 rpm for 30 seconds to obtain a thickness of ~90 nm. Once the Super Yellow films were removed from the contact pads, the samples were divided into five sets. One set was kept aside, while the remaining four sets were annealed at 50 °C, 100 °C, 150 °C and 200 °C, respectively, for 20 minutes each. This was followed by vacuum thermal evaporation of 6 nm of Ba (Sigma Aldrich) and 80 nm of Ag (Sigma Aldrich), without breaking vacuum, using a torpedo thermal evaporator at pressures of ~10 −6 mbar. Each OLED pixel had an area of 10 mm 2 . The devices were encapsulated using customized glass caps and UV curable epoxy (NOA 61, Norland Products). Current-Voltage-Luminance (IVL) characteristics of the devices were measured using a sourcemeter (B2901A, Keysight Technologies) interfaced with a luminance meter (CS-200, Konica Minolta). The electroluminescence spectra of the devices were recorded using a UV-vis spectrometer (USB4000, Ocean Optics). The EQE of the devices was calculated using methods for a Lambertian emitter.
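For a Lambertian emitter, the EQE can be computed from the luminance, the current density and the EL spectrum; the sketch below implements one common form of this calculation and is an illustrative reconstruction, not necessarily the exact procedure used here (the photopic response V(λ) must be supplied on the same wavelength grid as the spectrum):

```python
import numpy as np

H = 6.626e-34   # Planck constant [J s]
C = 2.998e8     # speed of light [m/s]
E = 1.602e-19   # elementary charge [C]
KM = 683.0      # luminous efficacy of radiation at 555 nm [lm/W]

def eqe_lambertian(luminance, current_density, wavelength_nm, el_spectrum, photopic):
    """EQE of a Lambertian emitter:
        EQE = (pi e L / (Km h c J)) * int(S(l) l dl) / int(S(l) V(l) dl),
    with L the luminance [cd/m^2], J the current density [A/m^2], S the
    (relative) EL spectrum and V the photopic luminosity function."""
    lam = np.asarray(wavelength_nm, dtype=float) * 1e-9   # wavelengths in m
    S = np.asarray(el_spectrum, dtype=float)
    V = np.asarray(photopic, dtype=float)
    spectral_ratio = np.trapz(S * lam, lam) / np.trapz(S * V, lam)
    return np.pi * E * luminance * spectral_ratio / (KM * H * C * current_density)
```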
AFM images were taken using an NT-MDT Solver in non-contact mode. Three to five images were acquired on different image areas on different days for each sample, using both small (< 5 micron) and large (> 5 micron) image areas. Images were processed using the WSxM software 29 for background subtraction. WSxM was also used to measure the RMS roughness of the surfaces. Photoluminescence spectra and intensities of Super Yellow films, and thickness measurements of all films, were obtained using a Varian fluorescence spectrophotometer (Cary Eclipse) with an excitation wavelength of 400 nm and a Bruker Dektak XT profilometer, respectively. Thermal analysis was performed using a Pegasus Q500 TGA thermogravimetric analyser under a nitrogen atmosphere at a heating rate of 5 °C/min. Differential scanning calorimetry (DSC) was conducted under nitrogen using a Chimaera Q100 DSC. The sample was heated at 10 °C/min from 25 °C to 300 °C. Gel Permeation Chromatography (GPC) against polystyrene standards was performed in chloroform at 30 °C and a flow rate of 1 mL/min on a Waters GPC assembly equipped with a Waters 1515 isocratic HPLC pump, a Waters 2707 autosampler with a 100 μL injection loop, and a Waters 2487 dual wavelength absorbance detector analysed at 254 nm, in series with a Waters 2414 refractive index detector at 30 °C. Three consecutive Waters Styragel columns (HR5, HR4, and HR1, all 7.8 × 300 mm, 5 μm particle size), preceded by a Waters Styragel guard column (WAT054405, 4.6 × 30 mm, 20 μm particle size), were used during the analysis. A typical concentration of 1 mg polymer dissolved in 1 mL tetrahydrofuran was used to run GPC samples.
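RMS roughness as reported above is the standard deviation of the height map after background subtraction; a minimal sketch of this calculation (a least-squares plane fit stands in for the WSxM background-subtraction step):

```python
import numpy as np

def rms_roughness(height_map):
    """RMS roughness of an AFM height map (e.g. in nm) after subtracting a
    least-squares background plane."""
    z = np.asarray(height_map, dtype=float)
    ny, nx = z.shape
    x, y = np.meshgrid(np.arange(nx), np.arange(ny))
    # Fit z = a*x + b*y + c and remove it as the background
    A = np.column_stack([x.ravel(), y.ravel(), np.ones(z.size)])
    coeff, *_ = np.linalg.lstsq(A, z.ravel(), rcond=None)
    residual = z.ravel() - A @ coeff
    return np.sqrt(np.mean(residual**2))
```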
The Frontier Fields Lens Modeling Comparison Project
Gravitational lensing by clusters of galaxies offers a powerful probe of their structure and mass distribution. Deriving a lens magnification map for a galaxy cluster is a classic inversion problem and many methods have been developed over the past two decades to solve it. Several research groups have developed techniques independently to map the predominantly dark matter distribution in cluster lenses. While these methods have all provided remarkably high precision mass maps, particularly with exquisite imaging data from the Hubble Space Telescope (HST), the reconstructions themselves have never been directly compared. In this paper, we report the results of comparing various independent lens modeling techniques employed by individual research groups in the community. Here we present for the first time a detailed and robust comparison of methodologies for fidelity, accuracy and precision. For this collaborative exercise, the lens modeling community was provided simulated cluster images -- of two clusters Ares and Hera -- that mimic the depth and resolution of the ongoing HST Frontier Fields. The results of the submitted reconstructions with the un-blinded true mass profile of these two clusters are presented here. Parametric, free-form and hybrid techniques have been deployed by the participating groups and we detail the strengths and trade-offs in accuracy and systematics that arise for each methodology. We note in conclusion that lensing reconstruction methods produce reliable mass distributions that enable the use of clusters as extremely valuable astrophysical laboratories and cosmological probes.
INTRODUCTION
Gravitational lensing has become an increasingly popular method to constrain the matter distribution in clusters. Strong lensing, as it turns out, is particularly suited to probing the dense central regions of clusters. Constraining the structure of the cluster cores and their density profiles is critical to our understanding of structure formation; probing the nature of dark matter and fully comprehending the interplay between baryons and dark matter. Lensing by massive clusters has proved to be an invaluable tool to study their properties, in particular the detailed dark matter distribution within the cluster, as well as the faint, distant background population of galaxies that they bring into view. The magnification provided by lensing therefore affords the determination of the luminosity function of these high-redshift sources down to faint luminosities, thus helping inventory and identify galaxies that might have re-ionized the universe (Vanzella et al. 2012, 2014, 2015, 2016; Bouwens et al. 2014, 2015; Robertson et al. 2015; Huang et al. 2016; Livermore et al. 2016).
Over the past two decades the Hubble Space Telescope (HST) has revolutionized the study of cluster lenses; and, with the deployment of ever more sensitive cameras from the Wide-Field-Planetary-Camera2 (WFPC2) to the Advanced-Camera-for-Surveys (ACS), the data have become exquisite in terms of resolution. By 2005, mass distributions derived from lensing data were available for about 30 clusters. More recently, galaxy clusters were the primary targets of two multi-cycle treasury programs of the Hubble Space Telescope (HST) aiming at finding signatures of strong gravitational lensing in their cores. These are the "Cluster Lensing And Supernova survey with Hubble" (CLASH, PI: M. Postman, GO 12065; see Postman et al. 2012) and the ongoing Frontier Fields Initiative (FFI, PI: Lotz).
As part of the Frontier Fields program, HST is currently collecting data of unprecedented depth on fields that harbor six massive clusters that act as powerful gravitational lenses. This program utilizes orbits under the Director's Discretionary (DD) observing time. The FFI is a revolutionary deep field observing program aimed at peering deeper into the universe than ever before, not only to better understand these dramatic lenses and their properties, but also to simultaneously bring into view faint, distant background galaxies that would otherwise remain unseen without the magnification provided by the foreground lens. These high redshift sources that can be accessed due to gravitational lensing likely provide a first glimpse of the earliest galaxies to have formed in the universe, and offer a preview of coming attractions that await unveiling by the upcoming James Webb Space Telescope. These Frontier Fields uniquely combine the power of HST with that of nature's gravitational telescopes: the high magnifications produced by these massive clusters of galaxies.
Utilizing both the Wide Field Camera 3 (WFC3) and the ACS in parallel in this program, HST has been producing the deepest observations of clusters and the background galaxies that they lens, as well as observations of flanking blank fields located near these selected clusters. These images have revealed the presence of distant galaxy populations that are ∼10-100 times fainter than any previously observed (Livermore et al. 2016). The magnifying power of these clusters is proving invaluable in helping improve our statistical understanding of the early galaxies that are likely responsible for the re-ionization of the universe, and is providing unprecedented measurements of the spatial distribution of dark matter within massive clusters. The six clusters span the redshift range z = 0.3 - 0.55. The program devotes 140 orbits to each cluster/blank field pair, achieving a limiting AB magnitude of m_AB ≈ 28.7 - 29 mag in the optical (ACS) and near-infrared (WFC3) bands.
The fundamental ingredient for exploiting the science outlined above is the construction of robust and reliable lens models. The ongoing FFI is an unprecedented test-bed for lens modeling techniques. Given the depth of these HST observations, hundreds of multiple images, covering a broad redshift range, have been newly unveiled behind each of the observed clusters (Jauzac et al. 2014, 2015; Grillo et al. 2015; Diego et al. 2015b; Wang et al. 2015; Kawamata et al. 2016; Hoag et al. 2016). In a rather unique case, even time delay measurements from a serendipitously multiply imaged supernova "Refsdal", observed by the GLASS team (Treu et al. 2015) in the FFI cluster MACSJ1149.5+2223, became available for testing and refining the lens models (Kelly et al. 2015; Treu et al. 2016; Rodney et al. 2016). Most importantly, FFI data were made publicly available immediately. Five teams were contracted by STScI to produce gravitational lensing models for all six Frontier Fields clusters to be made available to the astronomical community at large to enable wide use of this incredible data-set. All teams share the latest observational constraints, including positions and redshifts of multiple images 1 , before working independently to produce lensing models which are also made publicly available 2 . Several additional groups have also been working on the data and producing mass models. In short, the whole community of strong lensing modelers has been actively collaborating to maximally exploit the FFI data.
The process of converting the observed strong lensing constraints into matter distributions is called lens inversion. Several groups have developed algorithms which perform the lens inversion employing different methodologies and using various combinations of input constraints. These include other tracers of the cluster gravitational potential, such as weak lensing, galaxy kinematics, and the X-ray emission from the Intra-Cluster-Medium (see e.g., Bradač et al. 2005; Donnarumma et al. 2011; Medezinski et al. 2013; Newman et al. 2013; Umetsu 2013; Umetsu et al. 2014; Merten et al. 2015). Over the years, it has become clear that while all methods are equally well motivated, they do not always converge to consistent reconstructions, even when applied to the same lens system (e.g., Zitrin & Broadhurst 2009; Smith et al. 2009). In several cases strong-lensing masses for the same cluster lens were found to be in tension (by a factor 2-3) with other independent measurements, based e.g. on the modeling of the X-ray emission by the intracluster gas (Ebeling et al. 2009; Richard et al. 2010; Donahue et al. 2014). The constraints from strong lensing need to be combined and fit simultaneously with stellar kinematic data and with weak lensing measurements (Newman et al. 2011) to improve accuracy. Using constraints on the mass profile arising from probes other than lensing also helps break the mass-sheet degeneracy. Finally, in several clusters, lensing data alone seem unable to discriminate between various density profiles (Shu et al. 2008). Therefore, in some clusters the data favor steep inner density profile slopes, while in others they favor extremely shallow density profiles. This is in contrast with the predictions of the cold-dark-matter paradigm (Sand et al. 2005; Newman et al. 2013, but see also Meneghetti et al. 2005), where a universal density profile is expected with minor modifications due to the aggregation of baryons in the inner regions.

[Footnote 1: The redshifts are mainly obtained in the framework of the GLASS and CLASH-VLT programs (Treu et al. 2015; Grillo et al. 2015) and with the integral field spectrograph MUSE on the VLT (see e.g. Karman et al. 2015).]
[Footnote 2: https://archive.stsci.edu/prepds/frontier/lensmodels/]
In this paper, we challenge these lens inversion methods to reconstruct synthetic lenses with known input mass distributions. The goals of this exercise are twofold. Firstly, we aim to provide concrete feedback to the lens modelers on how they may improve the performance of their codes. And secondly, we aim to provide potential users of the FFI models and the astronomical community at large a sharper, more quantitative view of how robustly specific properties of lenses are recovered and the sources of error that plague each method. Such a comparison with numerical simulations and contrasting of lens mapping methodologies has not been undertaken before.
The outline of the paper is as follows. In Sect. 2, we outline the lens modeling challenge. In Sect. 3, we briefly introduce the various lens modeling techniques that were employed by participants in this study. In Sect. 4, we discuss the results of the reconstructions. Sect. 5 is dedicated to the detailed comparison of the independent modeling techniques through suitably defined metrics. Finally, in Sect. 7, we summarize the main results of this study and present our conclusions.
THE CHALLENGE
The challenge that we presented to various groups of lens modelers comprised analyzing simulated observations of two mock galaxy clusters and producing magnification and mass maps for them. In generating these simulated (mock) clusters, we attempted to reproduce the depth, color, and spatial resolution of HST observations of the FFI cluster images, including the gravitational lensing effects. While comparisons of lensing reconstructions of real clusters using the same input observational constraints strongly indicate that currently developed lens inversion techniques are robust (Grillo et al. 2015), the analysis of simulated data involving a large degree of realism, where the true underlying mass distribution is known, can help the lens reconstruction community greatly improve their understanding of the modeling systematics. This view of using mocks to calibrate methodologies is widely supported by a number of extensive investigations carried out in the last few years.
There are multiple advantages to such calibration exercises. First of all, we are able to produce reasonably realistic cluster mass distributions in simulations (although up to some limit) that can be used as lensing clusters. Building on an extensive analysis of N-body/hydrodynamical simulations to improve the knowledge of strong lensing clusters, we have identified the important properties of the lenses which need to be taken into account during the construction of a lens model: cluster galaxies (Meneghetti et al. 2000, 2003a), ellipticity and asymmetries (Meneghetti et al. 2003b), substructures, baryonic physics (Puchwein et al. 2005; Killedar et al. 2012), and the dynamical state (Torri et al. 2004). In fact, we can simulate the lensing effects of galaxy clusters accounting for all these important properties, using both state-of-the-art hydrodynamical simulations and semi-analytic models. Second, we have developed tools to produce mock observations of these simulated lenses. Our image simulator SkyLens (Meneghetti et al. 2008, 2010a) can mimic observations taken with virtually any telescope, but here we have used it primarily to produce simulations of HST images taken with both the ACS and the WFC3. In a small-scale realization of the experiment that we present here, we applied the lens inversion techniques to a limited number of simulated observations of our mock lenses. By doing so, we highlighted some key limits of the strong lensing methods. For example, we note that strong lensing alone is powerful at constraining the cluster mass within the Einstein radius (∼100 kpc for a massive cluster), but the addition of further constraints at larger radii is required in order to appropriately measure the shape of the density profiles out to the cluster outskirts (Meneghetti et al. 2010a; Rasia et al. 2012). In what follows, we describe in detail how we generated the mock data-set for the challenge, and what kind of high-level products were distributed to the participants.
Generation of mock cluster lenses
For the exercise reported here, we generated mass distributions for two massive cluster lenses. These two lenses were generated following substantially different approaches, as outlined below. In order to easily distinguish them, we assigned them the names Ares and Hera.
Ares
The mass distribution of the first simulated galaxy cluster, Ares, is generated using the semi-analytic code MOKA 3 (Giocoli et al. 2012a). This software package builds up mock galaxy clusters by treating them as being comprised of three components: (i) the main dark matter halo -assumed to be smooth, triaxial, and well fit with an NFW profile, (ii) cluster members -subhaloes, distributed to follow the main halo and to have a truncated Singular Isothermal Sphere profile (Hernquist 1990a) -and (iii) the brightest cluster galaxy (BCG) modeled with a separate Hernquist (1990a) profile. The axial ratios, a/b and a/c, of the main halo ellipsoid are randomly drawn from the Jing & Suto (2002) distributions requiring abc = 1. We note that the observed FFI clusters typically consist of merging sub-clusters that cause them to be particularly efficient and spectacular lenses.
In the attempt to generate a mass distribution that adequately replicates the complexity of the Frontier Fields clusters, Ares was produced by combining two large-scale mass distributions at z = 0.5. The two clumps have virial masses M_1 = 1.32 × 10^15 h^-1 M_⊙ and M_2 = 8.8 × 10^14 h^-1 M_⊙, and their centers are separated by ∼400 h^-1 kpc. In each of the two cases, we start by assigning the same projected ellipticity to the smooth component, to the stellar density, and to the subhalo spatial distribution. This is motivated by the hierarchical clustering scenario, wherein the BCG and the substructures are related to the cluster as a whole and retain memory of the directions of accretion of repeated merging events (Kazantzidis et al. 2004, 2008, 2009; Fasano et al. 2010). In order to introduce some level of asymmetry, we then added a small twist to the surface density contours. The degree of twisting adopted reproduces the variations of the orientation of iso-surface-density contours measured in numerically simulated galaxy clusters (see e.g. Meneghetti et al. 2007). The two large-scale halos combined to create Ares are nearly aligned. The difference between the position angles of the two clumps is ∼21 degrees. The central region of Ares contains large baryonic concentrations to mimic the presence of BCGs. We account for the possible adiabatic contraction of the dark matter caused by the presence of the BCGs in Ares (although several empirical studies find no evidence of adiabatic contraction on these scales, see e.g. Newman et al. 2013; Dutton & Treu 2014). The adiabatic contraction was implemented as described by Keeton & Madau (2001) for a Hernquist (1990b) profile. For further details of the MOKA code we refer to Giocoli et al. (2012a,b). MOKA also takes into account the correlations between assembly history and various halo properties that are expected in CDM: (i) less massive haloes typically tend to be more concentrated than more massive ones, and (ii) at fixed mass, earlier-forming haloes are more concentrated and contain fewer substructures. These recipes have been implemented in consonance with recent results from numerical simulations. In particular, we assume the Zhao et al. (2009) relation to link the concentration to mass and the Giocoli et al. (2010) relation for the subhalo abundance. When substructures are included, we define the smooth mass as M_smooth = M_vir − Σ_i m_sub,i, and its concentration c_s is defined such that the total (smooth + clumps) mass density profile has a concentration c_vir equal to that of a halo of the total virial mass.
Throughout the paper the quoted masses and concentrations are evaluated at the virial radius, M_vir and c_vir. For these definitions we adopt derivations from the spherical collapse model:
\[
M_{\rm vir} = \frac{4\pi}{3}\,\Delta_{\rm vir}\,\Omega_0\,\rho_c\,R_{\rm vir}^3 ,
\]
where ρ_c = 2.77 × 10^11 h^2 M_⊙ Mpc^-3 represents the critical density of the Universe, Ω_0 = Ω_m(0) is the matter density parameter at the present time, Δ_vir is the virial overdensity (Eke et al. 1996; Bryan & Norman 1998) and R_vir symbolizes the virial radius of the halo, i.e. the distance from the halo centre that encloses the desired density contrast; and
\[
c_{\rm vir} = \frac{R_{\rm vir}}{r_s} ,
\]
with r_s the radius at which the NFW profile approaches a logarithmic slope of −2. The concentrations assigned to the two main mass components of Ares are c_1 = 5.39 and c_2 = 5.46, respectively. Ares is generated in a flat ΛCDM cosmological model with matter density parameter Ω_m,0 = 0.272. The Hubble parameter at the present epoch is H_0 = 70.4 km/s/Mpc. In the left panels of Fig. 1, we show the convergence maps of Ares, calculated for a source redshift z_s = 9. The cluster is elongated in the SE-NW direction and contains several massive substructures. Since Ares was generated using semi-analytical methods, the small-scale substructures of its mass distribution are very well resolved, as shown in the bottom-left panel. The substructure mass function is shown in the right panel of Fig. 2. As expected, this scales as N(M) ∝ M^-0.8, consistent with the results of numerical simulations of the CDM model (Giocoli et al. 2010). The convergence profile, measured from the center of the most massive clump, is shown in the left panel of Fig. 2.
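A minimal sketch of how these definitions are used in practice, inverting the virial-mass relation reconstructed above for R_vir and obtaining the NFW scale radius from the concentration (the overdensity value below is illustrative; the exact Δ_vir depends on the adopted spherical-collapse convention):

```python
import numpy as np

RHO_C = 2.77e11   # critical density [h^2 Msun / Mpc^3]

def virial_radius(m_vir, delta_vir, omega_0=0.272):
    """Invert M_vir = (4 pi / 3) Delta_vir Omega_0 rho_c R_vir^3.
    m_vir in h^-1 Msun; returns R_vir in h^-1 Mpc."""
    return (3.0 * m_vir / (4.0 * np.pi * delta_vir * omega_0 * RHO_C)) ** (1.0 / 3.0)

# Example: the main clump of Ares, with an illustrative Delta_vir = 200
r_vir = virial_radius(1.32e15, delta_vir=200.0)
r_s = r_vir / 5.39    # NFW scale radius from the quoted concentration c_1
```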
In the image simulations described later we also include the light emission from cluster members. MOKA populates the dark matter sub-halos with galaxies using the Halo Occupation Distribution (HOD) technique. Stellar masses and B-band luminosities are subsequently assigned to each galaxy according to the mass of the dark matter (sub-)halo within which it formed, following Wang et al. (2006). The morphological type and the SED of each galaxy are then defined on the basis of the stellar mass so as to reproduce the observed morphology-density and morphology-radius relations in galaxy clusters (e.g. van der Wel 2008; Ma et al. 2010).
Hera
The mass distribution of the second galaxy cluster, Hera, is instead directly derived from a high-resolution N-body simulation of a cluster-sized dark matter halo. More precisely, Hera is part of the set of simulated clusters presented in Planelles et al. (2014). The cluster halo was first identified in a low-resolution simulation box with a periodic comoving size of 1 h^-1 Gpc for a flat ΛCDM model with present matter density parameter Ω_m,0 = 0.24 and baryon density parameter Ω_b,0 = 0.04. The Hubble constant adopted was H_0 = 72 km/s/Mpc and the normalisation of the matter power spectrum σ_8 = 0.8. The primordial power spectrum of the density fluctuations is P(k) ∝ k^n with n = 0.96. The parent simulation followed 1024^3 collisionless particles in the box. Hera was identified at z = 0 using a standard Friends-of-Friends (FoF) algorithm, and its Lagrangian region was resimulated at higher resolution employing the Zoomed Initial Conditions code (ZIC; Tormen et al. 1997). The resolution is progressively degraded outside this region to save computational time while still providing a correct description of the large-scale tidal field. The Lagrangian region was taken to be large enough to ensure that only high-resolution particles are present within five virial radii of the cluster.
The re-simulation was then carried out using the TreePM-SPH GADGET-3 code, a newer version of the original GADGET-2 code by Springel (2005) that adopts a more efficient domain decomposition to improve the workload balance. Although the parent Hera halo exists in several flavors in various simulation runs (with different assumptions about the nature of the dark matter particles, and including several baryonic processes), the simulation used in this paper is the version with collisionless dark matter particles only. This has allowed us to increase the mass resolution by about an order of magnitude compared to the hydrodynamical versions of the simulation. The particle mass is m_DM = 10^8 h^-1 M_⊙. Therefore, the virial region of Hera is resolved with ∼10 million particles, with a total cluster mass of M = 9.4 × 10^14 h^-1 M_⊙, comparable to that inferred for observed cluster lenses. The redshift of this halo is z_l = 0.507. During the re-simulation, the Plummer-equivalent softening length for the gravitational force in the high-resolution region is fixed to ε_Pl = 2.3 h^-1 kpc (physical) at z < 2, while being fixed to ε_Pl = 6.9 h^-1 kpc (comoving) at higher redshift.
The properties of cluster galaxies used for creating the simulated observations are derived from Semi-Analytic-Methods (SAM) of galaxy formation (De Lucia & Blaizot 2007). The process starts by using the algorithm SUBFIND (Springel et al. 2001) to decompose each FOF group previously found in the simulation into a set of disjoint substructures. These are identified as locally over-dense regions in the density field of the background halo. Only substructures that retain at least 20 bound particles after a gravitational unbinding procedure are considered to be genuine substructures. Merging histories are constructed for all self-bound structures, using the same post-processing algorithm that has been employed for the Millennium Simulation (Springel et al. 2006). The merger-tree is then used to construct a mock catalog of galaxies. The evolution of the galaxy population is described by a modified version of the semi-analytic model presented in De Lucia & Blaizot (2007), that included the implementation of the generation of Intra-Cluster Light described in Contini et al. (2014), given by the combination of Model Tidal Radius and Merger channels presented in that paper.
Note that, even in the case of Hera, the galaxy positions trace the mass reasonably well. Several reconstruction methods assume that light traces mass, a reasonable assumption which is thus satisfied both in Ares and in Hera.
To increase the level of uncertainty, the galaxy shapes and orientations are chosen to be uncorrelated with the underlying mass distribution.
The convergence map of Hera, with its complex morphology and abundance of substructures, is shown in the right panels of Fig. 1. The convergence profile and the substructure mass function are displayed in Fig. 2. Compared to Ares, the small-scale structures of Hera are smoother, as they are less well resolved. Nevertheless, the substructure mass function scales very similarly with halo mass. As in the case of Ares, Hera has a bi-modal mass distribution. A massive substructure (M ∼ 5 × 10^13 h^-1 M_⊙) is located ∼30" (∼130 h^-1 kpc) from the cluster center, producing a secondary peak in the convergence map and elongating the iso-density contours in the southwest-northeasterly direction.
Ray-tracing
In order to generate lensing effects in the simulated images, it is necessary to compute the deflections produced by the cluster. This allows us to then use ray-tracing methods to map the surface-brightness distribution of the sources on the camera of our virtual telescope, which is HST in this case. In practice, we shoot a bundle of light rays through a dense grid covering the field-of-view (FOV), starting from the position of the observer. Then, we use the computed deflection angles to trace the path of the light back to the sources. When simulating HST observations, we compute the deflection angles on a regular grid of 2048 × 2048 points, covering a FOV of 250" × 250" centered on the cluster.
In the case of Ares, MOKA produces a map of the convergence, κ(θ). This can be converted into a map of the deflection angles, α(θ), by solving the equation
\[
\vec{\alpha}(\vec{\theta}) = \frac{1}{\pi} \int \kappa(\vec{\theta}\,') \, \frac{\vec{\theta} - \vec{\theta}\,'}{|\vec{\theta} - \vec{\theta}\,'|^2} \, d^2\theta' .
\]
Since this is a convolution of the convergence, κ(θ), with the kernel function
\[
\vec{K}(\vec{\theta}) = \frac{1}{\pi} \, \frac{\vec{\theta}}{|\vec{\theta}|^2} ,
\]
this task can be achieved numerically by means of Fast-Fourier-Transform (FFT) methods. To do so, we make use of the FFT routines implemented in the gsl library.
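A minimal sketch of this FFT-based convolution (a simplified stand-in for the gsl-based implementation, using zero padding to suppress the wrap-around of the circular convolution):

```python
import numpy as np

def deflection_from_convergence(kappa, fov_arcsec):
    """Deflection-angle maps from a convergence map via FFT convolution with
    the kernel K(theta) = theta / (pi |theta|^2); kappa is a square 2D array
    covering a field of view of side fov_arcsec."""
    n = kappa.shape[0]
    pix = fov_arcsec / n                       # pixel scale [arcsec]
    m = 2 * n                                  # zero-padded grid size
    x = (np.arange(m) - m // 2) * pix
    tx, ty = np.meshgrid(x, x, indexing="ij")
    r2 = tx**2 + ty**2
    r2[m // 2, m // 2] = np.inf                # kernel vanishes at the origin
    kx = tx / (np.pi * r2)                     # kernel components [1/arcsec]
    ky = ty / (np.pi * r2)

    kpad = np.zeros((m, m))
    kpad[:n, :n] = kappa
    fk = np.fft.fft2(kpad)
    # Convolution theorem; ifftshift moves the kernel origin to pixel (0, 0)
    ax = np.real(np.fft.ifft2(fk * np.fft.fft2(np.fft.ifftshift(kx)))) * pix**2
    ay = np.real(np.fft.ifft2(fk * np.fft.fft2(np.fft.ifftshift(ky)))) * pix**2
    return ax[:n, :n], ay[:n, :n]              # deflection angles [arcsec]
```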
In the case of Hera, the mass distribution of the cluster is described by a collection of dark-matter particles. Instead of mapping them on a grid to construct the convergence map, we use our consolidated lensing simulation pipeline (see e.g. Meneghetti et al. 2010b, and references therein) to compute the deflections. To briefly summarize, the procedure involves the following steps: • We project the particles belonging to the halo along the desired line of sight onto the lens plane. To select particles, we define a slice of the simulated volume around the cluster, corresponding to a depth of 10 h^-1 Mpc; • Starting from the position of the virtual observer, we trace a bundle of light rays through a regular grid of 2048 × 2048 positions covering a region around the halo center on the lens plane. In the case of strong lensing simulations (e.g. for HST observations) we restrict our analysis to a region of 1 × 1 h^-2 Mpc^2. In the case of simulations extending into the weak lensing regime (e.g. for Subaru-like observations), the grid of light rays covers a much wider area (∼8 × 8 h^-2 Mpc^2); • Using our code GLFAST (Meneghetti et al. 2010a), we compute the total deflection α(x) at each light-ray position x, accounting for the contributions from all particles on the lens plane. Even in the case of strong-lensing simulations, where light rays are shot through a narrower region of the lens plane, the deflections account for all particles projected out to ∼4 h^-1 Mpc from the cluster center. The code is based on a tree algorithm, where the contributions to the deflection angle of a light ray from nearby particles are summed directly, while those from distant particles are calculated using higher-order Taylor expansions of the deflection potential around the light-ray positions.
• The resulting deflection field is used to derive several relevant lensing quantities. In particular, we use the spatial derivatives of α(θ) to construct the shear maps, γ = (γ_1, γ_2), defined as
\[
\gamma_1(\vec{\theta}) = \frac{1}{2} \left( \frac{\partial \alpha_1}{\partial \theta_1} - \frac{\partial \alpha_2}{\partial \theta_2} \right) , \qquad
\gamma_2(\vec{\theta}) = \frac{\partial \alpha_1}{\partial \theta_2} = \frac{\partial \alpha_2}{\partial \theta_1} .
\]
The convergence, κ(θ), may also be reconstructed as
\[
\kappa(\vec{\theta}) = \frac{1}{2} \left( \frac{\partial \alpha_1}{\partial \theta_1} + \frac{\partial \alpha_2}{\partial \theta_2} \right) .
\]
The lensing critical lines yield formally infinite magnification for a given source redshift. They are defined as the curves along which the determinant of the lensing Jacobian is zero (e.g. Schneider et al. 1992):
\[
\det A(\vec{\theta}) = (1 - \kappa)^2 - |\gamma|^2 = 0 .
\]
In particular, the tangential critical line is defined by the condition (1 − κ − |γ|) = 0, whereas the radial critical line corresponds to the line along which (1 − κ + |γ|) = 0. In the following sections, we will often use the term Einstein radius to refer to the size of the tangential critical line. As discussed in Meneghetti et al. (2013), there are several possible definitions of the Einstein radius. Here, we adopt the effective Einstein radius definition (see also Redlich et al. 2012) given by
\[
\theta_E = \sqrt{\frac{S}{\pi}} ,
\]
where S is the area enclosed by the tangential critical line and d_L is the angular diameter distance to the lens plane (used to convert θ_E into a physical radius, R_E = d_L θ_E).

[Figure 2. Key properties of Ares and Hera (blue and red colors, respectively). Left panel: convergence profiles (for source redshift z_s = 9); in both cases, the center has been chosen to coincide with the most massive dark matter clump in the simulation. Right panel: sub-halo mass function, built considering all sub-halos within 1 h^-1 Mpc from the center of the most massive clump.]
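A minimal sketch of these derivations on a pixelised deflection field, using finite differences for the derivatives and estimating the critical-line area by thresholding the tangential eigenvalue (illustrative only; production codes resolve the critical curves more carefully):

```python
import numpy as np

def lensing_maps(alpha1, alpha2, pix):
    """Convergence, shear modulus and tangential eigenvalue (1 - kappa - |gamma|)
    from deflection-angle maps, following the derivative definitions above.
    alpha1/alpha2 vary along axes 0/1; pix is the pixel scale in the same
    angular units as the deflections."""
    a11 = np.gradient(alpha1, pix, axis=0)
    a12 = np.gradient(alpha1, pix, axis=1)
    a21 = np.gradient(alpha2, pix, axis=0)
    a22 = np.gradient(alpha2, pix, axis=1)
    kappa = 0.5 * (a11 + a22)
    gamma = np.hypot(0.5 * (a11 - a22), 0.5 * (a12 + a21))
    return kappa, gamma, 1.0 - kappa - gamma

def effective_einstein_radius(lambda_t, pix):
    """theta_E = sqrt(S / pi), with S estimated as the area of the region
    where the tangential eigenvalue is negative (inside the critical line)."""
    S = np.count_nonzero(lambda_t < 0.0) * pix**2
    return np.sqrt(S / np.pi)
```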
SkyLens
We simulate observations of galaxy cluster fields using the code SkyLens, which is described in detail in Meneghetti et al. (2008) and in Meneghetti et al. (2010a). The creation of the simulated images involves the following steps: (i) we generate a past light-cone populated with source galaxies resembling the luminosity and redshift distributions of the galaxies in the Hubble Ultra-Deep-Field (HUDF; Coe et al. 2006); (ii) we model the morphologies of the sources using shapelet decompositions of the galaxies in the HUDF (Melchior et al. 2007); their spectral energy distributions were obtained as part of the photometric redshift measurements of these galaxies described in Coe et al. (2006); (iii) the deflection fields of the lensing clusters are used to trace a bundle of rays from a virtual CCD, resembling the properties of the Advanced Camera for Surveys (ACS) or of the Wide Field Camera 3 (WFC3), back to the sources; (iv) by associating each pixel of the virtual CCD with the emitting elements of the sources, we reconstruct their lensed surface brightness distributions on the CCD; (v) we model the surface brightness distributions of the cluster galaxies using single or double Sersic models (Sérsic 1963), obtained by fitting real cluster galaxies in a set of low to intermediate redshift clusters (Gonzalez et al. 2005); the Brightest Cluster Galaxies (BCGs) all include a large-scale component used to model the intra-cluster light produced by the BCG stellar halos and by free-floating stars; (vi) the SEDs of the cluster galaxies are modeled according to prescriptions from semi-analytic models or from the Halo-Occupation-Distribution technique, as explained earlier; (vii) we convert the surface brightness distributions into counts per pixel assuming a telescope throughput curve, which accounts for the optics, the camera and the filter used in carrying out the simulated observations; in each band, we simulate the exposure times (in units of HST orbits, assuming an orbital visibility period of 2500 sec) used to carry out the mock Frontier Fields observations; (viii) the images are then convolved with a PSF model, obtained using the Tiny Tim HST PSF modeling software (Krist et al. 2011). Finally, realistic noise is added, mimicking the appropriate sky surface brightness in the simulated bands. The noise is assumed to have a Poisson distribution, and it is calculated according to Eq. 31 of Meneghetti et al. (2008), assuming a stack of multiple exposures, with the number varying from band to band 5 .
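Steps (iii)-(iv) amount to inverse ray shooting with the lens equation β = θ − α(θ), exploiting the fact that lensing conserves surface brightness; a minimal sketch (nearest-pixel sampling for brevity; SkyLens itself works with shapelet decompositions of the sources rather than a source-plane pixel grid):

```python
import numpy as np

def ray_trace_image(source_sb, alpha1, alpha2, theta1, theta2, src_extent):
    """Lensed image via inverse ray shooting: I(theta) = I_s(theta - alpha).
    source_sb: 2D source-plane surface brightness covering src_extent =
    (min1, max1, min2, max2); the alpha/theta maps share the image-plane
    grid and angular units."""
    beta1 = theta1 - alpha1                  # lens equation, component-wise
    beta2 = theta2 - alpha2
    min1, max1, min2, max2 = src_extent
    n1, n2 = source_sb.shape
    i1 = np.clip(((beta1 - min1) / (max1 - min1) * n1).astype(int), 0, n1 - 1)
    i2 = np.clip(((beta2 - min2) / (max2 - min2) * n2).astype(int), 0, n2 - 1)
    inside = (beta1 >= min1) & (beta1 < max1) & (beta2 >= min2) & (beta2 < max2)
    # Surface brightness is conserved: copy it from the source-plane pixel
    return np.where(inside, source_sb[i1, i2], 0.0)
```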
Images and catalogs
This is the first phase of a comparison project, and in the next phase we intend to include additional simulations with an even greater level of realism. For this first exercise, we proceed as follows.
• For both Ares and Hera, we generate simulated HST observations in all bands that are deployed for the FFI, mimicking the same exposure times (or number of orbits) as the real observations. The level of the background is set to values provided by the ACS and WFC3 exposure time calculators in each band. The details of these simulations are provided in Table 1. Each image covers a field of view of 204 × 204 arcsec^2. All images are co-aligned and co-rotated. Effects like gaps between chips, pixel defects, charge transfer inefficiency, cosmic rays, etc. are not included. The resolutions of the ACS and WFC3 simulations are 0.05 arcsec/pixel and 0.13 arcsec/pixel, respectively. These images were made available to the modelers.
• In addition to the images, we provided the list of all multiple images obtained from the ray-tracing procedure (see Fig. 3, central panels). Each multiple-image system is characterized by the redshift of its source, which is also provided to the modelers. Thus, in this exercise we assume that all images can be identified without errors and that all their redshifts can be measured "spectroscopically". This is certainly a very optimistic assumption which will never be satisfied in the real world. In the next round of this comparison project, the assumption will be relaxed, but for the moment we decided to release this information because our objective is to determine possible systematics of the various reconstruction algorithms. Other issues, related to the approaches used to search for multiple images or to the impact of redshift uncertainties on the results, will be studied in future work. Some of these systematics have already been investigated for some lens modeling methods, i.e., Johnson & Sharon (2016, submitted).

[Figure 3. Color composite images of Ares and Hera (left and right panels, respectively). In the upper panels, we overlay the surface density iso-contours on the optical images. In the central panels, we show the critical lines for z_s = 1 (red) and z_s = 9 (white), and display the locations of the multiple image systems (numbered yellow circles). The galaxies identified as cluster members are indicated by white circles in the lower panels.]
• We also released a catalog of cluster members (circled in the lower panels of Fig. 3), containing positions and photometry in all bands. Several reconstruction methods (in particular those employing the parametric approach) build the lens model by combining smooth dark matter halos with substructures associated with the cluster members, akin to our construction of Ares. In this simplified test, modelers are provided with the list of all cluster members with m_AB,F814W < 24. Again, this is an over-simplification which will be removed in the next round of simulations, and which implicitly favors those methods that make use of this information. In reality, such methods have to deal with the risk of misidentifying cluster members.
• For those groups which make use of weak-lensing measurements to complement the strong-lensing constraints, we produced a single Subaru-like R-band image of both Ares and Hera covering a much larger FOV of 30 × 30 arcmin². The provided image contains only background galaxies (i.e. lensed by the clusters) and stars, so that shape measurements can be made using any weak-lensing pipeline without worrying about the separation of background sources from the cluster members or contamination by foreground galaxies. We also used the publicly available pipeline KSBf90 (Heymans et al. 2006), based on the Kaiser, Squires and Broadhurst method (Kaiser et al. 1995), to derive a catalog containing galaxy positions and ellipticities. The resulting number density of galaxies useful for the weak-lensing analysis is ∼ 14 gal/arcmin², significantly smaller than the number density achievable with HST (a minimal catalog-level check is sketched after this list).
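By way of illustration, such a position-and-ellipticity catalog can be reduced to a tangential-ellipticity profile around an assumed cluster centre; the function below is a hypothetical sketch of that check, not part of KSBf90.

```python
import numpy as np

def tangential_profile(x, y, e1, e2, xc, yc, r_edges):
    """Mean tangential ellipticity in annuli around the assumed centre (xc, yc)."""
    dx, dy = x - xc, y - yc
    phi = np.arctan2(dy, dx)
    e_t = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))  # tangential component
    idx = np.digitize(np.hypot(dx, dy), r_edges)
    return np.array([e_t[idx == i].mean() if np.any(idx == i) else np.nan
                     for i in range(1, len(r_edges))])
```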
All these data for the mock cluster lenses were shared with the lens modelers participating in the project via a dedicated website. We emphasize that the input mass distributions of the lenses and the techniques used to generate them were initially kept blinded to all groups. The strong lensing constraints amounted to 242 multiple images produced by 85 sources in the case of Ares and 65 images of 19 sources in the case of Hera.
Submission of the models
A large fraction of lens modelers currently working actively on the analysis of the FFI data accepted the challenge and participated in this project. The two cluster simulations were not released simultaneously. We initially released only the data for Ares, and we received reconstructed models for this cluster from 7 groups. These groups performed a fully blind analysis of the data-set. Two additional models were submitted by A. Zitrin after the input mass distributions were already revealed, under the assurance that the reconstruction was actually performed blindly.
In a second stage of the comparison exercise, we released the simulation of Hera, and received 8 models from 6 participating groups. Also for this cluster, we received additional reconstructions after we revealed the input mass distribution of the lens. These models were submitted by A. Zitrin and by D. Lam. There are two general classes of inversion algorithms. The first comprises parametric models, in which the mass distribution is reconstructed by combining clumps of matter, often positioned where the brightest cluster galaxies are located, each characterized by an ensemble of parameters including the density profile and shape. The parameter spaces of these models are explored in an effort to best reproduce the observed positions, shapes and magnitudes of the multiple images and arcs. The second approach is called free-form (a.k.a. non-parametric): the cluster is subdivided into a mesh onto which the lensing observables are mapped, which is then transformed into a pixelised mass distribution following one of several methods to link the observables to the underlying lens potential or deflection field.
Both these approaches were amply represented in the challenge. A summary of all submitted models, with the indication of whether they are parametric or free-form and built before or after the input mass distribution of the lenses was revealed is given in Table 2. Each model is given a reference name used throughout the paper. Each modeling technique is briefly described below.
SWUnited: The Bradac-Hoag model
The Bradac-Hoag model employs the method named SWUnited: strong and weak lensing mass reconstruction on a non-uniform adapted grid. This combined strong and weak lensing analysis method reconstructs the gravitational potential ψ_k = ψ(θ_k) on a set of points θ_k, which can be randomly distributed over the entire field of view. From the potential, any desired gravitational lensing quantity (e.g. surface mass density, deflection angle, magnification, flexion, etc.) can be readily calculated. Such an approach therefore does not require assuming a particular model of the potential/mass distribution. The potential is reconstructed by maximizing a log-likelihood log P which uses, as constraints, the image positions of multiply imaged sources with source-plane minimization (corrected by magnification), weak lensing ellipticities, and regularization. The current implementation also includes flexion measurements; however, those data were not used in this paper.
Description of the method
The implementation of the method follows the algorithm first proposed by Bartelmann et al. (1996) and is described in detail in Bradač et al. (2005) and Bradač et al. (2009). From the set of potential values all observables are determined using derivatives. For example, the convergence κ is related to ψ via the Poisson equation, 2κ = ∇²ψ (where the physical surface mass density is Σ = κ Σ_crit, and Σ_crit depends upon the angular diameter distances between the observer, the lens, and the source). The details of how the derivatives on a non-uniform grid are evaluated can be found in Bradač et al. (2009). By using a reconstruction grid whose pixel scale varies across the field, the method is able to achieve increased resolution in the cluster centre (close to where the strongly lensed images are seen), and hence the magnification map in the regions of high magnification is more accurate.

Table 2. Models submitted by the groups participating in the project. The table lists the name of the submitting group/author of the reconstruction, the reference name of the model, the type of algorithm (free-form, parametric, or hybrid) and whether the model was submitted blind, that is before the input mass distribution of the lens was revealed.

The posterior peak values of the potential ψ_k are found by solving the non-linear equation ∂ log P/∂ψ_k = 0. This set of equations is linearized, and a solution is reached iteratively (keeping the non-linear terms fixed at each iteration step). This requires an initial guess for the gravitational potential; the systematic effects arising from various choices of this initial model were discussed in Bradač et al. (2006). The choice of the particular grid geometry, the regularisation parameter, and the hyper-parameters that set the relative weighting between the contributions to log P all become critical when weak lensing data on large scales (≳ 1 Mpc) are included and a full-field mass reconstruction is needed. This is not the case in this work, as we are only interested in the magnification of the inner region. The reconstruction is performed in a two-level iteration process, outlined in Fig. 4. The inner-level iteration solves the non-linear system of equations ∂ log P/∂ψ_k = 0 in the iterative fashion described above, repeated until convergence of κ. The outer-level iteration is performed for the purpose of regularisation. In order to penalise small-scale fluctuations in the surface mass density, the reconstruction is started with a coarse grid (large cell size). Then, at each step n₂, the number of grid points in the field is increased, and the newly reconstructed κ^(n₂) is compared with the one from the previous iteration, κ^(n₂−1) (or with the initial input value κ^(0) for n₂ = 0), penalizing any large deviations. The second-level iterations are performed until the final grid size is reached and convergence is achieved.
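On a uniform grid, the relation 2κ = ∇²ψ reduces to a simple finite-difference stencil; the sketch below illustrates this with numpy on a periodic grid. SWUnited evaluates the derivatives on a non-uniform grid (Bradač et al. 2009), so this is an illustrative toy, not their implementation.

```python
import numpy as np

def convergence_from_potential(psi, pix):
    """kappa = 0.5 * laplacian(psi) via central differences on a uniform grid.
    np.roll imposes periodic boundaries; edge rows/columns are unreliable."""
    d2x = (np.roll(psi, -1, axis=1) - 2 * psi + np.roll(psi, 1, axis=1)) / pix**2
    d2y = (np.roll(psi, -1, axis=0) - 2 * psi + np.roll(psi, 1, axis=0)) / pix**2
    return 0.5 * (d2x + d2y)
```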
Strengths and Weaknesses of the Method
The main strength of the method, as discussed above, is that instead of fitting a specific family of models to the data, the method is free of such an assumption. Furthermore, the positions of the points where the potential is reconstructed (θ_k) can be chosen arbitrarily, which allows a higher density of points in the regions where the signal-to-noise is highest (i.e. where multiple images are present) and coarser sampling in the areas where this is not the case (e.g. at large radii from the centre). The potential is also reconstructed directly (rather than the traditionally used surface mass density), since it locally determines both the lensing distortion (for weak lensing and flexion) and the deflection (for strong lensing), and there is no need to assume surface mass density values beyond the observed field.
The main weakness of the method, on the other hand, is that it maximizes a function with a large number of parameters and is inherently unstable. The inversion of the matrix satisfying the equation ∂ log P/∂ψ_k = 0 is also very noisy. The method is therefore very likely to diverge or land in a secondary minimum. Regularization needs to be employed, which adds additional parameters (the relative weighting of the regularization term) to the rest of log P, along with the choice of the regularization method itself. The optimal choices need to be determined using simulated data.
Improvements in progress
A recent improvement to the method is the addition of flexion measurements to the input constraints. The code has been adapted (see also Cain et al. 2015) and tested on simulated data; the group is currently testing it on HST data. In the future they plan to port the code to python to make the interface user friendly, at which point they plan to release it to the community.
WSLAP+: The Diego and the Lam models
All Diego models (Diego-multires, Diego-overfit and Diego-reggrid) and the Lam model are built using WSLAP+, a free-form or non-parametric method that also includes a compact mass component associated with the cluster members (and is thus classified as hybrid in this paper). The main part of the code is written in Fortran and compiles with standard compilers (like gfortran) included in the most common Linux distributions. Plotting routines written in IDL are available to display the intermediate results as the code runs. A script interface allows the user to define the input and output files, select the parts of the code to be run, and control the plotting routines. A detailed description of the code and of its features can be found in Diego et al. (2005, 2007), Sendra et al. (2014), and Diego et al. (2016). The code is not publicly available yet, but a companion code, LensExplorer, is available. LensExplorer allows the user to easily explore the lens models derived for the Frontier Fields clusters, search for new counter-images, compute magnifications, or predict the location and shape of multiple images. Here we present a brief summary of the main code WSLAP+. Note that the code includes certain features that were not taken into account in the analysis presented in this paper but will be included in the future "unblinded" version of this work. Among these features, the code can incorporate spatial information about knots in resolved systems, greatly improving the accuracy and robustness of the results (see Diego et al. 2016, for a practical demonstration). In the present work, only long elongated arcs were considered as resolved systems.
Description of the method
The algorithm divides the mass distribution in the lens plane into two components. The first is compact and is associated with the member galaxies, which are selected from the red sequence. The second component is diffuse and is distributed as a superposition of Gaussians on a regular (or adaptive) grid. For the compact component, the mass associated with the galaxies is assumed to be proportional to their luminosity. If all the galaxies are assumed to have the same mass-to-light (M/L) ratio, the compact component (galaxies) contributes just one (N_g = 1) extra free parameter, which corresponds to the correction that needs to be applied to the fiducial M/L ratio. In some particular cases, some galaxies (like the BCG or massive galaxies very close to an arclet) are allowed to have their own M/L ratio, adding additional free parameters to the lens model, but typically no more than a few (N_g ∼ O(1)). For this component associated with the galaxies, the total mass is assumed either to follow an NFW profile (with fixed concentration and scale radius scaling with the fiducial halo mass) or to be proportional to the observed surface brightness. The diffuse or 'soft' component is described by as many free parameters as grid (or cell) points. This number (N_c) varies but is typically between a few hundred and one thousand (N_c ∼ O(100) to O(1000)), depending on the resolution and/or the use of the adaptive grid. In addition to the free parameters describing the lens model, the method includes as unknowns the original positions of the lensed galaxies in the source plane. For the clusters included in the FFI program the number of background sources, N_s, is typically a few tens (N_s ∼ O(10)), each contributing two unknowns (β_x and β_y). All the unknowns are then combined into a single array X with N_x elements (N_x ∼ O(1000)).
The observables are both strong lensing and weak lensing (shear) measurements. For strong lensing data, the inputs are the pixel positions of the strongly lensed galaxies (not just the centroids). In the case of long elongated arcs near the critical curves with no features, the entire arc is mapped and included as a constraint. If the arclets have individual features, these can be incorporated as semi-independent constraints, with the added condition that they must form the same source in the source plane. Incorporating this information acts as an anchor, constraining the range of possible solutions and reducing the risk of a bias due to the minimization being carried out in the source plane. For the weak lensing, we use shear measurements (γ_1 and γ_2). The weak lensing constraints normally compensate for the lack of strong lensing constraints beyond the central region, allowing for a mass reconstruction on a wider scale. When weak lensing information is used, the code typically uses an adaptive grid to extend the range up to the larger distances covered by the weak lensing data (Diego et al. 2015a). The solution, X, is obtained by solving the system of linear equations

Θ = Γ X,    (10)

where the N_o observations (strong lensing, weak lensing, time delays) are included in the array Θ, and the matrix Γ is known and has dimension N_o × (N_c + N_g + 2N_s). In practice, X is obtained by solving the set of linear equations in Eq. 10 via a fast biconjugate gradient algorithm, inverted with a singular value decomposition (after setting a threshold for the eigenvalues), or solved with a more robust (but slower) quadratic algorithm. The quadratic algorithm is the preferred method, as it imposes the physical constraint that the solution X must be positive. This eliminates unphysical solutions with negative masses and reduces the space of possible solutions. Errors in the solution are derived by minimizing the quadratic function multiple times, after varying the initial conditions of the minimization process, and/or modifying the grid, and/or changing the fiducial deflection field associated with the member galaxies.
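As a toy illustration of this inversion with the positivity constraint, one could use an off-the-shelf bounded least-squares solver. The sketch below is a stand-in for WSLAP+'s own biconjugate-gradient and quadratic algorithms, and it assumes the source positions in X are expressed in coordinates that are positive by construction (e.g. pixel units), so the bound is harmless for them.

```python
import numpy as np
from scipy.optimize import lsq_linear

def solve_wslap_like(Gamma, Theta):
    """Solve Theta = Gamma @ X in the least-squares sense with X >= 0.
    Gamma: (N_o, N_c + N_g + 2*N_s) known matrix; Theta: (N_o,) observables."""
    result = lsq_linear(Gamma, Theta, bounds=(0.0, np.inf))
    return result.x  # grid masses, galaxy M/L correction(s), source positions
```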
Strengths and weaknesses of the method
The code implements a free-form modeling component. This implies that no strong assumptions are necessary about the distribution of dark matter. This is particularly useful if the DM is not linked to the galaxies or if the baryons are also dissociated from the galaxies. The latter seems to be the case in the FFI clusters, which are in a merging phase. Evidence that the solution obtained by the algorithm may be sensitive to the mass of the X-ray emitting plasma was presented in Lam et al. (2014) and Diego et al. (2015b,c, 2016).

Figure 5. Diagram showing the work-flow of WSLAP+.

Using the fast biconjugate gradient algorithm, a solution can be obtained in seconds; using the slower, but more reliable, quadratic optimization approach, a robust solution can be obtained in minutes. Other fast approaches have been implemented as well, such as singular value decomposition. An adaptive grid can be used that turns the method into a multiresolution code. Different adaptive grids can be implemented; this introduces a small additional degree of freedom but also allows other possible solutions to be explored, and hence better constrains the variability of the solution. The code is prepared to combine weak and strong lensing. The relative weight of the two data sets is given by the intrinsic errors in the data sets (typically small in the strong lensing regime and larger in the weak lensing regime). Correlations between the lensing data can be incorporated through a covariance matrix that naturally weights the different data sets.
The minimization is made in the source plane, which may result in biases towards larger magnifications. To avoid this, the minimization algorithm needs to be stopped after a given number of iterations. Better still, including information about the size and shape of the sources in the source plane appears to solve this problem, and the solution remains stable and unbiased even after a very large number of iterations. This prior information on the size and shape of the source galaxies is only available when well resolved lensed images exist and at least one of the multiple images is not highly magnified.
The compact component is usually pixelized into a 512×512 image that covers the field of view. For the small member galaxies this pixelization results in a loss of resolution that has a small impact on lensed images located near these small member galaxies. A possible solution to alleviate this problem is to pre-compute the deflection field of these galaxies at higher resolution prior to the minimization, and later interpolate it at the positions of the observed lensed galaxies. This approach has not been implemented yet, but it is expected to eliminate the problem.
The code can also predict more multiple images than are observed. This is not being factored in at the moment but will be the subject of the null-space implementation described in section 3.3.3. One systematic bias is known to affect the results at large distances from the centre: the reconstructed solution systematically underpredicts the mass (and magnification) in regions where there are no lensing constraints. These regions are normally located beyond the Einstein radius corresponding to a high-redshift background source. Adding weak lensing to the constraints can reduce or eliminate this problem.
Improvements in progress
The addition of time delays to the reconstruction of the solution is being implemented. Time delays will be included on a similar footing as the other observables (weak and strong lensing), with a weight that is proportional to their associated observational error.
The addition of the null space was proven to be a useful and powerful way of improving the robustness of the derived solution (Diego et al. 2005). This direction has not been explored fully and we plan to incorporate the null space as an additional constraint. This will eliminate additional counterimages that are predicted by the model but not observed in the data.
Modeling of Ares and Hera
The Diego models use both a regular grid with 32×32 = 1024 grid points (Diego-reggrid model) and a multi-resolution grid with approximately half the number of grid points (Diego-multires model). The compact component of the deflection field is constructed from the brightest elliptical galaxies in the cluster; we include 50 such bright ellipticals for each cluster. The mass profile for each galaxy is taken either as an NFW profile with scale radius (and total mass) scaling with the galaxy luminosity, or directly as the observed surface brightness. This choice plays a small role in the final solution.
Depending on the number of iterations, different solutions can be obtained. Earlier work based on simple simulations (Sendra et al. 2014) showed how, in a typical situation (similar to that of Ares and Hera), the solution converges to a stable point after 10000 iterations of the code. The code can be left iterating longer, reaching a point that we refer to as "overfit", where the observed constraints are reproduced with great accuracy but sometimes at the expense of a model with spurious structures. In the case of Hera we also computed the solution in the overfit regime (90000 iterations) for comparison purposes (Diego-overfit model).
The Lam model differs from the Diego models as follows. A regular grid of Gaussian functions is used instead of a multi-resolution grid (as in Diego-multires). In order to thoroughly explore the parameter space and to estimate the statistical uncertainty, 100 individual lens models are constructed from random initial conditions, and the submitted model is the average of these 100 individual models. With the exception of the 10 brightest cluster galaxies, the relative masses of all cluster galaxies are fixed, derived using a stellar mass-dark matter mass relation found in the EAGLE cosmological hydrodynamical simulation (Schaller et al. 2015). The stellar masses of cluster galaxies are derived by fitting synthesized spectra to the measured photometry using FAST (Kriek et al. 2009). The contribution from cluster galaxies is parameterized by NFW halos with scale radii derived from the dark matter mass, again using a relation found in the same simulation.
Grale: the GRALE model
The GRALE models are based on the reconstruction code Grale. Grale is a flexible, free-form method, based on a genetic algorithm, that uses an adaptive grid to iteratively refine the mass model. As input it uses only the information about the lensed images, and nothing about the cluster's visible mass (Liesenborgs et al. 2006). This last feature sets Grale apart from many other lens mass reconstruction techniques, and gives it the ability to test how well mass follows light on small and large scales within clusters. Grale has been used to reconstruct mass distributions in a number of clusters (Liesenborgs et al. 2008, 2009; Mohammed et al. 2014, 2015), quantify mass/light offsets in Abell 3827 (Mohammed et al. 2014; Massey et al. 2015), derive projected mass power spectra and compare them to those of simulated clusters (Mohammed et al. 2016), and study the relation between mass and light in MACS0416 (Sebesta et al. 2015). These papers used strong lensing constraints only, and so the analysis was confined to the central regions of galaxy clusters.
Description of the method
Grale starts out with an initial coarse uniform grid in the lens plane, which is populated with a basis set, such as projected Plummer density spheres. A uniform mass sheet covering the whole modeling region is also added to supplement the basis set. As the code runs, the denser regions are resolved with a finer grid, with each cell given a Plummer profile with a proportionate width. The initial trial solution, as well as all later evolved solutions, are evaluated for genetic fitness, and the fit ones are cloned, combined and mutated. The final map consists of a superposition of a mass sheet and many Plummer profiles, typically several hundred to a couple of thousand, each with its own size and weight, determined by the genetic algorithm. Critical curves, caustics and magnifications for any given source redshift are automatically available.
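To make the basis-set construction concrete, the hypothetical sketch below evaluates a convergence map as a mass sheet plus a superposition of projected Plummer profiles. The weights, widths and normalization conventions here are illustrative assumptions, not Grale's internals.

```python
import numpy as np

def plummer_kappa(X, Y, xs, ys, widths, weights, sheet=0.0):
    """Mass sheet plus projected Plummer profiles, Sigma(R) ∝ a²/(R² + a²)²,
    with weights expressed in convergence units."""
    kappa = np.full_like(X, sheet, dtype=float)
    for x0, y0, a, w in zip(xs, ys, widths, weights):
        r2 = (X - x0) ** 2 + (Y - y0) ** 2
        kappa += w * a**2 / (np.pi * (r2 + a**2) ** 2)
    return kappa
```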
Multiple fitness measures are used in Grale. These are: (a) Image positions. A successful mass map would lens image-plane images of the same source back to the same source location and shape. A mass map has a better fitness measure if the images have a greater fractional degree of overlap; using the fractional overlap of extended images guards against over-focusing, or over-magnifying, images. (b) Null space. Regions of the image plane that definitely do not contain any lensed features belong to the null space. (c) Critical lines. In some cases, it is known on astrophysical grounds that a critical line cannot go through certain image regions, but must pass between them. Grale can incorporate this type of constraint, but we have not used this fitness measure in the Frontier Fields work so far. (d) Time delay measurements. Though not used in the present work, time delay measurements can also be incorporated into the fitness (Liesenborgs et al. 2009; Mohammed et al. 2015).
Each Grale run with the same set of images will produce a somewhat different final map. The dispersion between these quantifies the mass uncertainties due to the lensing degeneracies that remain when all image information is held fixed. The best known among these, the mass-sheet degeneracy, is broken in most clusters because of the multiple redshifts of the background sources. The other, more numerous and less well known degeneracies, both documented (Saha 2000; Liesenborgs & De Rijcke 2012) and not, are the ones that contribute to the uncertainties.
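For reference, the mass-sheet degeneracy takes the standard form κ(θ) → κ'(θ) = λκ(θ) + (1 − λ): for a single source plane this transformation leaves all image positions and shapes unchanged while rescaling all magnifications by 1/λ². Because the added sheet corresponds to a different physical surface density for each source redshift, constraints from sources at several redshifts break the degeneracy.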
The clusters Ares and Hera were modeled with multiple images as inputs, using two fitness measures: (a) image positions, and (b) the null space for each source (image set) separately. For image sets where it was not entirely clear whether or where counterimages might be present, the nulls were allowed to have large holes corresponding to the regions of possible additional images. Grale can operate in two modes: with lensed images represented by points, or by extended images. The present reconstructions were done using the extended image mode.
Strengths and weaknesses
The main advantage of Grale is its flexibility, and hence its ability to explore a wide range of lensing mass degeneracies. Another important feature, which can be viewed as a strength, is that Grale does not use cluster galaxies, or any information about the distribution of luminous matter, to do the mass reconstruction. This is useful if one wants to test how well mass follows light (Mohammed et al. 2014; Sebesta et al. 2015).
Grale's main weakness is that it is not an optimal tool for identifying lensed images. This is a direct consequence, or, one may say, the flip side of Grale's flexibility. A technical feature of Grale worth mentioning is that it requires significant computational resources: Grale runs on a supercomputer.
Improvements in progress
The Grale team has carried out numerous tests of the code, to optimize the set of genetic algorithm and other code parameters. In the near future Grale will be extended to include fitness measure constraints from weak shear and flexion.
LensPerfect: the Coe model
The Coe model for Ares uses LensPerfect (Coe et al. 2008, 2010). LensPerfect makes no assumptions about light tracing mass. The lens models perfectly reproduce the input observed positions of all strongly lensed multiple images. Redshifts may be either fixed to input spectroscopic redshifts or included in the model optimization based on input photometric redshifts and uncertainties.
The image positions, redshifts, and estimated source positions define the lensing deflection field sparsely at the multiple image positions. LensPerfect interpolates this vector field, obtaining a smooth model which exactly reproduces the image deflections at the input image positions. Based on this 2D deflection map, the mass distribution, magnification, and all other model quantities may be derived.
Description of the Method
The curl-free vector interpolation scheme (Fuselier 2006, 2007) uses direct matrix inversion to obtain a model composed of radial basis functions (RBFs) at the positions of the input vectors (our multiple-image locations). Each 2D RBF has two free parameters, amplitude and rotation angle, which are determined uniquely by the matrix inversion.
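To give a flavor of RBF interpolation by direct matrix inversion, the sketch below interpolates each deflection component independently with Gaussian RBFs. This is a simplification: the actual LensPerfect scheme uses curl-free vector-valued RBFs rather than independent scalar interpolants, and all names here are hypothetical.

```python
import numpy as np

def rbf_interpolate(points, values, query, width):
    """points: (N, 2) image positions; values: (N, 2) sampled deflections;
    query: (M, 2) positions at which to evaluate the interpolant."""
    def phi(d2):  # Gaussian radial basis function of squared distance
        return np.exp(-d2 / (2.0 * width**2))
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    coeffs = np.linalg.solve(phi(d2), values)   # direct matrix inversion
    d2q = ((query[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return phi(d2q) @ coeffs
```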
After setting the width of the RBF, the free parameters are the source positions and any uncertain redshifts. LensPerfect performs an optimization routine searching for those parameters which yield the most "physical" mass model according to a set of criteria, including positive mass smoothly decreasing outward from the center on average, with rough azimuthal symmetry. See Coe et al. (2008, 2010) for more details.
Strengths and Weaknesses
In high-resolution HST ACS images, strongly lensed multiple image locations are observed and measured with accuracies of ∼1 pixel, or ∼0.05". By fully utilizing this information, LensPerfect is able to obtain relatively high resolution maps of galaxy cluster substructure without relying on any assumptions about light tracing mass. Large numbers of multiple images may be input, and the number of free parameters is always roughly equal to the number of constraints. The mass model spatial resolution increases with the density of multiple images on the sky.
Given current numbers of multiple images (up to ∼100 or so) for a single cluster (e.g., Coe et al. 2010), LensPerfect can accurately recover cluster mass profiles along with some larger subhalos. Magnifications, however, are influenced by local mass density gradients, which are not accurately reproduced by LensPerfect given current constraints. Furthermore, LensPerfect mass models are only well constrained within the area enclosed by the multiple images and should generally be disregarded outside this region.
Future improvements
LensPerfect is well suited to future datasets such as JWST imaging revealing still greater numbers of multiple images. Initial tests with hundreds to a thousand multiple images show great potential for resolving many individual cluster galaxy halos without assuming light traces mass. The biggest hurdle (seen in tests with up to 10,000 multiple images) may be accounting for multiple lens plane deflections due to mass along the line of sight.
One potential improvement would be to develop a hybrid method combining light traces mass assumptions with LensPerfect adding deviations to the mass distribution.
Lenstool: the CATS and Johnson-Sharon models
Lenstool as an inversion algorithm deploys both strong and weak lensing data as input constraints. Below, we first briefly outline the available capabilities of the Lenstool software package and then describe the specific versions and assumptions that were used to reconstruct Ares and Hera by two groups: CATS and Johnson-Sharon. The CATS collaboration developed the Lenstool algorithm collectively over a decade. The code utilizes the positions, magnitudes, shapes, multiplicities and spectroscopic redshifts of the multiply imaged background galaxies to derive the detailed mass distribution of the cluster. The overall mass distribution in cluster lenses is modeled in Lenstool as a superposition of smoother large-scale potentials and small-scale substructure that is associated with the locations of bright cluster member galaxies. Individual cluster galaxies are always described by parametric mass models, whereas the smoother, large-scale mass distribution can be flexibly modeled non-parametrically or with specific profiles. This available multi-scale approach is optimal, inasmuch as the input constraints required for this inversion exercise are derived from a range of scales. Further details of the methodology are outlined in Jullo & Kneib (2009). In its current implementation in Lenstool, the optimization of the combined parametric and non-parametric model is computationally time intensive, and some degeneracies persist despite the large number of stringent input constraints from the positions, shapes, brightnesses, and measured spectroscopic redshifts of several families of multiple images. However, these degeneracies are well understood, in particular for specific parameters of the models used to characterize the mass distribution. In order to tackle this challenge, an iterative strategy has been developed wherein initial models are derived with the best-fit values solely from the parametric model, which are then optimized using the underlying multi-scale grid. Both the multi-scale and the parametric models are adjusted in a Bayesian way, i.e., their posterior probability density is probed with an MCMC sampler. This process allows an easy and reliable estimate of the errors on derived quantities such as the amplification maps and the mass maps. The CATS and the Johnson-Sharon models are built using the Lenstool public modeling software (see e.g. Jullo et al. 2007). The public version of Lenstool deployed by Johnson-Sharon adopts the original modeling approach developed by Natarajan & Kneib (1997), wherein a small-scale dark-matter clump is associated with each bright cluster galaxy and a large-scale dark-matter clump with prominent concentrations of cluster galaxies. This technique of associating mass and light has proven to be very reliable and results in mass distributions that are in very good agreement with theoretical predictions from high-resolution cosmological N-body simulations. The Johnson-Sharon models follow the methods described in Sharon et al. (2012) and Johnson et al. (2014).
Description of the method
Typically, cluster lenses are represented by a few cluster-scale or group-scale halos (representing the smooth component, with σ in the range of hundreds to ∼ 1500 km s⁻¹), with contributions from galaxy-scale halos (see below). Large-scale dark matter halos are parametrized as Pseudo-Isothermal Elliptical Mass Distributions (PIEMD), with density profile

ρ(r) = ρ_0 / [(1 + r²/r_core²)(1 + r²/r_cut²)],

where ρ_0 is a normalization, and r_core and r_cut define a region r_core ≲ r ≲ r_cut in which the mass distribution is isothermal, i.e., ρ ∝ r⁻². In Lenstool, the PIEMD has seven free parameters: x, y are the coordinates on which the halo is centered; e and θ are the ellipticity and the position angle, respectively; r_core; r_cut; and the effective velocity dispersion σ_0, which determines the normalization (we note that σ_0 is not exactly the observed velocity dispersion, see Elíasdóttir et al. 2007).
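As a numerical illustration of this profile, the short sketch below evaluates the dPIE/PIEMD radial density reconstructed above; the parameter values are purely illustrative.

```python
import numpy as np

def piemd_rho(r, rho0, r_core, r_cut):
    """dPIE/PIEMD density: isothermal (rho ∝ r^-2) for r_core << r << r_cut."""
    return rho0 / ((1.0 + (r / r_core) ** 2) * (1.0 + (r / r_cut) ** 2))

r = np.logspace(-1, 3, 200)                            # radii in kpc (illustrative)
rho = piemd_rho(r, rho0=1.0, r_core=5.0, r_cut=1500.0)
```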
The parameters of cluster-scale halos are kept free, with the exception of r_cut, which is usually unconstrained by the strong lensing data and is thus fixed at an arbitrary value (typically 1500 kpc). CATS also model galaxies as PIEMDs, whereas Johnson-Sharon model galaxies as circular isothermal distributions (see 3.6.5). To keep the number of free parameters reasonably small, the parameters of galaxy-scale halos are determined from their photometric properties through scaling relations, assuming a constant mass-to-light ratio for all galaxies,

σ_0 = σ_0* (L/L*)^(1/4),  r_core = r_core* (L/L*)^(1/2),  r_cut = r_cut* (L/L*)^(1/2),    (12)

where the starred quantities refer to a reference galaxy of luminosity L*. The positional parameters x, y, e, and θ are fixed to their observed values as measured from the light distribution in the imaging data. CATS used the simulated strong lensing catalogs and the Lenstool software to perform a mass reconstruction of both simulated clusters, assuming a parametric model for the distribution of dark matter. The model is optimized with the Bayesian Markov chain Monte Carlo sampler described in detail in Jullo et al. (2007). The mass distribution is optimized in the image plane by minimizing the distance between the observed and predicted multiple image positions. Weak lensing information is not taken into account. The image-plane root mean squared (RMSi) distance between the image positions predicted by the model and the observed positions was used as an accuracy estimator of the model (Limousin et al. 2007).
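In practice these relations are applied to a catalog of member magnitudes. The hypothetical helper below converts magnitudes relative to a reference magnitude m* into the scaled PIEMD parameters, assuming the standard exponents written in Equation (12).

```python
import numpy as np

def scale_galaxy_params(m, m_star, sigma_star, rcore_star, rcut_star):
    """Scale PIEMD parameters with luminosity inferred from magnitudes."""
    L_ratio = 10.0 ** (0.4 * (m_star - m))   # L / L*
    sigma0 = sigma_star * L_ratio ** 0.25    # sigma_0 ∝ L^(1/4)
    r_core = rcore_star * np.sqrt(L_ratio)   # r_core ∝ L^(1/2)
    r_cut = rcut_star * np.sqrt(L_ratio)     # r_cut ∝ L^(1/2)
    return sigma0, r_core, r_cut
```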
The CATS collaboration has modeled both clusters. Ares and Hera have been modeled as bi-modal clusters with two smooth dark matter clumps and two BCGs lying at the centres of those main clumps. Each smooth component is modeled using a PIEMD profile. Cluster member galaxies are taken from the given simulated catalogs up to a magnitude of m_F160W < 22.0 for Ares and m_F814W < 24.0 for Hera. These are modeled with PIEMD profiles under the assumptions that (i) their position, ellipticity and orientation correspond to the brightness profile of their associated galaxy, and (ii) their mass scales with the galaxy magnitude in the F160W band. In the provided models, it is assumed that they all have the same M/L ratio. All the provided multiple images were used in this model. In addition, a few (massive) cluster galaxies in both clusters were modeled more carefully in order to improve the RMSi of nearby multiple images: four central cluster galaxies in Hera (of which one is considered to be a foreground galaxy) and three, also central, galaxies in Ares.
These reconstructions have a resulting RMS in the image plane of 0.87" for Ares and 0.95" for Hera.
The Johnson-Sharon models for Ares and Hera were constructed using techniques similar to those in Johnson et al. (2014), using the catalogs of multiple images that were provided to the lens modelers as positional constraints. The redshifts of the background sources were assumed to be known spectroscopically, with no uncertainty or outliers. Both clusters were modeled with two PIEMD halos, to represent the smooth dominant dark matter components, each centered close to one of the two peaks in the light distribution in the mock HST images, with their exact positions set by the MCMC minimization process.
Individual PIEMD halos were assigned to each galaxy in the provided catalog, with the positional parameters x, y, e, and θ fixed to their observed values as measured from the light distribution in the mock imaging data. The parameters that describe the slope of the projected mass density were scaled with the light in the F125W band, assuming a constant M/L ratio for all the galaxies, following the scaling relations in Equation (12). As both clusters are at z ∼ 0.5, the same scaling relations were used for the cluster member galaxies: σ_0* = 120 km s⁻¹, r_core* = 0.15 kpc, and r_cut* = 30 kpc, with reference magnitude m* = 20.00 and 19.87 for Ares and Hera, respectively. A few galaxies located near constraints were modeled independently of the scaling relations, and their core radii and velocity dispersions were left as free parameters in the lens models. This includes the two bright cluster galaxies lying at the centers of the gravitational wells of the two clumps in the dark matter distribution in both clusters.
We note that the PIEMD functional form of the cluster galaxies used in Lenstool differs from the function that was used in the simulation of Ares in the treatment of the truncation radius. While the PIEMD profile transitions smoothly from isothermal and asymptotes to zero at large radii, the simulated mass distribution truncates sharply to zero at r = r_cut. This discrepancy is what causes the sharp circular residuals seen in Figure 8. We thus do not expect the model to accurately reconstruct the mass distribution at radii larger than the truncation radius.
In addition, the Johnson-Sharon model assumes that the ellipticity and position angle of the light of each mock galaxy follows the underlying mass distribution. In practice, all the galaxies in the underlying simulated mass distribution had circular geometry (i.e., no ellipticity) and the galaxies were painted on with arbitrary ellipticities and position angles. This feature of the blinded analysis contributes to residuals on small scales in the mass reconstruction. Finally, the Johnson-Sharon model does not use weak lensing information and does not include cluster-scale halos outside of the main field of view if such halos are not required by the strong lensing constraints alone.
Strengths and weaknesses
Lenstool's strengths and weaknesses are typical of parametric models. The approach is useful in the sense that it directly compares physically motivated models to data, propagating errors in a fully consistent and Bayesian manner. It allows direct comparison with simulation outputs and the assessment of possible discrepancies. On the other hand, parametric models can differ significantly from reality, and their lack of freedom introduces biases in the estimated masses, matter densities or errors. Regarding practical aspects, error estimation requires running MCMC sampling, which can only be performed on a supercomputer. Lenstool calculations can last for a couple of weeks on shared-memory machines, depending on the model complexity and the number of multiple images. In the case of Hera and Ares, the optimization lasted about 10 hours.
Improvements in progress
CATS is currently working on two improvements that should significantly increase the accuracy of their mass reconstructions. First, Lenstool in its current revision does not permit radial variation of the ellipticity of the mass distribution, which restricts the flexibility of the models that can be generated; current code development aims to include this additional degree of freedom in the modeling. Secondly, in order to extract the maximum information from the exquisite image resolution afforded by the HST FFI, flexion measurements will be included as input constraints in the modeling. Finally, a new MCMC engine with MPI support and a GPU-based Lenstool are under development to decrease the computing time.
GLAFIC: the GLAFIC models
The publicly available GLAFIC code (Oguri 2010) is used for mass modeling in the GLAFIC models.
Description of the method
GLAFIC adopts so-called parametric lens modeling, in which the lens mass distribution is assumed to consist of multiple components, each of which is characterized by a small number of parameters such as the centroid position, mass, ellipticity, and position angle. The mass distributions of cluster member galaxies are modeled by the pseudo-Jaffe model. In order to reduce the number of parameters, the velocity dispersion σ and truncation radius r_trunc of each member galaxy are assumed to scale with the galaxy luminosity L as σ ∝ L^(1/4) and r_trunc ∝ L^η, and the normalizations of the scaling relations are treated as free parameters (see e.g., Oguri 2010). The ellipticities and position angles of individual member galaxies are fixed to the values measured in the image. These parameters are optimized to reproduce the positions of the observed multiple images, using either the downhill simplex method or Markov-Chain Monte Carlo. Examples of detailed cluster mass modeling with GLAFIC are found in Oguri (2010), Oguri et al. (2012, 2013), Ishigaki et al. (2015), and Kawamata et al. (2016). GLAFIC can also simulate and fit lensed extended sources. This functionality has been used to, e.g., fit a lensed quasar host galaxy (Oguri et al. 2013), estimate a selection function of lensed high-redshift galaxies, and derive sizes of lensed high-redshift galaxies.
Strengths and weaknesses
An advantage of GLAFIC is the wide range of lens potentials implemented in the code, which enables flexible modeling of cluster mass distributions. For example, in addition to the standard external shear, one can add higher-order perturbations with arbitrary multipole orders (see Oguri 2010). When necessary, in addition to observed multiple image positions, GLAFIC can also include flexible observational constraints such as time delays and flux ratios between multiple images, and (reduced) shear and magnification values at several sky positions measured by weak lensing and Type Ia supernovae, respectively.
The source-plane χ² minimization is often adopted for efficient model optimization. In doing so, GLAFIC converts the distance between observed and model positions in the source plane to the corresponding distance in the image plane using the full magnification tensor. In Appendix 2 of Oguri (2010) it was shown that this source-plane χ² is accurate in the sense that it is very close to the image-plane χ², and therefore sufficient for reliable mass modeling.
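The conversion at the heart of this source-plane χ² can be sketched as follows: a source-plane offset δβ is mapped to an approximate image-plane distance with the inverse of the lensing Jacobian evaluated at the image position. The function below is a minimal illustration under that assumption, not the GLAFIC implementation.

```python
import numpy as np

def image_plane_distance(dbeta, kappa, gamma1, gamma2):
    """Approximate image-plane distance |A^-1 dbeta| from a source-plane offset."""
    A = np.array([[1.0 - kappa - gamma1, -gamma2],
                  [-gamma2, 1.0 - kappa + gamma1]])  # lensing Jacobian at the image
    return np.linalg.norm(np.linalg.solve(A, dbeta))
```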
GLAFIC supports image-plane χ² minimization as well. Adaptive meshing with increased resolution near the critical curves is used for efficient computation of multiple images for a given source position. Multiple images and critical curves are computed for the best-fit model from the source-plane χ² minimization to check the robustness of the result.
A known limitation of GLAFIC is that it can only handle single lens planes. Lens systems for which multiple deflections at different redshifts play a crucial role are difficult to model with GLAFIC.
Modeling Ares and Hera
Each halo component is modeled by an elliptical NFW profile. For Ares, five halo components are included, in addition to the member galaxies modeled by the pseudo-Jaffe model (see above). For Hera, two NFW halo components are placed around the two brightest galaxies. These two brightest galaxies are modeled by the Hernquist profile, separately from the other member galaxies; their ellipticities and position angles are fixed to the observed values. To achieve a better fit, external shear and a third-order multipole perturbation are added for Hera. In modeling member galaxies, η in the scaling relation of the truncation radii is fixed to 0.5 for Ares and treated as a free parameter for Hera. Simulated F814W-band images are used to measure luminosities, ellipticities, and position angles of member galaxies with SExtractor for both Ares and Hera. Overall, a more elaborate lens model is adopted for Hera than for Ares, because in the initial exploration period of mass modeling the lens potential of Hera appeared to be much more complex. The resulting best-fit models reproduce the image positions very well, with rms of ∼ 0.27" for Ares and ∼ 0.43" for Hera.
LTM: the Zitrin-LTM models
The Zitrin light-traces-mass (LTM) method (Broadhurst et al. 2005) was designed primarily to be a very simple, straightforward modeling method with a minimal number of free parameters, relying only on the observable light distribution of cluster members (namely their positions and relative fluxes) to supply a well-guessed and highly predictive solution for the mass distribution of the lens and the location of multiple-image systems (e.g. Zitrin et al. 2012, 2013). Prior to the design of this method, it had been shown that (a) cluster galaxies must be included, typically with a mass proportional to their luminosity, in order for the solution to have the predictive power to find multiple images, and that (b) a dark-matter component should be added (see Kneib et al. 2004; Broadhurst et al. 2005). This simple parametrization, as we detail further below, has allowed systems of multiple images to be identified in an unprecedented number of clusters, where the images are physically matched also by the initially guessed model (which is then refined), and are not only matched by eye based on their color information, as is often done.
Description of the method
As mentioned above, this method was designed to include both a galaxy component and a dark-matter component, yet to do so with a minimal number of free parameters. To form the galaxy component, cluster galaxies (found following the red sequence in a color-magnitude diagram) are each assigned a power-law mass density distribution, where the normalization of each galaxy's weight is proportional to its (relative) flux, and the exponent is the same for all galaxies and constitutes the first free parameter of the method. The superposition of all galaxy power-law mass distributions then constitutes the lumpy galaxy component of the model. To describe the dark-matter distribution, the galaxy component is smoothed with either a spline interpolation or, usually, a Gaussian kernel, whose degree or width is the second free parameter of the method. The smoothing yields a diffuse, smooth dark-matter component that depends on the initial light distribution; the method is therefore dubbed light-traces-mass, as both the galaxy and dark-matter components roughly follow the light distribution. Next, the two components are added with a relative weight (typically a few to a couple of dozen percent for the galaxies), which is another free parameter of the modeling. The fourth parameter is an overall normalization of the lens model to a certain redshift or multiple-image system. In addition to these four parameters, we often introduce several others that add some flexibility and help refine the final solution given the set of input multiple images. These include a core and a two-parameter ellipticity for the BCG(s), a two-parameter external shear (which mimics ellipticity in the critical curves), and chosen galaxies whose weights (or fluxes) are left free to be optimized in the modeling, meaning that they are allowed to deviate from the adopted mass-to-light relation. The minimization for the best-fit solution and related errors, given a set of multiple images (often found with the aid of the initially guessed map from this method), is performed with a χ² criterion comparing the positions of the multiple images with the predicted ones, in the image plane, via a few tens of thousands of MCMC steps with a Metropolis-Hastings algorithm.
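The construction can be summarized in a schematic sketch: a superposition of flux-weighted power laws for the galaxies, a Gaussian-smoothed version of it for the dark matter, and a weighted sum with an overall normalization. All names, the softening, and the normalization convention below are illustrative assumptions, not the actual Zitrin pipeline.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ltm_kappa(X, Y, gal_x, gal_y, flux, q_exp, smooth_pix, gal_frac, norm):
    gal = np.zeros_like(X, dtype=float)
    for x0, y0, f in zip(gal_x, gal_y, flux):
        r = np.hypot(X - x0, Y - y0) + 0.5     # softened radius (pixels)
        gal += f * r ** (-q_exp)               # power law per galaxy, flux-weighted
    dm = gaussian_filter(gal, smooth_pix)      # smoothed dark-matter component
    kappa = gal_frac * gal + (1.0 - gal_frac) * dm
    return norm * kappa / kappa.mean()         # overall normalization (illustrative)
```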
Strengths and weaknesses
The resulting lens model from this procedure, as its name suggests, is strongly coupled to the input light distribution of the lens (cluster member positions and luminosities). This entails various strengths and weaknesses. The coupling of the solution to the light distribution is what grants this method its unprecedented predictive power to delineate the critical curves and locate multiple images in advance, even if no multiple-image system is used as a constraint (see Zitrin et al. 2012). In fact, most of the free parameters in the initial solution are relatively well known, so that as a first step (i.e. to find multiple images) we can reduce them to one free parameter, namely the normalization of the lens, and obtain a well-guessed solution, which we have shown to be not much different from the solution obtained for the same clusters when using many multiple images as constraints (Zitrin et al. 2012). This means that the method is capable of supplying a well-guessed solution even in cases where high-quality HST data are lacking.
The simplistic nature of this method also means that the solution is often faster to converge and compute than other grid-based methods or parametrizations, allowing the analysis of many dozens of cluster lenses in a relatively short time.
Another advantage of this method is that the same very simple procedure applies to all clusters: from relaxed, small clusters and groups (such as the relatively small cluster lenses A383, MS2137 or A611; see Zitrin et al. 2015, for recent modeling) to the most complex merging clusters such as M0416, M1149 or M0717 (Zitrin et al. 2015), which often require multi-halo fits in other parametric methods.
But the coupling to the light distribution also means that the spatial flexibility of the model is limited. While our parameterization does allow for a flexible mass profile, in the sense that it is not limited to a certain analytic form, the solution is constrained spatially by the light distribution. This means that the multiple-image reproduction accuracy is often lower than in other, more flexible parametric methods that model the dark matter independently of the light (such as other well-known methods listed in this work, including our own second method listed below). This usually manifests itself in clusters that have a large number of multiple images spread across the field; for these, the LTM method often reaches a finite rms value of ∼ 1-2" beyond which it cannot improve further.
A second disadvantage stemming from our parameterization is that, since we do not model the dark matter independently of the light, and since the critical curves' ellipticity in our modeling is for the most part generated by the external shear, there is no ellipticity assigned directly to the mass distribution. This creates some discrepancy between the lens and mass models: the mass distribution can often be significantly rounder than implied by the critical curves, whose ellipticity comes from the external shear, which does not contribute ellipticity to the convergence map. In simple words, this reveals a degeneracy regarding the true ellipticity of the mass distribution, as the ellipticity of the lens can be attributed to intrinsic ellipticity or to external shear.
To summarize, we consider this method very reliable and robust, supplying especially well-guessed initial maps for any given cluster regardless of its complexity, with unprecedented predictive power to find multiple images, but it can also be, in some cases, less accurate and less spatially flexible. Also, given that this is a light-tracing method, we do not expect it to describe well numerical simulations whose mass-to-light relations are not fully representative.
Improvements in progress
We are always looking for ways to speed up the minimization procedure so that a larger parameter space can be explored for a refined final solution. We are also testing whether replacing the galaxy component with the more well-behaved PIEMD (see below), despite its fixed isothermal slope, would be sufficient for our purposes. We have also implemented an option of smoothing with an elliptical Gaussian, which introduces ellipticity into the matter distribution itself. Note that our calculations are performed on an input grid matching an actual image of the field, with its native pixel scale. To speed up the minimization procedure, we often reduce the resolution (especially in the case of HST, which has high spatial resolution) by factors of 4 to 10 on each axis. This contributes to the finite, non-negligible rms often obtained with this method (e.g. due to rounding of pixel coordinates). We intend to investigate this further and try to improve the resolution in the crucial places, such as near the critical curves and when delensing to the source plane, where this lower resolution might prevent a further improved solution.
Modeling of Ares and Hera
To model Ares and Hera we use the following settings in our LTM pipeline. We create a grid of 4080×4080 pixels covering the field of view, with an angular resolution of 0.5"/pixel. In practice the calculation is performed in two stages: first, we run many individual random MC chains with a grid resolution lower by a factor of 10 on each axis; from these we find the global minimum area and extract the covariance matrix. A proper, long MCMC is then run with a grid of 4 times lower resolution than the original input image. The final solution is then interpolated to match the original pixel-scale map. Errors were derived using 50 random models from the MC chain, with a positional uncertainty of 1.4" for the χ² term. We use the input list of galaxies supplied by the simulators, scaled by their light. In Ares, we allow five galaxies to deviate from the nominal mass-to-light ratio and be freely weighted by the MC chain; for two of them (especially important where radial images are seen in the data) we allow for a free core radius as well. In the case of Hera, only three galaxies were modeled in this way. The ellipticities (and directions) of these bright galaxies are also left as free parameters. As constraints, we use the full list of multiple images. No weak lensing constraints were used. The final rms of the model is 1.8" and 1.2" for Ares and Hera, respectively, which, as mentioned above, is in part limited by the finite lower resolution of the grid we work on.
PIEMDeNFW: the Zitrin-NFW models

Zitrin et al. (2013) expanded their pipeline to also allow for a fully parametric solution. This method is in essence similar to the other parametric techniques mentioned here, such as Lenstool and GLAFIC. The main motivations for adding this parametric pipeline were to (a) allow for further flexibility and improved fits by having a semi-independent solution in which the dark matter is modeled independently of the light, and (b) test the magnitude of systematic differences between these methods (Zitrin et al. 2015).
Description of the method
As is customary in parametric modeling, in order to describe the multiple-image positions well and with sufficient predictive power, this method also relies on a combination of galaxies and dark matter. The red-sequence cluster galaxies are each modeled as PIEMDs, based on the prescription and scaling relations used in Lenstool, and typically with a fixed mass-to-light ratio. Usually two or three parameters are left free to describe the galaxy component: the velocity dispersion, core radius and truncation radius of an M* galaxy, which is used as the reference for the scaling relations. The dark-matter component is modeled with an analytic, fully parametric recipe as well: we can choose either an elliptical Navarro et al. (1996) profile (eNFW) or, again, a PIEMD for the cluster's dark-matter halo. Therefore, in this method, the dark matter is modeled with a symmetric analytic form, independent of the light distribution. As in our LTM method, the same minimization engine is used here: a long MCMC with an image-plane χ² criterion. Here too we can add other parameters to be optimized in the minimization, such as the ellipticities of the BCGs, whose masses can be allowed to deviate from the adopted scaling relation, and so forth.
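For completeness, the spherical NFW density underlying the eNFW option (Navarro et al. 1996) is shown below. The elliptical generalization used in the actual modeling enters through the potential or convergence, so this scalar profile is only a building block.

```python
import numpy as np

def nfw_rho(r, rho_s, r_s):
    """Spherical NFW density: rho_s / ((r/r_s) * (1 + r/r_s)^2)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)
```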
Strengths and weaknesses
Compared to our LTM technique, for example, we have found that the fully parametric technique is more spatially flexible and can thus often supply a more accurate solution with a (somewhat) smaller image-plane rms. On the other hand, the higher flexibility reduces the predictive power for finding multiple images (especially before the model is initially constrained), and the reliability of the results, since they can fit a wider range of (not necessarily physically viable) configurations.
In a similar sense, another main disadvantage of such parametrizations is the need to add dark-matter halos to model subhalos in complex structures (such as merging clusters), without knowing whether these are fundamental parameters, truly accounting for additional dark-matter halos, or just nuisance parameters that help add flexibility and refine the fit. Additionally, each such added halo adds several (usually 4-6) free parameters to the minimization procedure, rendering it significantly more cumbersome.
Note that since we developed this method with the same infrastructure used for our LTM method, and in part for comparison with it, the solutions given by the PIEMDeNFW method, despite being analytic in nature, are also calculated on a grid the size of the input *.fits image, similar to our LTM procedure. This results in a somewhat slower procedure compared, for example, to our LTM technique, and also here, to achieve higher convergence speed, we lower the grid resolution by factors of a few on each axis. Again this leads to a finite rms due to, e.g., numerical round-offs in high-magnification regions.
Improvements in progress
The main improvement we wish to implement is to speed up the procedure. To start with, this can be achieved if part of the calculation is done completely analytically/numerically (say, only around the positions of multiple images) rather than on a full-frame grid. We intend to explore such possibilities. Also, we recently added the possibility of an external shear to allow for further flexibility.
Modeling of Ares and Hera
To model Ares and Hera, we use the following setting in our PIEMDeNFW pipeline. As done with the LTM-gauss method, we create a grid of 4080×4080 pixels covering the cluster. We start by running many individual random MC chains with a grid resolution lower by a factor of 20 on each axis. From this we find the global minimum area and extract the covariance matrix. A proper, long MCMC is then run with a grid of 4 times lower resolution than the original input image. The final solution is then interpolated to match the original pixel scale. Errors were derived using 50 random models from the MC chain, with a positional uncertainty of 1.4" for the χ 2 term. We use the input list of galaxies, scaled by their light. The brightest galaxies are optimized individually, as done with the LTM-gauss pipeline. In this case, however, the ellipticities (and directions) of the four brightest galaxies in both clusters are also left as free parameters. In Ares, two cluster-scale DM halos in the form of elliptical NFW mass densities are introduced, with fixed centering on the respective BCGs. In Hera, we used three such large halos. As constraints we use the full list of multiple images. No weak-lensing constraints were used. The final rms of the model is 1.8", which, as we mention above, is in part limited by the finite lower resolution of the grid we work on.
RESULTS
In this section we describe how the different methods perform at recovering several properties of the lenses.
Convergence maps
The reconstructed convergence maps of Ares and Hera are shown for all models in Figs. 6 and 7, respectively. The maps are all normalized to z S = 9. In both figures, the maps derived from the free-form algorithms are shown first (beginning from the upper-left panel). The last panel in each figure shows the true convergence map, for easy comparison. All maps cover the same field of view. This field does not correspond to the full size of the simulated images that were made available to the modelers. Indeed, for several technical reasons inherent to each methodology employed, the submissions by the different groups were different in size. To carry out a proper comparison between the models, we restrict our analysis to the area around each of the two lenses, which is covered by all the reconstructions. More precisely, we used as footprints for identifying the area of analysis the submissions by the GLAFIC and GRALE teams for Ares and Hera, respectively. In the first case, the FOV is ∼ 180" × 180". In the second, the reconstructed area is ∼ 110" × 110" wide.
Since Ares was constructed parametrically with light tracing mass, it is particularly well suited for reconstruction by parametric techniques. The parametric CATS, GLAFIC, Johnson-Sharon, and Zitrin models and the hybrid Diego model all include mass substructure at the observed positions of cluster galaxies, recovering the Ares mass distribution with high fidelity. The free-form GRALE, Bradac-Hoag, and Coe models do not assume light traces mass, reconstructing the mass distribution solely based on the observed lensing. They recover the main mass peaks, but smaller substructures are not constrained by the lensing data. The GRALE model accurately reproduces the cluster bimodality. The Bradac-Hoag and Coe models are less smooth, including noisy smaller substructure, especially outside the region constrained by strongly lensed multiple images.
The Hera cluster, obtained from an N-body simulation, is less ideal for being reconstructed using parametric methods. Indeed, the performance of the parametric algorithms appears more consistent with that of the free-form ones. In Hera, light does trace the massive substructure, as assumed by the parametric and hybrid methods. The Bradac-Hoag and GRALE models do not make that assumption and thus recover fewer small subhalos.
The major differences between the models, and between the models and the true convergence maps, are found near substructures, but the shapes of the mass distributions, especially at large distances from the center, also show inconsistencies. We will discuss them in more detail in the next sections.
To better highlight the differences between the maps, we show the ratios between the reconstructed and the true convergence maps for Ares and Hera in Figs. 8 and 9, respectively.
One dimensional mass and convergence profiles
We begin by discussing the results on the mass and convergence (or surface density) profiles. Meneghetti et al. (2010a) already showed, using only one of the methods employed in this paper (Lenstool, employed by both the CATS and the Johnson-Sharon teams), that strong lensing can potentially measure the mass inside the Einstein radius with an accuracy of the order of a few percent. In the cases of Ares and Hera, the sizes of the Einstein radii are significantly different. In Fig. 10, we show how θ E grows as a function of the source redshift z s . The Einstein radius of Ares is ∼ 20 arcsec at z s = 1. Its size at z s ∼ 2 is more than double that, and it grows asymptotically to ∼ 55 arcsec at higher redshift. The reason for the steep rise between z s = 1 and z s = 2 is that Ares has a bi-modal mass distribution. For sources at low redshift (z s ∼ 1), each of the two mass clumps has its own critical lines. These are shown by the red curves in the upper middle panel of Fig. 3. To draw the plot in Fig. 10, we use the center of the most massive mass clump as reference, and only the critical line enclosing this point is used to measure θ E . By increasing the source redshift, the critical lines around the two mass clumps merge into a single, very extended critical line (see the white line in the upper middle panel of Fig. 3, which shows the critical line for sources at z s = 9).
Figure 6. Convergence maps (z s = 9) of Ares. The first nine panels show the results of the reconstructions, beginning with the free-form methods (panels 1-4) and concluding with the parametric models (panels 5-9). The lower left panel shows the true convergence map, for comparison.
Figure 7. Convergence maps (z s = 9) of Hera. The first eleven panels show the results of the reconstructions, beginning with the free-form methods (panels 1-6) and concluding with the parametric models (panels 7-11). The lower right panel shows the true convergence map, for comparison.
Figure 8. Ratios between the mass reconstructions and the true Ares mass distribution.
In the case of Hera, the Einstein radius grows from ∼ 12 arcsec at z s = 1 to ∼ 30 arcsec at z s = 9. The critical lines for these two source redshifts are shown in the lower central panel of Fig. 3.
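As an aside, the effective Einstein radius used in plots like Fig. 10 can be measured from any model by extracting the tangential critical line and converting the area A it encloses into θ E = (A/π)^(1/2). Below is a minimal Python sketch of this measurement, assuming precomputed convergence and shear maps for a given source redshift; the function and variable names are illustrative.

```python
# Sketch: effective Einstein radius from the tangential critical line.
import numpy as np
from matplotlib.path import Path
from skimage.measure import find_contours

def effective_einstein_radius(kappa, gamma, center, pixscale):
    """theta_E (arcsec) from the critical line enclosing `center` (pixels)."""
    lam_t = 1.0 - kappa - gamma              # tangential Jacobian eigenvalue
    area = None
    for c in find_contours(lam_t, 0.0):      # zero-level (critical) contours
        x, y = c[:, 1], c[:, 0]
        if Path(np.column_stack([x, y])).contains_point(center):
            # Shoelace formula for the enclosed area, in pixel^2.
            area = 0.5 * abs(np.dot(x, np.roll(y, 1))
                             - np.dot(y, np.roll(x, 1)))
    return 0.0 if area is None else np.sqrt(area / np.pi) * pixscale
```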
In Fig. 10 we also show the redshift distributions of the multiple images identified in the background of the two clusters (red and blue histograms). These multiple images are marked with numbered circles in the central panels of Fig. 3. The labels of each image are constructed as X.Y, where X is the ID of the source and Y is the ID of the multiple images belonging to the same system. Being such a powerful lens, Ares produces many more multiple images than Hera, some of which originate from galaxies at redshift z s ∼ 6. The most distant multiple-image system in the field of Hera is only at z s ∼ 3.5. In both cases, however, the redshift distribution of the multiple images overlaps with the redshift range where the sizes of the Einstein radii grow most strongly. Indeed, the relative variation of θ E between z s = 3 and z s = 9 is only 10%. Thus, we expect that the models constructed using these constraints can be safely used to trace the growth of the cluster strong-lensing region up to very high redshifts. Analogously, we expect that the mass profiles are recovered with higher precision in the radial ranges 20 ≲ θ ≲ 60 arcsec and 10 ≲ θ ≲ 30 arcsec for Ares and Hera, respectively. This is consistent with our findings. The upper panels of Figs. 11 and 12 show the projected enclosed mass profiles of Ares and Hera, respectively. The bottom panels show the projected mass density profiles in units of convergence κ for z S = 9. The profiles are computed with respect to the center of the most massive sub-clump in each cluster field. To facilitate the comparison between parametric and free-form methods, we show the results for these two classes of models separately (left and right panels).
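The enclosed-mass profiles shown here follow directly from the convergence maps. A minimal sketch of the aperture sum is given below; the critical surface density, pixel scale and units are illustrative placeholders.

```python
# Sketch: enclosed projected mass profile M(<R) from a convergence map.
import numpy as np

def enclosed_mass_profile(kappa, center, pixscale_kpc, sigma_cr, radii_kpc):
    """M(<R) for each aperture in radii_kpc; units follow sigma_cr
    (e.g. M_sun if sigma_cr is given in M_sun/kpc^2)."""
    ny, nx = kappa.shape
    y, x = np.indices((ny, nx))
    r = np.hypot(x - center[0], y - center[1]) * pixscale_kpc
    pix_area = pixscale_kpc**2
    return np.array([kappa[r < R].sum() * sigma_cr * pix_area
                     for R in radii_kpc])
```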
The mass distribution of Ares is generated in a manner very similar to that used by the parametric techniques (except Zitrin-LTM) to model the lenses, namely as a combination of parametrized mass components, including subhalos at the positions of cluster galaxies. Therefore, it is not surprising that these methods recover the true mass profile of Ares with very good accuracy. For example, the CATS, Johnson-Sharon, and GLAFIC profiles differ from the true mass profile by ±2%. Larger differences are found for the Zitrin-LTM-gauss and the Zitrin-NFW approaches (perhaps because these are calculated on a lower-resolution grid; see the discussion in Sections 3.8 and 3.9), but even for these models, in the region probed by strong lensing, the deviations from the true mass profiles are within ∼ ±10%.
It is noteworthy that neither the CATS nor the Johnson-Sharon team used the NFW density profile to model the smooth DM halos of the two main mass components of Ares. On the contrary, they used cored isothermal profiles, which can of course be tweaked to match the lensing properties of NFW halos. This is consistent with the findings of Shu et al. (2008), who showed that, in several cases, strong-lensing clusters are equally well modeled with cored-isothermal and NFW density profiles. The additional constraints provided by complementary analyses, such as stellar kinematics in the BCG, could help to break this degeneracy (Newman et al. 2013). Moreover, the adoption of a cored isothermal profile instead of the NFW profile does not prevent several models from recovering the correct slope of the surface mass density (i.e. convergence) profile over a relatively broad range of distances from the cluster center. The constraints available to carry out the reconstructions include both radial and tangential features, with the former particularly sensitive to the slope of the projected density profile.
Among the free-form methods, the reconstructed profiles generally deviate by 5 − 15% from the true mass and convergence profiles. Some models (e.g. GRALE) have a performance very similar to that of the parametric methods. The best agreement between the true profile and the models is found between 20" and 60" from the lens center, which nicely corresponds to the size of the Einstein radius, as shown in Fig. 10.
Hera is a less idealized test case for most of the parametric methods, but light still traces the mass substructure. Thus, also for this lens, the parametric models reproduce the input mass profiles more closely than the free-form methods, though the differences between the two approaches are now reduced. We find that the mass profiles obtained with the parametric methods differ from the input mass profile by less than 10% within ∼ 80 arcsec from the assumed center. The same level of accuracy is reached by the free-form methods within 10 ≲ r ≲ 30 arcsec. This radial range corresponds to the size of the region probed by strong lensing. Both parametric and free-form methods clearly converge to the true mass profiles within this range of distances, where the relative differences are of the order of a few percent.
Shape and orientation
Having quantified the performance of the methods to reconstruct one-dimensional mass profiles, we discuss now their ability to recover the two-dimensional mass distributions of the lenses.
To be more quantitative about how well the methods employed recover the true shape and orientation of the two clusters, we consider the projected mass distributions of the lenses in terms of their iso-surface-density (or convergence, κ) contours. We use the following procedure:
(i) From the convergence maps, we extract the contours corresponding to κ-levels in the range 0.5-3.0. Since both Ares and Hera have bi-modal mass distributions, we use the center of the largest mass clump as the reference center for this analysis and we consider only the contours enclosing it.
(ii) We fit an ellipse to each contour and measure its ellipticity and position angle. We also measure the size of each contour by means of an equivalent radius r κ , defined as
r κ = √(ab),
where a and b are the semi-axes of the best-fitting ellipse.
Figure 11. Mass profiles in the inner 100 arcsec of Ares: enclosed mass (upper panels) and mass surface density (lower panels). Results for parametric and free-form methods are shown in the left and in the right panels, respectively. The insets on the bottom of each panel show the ratio between the reconstructed and the true mass profiles. The horizontal dashed lines correspond to ±2% and ±10% differences between lens models and input mass distribution.
Figure 12. Mass profiles as in Figure 11 but for Hera.
Figure 13. Ares mass iso-surface-density contours κ = 0.5, 1.0, 1.5, 2.0, 2.5, 3.0 for z s = 9 (jagged lines) and elliptical fits in red.
(iii) Finally, we draw the radial profiles of both the ellipticity and the position angle. The radius used to produce the profiles is the equivalent radius of the iso-density contours. The procedure outlined above is shown in Fig. 13 for the cluster Ares.
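A minimal Python sketch of steps (i)-(iii) is given below, using a simple second-moments ellipse fit (one of several possible choices) to a single iso-convergence contour; the κ levels match those quoted above, and all names are illustrative.

```python
# Sketch: ellipticity, position angle and equivalent radius of an
# iso-convergence contour via a second-moments ellipse fit.
import numpy as np
from skimage.measure import find_contours

def fit_contour_ellipse(kappa, level):
    cont = max(find_contours(kappa, level), key=len)   # longest contour
    x, y = cont[:, 1], cont[:, 0]
    x, y = x - x.mean(), y - y.mean()
    # Second moments of the contour points define the best-fit ellipse;
    # for points on an ellipse boundary, variance = (semi-axis)^2 / 2.
    evals, evecs = np.linalg.eigh(np.cov(np.vstack([x, y])))
    b, a = np.sqrt(2.0 * evals)                        # semi-minor, semi-major
    pa = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))
    return dict(e=1.0 - b / a, pa=pa, r_kappa=np.sqrt(a * b))

# profiles = [fit_contour_ellipse(kappa, lv) for lv in np.arange(0.5, 3.1, 0.5)]
```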
The radial profiles of the ellipticity and of the position angle for the two clusters are shown in Figs. 14 and 15. As done in Figs. 11 and 12, the results for parametric and free-form methods are displayed separately (left and right panels, respectively).
In each panel, the true profile is given by the black dashed line. The two clusters investigated in this work exhibit quite different ellipticity profiles. Indeed, due to the larger spatial separation between the two mass clumps, Ares has a less elongated inner core (e = 1 − b/a ∼ 0.3) compared to Hera (e ∼ 0.7). Ares's ellipticity increases with radius, while Hera shows the opposite trend.
Despite the fact that we have introduced some modest radial variation of the ellipticity of the two main mass clumps in Ares, the largest jumps in the ellipticity profile of this cluster are produced by massive substructures. These variations of ellipticity are generally well reproduced in the parametric reconstructions and, to some extent, also in the free-form model of GRALE. Clearly, the parametric techniques produce better measurements of the core shapes, both in the case of Ares and in that of Hera. Indeed, due to resolution limits, the convergence maps produced by the free-form methods are noisier, resulting in more irregular iso-density contours. Under these circumstances, the ellipticity measurements are more uncertain.
Among the parametric reconstructions of Ares, the largest deviations from the true ellipticity profile are found for the Zitrin-NFW and for the Zitrin-LTM-gauss models, within ∼ 40" and ∼ 20", respectively. Interestingly, these same algorithms provide some of the most accurate measurements of the core shape in the case of Hera. These algorithms generally find higher halo ellipticities compared to the other parametric methods. Such behavior is consistent with the results of Zitrin et al. (2015), where the Zitrin-NFW and Zitrin-LTM-gauss methods are both employed in the reconstruction of the galaxy clusters in the CLASH sample. As shown in their Fig. 3, the first of these two methods leads to more elliptical mass distributions. The most likely interpretation of this behavior is that external shear compensates for the smaller ellipticity of the LTM models.
All parametric methods except the Zitrin-LTM-gauss tend to over-estimate the ellipticity of the mass distribution at large radii in the case of Hera. We recall that all these algorithms fit the data by combining multiple mass components, each of which has a fixed ellipticity. The results show that, within the region probed by strong lensing (≲ 40" for Hera), the combination of multiple mass clumps is effective in reproducing the overall ellipticity of the cluster. At larger radii, though, the models are unconstrained and the ellipticity is extrapolated from the inner region. Free-form methods do not show the same trend; their ellipticity profiles are noisier.
The orientation angles of the iso-density contours in the parametric reconstructions also deviate from Hera's true orientations at large radii. Being a numerically simulated cluster, Hera is characterized by asymmetries and twists of the iso-density contours that turn out to be much stronger than in Ares. For example, the position angle of the iso-density contours changes by ∼ 20 degrees between the very inner region of the cluster and a distance of ∼ 50".
Because the shape and orientation of the cluster at large radii are not perfectly reproduced, the CATS, Johnson-Sharon, GLAFIC and Zitrin-NFW models have an excess of mass along the major axis of the cluster with respect to the true mass distribution of Hera (and consequently they lack mass in the perpendicular direction). These peculiarities can be seen in Fig. 9, where the ratios between reconstructed and true convergence maps of Hera are shown.
Substructure
Figs. 8 and 9 show that significant differences exist between the models near substructures. Measuring the mass of substructures is an important task that several authors have performed via strong lensing (see e.g. Natarajan et al. 2007, 2009; Grillo et al. 2015, and references therein). Therefore it is interesting to quantify the lens-model precision near these secondary mass clumps.
From the perspective of strong lensing, substructures are often identified as massive halos around cluster galaxies. This is particularly true for parametric methods: they use the luminous galaxies as tracers of the underlying mass distribution. Free-form methods, instead, can in principle detect any kind of mass substructure, even if not traced by light. However, they cannot distinguish between the projected mass belonging to the cluster halo and that bound to the substructures.
Indeed, as part of their submissions, the groups did not provide estimates of the masses in substructures, nor substructure catalogs. Here, we perform the following analysis:
• We start from the assumption that galaxies trace the substructures. This is not a strong assumption, given the method employed to generate the galaxy populations of Ares and Hera. In both cases, galaxies do indeed tend to coincide with dark-matter substructures. In the case of Ares there is a one-to-one correspondence between luminous galaxies and dark-matter subhalos. In the case of Hera, we have excluded from the image simulations those galaxies which had their dark-matter halos stripped off in the course of the cluster evolution.
• We create apertures centered on the cluster galaxies with m AB,F814W < 24, with radii equal to twice the effective radius of the galaxies, and we measure the projected mass within each aperture from both the reconstructed and the true convergence maps.
• In the following, we will refer to these masses as substructure masses, keeping in mind, however, that these are the sum of the substructure mass and of the projected mass of the underlying cluster dark-matter halo.
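The aperture measurement just described can be sketched as follows; the galaxy catalogue fields and units are illustrative, and the median and percentiles computed at the end are the statistics used to characterize the ratio distributions below.

```python
# Sketch: substructure masses in apertures of twice the effective radius,
# compared between reconstructed and true convergence maps.
import numpy as np

def aperture_mass(kappa, x0, y0, r_pix):
    ny, nx = kappa.shape
    y, x = np.indices((ny, nx))
    return kappa[np.hypot(x - x0, y - y0) < r_pix].sum()  # Sigma_cr*pix^2 units

def substructure_ratios(kappa_model, kappa_true, galaxies):
    """galaxies: iterable of (x, y, r_eff) in pixels for members with
    m_F814W < 24; returns the median and 25th/75th percentiles of the
    model-to-true aperture-mass ratios."""
    r = np.array([aperture_mass(kappa_model, x, y, 2 * re) /
                  aperture_mass(kappa_true, x, y, 2 * re)
                  for x, y, re in galaxies])
    return np.median(r), np.percentile(r, 25), np.percentile(r, 75)
```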
In Figs. 16 and 17, we show the distributions of the ratios between measured and true substructure masses. The two figures refer to Ares and Hera, respectively, and show the results for all the models. We characterize the distributions of the ratios r by means of their median r and of their 25th and 75th percentiles, p 25 and p 75 . The analysis is carried out on the same areas covered by the maps in Figs. 6 and 7. Therefore, the same numbers of substructures have been used to build the histograms (282 and 278 for Ares and Hera, respectively).
The results found for Ares show that several methods recover nearly unbiased substructure masses with good accuracy. For example, the inter-percentile range found for the CATS model is only 0.21 and the median is r = 1. Similar results are found for the Zitrin-LTM-gauss model, although with a median slightly larger than unity. Some parametric models, such as those of Johnson-Sharon and Zitrin-NFW, and marginally GLAFIC, have skewed distributions with tails extending towards ratios larger than unity. Interestingly, Johnson-Sharon's model is based on the same modeling software employed by the CATS group.
Among the free-form models, the distributions are generally broader than for the parametric methods. The distribution for the Bradac-Hoag model has median r = 1 and inter-percentile range 0.32. Similar or slightly larger scatter is found for the GRALE and Coe models. The ratio distribution obtained for the Diego-reggrid model has a tail extending towards small values and its median is r = 0.8.
The results found for Hera are in good agreement with those found for Ares. Parametric methods perform very similarly to one another, providing mass measurements accurate at the level of a few percent. The Zitrin-LTM-gauss model has a median r = 0.89. The dispersions of the ratio distributions, as quantified by the inter-percentile ranges, are ∼ 0.2 − 0.25. This is quite remarkable, given the very different methods used to populate Ares and Hera with substructures and the significant differences between the density profiles of the substructures themselves in the two simulations, as shown in Fig. 1. This seems to indicate that the methods are flexible enough to account for even large variations in the substructure properties, provided these are traced by light.
It is less surprising that the flexible free-form methods also behave so similarly in Hera and Ares.
Magnification maps
As one of the major goals of the Hubble Frontier Fields is to use the lensing power of galaxy clusters to detect and characterize very high redshift galaxies, we focus now on the magnification. Of course, the results shown in this section are not independent of those discussed earlier, since the convergence is one of the two quantities entering the definition of magnification. The other quantity is the shear, which was not discussed so far.
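For completeness, the magnification follows from the convergence κ and the two shear components through µ = 1/[(1 − κ)² − γ²]; it diverges on the critical lines. A minimal sketch of this map computation (variable names are illustrative) is:

```python
# Sketch: magnification map from convergence and shear components.
import numpy as np

def magnification(kappa, gamma1, gamma2):
    gsq = gamma1**2 + gamma2**2
    det_a = (1.0 - kappa)**2 - gsq            # det of the lensing Jacobian
    with np.errstate(divide="ignore"):        # mu diverges where det_a = 0
        return 1.0 / det_a
```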
In Figs. 18 and 19, we show the magnification maps for z S = 9 obtained for Ares and Hera. As done previously, the results for each model are displayed in different panels. The last panel on the bottom shows the true magnification. The ratios between each reconstructed magnification map and the true magnification maps are shown in Figs. 20 and 21.
The largest discrepancies between reconstructed and true magnifications appear around the lens critical lines. These are the loci where the magnification formally diverges. Therefore, even a small misalignment between the true and reconstructed critical lines will result in potentially large magnification differences. Most of the models recover the shape and the size of the critical lines well. Others, such as the Bradac-Hoag, Coe, and Diego-multires models, are characterized by critical lines with very irregular shapes.
In Figs. 22 and 23, the measured magnifications are plotted as a function of the true magnifications. As anticipated, the scatter around the median increases as a function of the true magnification for all models. The scatter for the parametrically reconstructed models of Hera is a factor of 2−3 larger than for the corresponding models of Ares, the mock cluster that was generated parametrically. Moreover, we note that Hera was inherently less well constrained, as the cluster had fewer multiple images than Ares. A somewhat lower fidelity in the reconstruction was therefore anticipated, and indeed found.
In the best scenario obtained for Hera (i.e. the GLAFIC model, see also Fig. 24), we find very high accuracy (a few percent bias at most) and precision: ∼ 10% uncertainty for µ = 3, growing to ∼ 30% at µ = 10, and increasing further at higher magnifications.
In other cases, median magnifications are biased low or high by as much as ∼ 40 − 50%. Some of these biases are due to the models' inability to reproduce the correct magnification patterns interior to the tangential critical lines. In other cases, the gradient of the magnification around the critical lines is significantly different from that in the true magnification maps, reflecting the incorrect shape and orientation of the projected mass distribution or the incorrect slope of the convergence profile.
Regions around substructures are sometimes characterized by large uncertainties in the magnification estimates. For example, the large substructure located south of the cluster Hera is not well constrained by any of the models, which all systematically underestimate the magnification around it. As shown in the lower central panel of Fig. 3, there are no multiple images located near this substructure, which may explain why no model is able to constrain its mass properly.
In Fig. 24, we compare the precision of the magnification measurements over the whole map (in the case of the reconstruction provided by the GLAFIC team for Hera; solid line) and at the observed positions of the multiple images used to build the lens model (red dots). The figure shows that the precision achieved by the model at the location of the constraints is indeed higher than in other regions with similar magnifications. The horizontal error-bars indicate the sizes of the magnification bins used to estimate the precision of the magnification measurements.
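The accuracy and precision curves of Fig. 24 amount to binned statistics of µ − µ true; a minimal sketch, with illustrative names and bin edges, is:

```python
# Sketch: binned accuracy (median of mu - mu_true) and precision
# (75th-25th inter-percentile range of mu - mu_true) versus mu_true.
import numpy as np

def binned_mu_stats(mu_model, mu_true, bin_edges):
    mu_m, mu_t = mu_model.ravel(), mu_true.ravel()
    centers, acc, prec = [], [], []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        sel = (mu_t >= lo) & (mu_t < hi)
        if not sel.any():
            continue
        d = mu_m[sel] - mu_t[sel]
        centers.append(0.5 * (lo + hi))
        acc.append(np.median(d))
        prec.append(np.percentile(d, 75) - np.percentile(d, 25))
    return np.array(centers), np.array(acc), np.array(prec)
```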
RECONSTRUCTION METRICS
In order to be more quantitative in estimating the ability of the different methods employed in this work to measure several relevant properties of Ares and Hera, we have defined metrics for the lens properties discussed above. More precisely, we introduce metrics for the one-dimensional radial profiles of
• the 2D projected mass enclosed within radius R,
• the surface mass density, or convergence κ(R),
• the ellipticity, as fit to iso-density contours,
• and the orientation, as given by the position angle of the convergence contours.
We also define metrics to quantify the goodness of the reconstruction of the 2D convergence and magnification maps. Finally, we define a metric for the projected subhalo masses in apertures centered on the cluster galaxies.
Thus, we have seven metrics that can be used for a more quantitative comparison between the lens models of both clusters. We can also evaluate how the performance of each algorithm changes when switching from a simulation based on a lens obtained from semi-analytic methods (Ares) to one obtained from a fully numerical simulation (Hera).
The metrics are defined as follows. Given a set of measured values v and a set of true values v true , we derive the distribution of the ratios v/v true . Then, we compute the median ζ and the 25th and 75th percentiles of the distribution, p 25 and p 75 . The metric is finally defined as a function of these three quantities, constructed such that it penalizes those reconstructions which are biased and/or affected by a large scatter.
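A sketch of the metric computation is given below. Since the explicit functional form is not reproduced in the text above, the final combination used here, 1/(|ζ − 1| + (p75 − p25)), is only an assumed placeholder with the required qualitative behavior (penalizing both bias and scatter), not the exact definition.

```python
# Sketch: metric from the ratio distribution of measured vs true values.
import numpy as np

def reconstruction_metric(values, true_values):
    r = np.asarray(values) / np.asarray(true_values)
    zeta = np.median(r)
    p25, p75 = np.percentile(r, [25, 75])
    # Assumed combination: higher score for lower bias and lower scatter.
    return 1.0 / (abs(zeta - 1.0) + (p75 - p25))
```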
Of course, the metrics are not fully independent. For example, a model which is able to reproduce the convergence profile of the lens with good accuracy will also provide a robust measurement of the mass profile. Similarly, models whose reconstructed convergence maps show little deviation from the true convergence maps will also provide a good match with the simulation in terms of convergence profile or shape (ellipticity and position angle). Nevertheless, the ranking among the models with respect to correlated lens properties is not always the same. For example, the Johnson-Sharon reconstruction of Hera ranks second in terms of convergence profiles and fourth in terms of mass profiles. In addition, the different lens properties discussed here are often used individually, and it may be interesting for the reader to establish which modeling technique is best suited to their scientific purposes.
In Fig. 25, we show radar plots which summarize the metric values recorded by each reconstruction. The overall performance of each model corresponds to the area of each polygon. When one model is good at measuring some of the lens properties, but less effective at capturing others, the polygon appears elongated towards one or more of the chart vertices.
The first eight charts correspond to free-form or hybrid methods. The remaining five charts refer to parametric techniques. As we have pointed out several times earlier, there is a larger discrepancy between the performances of parametric and non-parametric methods in the case of Ares than in the case of Hera. This leads us to the conclusion that, despite our attempts to make the Ares mass distribution less ideal for parametric methods (e.g. by simulating adiabatic contraction or by introducing some twist of the iso-density contours, including some radial dependence), the simple fact that this cluster is assembled by combining mass components traced by the cluster galaxies, consistently with the basic assumptions of most parametric techniques, gives a huge advantage to these methods. The good news, in this case, is the following. First, these algorithms work as they are supposed to. Second, they provide very accurate reconstructions even if the parametrisation chosen for the lens halo density profile is not fully consistent with the true profile of the lens. For example, none of the parametric techniques, except the Zitrin-NFW method, used the NFW profile for fitting the smooth dark-matter halo components of Ares. Even so, models such as those submitted by the CATS, Johnson-Sharon, and GLAFIC teams produce an overall better fit to the input mass distribution compared to the Zitrin-NFW reconstruction. This suggests that pseudo-elliptical, cored halo models provide the right flexibility to account for most of the effects we have introduced in the simulation, such as the adiabatic contraction, which steepens the density profile in the central region of the cluster. Alternatively, these results may be interpreted as evidence for a lack of sensitivity of lensing alone to the precise shape of the halo density profiles, lensing being mostly sensitive to the mass enclosed within the Einstein radius rather than to the slope of the density profile. Another possible cause may be that the Zitrin models are calculated on a low-resolution grid, and perhaps their accuracy is limited by this resolution compared to higher-resolution or completely analytic parametrizations.
When switching to a fully numerical simulation, the differences between parametric and free-form methods become weaker. At least for some of the metrics, some free-form / hybrid reconstructions of Hera (see e.g. the GRALE or Lam models) appear to be as good as the best parametric reconstructions of this cluster. This indicates that several parametric methods still cannot fully account for deviations of the mass distributions from a symmetric shape, which are, instead, more naturally captured by free-form methods. Asymmetries could be mimicked by suitable combinations of substructures in parametric models. Indeed, a degeneracy exists between these two properties of the mass distribution. However, the number of constraints in these simulations is high enough that this degeneracy is partially broken, as shown by how well the mass is constrained around the cluster galaxies in at least some of the parametric reconstructions.
The model provided by the CATS team for Hera has significantly smaller values of all metrics (except for the cluster orientation) compared to the model submitted by the same team for Ares. The metrics agree with those of other parametric reconstructions of the same cluster (e.g. Johnson-Sharon). On the contrary, the reconstructions provided by the GLAFIC team for the two clusters have quite consistently high metric values. One feature of GLAFIC, which was enabled in the reconstruction of Hera, is the inclusion of external shear and third-order multipoles of the mass distribution. Apparently, these additional ingredients have provided the GLAFIC model extra degrees of freedom to properly account for the asymmetric mass distribution of Hera.
Figure 24. Magnification accuracy (dashed line) and precision (solid line) as a function of the magnification from the strong lensing constraints for the GLAFIC reconstruction of Hera. The precision is quantified by the difference between the 75th and 25th percentiles of the distribution of µ − µ true , sampled on a 256×256 pixel grid. The accuracy is given by the median of µ − µ true . The red points show the uncertainties of the magnification measurements at the location of the multiple images. The horizontal error-bars indicate the sizes of the magnification bins used to estimate the precision.
The comparison between the metrics of parametric and free-form methods also shows that the latter techniques are generally less accurate in reconstructing the two-dimensional maps of convergence and magnification and in measuring the mass around substructures. In fact, the spatial resolution that can be achieved with these methods is generally lower. On the contrary, radial profiles of the convergence and of the enclosed mass are measured by several of the free-form methods employed in this experiment with accuracy comparable to parametric techniques.
LIMITATIONS OF THIS TEST
We would like to remark that the tests outlined in this paper suffer from some limitations. First of all, we make the assumption that the simulations reproduce the properties of real clusters. While some methods (e.g. the free-form ones) do not care about the correlation between dark matter and baryons, other methods strongly rely on the assumption that light traces mass. Both Ares and Hera implement this property, which, at least in some cases, has been questioned by observations (Wang et al. 2015; Hoag et al. 2015). In particular, the results we report on substructures are sensitive to this assumption. In a recent paper, Harvey et al. (2016) explored how assuming that light traces mass in strong gravitational lens models can lead to systematic errors in the predicted positions of multiple images. They find that images can be shifted by up to ∼ 1", assuming physically motivated offsets between dark matter and stars. They quote a ∼ 0.5" rms error in the position of the multiple images due to breaking the assumption that mass traces light. Note, however, that, to some extent, we introduced some misalignment between matter and light in both Ares and Hera, by assigning to the observed galaxies a shape and an orientation which are not correlated with the underlying dark matter distribution.
Other limitations concern some observable properties of the galaxies in the simulated observations (e.g. luminosities and sizes) and their correlation with their halo masses. It is known that the SAMs are not fully consistent with observations in this respect (see e.g. González et al. 2009; Ascaso et al. 2015; Xie et al. 2015; Hirschmann et al. 2015); thus, the standard scaling relations adopted by some parametric techniques to translate the light into the mass or the size of the host halo might not be equally applicable to observations and simulations.
SUMMARY AND CONCLUSIONS
In this paper we used simulated observations of two synthetic galaxy clusters to evaluate the performance of several algorithms for mass reconstruction with strong lensing. Such algorithms are currently being used to deliver to the community the lens models for the six galaxy clusters being observed in the Frontier Fields programme of the Hubble Space Telescope.
The two clusters used in this study were obtained using very different techniques. Ares was generated using the semi-analytical code MOKA. Hera is instead the output of a cosmological N-body simulation at high resolution. The observable properties of the cluster galaxies are modeled using HOD and SAM techniques in Ares and Hera, respectively. In both cases, the clusters have complex mass distributions, characterized by disturbed and bi-modal morphology, similar to those of the FFI clusters.
We used the code SkyLens to simulate HST observations of the two mock clusters with both the ACS and the WFC3-IR cameras. We produced images in all photometric bands used in the FFI, calibrating the exposure times so as to reach the depth of the FFI observations. These simulated HST data were distributed to several groups of lens modelers for a blind analysis, i.e. without revealing either the true mass distribution of the lenses or the method used to simulate them.
The simulated observations include lensing effects on a realistic distribution of background galaxies. We identified many strongly lensed galaxies and built a catalog of multiple image systems, which was delivered together with the simulated observations. The catalogs also include the redshift of all the sources.
We complemented the HST simulations with a simulated observation in the R c band with the Subaru telescope. The main purpose of this additional simulation was to allow the inclusion of weak-lensing constraints at larger distances from the cluster center than those probed by HST. Together with the image, we also distributed a shear catalog obtained by processing the Subaru simulation through a public KSB pipeline.
We received nine reconstructions of Ares and eleven reconstructions of Hera, submitted by ten different groups. Seven groups employed their techniques to reconstruct both clusters. The remaining groups reconstructed just one of the two clusters or submitted reconstructions based on different set-ups of their methods. This is the first time that such a large number of algorithms have been tested against known mass distributions. In a similar spirit to our experiment, in the recent collaborative effort presented in Treu et al. (2016), several of the methods used to reconstruct the galaxy cluster MACSJ1149.5+2223 and to estimate the time delays between the multiple images of the SN "Refsdal" were compared. The recent re-appearance of the SN, reported by Kelly et al. (2016), enabled a blind test of various model predictions, which were found to be very accurate for several reconstructions. In addition, Rodney et al. (2015) compared the magnification predictions from 17 mass models of Abell 2744 using a lensed supernova of type Ia.
Figure 25. Radar plot showing the scores of each model for all metrics discussed in the paper. Larger polygons correspond to better overall performance. Each chart corresponds to a different lens model (see labels on the top) and shows results for both Ares (blue) and Hera (red), or whichever is available. The seven metrics are shown on the vertices of each chart. For each metric, the scores range from 0 (worst; plotted at the center of the chart) to 1 (best; plotted at the vertex), normalized to the maximum value recorded by all models. A filled polygon is obtained by connecting the plotted scores of all metrics for each reconstruction.
The methods compared here include both parametric and free-form algorithms. We have investigated how they perform at recovering several properties of the lenses, namely: the radial profiles of the convergence and of the enclosed mass, the mass in substructures, and the maps of the convergence and of the magnification. For each of these properties, we defined a metric aimed at quantifying the performance of the method.
The key results of this phase of the comparison exercise of lens mapping methodologies can be summarized as follows.
• Parametric methods are generally better at capturing two-dimensional properties of the lens cores (shape, local values of the convergence and of the magnification). The free-form methods are competitive with the parametric methods in measuring convergence and mass profiles. It is worth mentioning, however, that, in both Ares and Hera, the cluster galaxies were good tracers of the cluster mass distributions.
• The accuracy and precision of strong-lensing methods in measuring the mass within the Einstein radius (or, more generally, within the region probed by the strong-lensing constraints) are very high. The measured profiles deviate from the true profiles by only a few percent at these scales. Of course, larger deviations are found at radii larger and smaller than the Einstein radius. The determination of the mass enclosed within the Einstein radius was extremely robust for all methods.
• The largest uncertainties in the lens models are found near substructures and around the cluster critical lines. For some of the parametric models, the total mass around substructures (identified by cluster galaxies) is constrained with an accuracy of ∼ 10%. However, other methods have much larger scatter. Uncertainties on the magnification grow as a function of the magnification itself and are therefore more pronounced near the cluster critical lines. For the best-performing methods, the accuracy in the magnification estimate is ∼ 10% at µ true = 3 and degrades to ∼ 30% at µ true = 10.
• Switching from Ares to Hera, i.e. from a purely parametric to a more realistic lens mass distribution, the gap between parametric and free-form methods becomes smaller. Algorithms such as that used by the GLAFIC team, which includes third-order multipoles in the lens mass distribution, have extra degrees of freedom which allow them to better reproduce asymmetries. These asymmetries, and possible variations of the halo ellipticity as a function of radius, seem to be the strongest limitations of parametric methods. The adoption of a hybrid approach, where parametric and free-form methods are combined also to describe the large-scale component of the clusters, could lead to a significant improvement of the mass reconstructions.
• Some of the participating groups used the same code but adopted different set-ups to run it. For example, two groups (CATS and Johnson-Sharon) use the public code Lenstool with slight modifications. Similarly, Diego submitted several models of Hera using WSLAP+, which is the same code used by Lam et al. Despite using the same algorithms and making use of the same inputs (i.e. families of multiple images and redshifts), the reconstructions obtained by these groups are different, indicating that some of the choices made by the modelers when ingesting the data and setting up priors influence the results.
This is the first of a series of papers in which we address the issue of the accuracy of lens modeling. In a second paper, currently in preparation, we will discuss the results of the unblinded modeling of Ares and Hera. The feedback from the unblinding was used by the modelers not only to tweak their best fits to reach the best possible match to the input mass distributions of the lenses, but also to incorporate and instigate improvements in their modeling procedures. This will provide information on the accuracy limits achievable by each method and will also give further hints on the steps that need to be taken to optimize reconstructions.
Despite their complexity and the inclusion of several observational effects, the simulations used in this paper are still idealized in many respects. For example, the lenses are isolated and no additional lensing by matter along the line of sight is included. In addition, we eased the work of the lens modelers by identifying the strongly lensed sources and even providing redshifts for all of them. In the case of Ares, the number of available multiple images with known redshifts exceeds by a factor of ∼ 3−4 what is available in any of the Frontier Fields (e.g. MACSJ0416). In the next phase of this project, we will include the uncertainties due to possible misidentification of multiple images and photometric redshifts, as well as the noise added by the intervening matter distribution along the line of sight.
Preliminary design of a supercritical CO2 wind tunnel
The preliminary design of a test-rig for non-ideal compressible-fluid flows of carbon dioxide is presented. The test-rig is conceived to investigate supersonic flows that are relevant to the study of non-ideal compressible-fluid flows in the close proximity of the critical point and of the liquid-vapor saturation curve, to the investigation of drop nucleation in compressors operating with supercritical carbon dioxide, and to the study of flow conditions similar to those encountered in turbines for Organic Rankine Cycle applications. Three different configurations are presented and examined: a batch-operating test-rig, a closed-loop Brayton cycle and a closed-loop Rankine cycle. The latter is preferred for its versatility and for economic reasons. A preliminary design of the main components is reported, including the heat exchangers, the chiller, the pumps and the test section.
Introduction
Supercritical carbon dioxide (sCO 2 ) is currently being considered as a working fluid in several industrial and power generation applications, thanks to its relatively low critical pressure and its low critical temperature. The seminal works of Angelino and Feher point out the advantages of using sCO 2 for power generation [1,2]. Indeed, sCO 2 Brayton power cycles can be used to exploit solar, geothermal and waste-heat thermal sources. Advantages over, e.g., steam include compactness of turbomachinery and high thermal efficiency at low temperature. Moreover, fewer corrosion issues may be encountered, except at high temperatures and pressures [3]. For nuclear power generation, sCO2 is preferred over steam for safety reasons in sodium-cooled fast reactors [4]. Rapid expansion of supercritical solutions (RESS) of CO 2 is used in the chemical, pharmaceutical and food industries for particle generation or extraction of chemicals, see [5,6,7] and references therein. Other applications include sterilization and cleaning [8].
The fluid dynamics of sCO 2 flows departs significantly from the well-known gas dynamics of dilute gases, such as air in standard conditions. For example, a relatively low speed of sound is observed at very high, liquid-like densities. Moreover, non-equilibrium shock structures are possibly observed, due to the relaxation of the internal vibrational modes [9,10]. In the supercritical and close-to-critical region, the heat transfer also exhibits peculiar behavior [11,12].
The dynamics of compressible fluids in the close proximity of the vapor-liquid saturation curve and critical point or within the supercritical region are referred to in the following as Non-Ideal Compressible-Fluid Dynamics (NICFD).
Despite its widespread usage, a comprehensive understanding of the fundamental properties of CO 2 flows in supercritical conditions is not yet available. Preliminary theoretical and numerical studies using accurate equations of state [13] and non-ideal flow solvers [14,15] are yet to be complemented with experimental data. Indeed, only recently have experimental activities been carried out to investigate the fundamentals of sCO 2 flows in supercritical conditions. For instance, Lettieri and collaborators at MIT assessed the condensation effects in sCO 2 compressors and defined a criterion to predict fluid condensation [16]. At KAIST, Lee's research team is developing an experimental facility to accurately take into account non-ideal gas effects during sCO 2 compressor design and performance analysis [17]. Finally, the University of Seville and Altran have designed a pressurized sCO 2 wind tunnel to improve the design of blade cascades for turbomachinery [18]. Diverse technology demonstrators of Brayton power cycles using sCO 2 are currently in operation in the USA [19,20,21], though the technology is not sufficiently developed for commercial exploitation [22]. For instance, heat transfer in sCO 2 cycles still raises some questions and requires further investigation to efficiently design heat exchangers [23,24].
Figure 1: Thermodynamic plane temperature-entropy (T -s) with the four exemplary supersonic expansions to be realized in the SCO 2 PRI facility. The region of interest (gray area), the saturation curve and three isobars are also reported. The symbol ♦ indicates the throat section, at M = 1.
The design of a novel sCO 2 wind tunnel, named SCO 2 PRI (Supercritical CO 2 for the PRocess Industry), is currently underway at the CREA (Compressible-fluid dynamics for Renewable Energy Application) Laboratory of Politecnico di Milano. Fundamental studies of supersonic sCO 2 flows will be carried out in the close proximity of the critical point and the liquid-vapor saturation curve, where sCO 2 compressors for power production are designed to operate. Moreover, the test-rig will be used as a calibration tunnel for pressure probes in non-ideal flows and for optical measurement techniques, including Schlieren and Laser Doppler Velocimetry (LDV).
The present work outlines the preliminary design of the plant and the technical specifications of the relevant components. To attain supersonic speeds, the test section consists of a convergent-divergent nozzle followed by a rectangular-section chamber for flow visualization. Three possible configurations are initially taken into account to drive the fluid through the nozzle: an open-loop batch-operating system, an "inverse" Joule-Brayton cycle and a Rankine cycle. A preliminary analysis indicates that the last configuration is the most suitable for the operating conditions of interest. Then, the main components of the plant, namely the pump, the heater, the chiller and the heat exchangers, are designed and a preliminary cost analysis is also carried out.
The present paper is structured as follows. In Section 2, the region of interest for the experimental observation of sCO 2 flows is determined, including conditions suitable for studying critical-point flows and condensing flows in compressors. The design constraints are also reported. The diverse test-rig configurations are presented and discussed in Section 3. The preliminary design of the main components of the chosen configuration, namely, closed-loop Rankine cycle, is reported in Section 4. In Section 5, final comments are given.
Region of interest and design constraints
The preliminary design of the SCO 2 PRI test-rig starts with the definition of the thermodynamic region to be investigated, according to the relevant research and industrial applications of sCO 2 reported in the previous section. Figure 1 depicts this region in the thermodynamic plane T -s, where T is the temperature and s is the specific entropy. Since the research interest is limited to NICFD flows, a first parameter taken into account to define the region of interest is the compressibility factor Z = P v/RT , where v is the specific volume, P is the pressure and R is the gas constant. This quantity allows one to estimate the deviation of the actual thermodynamic behavior from the ideal gas model, which predicts Z ≡ 1 [25]. Therefore, the region of interest is limited towards the vapor state by a maximum value of Z around 0.8-0.85. On the opposite side, it extends to the critical point. Moreover, a maximum pressure and temperature of P max = 150 bar and T max = 150 °C are imposed as design constraints, to limit the cost of the test-rig. The lowest temperature is arbitrarily fixed at −30 °C to include also a portion of the two-phase region, below the Vapor-Liquid Equilibrium (VLE) curve.
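For illustration, the Z boundary of the region of interest can be traced with a multiparameter equation of state. The short Python sketch below uses the CoolProp library for this purpose; the grid bounds mirror the design constraints above, while the library choice and grid sizes are assumptions of this sketch.

```python
# Sketch: compressibility factor of CO2 over the design envelope.
import numpy as np
from CoolProp.CoolProp import PropsSI

T = np.linspace(273.15 - 30.0, 273.15 + 150.0, 60)   # K
P = np.linspace(2e6, 15e6, 60)                       # Pa, up to 150 bar
Z = np.array([[PropsSI("Z", "T", t, "P", p, "CO2") for t in T] for p in P])
# Points with Z below ~0.8 lie inside the non-ideal region of interest;
# the Z = 0.8 contour can then be mapped to the T-s plane via PropsSI("S", ...).
```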
Within the region of interest, four exemplary isentropic expansions are defined, and a convergent-divergent nozzle is adopted to reach supersonic speeds. The four expansions, labeled A, B, C and D, are depicted in the plane T -s in Figure 1. Initial computations of the thermodynamic states are performed by means of the quasi-one-dimensional theory, which describes a steady, one-dimensional and isentropic flow in a duct, neglecting viscous and thermal effects [26]. With this theory, it is possible to predict the inviscid core of the flow.
Expansion A starts at the maximum pressure P = 150 bar, crosses the critical point and eventually ends inside the two-phase region. This expansion is probably the most challenging from the point of view of control. At the exit of the nozzle, the pressure P = 45 bar is imposed, corresponding to a Mach number M = 1.72. The measurement of thermodynamic properties along the expansion process, and possibly in the close proximity of the critical point, would give a significant improvement in the fundamental knowledge of sCO 2 behavior in the NICFD regime. The second expansion (B) occurs completely in the supercritical region, starting from the maximum pressure of 150 bar down to 75 bar, namely to an isobar just above the critical one. The initial temperature is 81.1 °C, so that a slightly supersonic expansion can be realized (exit Mach M = 1.05), and the compressibility factor at the nozzle inlet and outlet is around 0.5. Measurements of the fluid states through expansion B are relevant for investigating sCO 2 transport properties, for validating CFD models and for assessing the isentropic expansion coefficient in non-ideal conditions. Finally, conditions that typically occur in turbomachinery are reproduced in expansions C and D. Condition C replicates the expansions that may occur in sCO 2 compressors near the leading edge of the impeller, which were also investigated by Lettieri et al. [16]. In this case, the design exit Mach number is M = 1.49. The fourth expansion (D) is representative, in terms of reduced conditions, of flows in Organic Rankine Cycle (ORC) turbines. The experimental investigation of this kind of flows is typically challenging because of the high temperatures, which might be close to the thermal stability limit of the fluid. However, the behavior of the organic fluid under consideration can be reproduced by another fluid with a similar compressibility factor at the critical point, provided that their behaviors can be described by the same equation of state. Indeed, according to the principle of corresponding states, all fluids behave alike at the same thermodynamic conditions made dimensionless with respect to the critical-point values [25]. Thus, a flow of sCO 2 can reproduce, for instance, a siloxane flow in an ORC turbine in terms of reduced conditions at a lower temperature, which allows quantities of interest to be measured more easily. In expansion D, the initial and final pressures are 62.1 and 20 bar, while the exit Mach number is M = 1.42.
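A minimal sketch of the quasi-one-dimensional computation for expansion B is given below, assuming negligible inlet velocity and using CoolProp for the real-gas properties (the library choice and step count are assumptions of this sketch): the entropy is fixed at the inlet value, the enthalpy drop gives the velocity, and the local speed of sound gives the Mach number.

```python
# Sketch: quasi-1D isentropic expansion of CO2 along expansion B.
import numpy as np
from CoolProp.CoolProp import PropsSI

P0, T0 = 150e5, 81.1 + 273.15             # inlet of expansion B (Pa, K)
s0 = PropsSI("S", "P", P0, "T", T0, "CO2")
h0 = PropsSI("H", "P", P0, "T", T0, "CO2")

for P in np.linspace(P0, 75e5, 10):       # expand down to 75 bar
    h = PropsSI("H", "P", P, "S", s0, "CO2")
    a = PropsSI("A", "P", P, "S", s0, "CO2")   # local speed of sound
    v = np.sqrt(max(2.0 * (h0 - h), 0.0))      # energy balance, v0 ~ 0
    print(f"P = {P/1e5:6.1f} bar  M = {v/a:.3f}")
```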
To realize the outlined goal expansions, a plant suitable for feeding the convergent-divergent nozzle has to be designed. The plant can operate either continuously or as a batch facility, provided that the test time is long enough to perform quasi-steady measurements. In this regard, some observations can be drawn from the particular aim and scope of the test-rig. First, the SCO2PRI is a research facility that is expected to operate discontinuously and for a few hours per year; therefore, the initial and maintenance costs are more relevant than the operating costs. Moreover, it must be installed in the space currently available within the CREA laboratory, which limits the size and the power requirement of the test-rig to roughly 400 kW. Finally, the overall costs of designing and constructing the SCO2PRI test-rig must be compatible with the funding opportunities available for fundamental research. According to these requirements, three different plant configurations are investigated in the next section.
Assessment of the test-rig configurations
Three different test-rig configurations are now considered as possible solutions to realize the exemplary expansions A, B, C and D introduced in the previous section. These are an open-loop, batch-operating test-rig and two closed-loop, continuously operating plants. In the following subsections, a brief description and the initial sizing of the main components are reported for each configuration. As a preliminary step, a nozzle throat area of 200 mm² was chosen. However, as explained in the following, the flow rate associated with such a relatively large throat area is not compatible with the design constraints, and a more realistic throat area of 20 mm² is finally chosen.
Batch operation
The first configuration considered for the SCO2PRI test-rig consists of the simple open-loop facility sketched in Figure 2. A heated pressurized vessel is filled from CO2 cylinders up to a pressure greater than the one at the beginning of the expansions. Then, through a control valve, the selected expansion is realized in the nozzle, which discharges into a dump tank equipped with a vacuum pump to set the desired back pressure. The carbon dioxide is eventually released into the atmosphere. This set-up is very similar to the one currently available at MIT [16].

Figure 2: A heated vessel, filled from CO2 cylinders, discharges the fluid at supersonic speeds through a convergent-divergent nozzle into a low-pressure tank.
The main advantage of this solution is its simplicity and, thus, its low initial and maintenance costs. On the other hand, batch operation imposes a limit on the experiment duration, which strictly depends on the volume and initial pressure of the vessel. A target test duration of 100 s is pursued. Table 1 reports the time and the mass released during expansion B for different vessels and initial pressures, computed through the quasi-1D theory assuming an adiabatic process. Unfortunately, with the initial nozzle with A_th = 200 mm², only short tests are possible, even using a rather large vessel of 5 m³. According to these results, and to similar ones obtained for the other expansions but not reported here for brevity, the flow rate is reduced by a factor of ten, so that A_th = 20 mm². With this geometry, the target test duration can be achieved. A further drawback, inherently related to batch operation, concerns the difficulty of reaching steady state: this requires accurate and fast control of the throttling valve, because of the emptying of the pressurized vessel and the filling of the dump tank. Inaccurate control of the thermodynamic state of the steady flow could jeopardize the possibility of using the test-rig in the close proximity of the liquid–vapor critical point and its usage as a pressure-probe calibration facility. In conclusion, the analysis of the strengths and limitations of this configuration suggests discarding the batch-operation option and investigating a different solution.
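The order of magnitude of these duration estimates follows from the choked mass-flow rate through the throat. Below is a minimal sketch using ideal-gas choked-flow relations; the paper's estimates rely on real-gas quasi-1D computations instead, and the released mass is an illustrative assumption.

```python
# Minimal sketch: order-of-magnitude test duration for batch operation.
# Ideal-gas choked-flow estimate; the paper uses real-gas quasi-1D theory.
import math

def choked_mass_flow(A_th, P0, T0, gamma=1.29, R=188.92):
    """Choked mass-flow rate [kg/s] through a throat of area A_th [m^2]."""
    return (A_th * P0 * math.sqrt(gamma / (R * T0))
            * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))

# Expansion B inlet: 150 bar, 81.1 deg C (real-gas effects neglected here)
mdot = choked_mass_flow(20e-6, 150e5, 81.1 + 273.15)
m_released = 100.0  # kg of CO2 assumed available in the vessel (illustrative)
print(f"mdot ~ {mdot:.2f} kg/s, test duration ~ {m_released / mdot:.0f} s")
```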
"Inverse" Joule-Brayton cycle
The first closed-loop configuration can be described as an "inverse" Joule–Brayton cycle, in which a compressor restores the pressure after the nozzle expansion. Since the compressor is the main (and most expensive) component of the test-rig, it is sized first. For simplicity, a single machine capable of realizing all expansions is preferred. For this reason, the limit temperatures T_min = 0 °C and T_max = 120 °C are imposed for the suction and discharge sections, respectively, and the end states of expansions A, C and D are chosen accordingly. Moreover, for expansion A, a pressure P_2 = 45 bar is imposed to avoid excessively large compression and cooling powers, while for expansion D, a gas cooler is added to reduce the inlet (and hence the outlet) temperature of the compressor. Figure 3 displays a sketch of the plant along with the thermodynamic cycles in the T–s plane, which are detailed in Table 2. The table also reports estimates of the electrical power consumption for the reduced throat area, computed assuming an isentropic efficiency η = 0.6 for the compressor and neglecting all other losses. For the initial area A_th = 200 mm², the power required by the compressor considerably exceeds the available one.

Table 2: Thermodynamic cycles for the four expansions in the "inverse" Joule–Brayton configuration. W_c is the estimated power required by the compressor for A_th = 20 mm².
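The power estimate follows from the isentropic enthalpy rise divided by the isentropic efficiency, W_c = ṁ(h_2s − h_1)/η. A minimal sketch is given below, again assuming the CoolProp package in place of the REFPROP model used in the paper; the example state points (20 bar to 62.1 bar, suction at the 0 °C limit, 1 kg/s) are illustrative, loosely patterned on the compression of expansion D's cycle.

```python
# Minimal sketch: compressor power from the isentropic enthalpy rise,
# W_c = mdot * (h2s - h1) / eta. Assumes CoolProp; the paper uses REFPROP.
from CoolProp.CoolProp import PropsSI

def compressor_power(mdot, P1, T1, P2, eta=0.6, fluid="CO2"):
    h1 = PropsSI("H", "P", P1, "T", T1, fluid)   # inlet enthalpy, J/kg
    s1 = PropsSI("S", "P", P1, "T", T1, fluid)   # inlet entropy, J/(kg K)
    h2s = PropsSI("H", "P", P2, "S", s1, fluid)  # isentropic outlet enthalpy
    return mdot * (h2s - h1) / eta               # electrical power, W

# Illustrative: 1 kg/s compressed from 20 bar, 0 deg C up to 62.1 bar
print(compressor_power(1.0, 20e5, 273.15, 62.1e5) / 1e3, "kW")
```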
The preliminary computations allow the operating range of the required compressor to be defined, which is found to be a volumetric one, and the machinery to be selected among commercially available solutions. The machinery design was commissioned to a specialized company, which proposed an ad hoc reciprocating compressor with a particularly massive structure, since the high densities reached by the CO2 in the specific operating conditions generate considerable axial stresses. The resulting compressor is much larger than machines usually required for similar pressure ratios, flow rates and powers in a standard thermodynamic region, i.e. far from the critical point. Therefore, this extremely large (and noisy) machine cannot be installed in the laboratory. Furthermore, the purchase of this component alone would exhaust almost all of the initially available funds. For these reasons, the compressor-based test-rig is not deemed adequate.
Rankine cycle
A pump-based plant is now considered for the SCO2PRI test-rig. This configuration results in a transcritical Rankine cycle with phase transition, as sketched in Figure 4. After the nozzle expansion and the subsequent deceleration, the flow is condensed and the liquid CO2 is pumped to the nozzle inlet pressure and then heated up to the required temperature. The imposed nozzle exit states are the same as those used for the previous configurations (i.e. points 1, 2 and 3 in Table 2), except that expansion B has been extended down to the VLE curve to make condensation possible (T_2 = 25.9 °C). To realize expansion D, the pressure at the exit of the pump must be higher than the critical one, and a throttling process is performed before entering the nozzle to reach the correct pressure. All thermodynamic cycles are shown in Figure 4; relevant data are reported in Table 3.

Table 3: Thermodynamic cycles in the Rankine configuration, with A_th = 20 mm². W_p and W_c are the electrical powers required by the pump and the chiller unit; Q̇_h and Q̇_c are the thermal powers exchanged in the heating and cooling sections.
First, the pump is characterized. As expected, the power consumptions, reported in Table 3 and computed assuming an isentropic efficiency η = 0.6, are much smaller than those computed for the compressor. Indeed, for the same pressure ratio, the specific enthalpy rise in the liquid region is smaller than that in the gas region. A preliminary analysis indicates that a volumetric pump is needed. In this regard, problems may occur due to the relatively high compressibility of sCO2 with respect to standard liquids, especially in tests A and B. For this reason, in these two cycles, additional cooling is performed at the end of condensation to reach a lower pump inlet temperature, thereby reducing the liquid compressibility.
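The much lower pump power follows directly from the small specific volume of the liquid: for a nearly incompressible fluid the isentropic enthalpy rise reduces to v·ΔP. A back-of-envelope sketch, with an assumed liquid density (the compressibility discussed above makes this a rough lower bound near the critical point):

```python
# Minimal sketch: pump power for a nearly incompressible liquid,
# W_p ~ mdot * v * dP / eta. The assumed density is illustrative only.
def pump_power(mdot, rho_liquid, P_in, P_out, eta=0.6):
    v = 1.0 / rho_liquid                     # specific volume, m^3/kg
    return mdot * v * (P_out - P_in) / eta   # W

# Illustrative: 1 kg/s of liquid CO2 (~900 kg/m^3) from 20 to 150 bar
print(pump_power(1.0, 900.0, 20e5, 150e5) / 1e3, "kW")  # ~24 kW
```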
Unlike the "inverse" Brayton cycle, the heating and cooling processes are relevant from both the technical and the economic point of view, so a preliminary assessment of the test-rig configuration has to include them. Table 3 also reports the heating and cooling powers (including de-superheating and condensation). The results are reported only for the nozzle with A_th = 20 mm², because the available power is not sufficient if the larger area of A_th = 200 mm² is considered.
Despite the rather large thermal powers involved, relatively standard components can be used to perform the required processes, and no particular technical problems are expected in this regard. From a first estimate based on commercially available components similar to those required by this plant, the space needed to house all parts of the test-rig is compatible with the laboratory capacity. Therefore, a more accurate investigation and a preliminary design of the components of the pump-based configuration are performed. As a final remark, a further advantage of a phase-transition configuration is the possibility of carrying out research on different fluid phases, such as liquid or two-phase flows of CO2.
Preliminary design of the SCO2PRI test-rig
The previous analysis of the candidate plant configurations led to the selection of the pump-based closed-loop Rankine cycle. This section presents a preliminary design of the main components and of the test section of the SCO2PRI. The components are selected among commercially available (possibly customized) solutions, with the primary aim of realizing all four expansions by means of the same balance of plant. In this respect, it should be clarified that the different expansions will be investigated in separate experimental campaigns; therefore, the possibility of connecting the different cycles is not required. As concerns the test section, four different nozzles are designed and a modular solution is adopted so that they can be installed within the same arrangement.
Analysis of test-rig components
The first component analyzed is the pumping section. Since the flow rates in the four tests are rather different, a parallel configuration with two pumping groups working at constant speed is adopted, and either device can be excluded through a by-pass valve. Each pumping group is composed of a plunger pump with a maximum discharge pressure of 150 bar and a maximum inlet pressure of 65 bar, an 18.5 kW electrical motor, a pulsation damper and other safety accessories.
The heating section is now detailed. The temperature of the sCO2 flow is increased by means of a heat exchanger fed on the hot side by a thermal-oil flow from a mono-bloc gas heater, already equipped with control and safety equipment. Plate & Shell Heat Exchangers (PSHE) are exploited to obtain benefits in terms of compactness and heat-transfer efficiency at the operating temperatures and pressures. More specifically, a fully welded pack of circular stainless-steel plates with a thickness of 1.5 mm is contained in a carbon-steel shell of approximately 0.13 m³. Thanks to the peculiar design realized by a specialized company, this kind of heat exchanger can safely operate up to 170 bar. The issue of dealing with different flow rates and different inlet/outlet temperatures is solved by dimensioning all components for the most demanding case, i.e. test B. Since the described device cannot guarantee fine control of the sCO2 temperature at the nozzle inlet, which is of paramount importance for expansion A, a 2 kW electrical heater is added before the nozzle for control purposes.
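Sizing the heating section on the most demanding case amounts to an energy balance on the sCO2 side plus a standard UA estimate for the exchanger. A minimal sketch follows; the duty, the overall heat-transfer coefficient and the terminal temperature differences are all assumed values, not figures from the paper.

```python
# Minimal sketch: heat duty and area sizing for the heating section.
# Q = mdot * (h_out - h_in) on the sCO2 side; A = Q / (U * LMTD) for a
# counter-current exchanger. U and the temperature differences are assumed.
import math

def lmtd(dT_hot_end, dT_cold_end):
    """Log-mean temperature difference of a counter-current exchanger."""
    return (dT_hot_end - dT_cold_end) / math.log(dT_hot_end / dT_cold_end)

Q = 350e3          # heat duty, W (illustrative assumption)
U = 1500.0         # overall HTC, W/(m^2 K) -- assumed, not from the paper
A = Q / (U * lmtd(40.0, 15.0))  # terminal temperature differences assumed
print(f"required heat-transfer area ~ {A:.1f} m^2")
```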
For the cooling section, the use of a chiller is mandatory because of the low temperatures. A direct cooling process with a chiller unit specially designed for the SCO2PRI test-rig is used to perform the de-superheating, condensation and sub-cooling processes, depending on the test performed. This technical solution requires the design of an additional cycle for the refrigerant fluid, which, according to the required tasks and operating conditions, is ammonia. The design choice is the standard, widely used single-stage vapor-compression cycle, composed of a compressor, a condensing section, an expansion valve and an evaporator. The two main steps of the design process concern the definition of the condensation and evaporation temperatures, while the design of the refrigerant cycle is commissioned to a specialized supplier. The condensation temperature is set at 38 °C and a water-cooled condenser is used to exploit a cooling tower already available at the laboratory. For the evaporation temperature, different values are established according to the test: for tests A, B and C, a temperature difference of 10–15 °C is imposed between the two fluids, while in test D this difference is reduced to 2 °C. The heat removed from the CO2 by de-superheating and condensation is absorbed by the ammonia through a PSHE with a design thermal power of 400 kW, while a smaller dry-expansion evaporator (design power of 50 kW) provides the additional cooling required in tests A and B.
Finally, after a preliminary sizing of the SCO2PRI plant, including the piping, the mass of CO2 circulating within the plant is estimated. This quantity varies between 57 and 78 kg, depending on the considered test. Therefore, an accumulation tank of 50 L is included in the plant for storage purposes. Accounting also for this vessel, the area required to place all components is estimated to be less than 15 m². Moreover, the thermal-oil heater and the chiller unit, which are the largest parts, can be safely placed in an external area adjacent to the laboratory.
Design of the test section
The test section, which includes the nozzle along with the downstream slow-down section, is now detailed. In particular, the geometry of the nozzles delivering the desired expansion ratios at the different operating conditions is designed. Since the exemplary supersonic expansions are defined considering choked flows, four different convergent-divergent nozzles are required.
Before the detailed design of the geometry, the flow inside the nozzle is investigated more thoroughly. Figure 5 shows the variation of the Mach number with the density during all expansions, computed according to the quasi-1D theory for isentropic flows. The NIST database REFPROP [27] is used to perform the thermodynamic evaluations with the reference equation of state of Span and Wagner [13]. As expected, a non-monotone variation occurs during test A. This behavior is typical of fluids characterized by a value of the fundamental derivative of gas dynamics Γ between zero and one, and it is related to close-to-critical-point effects. Expansions B, C and D lie in the thermodynamic region where Γ > 1, and the Mach number increases monotonically to supersonic values, similarly to what is observed in ideal flows of dilute gases. However, non-ideal compressible-fluid effects are relevant (including the non-ideal dependence of the sound speed on the density in isothermal transformations) and strongly influence the dynamics of the fluid in expansions B, C and D as well.
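For the reader's convenience, the fundamental derivative of gas dynamics invoked here admits the standard form below (following Thompson's definition; this equation is supplied as background and is not reproduced from the paper):

```latex
% Fundamental derivative of gas dynamics (Thompson, 1971):
\Gamma \;=\; 1 + \frac{\rho}{c}\left(\frac{\partial c}{\partial \rho}\right)_{\!s}
\;=\; \frac{v^{3}}{2c^{2}}\left(\frac{\partial^{2} P}{\partial v^{2}}\right)_{\!s}
% For 0 < \Gamma < 1 the Mach number varies non-monotonically with density
% along an isentropic expansion, as observed in expansion A.
```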
For the nozzle geometry design, a rectangular cross-section is adopted to produce a two-dimensional flow and to ease flow visualization. To this end, one wall of the test section consists of a planar glass window. Since the throat of the nozzle is choked, the thermodynamic state there is completely defined. Thus, the geometry of the nozzle can be computed in two independent steps. First, the method of characteristics [28] is used to compute the length and the shape of the divergent section, in order to obtain a uniform flow at the exit with velocity parallel to the nozzle axis and the design pressure ratio. The state-of-the-art multi-parameter thermodynamic model for CO2 proposed in [13] is used. Additional details about the design procedure, including code validation, can be found in [29,30]. Figure 6 displays the results obtained for test D, in terms of both pressure and Mach distributions. Then, the convergent part can be designed. Since this part of the geometry has a limited influence on the flow downstream of the choked throat, a polynomial profile is designed to realize a gradual area variation from the inlet section to the throat. To enhance flow regularity, the first and second derivatives of the convergent profile are matched to those of the divergent profile at the throat. As a final remark, it should be pointed out that the geometry presented here refers to the inviscid core of the nozzle flow; additional CFD simulations will therefore be performed to accurately predict the boundary layer in a further step of the plant design process.

Figure 5: Mach number as a function of the density along the four expansions of Figure 4, computed with the quasi-1D theory. Density is scaled with respect to the critical value. As expected, a non-monotone variation occurs during expansion A, where Γ is below 1.

Figure 6: Divergent part of the nozzle for expansion D. Pressure and Mach contour plots computed by means of the method of characteristics, with the multi-parameter equation of state [13], are shown.
Discussion and conclusions
Despite the wide use of sCO2 in the energy and chemical industries, knowledge about the behavior of carbon dioxide in the supercritical region and near the critical point is still deficient, both in terms of fundamentals and of the experimental data required for validation. In order to help fill this gap, a new test-rig, named SCO2PRI, is under design at the CREA Laboratory of Politecnico di Milano.
Within the non-ideal compressible fluid dynamics region, four different supersonic expansions through a convergent-divergent nozzle have been defined, and different plant configurations have been considered to realize them. Because of the flow rates and the necessity of performing steady measurements, a closed-loop, continuously operating plant has been preferred. Moreover, the initially desired throat area had to be scaled down to comply with the power available at the laboratory. The "inverse" Joule–Brayton solution proved inadequate due to the size and cost of the volumetric compressor, which has to operate in a region where the fluid density is high, resulting in high stresses on the piston. Thus, the pump-based Rankine cycle configuration was analyzed and found to be feasible within the economic and logistic constraints. Moreover, the choice of the Rankine cycle configuration results in a more flexible plant layout, which also allows the fluid flow to be investigated at different conditions, including liquid and two-phase states.
Similarly to the standard Rankine cycle, the main components of the SCO2PRI test-rig are a pumping group, a heating section, the test section (the expander) and the cooling section. The availability of these components has been verified with suppliers, and customized solutions have been designed when they were not commercially available. The pumping section is composed of two volumetric pumps that can operate independently to deal with the different flow rates characterizing the four expansions. In contrast, the heat exchangers in the heating and cooling sections have been sized for the most demanding case. An auxiliary cycle for the refrigerant (ammonia), including a chiller, was designed. The chiller unit turned out to be the most expensive component of the test-rig, because of the low temperatures to be reached in the evaporators. However, since low temperatures (less than 0 °C) are required only in test condition D, the feasibility of a less expensive test-rig, in which the chiller alone is sized for the three less demanding test cases, was assessed. Such a pilot plant includes all the original components, except the chiller unit, which is replaced by a smaller, commercially available one.
In conclusion, the feasibility of the SCO2PRI test-rig has been assessed for diverse test conditions and against different experimental techniques, including flow visualization and pneumatic measurements of non-ideal compressible fluid dynamics. Measurements of thermodynamic quantities along an expansion through the critical point, where a non-monotone variation of the Mach number with density is expected, would supply important information that could fill the gap in fundamental knowledge of sCO2. Future research activities will concern the design of all the other components of the test-rig, as well as dynamic simulations of its operation, including filling and start-up. Moreover, the control and measurement systems will be detailed and designed.
Comparison of injection-related reactions following ofatumumab and ocrelizumab in patients with multiple sclerosis: data from the European spontaneous reporting system
Introduction In 2021 ofatumumab, a recombinant human anti-CD20 monoclonal antibody (mAb) already authorized for the treatment of chronic lymphocytic leukemia, received marketing approval for the treatment of relapsing forms of multiple sclerosis (MS). Unlike ocrelizumab, which is administered intravenously, ofatumumab is the first anti-CD20 mAb to be administered subcutaneously without premedication. Methods and objectives In this study we aimed to describe and compare the main characteristics of Individual Case Safety Reports (ICSRs) describing the occurrence of injection/infusion-related reactions (IRRs) following treatment with ocrelizumab and ofatumumab, as reported in the EudraVigilance (EV) database during the years 2021–2023. Results A total of 860 ICSRs with either ofatumumab or ocrelizumab as the suspected drug were retrieved from EudraVigilance, of which 51% were associated with ofatumumab and 49% with ocrelizumab. The majority of patients who experienced IRRs following ocrelizumab belonged to the age group of 18–64 years (73%), while the age group was mostly not specified (55%) in ICSRs reporting ofatumumab as the suspected drug. The distribution of gender was similar in the two groups, with the majority of ICSRs related to female patients. "Pyrexia" was the Preferred Term (PT) most frequently reported for ofatumumab, while "Infusion related reaction" was more frequently reported with ocrelizumab. Premedication drugs were reported in 148 ICSRs. Out of the 89 ICSRs for which the time to event (TTE) could be calculated, 74 reported IRRs that occurred on the same day as the drug administration. Discussion Based on the results of this study, although a risk of ofatumumab-induced IRRs cannot be excluded, it should be considered manageable, given that the drug seems to be mostly associated with the occurrence of fever. It is therefore important to continue to closely monitor the use of these drugs in clinical practice to improve knowledge of their long-term safety.
Introduction
The pharmacological armamentarium of multiple sclerosis (MS) has improved thanks to the approval of new disease-modifying therapies (DMTs) acting through different biological mechanisms (1)(2)(3). Considering the key role of the immune system in MS, high-efficacy DMTs that target the immune system have emerged over the past decades, such as anti-CD20 monoclonal antibodies (mAbs) acting through the depletion of CD20+ B and CD20+ T cells (4). The anti-CD20 mAb class currently includes ocrelizumab (approved by the EMA in 2018), rituximab (not officially approved for the treatment of MS), ublituximab (approved by the EMA in 2023) and ofatumumab (approved in 2021) (5,6). These drugs differ mainly in their route of administration. Indeed, while ocrelizumab, rituximab and ublituximab are injected intravenously and require patients to receive a premedication in order to avoid injection-related reactions (IRRs) (7,8), ofatumumab is meant to be administered by subcutaneous injection by the patients themselves, without any premedication, since ofatumumab-induced IRRs are manageable (9). In addition, as reported in the product's European Public Assessment Report (EPAR), pre-treatment is also not recommended because steroids may reduce the frequency of fever, myalgia, chills and nausea but conversely increase the occurrence of flushing, chest discomfort, hypertension, tachycardia and abdominal pain (9). The subcutaneous administration is performed with an auto-injector pen at four-week intervals, with the first three doses delivered on days 1, 8 and 15. Subcutaneous ofatumumab was developed to reduce the occurrence of specific adverse events (AEs) like IRRs, considering that the subcutaneous route allows a slower and more controlled absorption of the medication into the body, minimizing the risk of systemic reactions (4). In this regard, an indirect comparison between subcutaneous ofatumumab and ocrelizumab showed that ofatumumab was less likely to be associated with IRRs (10,11). Data from clinical studies showed that these events were generally mild to moderate in severity and that no life-threatening IRRs were observed. The most common IRR symptoms were fever, headache, chills, fatigue, erythema/redness and pain. Data from the post-marketing experience highlighted 6 serious IRRs, including a case of anaphylaxis (12,13). Lastly, the results of the phase 2 MIRROR study reported that the most common AEs associated with ofatumumab were IRRs, mainly classified as not serious (serious IRRs occurred in 3 of 121 patients treated with the drug, including one patient who experienced a cytokine-release syndrome within hours of the first ofatumumab injection) (14). As recently reported by Ovchinnikov and Findling, the availability of a subcutaneous anti-CD20 mAb represents a considerable advantage and a simplification of therapy for many patients (15). Nevertheless, a few disadvantages need to be considered, such as the possibility of reduced patient compliance and the occurrence of ADRs at the injection site.
Considering the recent marketing approval of ofatumumab, the purposes for which a subcutaneous formulation was developed, and the fact that recommendations on premedication to prevent IRRs differ between subcutaneous and intravenous anti-CD20 mAb formulations, we aimed to describe and compare the main characteristics of IRRs associated with ocrelizumab and ofatumumab using data from the European spontaneous reporting system database, EudraVigilance (EV).
Data source
ICSRs reporting ofatumumab and ocrelizumab as suspected drugs were retrieved from the EV website. This is the European spontaneous reporting system, publicly accessible at www.adrreports.eu, which allows the collection, management and analysis of ICSRs related to medicines or vaccines authorized or being studied in clinical trials in the European Economic Area (EEA). The EV is managed by the European Medicines Agency (EMA).
In EV, AEs are coded using event-related information according to the Medical Dictionary for Regulatory Activities (MedDRA). MedDRA was developed by the International Council for Harmonization of Technical Requirements for Pharmaceuticals for Human Use (ICH); it is a hierarchical dictionary that is utilized to code signs, symptoms and diagnoses, surgical and medical procedures, investigations, medical/social history and therapeutic indications, and for the registration, documentation and safety monitoring of products during the marketing authorization process (16-18).
Selection of ICSRs
The steps taken to retrieve the ICSRs reporting ofatumumab and ocrelizumab as suspected drugs are shown in Figure 1. For both drugs, we used the line-listing function and searched for the following Preferred Terms (PTs): infusion related hypersensitivity reaction; infusion related reaction; injection related reaction; immediate post-injection reaction; anaphylactic reaction; anaphylactic shock; anaphylactoid reaction; anaphylactoid shock; influenza like illness; pyrexia. The search covered the years 2021–2023 (until November 3, 2023). ICSRs were downloaded as an Excel file that was used to perform the descriptive analyses.
Descriptive analyses
Information on patient characteristics [age group (2 months–2 years, 12–17 years, 18–64 years and 65–85 years) and sex], the AE (type, outcome and seriousness), therapeutic indication, primary source qualification, primary source country for regulatory purposes, number of suspected drugs other than ocrelizumab/ofatumumab, and number of concomitant drugs was collected for all ICSRs. To detect the use of premedication, each ICSR was searched for the presence of corticosteroids, antihistamines and acetaminophen among suspected and concomitant medications, and for the therapeutic indications "Allergy prophylaxis" or "Premedication". The degree of seriousness was defined according to the ICH E2D guidelines (19). Thus, a "serious" case included ADRs that were life-threatening, resulted in death, required or prolonged hospitalization, resulted in persistent or significant disability/incapacity, caused a congenital anomaly/birth defect, or resulted in some other clinically important condition. The outcome was classified as favorable ("Recovered/Resolved" and "Recovering/Resolving"), unfavorable ("Recovered/Resolved with Sequelae", "Not Recovered/Not Resolved", "Fatal") or not reported ("Unknown"). Lastly, the time to event (TTE) was calculated only for ICSRs that reported both the duration of therapy and drug withdrawal or dose reduction as the action taken after the occurrence of the ADR.
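As an illustration of this screening step, a minimal pandas sketch is shown below. The column names ("suspect_drugs", "concomitant_drugs", "indication", "action_taken", "therapy_duration_days") and the file name are hypothetical placeholders, since the structure of the EudraVigilance line-listing export is not detailed in the paper.

```python
# Minimal sketch of the premedication screen and TTE selection.
# All column names and the file name are hypothetical placeholders,
# not the actual EudraVigilance line-listing fields.
import pandas as pd

PREMED = ("methylprednisolone", "antihistamine", "acetaminophen")

def flag_premedication(row):
    drugs = f"{row['suspect_drugs']} {row['concomitant_drugs']}".lower()
    indication = str(row["indication"]).lower()
    return (any(p in drugs for p in PREMED)
            or "allergy prophylaxis" in indication
            or "premedication" in indication)

icsrs = pd.read_excel("ev_line_listing.xlsx")  # hypothetical export file
icsrs["premedicated"] = icsrs.apply(flag_premedication, axis=1)

# TTE computed only where both therapy duration and a qualifying action
# taken (drug withdrawn / dose reduced) are reported, as in the paper.
mask = icsrs["therapy_duration_days"].notna() & icsrs["action_taken"].isin(
    ["Drug withdrawn", "Dose reduced"])
icsrs.loc[mask, "tte_days"] = icsrs.loc[mask, "therapy_duration_days"]
```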
Data were analyzed using the Microsoft Office Excel program. Box plots were generated using Stata 18.
Ethical standards
Safety data extracted from the spontaneous reporting system comply with ethical standards and are anonymous. Therefore, no further ethical measures were required.
Overall results
A total of 860 ICSRs were retrieved from EudraVigilance, of which 441 were associated with ofatumumab and 419 with ocrelizumab. As reported in Table 1, the majority of patients who experienced ocrelizumab-induced IRRs belonged to the age group of 18–64 years, while the age group was mostly not specified in ICSRs reporting ofatumumab as the suspected drug. The distribution of gender was similar in the two groups, with the majority of ICSRs related to female patients. Similarly, no relevant differences were found between ICSRs related to ocrelizumab and ofatumumab in terms of primary source qualification (mainly represented by healthcare professionals), and in the majority of ICSRs, ofatumumab and ocrelizumab were the only suspected drugs (Table 1).
IRRs' signs and symptoms, premedication and TTE
As shown in Table 2, "Influenza like illness" and "Pyrexia" were most frequently reported with ofatumumab, while "Infusion related reaction" and "Anaphylactic reaction" were the most commonly reported PTs with ocrelizumab. The screening of ICSRs for the presence of premedication drugs (reported either as concomitant or suspected drugs) revealed the presence of such medications in 148 ICSRs, of which 131 were related to ocrelizumab and 17 to ofatumumab (data not shown). Specifically, as reported in Table 3, the use of premedication drugs was more commonly reported in ICSRs describing cases of infusion related reaction (57% of all cases) and pyrexia (30.2% of all cases). Lastly, 89 ICSRs reported both the duration of therapy and drug withdrawal or dose reduction as the action taken after the occurrence of the ADR (73 related to ocrelizumab and 16 related to ofatumumab); thus, for these ICSRs, the TTE in days was calculated (Figure 2). The mean TTE was 56.4 days (standard deviation, SD: 242.5) for ofatumumab and 58 days (SD: 124.9) for ocrelizumab. Out of the 89 ICSRs for which the TTE was calculated, 74 reported IRRs that occurred on the same day as the DMT administration.
Discussion
To our knowledge, this is the first study to compare infusion and injection reactions, including their potential symptoms, between ofatumumab and ocrelizumab through the analysis of data from the European spontaneous reporting system.
The main rationale for targeting B cells in MS was based on the recognition of abnormally produced antibodies in the central nervous system of patients with MS (20,21). Indeed, the increase in B cells in the cerebrospinal fluid of MS patients is positively associated with intrathecal inflammation and Ig synthesis (22). The availability of anti-CD20 monoclonal antibodies and the demonstration of their efficacy as selective B-cell-depleting therapies led to the recognition of the key role of B- and T-cell interactions in MS pathogenesis (23,24). Monoclonal antibodies targeting CD20 deplete CD20+ cells through different mechanisms, including antibody-dependent cellular cytotoxicity, complement-dependent cytotoxicity, antibody-dependent cellular phagocytosis and the induction of cell apoptosis (25). After their administration, the depletion of CD20+ B cells is observed within hours, mainly in the liver, reaching the nadir after 8 weeks and being sustained for several weeks to months (26). As previously reported, among anti-CD20 drugs, those approved for the treatment of MS include ocrelizumab, which received marketing authorization from the EMA in 2018 for use at a dose of 600 mg administered intravenously twice yearly (27), and ofatumumab, which is instead authorized in a formulation to be injected subcutaneously. Compared with intravenous administration, the subcutaneous route should lead to more efficient and selective targeting of B cells in the lymphatic circulatory system and confine the drug to the hypodermis, preventing its spread into the systemic circulation (9,28).
In our study we described the main characteristics of 860 ICSRs reported to the EV database during the years 2021–2023, of which 441 were related to ofatumumab and 419 to ocrelizumab, all reporting cases of IRRs. We found that the majority of these ADRs occurred in female patients aged 18–64 years. These results are not surprising. Indeed, MS mainly affects adult women (29). Moreover, there are also gender differences in the frequency of RRMS forms (females are more susceptible than males to experiencing relapses) (30). In addition, apart from sex differences in MS prevalence and progression, ADRs in general occur more commonly in women due to sex-related factors affecting pharmacokinetic and pharmacodynamic processes, which in turn affect the safety profile of drugs (31). For instance, pharmacokinetic processes diverge between women and men due to different expression of metabolic enzymes and to reduced renal clearance of drugs in females because of a lower glomerular filtration rate compared with males (32,33). These differences could explain the increased rate of ADRs generally observed in women compared with men. There is also a gender difference in reporting ADRs; as a matter of fact, compared with men, women show greater interest in reporting ADRs and tend to report more detailed information (34,35). As reported by Florou et al. (7), cytokine release makes IRRs the main AEs occurring after the administration of anti-CD20 monoclonal antibodies, especially after the first infusion, with symptoms that include urticaria, angioedema, headache, nausea, fever, chills and, rarely, bronchospasm. This is in line with our results on the most commonly reported PTs, and also with those on the TTE, which suggest that the risk of IRRs is highest immediately after the first infusion (even though this last result was based on a limited number of ICSRs).
Although we did not find a substantial difference in the number of ICSRs reporting IRRs between ofatumumab and ocrelizumab (441 vs. 419), a difference between these drugs emerges when looking at IRR signs and symptoms. For instance, almost 70% of ICSRs related to ofatumumab reported cases of pyrexia, while more than half of the ICSRs related to ocrelizumab reported cases of IRRs (reported as a PT). Anaphylactic reactions were more frequently reported with ocrelizumab than with ofatumumab (6.9% vs. 1.1%, respectively). This is in line with data from phase III trials suggesting that IRRs occur with a lower incidence with ofatumumab (20.2%) than with ocrelizumab (34.3%) (36). Specifically, safety data derived from the OPERA I and II studies reported that IRRs occurred in 34% of RRMS patients treated with ocrelizumab compared with 10% of those treated with IFNβ-1a or placebo. Similarly, data from the ORATORIO trial reported a prevalence of IRRs of 40% with ocrelizumab and 26% with placebo in PPMS patients (37). On the other hand, safety data from the ASCLEPIOS I/II and ALITHIOS studies reported no difference, after the first injection, in the frequency and severity of IRRs between the ofatumumab and teriflunomide groups (10,13), even though systemic injection-related reactions were among the AEs of special interest (AESIs) identified for ofatumumab (9,10). In particular, symptoms of systemic injection-related reactions, which occurred in almost 14% of patients at the first injection and in <3% of patients from the third injection onward, included fever, headache, myalgia, chills and fatigue. IRRs were generally mild to moderate in severity. Since IRRs are recognized as common ADRs associated with ocrelizumab intravenous infusion, premedication with antihistamine drugs, acetaminophen or glucocorticoids is highly recommended to prevent their occurrence. As reported in the summary of product characteristics (27), the premedication treatment includes the administration of 100 mg intravenous methylprednisolone 30 min prior to each infusion and an antihistamine 30–60 min prior to each infusion; premedication with an antipyretic may also be considered 30–60 min prior to each infusion (27). As previously reported, premedication is not recommended before ofatumumab subcutaneous injection (9). In line with this, we found concomitant or suspected drugs used as premedication in 131 ICSRs related to ocrelizumab vs. 17 ICSRs related to ofatumumab.

Figure 2: Box plots of the median time to event in days for ocrelizumab- and ofatumumab-induced IRRs.
Lastly, the majority of ICSRs retrieved for both suspected drugs were reported by healthcare professionals. This is in line with previous studies carried out on data from spontaneous reporting systems (31,39,40). Nowadays, patients are increasingly playing a proactive role in the collection of spontaneous reports of ADRs. Indeed, patients' contribution to pharmacovigilance systems is undeniable, considering that reports coming from drug users bring unique perspectives and experiences as well as information generally not provided by HCPs, including suspected reactions to over-the-counter medicines and different presentations of reactions, including those affecting patients' quality of life (41). Notwithstanding this, HCPs still represent the main reporters of ADRs, especially serious ones, playing a crucial role in characterizing safety data during pharmacovigilance surveillance and helping the identification of new potential AEs. In particular, the systematic review carried out by Inácio et al., as well as other studies, suggests that healthcare professionals tend to report serious events more frequently than non-serious ones (42).
Our study has several strengths. First of all, we used data from the European spontaneous reporting system, which collects ICSRs spontaneously reported by citizens/patients and HCPs, reflecting real-world experience with drugs. This represents an added value, because information gathered through spontaneous reporting systems cannot be easily obtained from premarketing clinical studies due to ethical/methodological considerations (frail populations are excluded from classic RCTs, study duration is limited and does not allow the detection of long-term or rare ADRs, etc.). In addition, among spontaneous reporting databases, EudraVigilance represents the largest pharmacovigilance database, gathering heterogeneous data from many demographic groups and nations (43).
Our study also has several limitations. First of all, spontaneous reports often lack demographic and clinical data. For instance, the age group was missing in 350 ICSRs. In addition, although premedication is recommended before ocrelizumab infusion, drugs used for this indication were reported in only 31% of ICSRs reporting ocrelizumab as the suspected drug, suggesting again that ICSRs are not always properly filled out. Second, we should take into account underreporting, which represents the main intrinsic limitation of spontaneous reporting systems and one of the major disadvantages of pharmacovigilance methods; it can lead to an underestimation of the frequency of some ADRs and hide potential drug-related risks. Moreover, information about drug withdrawal or dose reduction and therapy duration was not available in the majority of ICSRs. Thus, we were able to calculate the TTE only for 89/860 ICSRs, so this result should be interpreted with caution.
Conclusion
The analysis of data reported in the EV database showed a similar number of ICSRs reporting IRRs following ofatumumab and ocrelizumab treatment, although a few differences were noted in terms of IRR signs and symptoms. Indeed, ofatumumab was mainly associated with the occurrence of pyrexia, while ocrelizumab was mainly reported as suspected in cases of infusion related reactions (reported as a PT). Thus, although a risk of ofatumumab-induced IRRs cannot be excluded, it should be considered manageable, given that the drug seems to be mostly associated with the occurrence of fever. Nevertheless, MS patients should be properly informed of the possibility of IRR signs and symptoms following the subcutaneous administration of ofatumumab, which, like any other mAb, can also be associated with the occurrence of serious infusion reactions (44). Thus, the monitoring of DMTs, especially those recently approved, is highly recommended in order to better characterize their safety profile. Indeed, the collection and analysis of new data on this potential serious consequence of the subcutaneous administration of ofatumumab might generate new knowledge that could support the development of guidelines to assist neurologists in their daily activities when administering this drug.
FIGURE 1: Selection of individual case safety reports (ICSRs) reporting cases of ofatumumab- and ocrelizumab-induced infusion-related reactions (IRRs) from the EudraVigilance database.
TABLE 1: Demographic characteristics and distribution of individual case safety reports (ICSRs) reporting ocrelizumab or ofatumumab as suspected drugs and PTs related to injection/infusion-related reactions (IRRs), by primary source, number of suspected drugs other than ocrelizumab/ofatumumab, and number of concomitant drugs.
*Exposure via breast milk.
TABLE 3: Individual case safety reports (ICSRs) reporting cases of injection/infusion-related reactions (IRRs) and suspected or concomitant drugs indicated as premedication agents.
Prognostic and clinicopathological value of systemic inflammation response index (SIRI) in patients with breast cancer: a meta-analysis
Abstract Background Many studies have explored the value of the systemic inflammation response index (SIRI) in predicting the prognosis of patients with breast cancer (BC); however, their findings remain controversial. Consequently, we performed the present meta-analysis to accurately identify the role of SIRI in predicting BC prognosis. Methods PubMed, Embase, Cochrane Library, and Web of Science databases were comprehensively searched between their inception and February 10, 2024. The significance of SIRI in predicting overall survival (OS) and disease-free survival (DFS) in BC patients was analyzed by calculating pooled hazard ratios (HRs) and corresponding 95% confidence intervals (CIs). Results Eight articles involving 2,997 patients with BC were enrolled in the present study. According to our combined analysis, a higher SIRI was markedly associated with dismal OS (HR = 2.43, 95%CI = 1.42–4.15, p < 0.001) but not poor DFS (HR = 2.59, 95%CI = 0.81–8.24, p = 0.107) in patients with BC. Moreover, based on the pooled results, a high SIRI was significantly related to T3–T4 stage (OR = 1.73, 95%CI = 1.40–2.14, p < 0.001), N1–N3 stage (OR = 1.61, 95%CI = 1.37–1.91, p < 0.001), TNM stage III (OR = 1.63, 95%CI = 1.34–1.98, p < 0.001), and poor differentiation (OR = 1.25, 95%CI = 1.02–1.52, p = 0.028). Conclusion According to our results, a high SIRI significantly predicted poor OS in patients with BC. Furthermore, elevated SIRI was also remarkably related to increased tumor size and later BC tumor stage. The SIRI can serve as a novel prognostic biomarker for patients with BC.
Introduction
Breast cancer (BC) has the highest morbidity among female malignancies worldwide and is also a leading cause of morbidity and mortality [1]. According to GLOBOCAN estimates, there were 2,261,419 newly diagnosed BC cases and 684,996 deaths globally in 2020 [2]. There has been an increase in BC incidence worldwide, which has increased the burden on the healthcare system [1]. Four histological subtypes of BC exist: triple-negative, HER2 (human epidermal growth factor receptor 2)-overexpressing, luminal A, and luminal B [3]. In the last several decades, intensive fundamental and clinical studies have significantly improved the efficacy of surgical treatment, radiotherapy, chemotherapy, immunotherapy, and targeted therapy in the treatment of BC [3]. Nonetheless, advances in the prediction of prognosis for BC remain disappointing. Therefore, the development of credible, operable, and cost-effective prognostic biomarkers for BC is necessary for guiding clinicians and individualized treatment plans.
Current evidence shows that chronic inflammation is inseparable from tumorigenesis, proliferation, infiltration, metastasis, and apoptosis at various stages [4,5]. Recent studies have demonstrated that diverse blood-based inflammation parameters, such as the platelet-to-lymphocyte ratio [6], lymphocyte-to-monocyte ratio [7], C-reactive protein-to-albumin ratio [8], prognostic nutritional index [9], and systemic inflammation response index (SIRI) [10], have significant value in predicting the prognosis of many solid tumors. The SIRI is calculated as SIRI = neutrophil count × monocyte count / lymphocyte count. It represents a novel prognostic marker that was first introduced by Qi et al. in 2016 for pancreatic cancer prognosis [11]. Recent studies have reported the prognostic effect of SIRI for various cancers, including nasopharyngeal carcinoma (NPC) [12], colorectal [13], ovarian [14], gastric [15], and bladder cancers [16]. Many previous studies have explored the significance of the SIRI in predicting BC prognosis, but no consistent findings have been obtained [17-24]. For instance, an increased SIRI has been reported in certain studies as a significant prognostic marker for BC [17,18,20], but others did not find any obvious association between SIRI and BC prognosis [19,22]. Consequently, this meta-analysis was performed to identify the accurate role of SIRI in predicting BC prognosis. Furthermore, we assessed the relationship between SIRI and the clinicopathological characteristics of BC.
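For clarity, the index can be computed in one line from a routine complete blood count; the example values below are illustrative, not taken from the included studies.

```python
# SIRI = neutrophil count x monocyte count / lymphocyte count
# (all counts in 10^9 cells/L, as reported in a complete blood count).
def siri(neutrophils: float, monocytes: float, lymphocytes: float) -> float:
    return neutrophils * monocytes / lymphocytes

# Illustrative example: 4.2, 0.5 and 1.8 x 10^9/L give SIRI ~ 1.17, above
# the 0.725 median cut-off reported across the included studies.
print(siri(4.2, 0.5, 1.8))
```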
Study guideline
The present study was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [25] (Supplementary File S1). The study protocol was registered with INPLASY under registration number INPLASY2023120039. The registration is available at https://inplasy.com/inplasy-2023-12-0039/.
Ethics statement
Informed consent and ethical approval were not required in this study because it was a meta-analysis and did not involve individual participants.
Search strategy
In this study, we comprehensively searched the PubMed, Embase, Cochrane Library, and Web of Science databases from their inception to February 10, 2024, using the following search strategies: (system inflammation response index, systemic inflammation response index, or systemic inflammatory response index) and (breast carcinoma, breast tumor, breast cancer, or breast tumors). The detailed search strategy for each database is shown in Supplementary File 2. The language was restricted to English. A manual search of relevant studies and reviews was conducted to identify additional eligible studies.
Selection criteria
The following studies were included: (1) BC was diagnosed based on pathology; (2) the association of SIRI with BC prognosis was reported; (3) outcomes of interest included, but were not limited to, overall survival (OS), cancer-specific survival, disease-free survival (DFS), and progression-free survival (PFS); (4) hazard ratios (HRs) and 95% confidence intervals (CIs) were available or calculable; (5) a threshold SIRI was identified; and (6) the publication was in English. The following studies were excluded: (1) reviews, meeting abstracts, comments, case reports, and letters; (2) studies with duplicate patients; and (3) animal studies.
Data collection and quality evaluation
Data were collected from each qualified study by two researchers (SZ and TC). Any discrepancy was settled by negotiation until a consensus was reached. The following data were collected from each publication: first author's name, publication year, age, country, sample size, study design, study period, tumor-node-metastasis (TNM) classification, treatment, threshold SIRI, follow-up, survival endpoints, survival analysis, cut-off value determination method, and HRs with 95% CIs. The primary and secondary survival endpoints were OS and DFS, respectively. The Newcastle-Ottawa Scale (NOS) was employed to evaluate the quality of the enrolled studies [26], with scores ranging from 0 to 9; a NOS score >6 indicated a high-quality study.
Statistical analysis
We computed combined HRs and 95% CIs to estimate the role of SIRI in determining the OS and DFS of patients with BC. Between-study heterogeneity was evaluated using Cochran's Q and Higgins' I² tests. I² > 50% or p < 0.10 indicated obvious heterogeneity, in which case the random-effects model was adopted; otherwise, the fixed-effects model was employed. Subgroup analysis was performed to detect potential sources of heterogeneity. The correlation between SIRI and the clinicopathological characteristics of BC was assessed by pooling the ORs and 95% CIs. To assess the stability of the combined data and to determine the cause of heterogeneity, a sensitivity analysis was conducted. Publication bias was estimated using Begg's and Egger's tests. Statistical analysis was conducted using Stata software (version 12.0; StataCorp LP, College Station, TX, USA). Statistical significance was set at p < 0.05.
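As an illustration of the pooling step, the sketch below implements inverse-variance pooling of log-HRs with the DerSimonian-Laird random-effects estimator (one standard approach; the paper's Stata commands are not specified). The input values are placeholders, not data from the included studies.

```python
# Minimal sketch: inverse-variance pooling of log hazard ratios with the
# DerSimonian-Laird random-effects estimator. Inputs are placeholders.
import math

def pool_hr(hrs, ci_lowers, ci_uppers):
    y = [math.log(h) for h in hrs]                        # log HRs
    se = [(math.log(u) - math.log(l)) / (2 * 1.96)        # SE from 95% CI
          for l, u in zip(ci_lowers, ci_uppers)]
    w = [1.0 / s**2 for s in se]                          # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)               # DL tau^2
    w_re = [1.0 / (s**2 + tau2) for s in se]              # RE weights
    theta = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_theta = math.sqrt(1.0 / sum(w_re))
    return (math.exp(theta),
            math.exp(theta - 1.96 * se_theta),
            math.exp(theta + 1.96 * se_theta))

# Placeholder inputs: three hypothetical studies
print(pool_hr([2.1, 1.8, 3.0], [1.3, 1.1, 1.7], [3.5, 2.9, 5.2]))
```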
Study retrieval procedure
As shown in Figure 1, 420 articles were obtained from the primary database search. After duplicates were removed, 303 studies remained. Through title and abstract screening, we eliminated 277 studies owing to irrelevance or animal studies. The remaining 26 studies were examined by full-text reading. Seventeen studies were excluded for the following reasons: not on SIRI (n = 15), duplicate patients (n = 2), and no survival data (n = 1). Ultimately, this meta-analysis recruited eight articles involving 2,997 BC patients [17-24] (Figure 1).
Sensitivity analysis
We conducted a sensitivity analysis by omitting each study in turn. The HR for each component analysis was within the predicted range of the remaining studies. Consequently, the reliability of the meta-analysis was verified (Figure 6).
Publication bias
To assess publication bias, we pooled HRs and 95% CIs for OS and DFS using funnel plots and Begg's and Egger's tests. As shown in Figure 7, the funnel plots were symmetrical for OS and DFS. Moreover, no obvious publication bias was detected for OS (p = 0.452 and 0.709 by Begg's and Egger's tests, respectively) or DFS (p = 0.806 and 0.606 by Begg's and Egger's tests, respectively) (Figure 7).
Discussion
The significance of SIRI in predicting BC prognosis is inconsistent among previous studies. We included eight articles involving 2,997 patients in this work [17-24] and aggregated the HRs and 95% CIs. According to our results, an increased SIRI significantly predicted poor OS in patients with BC. Moreover, an elevated SIRI was significantly associated with T3-T4 stage, N1-N3 stage, TNM stage III, and poor tumor differentiation in patients with BC. As revealed by the publication bias tests, the findings of this study are reliable. Taken together, SIRI is an effective and reliable factor for predicting the long-term prognosis of BC patients. To the best of our knowledge, this study is the first meta-analysis to explore the value of SIRI in predicting BC prognosis. An increased SIRI can result from increased neutrophil and monocyte counts and/or decreased lymphocyte counts. Although the precise mechanisms underlying the role of SIRI in predicting BC prognosis have not been clarified, they can be interpreted as follows. First, neutrophils are widely recognized to have a critical effect on the promotion of tumor cell proliferation, migration, invasion, and immunosuppression during carcinogenesis [27,28]. Through the release of chemokines and cytokines such as vascular endothelial growth factor (VEGF), neutrophils can accelerate angiogenesis, enhance tumor cell adhesion, and promote distant tumor metastasis [29]. Second, monocytes, particularly cells that differentiate into tumor-associated macrophages (TAMs), participate in tumorigenesis. Through the production of proinflammatory cytokines and the stimulation of tumor angiogenesis, TAMs can accelerate tumor growth [30]. Third, among the components of the body's immune system, lymphocytes are capable of inhibiting tumorigenesis and relapse as well as regulating immune function through cytokines and cytotoxic killing [31]. Lymphocytes such as CD4+ and CD8+ cells are important for cellular immunity. Tumor-infiltrating T lymphocytes inhibit tumor cell growth and invasion by enhancing their apoptosis [32]. Therefore, a high SIRI can be a factor predicting poor prognosis in patients with BC.
Notably, the cut-off values of SIRI were not uniform across the included studies. All enrolled studies used ROC curves to determine the optimal SIRI cut-off value. The cut-off values ranged from 0.465 to 1.6 [17-24], with a median value of 0.725. Owing to the heterogeneity of the recruited patients, the ROC curve identified a different cut-off value in each study. A standard SIRI cut-off value is still needed to improve the applicability of this index. The use of various SIRI cut-off values in the included studies could be a reason for the differing results among these studies [17-24].
Recently, many articles have reported the significant role of SIRI in predicting the prognosis of different cancer types through meta-analysis [33-36]. As reported in a meta-analysis involving 3,187 patients, SIRI independently predicted dismal OS in NPC [33]. In another meta-analysis including 30 studies, an increased SIRI was markedly related to poor OS and DFS in patients with gastrointestinal tumors [34]. According to Zhou et al., in their meta-analysis comprising 10,754 cases, a high SIRI was related to short OS and DFS/recurrence-free survival/PFS in solid tumor patients [36]. Our BC results conform to the significance of SIRI in predicting other cancer types. Some limitations of the present study should be noted. First, it had a small sample size, although we retrieved the latest literature. Second, all the qualified articles were published in Asian regions; consequently, our results may only be applicable to Asian patients with BC. Third, the threshold SIRI was not uniform among the included studies, and a standard SIRI cut-off value for BC is still needed.
Conclusions
In summary, this meta-analysis demonstrated that an increased SIRI notably predicted poor OS in patients with BC. Additionally, an elevated SIRI was also related to increased tumor size and an advanced BC tumor stage. SIRI is a novel prognostic biomarker in patients with BC: BC patients with a high pretreatment SIRI face a high risk of poor survival and tumor progression. As revealed in this meta-analysis, for BC patients with SIRI ≥ 0.80, systematic therapy including NACT and surgery may be beneficial. Furthermore, a standard SIRI cut-off value should be determined in future studies. Owing to the limitations noted above, large-scale international multicenter trials should be conducted to further validate our results.
Figure 1. PRISMA diagram of literature search and study inclusion.

Figure 2. Forest plot of the meta-analysis of the relationship between SIRI and OS in patients with BC.

Figure 3. Forest plot of the meta-analysis of the relationship between SIRI and DFS in patients with BC.

Figure 5. Forest plots of the correlations between SIRI and clinicopathological features in BC. (A) ER status (positive vs negative); (B) PR status (positive vs negative); (C) HER2 status (positive vs negative); and (D) differentiation (poor vs well/moderate).
Table 1. The basic characteristics of the included studies.

Table 2. Subgroup analysis of the prognostic value of SIRI for OS in patients with breast cancer.

Table 3. Subgroup analysis of the prognostic value of SIRI for DFS in patients with breast cancer. Columns: subgroups; no. of studies; no. of patients; effects model; HR (95% CI); p. TNM: tumor, node, metastasis; NACT: neoadjuvant chemotherapy; SIRI: systemic inflammation response index; DFS: disease-free survival.

Table 4. The association between SIRI and clinicopathological features in patients with breast cancer.
Reference frames which separately store non-commuting conserved quantities
Even in the presence of conservation laws, one can perform arbitrary transformations on a system given access to an appropriate reference frame. During this process, conserved quantities will generally be exchanged between the system and the reference frame. Here we explore whether these quantities can be separated into different parts of the reference system, with each part acting as a `battery' for a distinct quantity. In particular, for systems composed of spin-$\frac12$ particles, we show that it is possible to separate the three components of angular momentum $S_x$, $S_y$ and $S_z$ in this way, which is of particular interest as these conserved quantities do not commute. We also give some extensions of this result to higher dimension.
I. INTRODUCTION
Conservation laws are amongst the most important and widely used aspects of physics, greatly restricting the possible transformations that an isolated system can undergo [1][2][3]. However, when a system is not isolated but allowed to interact with other systems, with conservation laws applying only globally, much greater freedom is possible [4][5][6][7][8][9]. The situation is particularly interesting in quantum theory, where different conserved quantities may not commute (such as the different components of angular momentum), and interference effects are crucial.
Perhaps surprisingly, it has been shown that any transformation of a quantum system can be implemented, as long as one has access to an appropriate ancillary system [6][7][8][9]. This additional system plays a dual role of providing a reference frame for the transformation, and acting as a reservoir which can exchange conserved quantities with the system.
During the system's transformation it will generally exchange many different conserved quantities with the reference frame. An interesting question is whether these different conserved quantities can be separated into different parts of the reference system, with each part effectively acting as a 'battery' for a specific conserved quantity. When the conserved quantities commute, in some cases we can construct a reference frame that separates the charges physically [10]. The question, however, becomes substantially more complex when the conserved quantities do not commute, and this is the case we address in this manuscript.
As a concrete example, consider rotations of a spin-$\frac12$ particle in the presence of angular momentum conservation. We would like to be able to localise any changes in the three components of spin $s_x$, $s_y$ and $s_z$ of the spin-$\frac12$ particle within different subsystems in the reference frame. In this paper, we show that this is indeed possible. Furthermore, our approach will generalise to any unitary transformation on any number of spin-$\frac12$ systems. In fact, as the spin operators plus the identity provide an operator basis for a two-dimensional system, the above procedure can be understood as separating out a basis of extensive conserved quantities for any two-dimensional system. In an appendix, we show that a similar approach can be used to separate extensive conserved quantities for any system of dimension $2^n$.
These results address a fundamental aspect of conservation laws in quantum theory. Furthermore, they are of particular importance in quantum thermodynamics, which has recently been extended to multiple conserved quantities [10][11][12][13][14][15][16], and where batteries for storing conserved quantities are of particular interest.
II. SPIN-$\frac12$ SYSTEMS
We consider the case of spin-$\frac12$ particles, and the possibility of separating the different conserved components of angular momentum $s_x$, $s_y$ and $s_z$. We will show that by considering a fixed reference frame composed of multiple spin-$\frac12$ particles, and interacting with it in a particular way: (i) the total angular momentum of the system and reference frame is conserved, (ii) any unitary transformation can be implemented on the system with arbitrary precision, and (iii) any changes in the average angular momentum components of the system are stored in different parts of the reference system (up to arbitrarily small correction terms). Essentially, we can think of the reference system as being partitioned into three separate batteries, each of which stores a different component of angular momentum.
The reference system must define the x, y and z directions in order to allow rotations of the system about these directions via a rotationally invariant interaction. One way to construct a reference frame for the x-direction would be to prepare a number of spins all pointing in the x-direction (as in [9]). This could be used to implement a rotation of the system about x, but each spin in the reference frame would generally accumulate changes in both $s_y$ and $s_z$, thus not separating the conserved quantities. The key intuition behind our approach is that there is an alternative way to define the x-direction. Instead of aligning each spin in the reference frame along the x-direction, we prepare pairs of spins pointing in the y and z directions, and implement a cross product via the interaction. In this way, each spin in the reference frame accumulates changes in only one component of spin, and thus the different conserved quantities can be separated.
Our result is formalised in the following theorem. There, the total spin of the system is denoted by $s$, with components $\{s_k\}_{k=x,y,z}$; the total spin of the $j$-th component of the reference frame is denoted by $s^{(j)}$, with components $\{s^{(j)}_k\}_{k=x,y,z}$; and finally, the total spin operator of the system and frame is denoted by $S$, with components $S_k = s_k + s^R_k$, $k = x, y, z$.

Theorem 1. Let the system S be a spin-$\frac12$ particle. Then for every $\epsilon > 0$ and $\delta > 0$, there exists a reference frame R (composed of a large number of spin-half particles) with a fixed state $\rho_R$, such that for every unitary $U_S$ on the system there exists a joint unitary $V$ on the system and reference frame with the following properties:

• Conservation: $V$ conserves all components of total angular momentum $S$, i.e., $[V, S] = 0$.
• Accuracy: $V$ effectively implements $U_S$ on the system with precision $\epsilon$: for any initial system state $\rho_S$, the effective evolution $\mathrm{tr}_R[V(\rho_S \otimes \rho_R)V^\dagger]$ is within distance $\epsilon$ of $U_S \rho_S U_S^\dagger$.

• Separation: Each component of angular momentum of the system is exchanged only with the corresponding part of the reference system, up to precision $\delta$. Here $\Delta s_j$ is the change in the average angular momentum of the system in the $j$-direction, and $\Delta s^{(k)}_j$ is the change in the average angular momentum of the $k$-th part of the reference system in the $j$-direction; the requirement (Eqs. (2) and (3)) is that $\Delta s_j + \Delta s^{(j)}_j$, and each $\Delta s^{(k)}_j$ with $k \ne j$, are at most $\delta$ in magnitude.
Proof. The key is to consider the following operator acting on three spin-$\frac12$ particles,
$$T = \sum_{j,k,\ell \in \{x,y,z\}} \epsilon_{jk\ell}\, s_j \otimes s^{(1)}_k \otimes s^{(2)}_\ell = s \cdot \big(s^{(1)} \times s^{(2)}\big),$$
where $s$ is the vector of spin operators of the system and $s^{(k)}$ corresponds to the spin operators of reference frame particle $k$. The sub-index $j$ in the spin operators denotes the spin component in the $j$ direction, and $\epsilon_{jk\ell}$ is the Levi-Civita tensor. For simplicity we have set $\hbar = 1$, so each spin operator is equal to half the corresponding Pauli operator. We can now construct a unitary interaction $V_\alpha$ between three qubits, generated by $T$. Note that $T$ is a 'scalar' in $\mathbb{R}^3$ and hence commutes with rotations there. The generator of these rotations is precisely the total spin operator $S = s + s^{(1)} + s^{(2)}$, hence $T$ and $V_\alpha$ commute with $S$ (we present an alternative proof of this fact in Appendix B1). In particular, this means that $V_\alpha$ preserves the conserved quantities $S_x$, $S_y$ and $S_z$, and hence satisfies $[V_\alpha, S] = 0$.
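This commutation can also be checked directly on a computer. The following NumPy sketch (our illustration, not code from the paper) constructs $T$ explicitly in the $\hbar = 1$ convention above and verifies that $[T, S_k] = 0$ for $k = x, y, z$.

```python
import numpy as np

# Spin-1/2 operators with hbar = 1: s_k = sigma_k / 2
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
spins = {'x': sx, 'y': sy, 'z': sz}
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

# Non-zero entries of the Levi-Civita tensor
EPS = {('x', 'y', 'z'): 1, ('y', 'z', 'x'): 1, ('z', 'x', 'y'): 1,
       ('x', 'z', 'y'): -1, ('z', 'y', 'x'): -1, ('y', 'x', 'z'): -1}

# T = sum_{jkl} eps_{jkl} s_j (x) s_k^(1) (x) s_l^(2) = s . (s^(1) x s^(2))
T = sum(sign * kron3(spins[j], spins[k], spins[l])
        for (j, k, l), sign in EPS.items())

# [T, S_k] = 0 for every component of the total spin S = s + s^(1) + s^(2)
for k in 'xyz':
    Sk = (kron3(spins[k], I2, I2) + kron3(I2, spins[k], I2)
          + kron3(I2, I2, spins[k]))
    assert np.max(np.abs(T @ Sk - Sk @ T)) < 1e-12
print("T commutes with S_x, S_y and S_z")
```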
In addition, consider the particular operator basis for the Hilbert space of a spin-$\frac12$ particle given by $B = \{I\} \cup \{\tau_x, \tau_y, \tau_z\}$, where $\tau_j$ is the projector onto the eigenvector of $s_j$ with eigenvalue $+\frac12$. Suppose that we want to implement a small rotation of the system about the x-direction, given by $U_{\alpha,x} = \exp(-i\frac{\alpha}{N}s_x)$. To do this we prepare a reference frame in the state $\tau^{(1)}_y \otimes \tau^{(2)}_z$ and act with $V_\alpha$ on the system and reference frame; to first order in $\frac{1}{N}$, the effective evolution of the system is then $U_{\alpha,x}$ (Eq. (6)). In this way, the state $\tau^{(1)}_y \otimes \tau^{(2)}_z$ defines a reference frame for the direction x.
Similarly, due to the cyclic symmetry of the spin operators, we can generate a small rotation of the system about the y-direction or z-direction by acting with $V_\alpha$ on the system and a reference frame in the state $\tau^{(1)}_z \otimes \tau^{(2)}_x$ or $\tau^{(1)}_x \otimes \tau^{(2)}_y$, respectively. We now consider how the angular momentum in the reference system changes under this transformation. For the x-rotation of the system given by Eq. (6), expanding the change in the spin of the first reference particle to first order in $\frac{1}{N}$, the resulting expression is only non-zero when $\ell = z$, $k = x$ and $j = y$, where we have used the spin commutation relation $[s_a, s_b] = i\epsilon_{abc}\, s_c$; an analogous computation applies to the second reference particle. Hence, to leading order, the first reference system only picks up z-spin, and the second reference system only picks up y-spin. As total angular momentum is conserved, it follows that the change in the system's angular momentum is compensated by these changes in the reference frame. Similar results hold for small rotations about the other axes, with the two components of angular momentum that change (perpendicular to the axis of rotation) being separated into the two different reference systems. For example, when performing a small y-rotation with the frame $\tau^{(1)}_z \otimes \tau^{(2)}_x$, to first order in $\frac{1}{N}$ only the x-spin of the first reference particle and the z-spin of the second reference particle are modified. Intuitively, neither reference system changes its spin-component parallel to the axis of rotation of the system, and each reference spin does not change its spin-component parallel to the direction it was originally pointing in, as it is maximal in this direction and we are considering only first order changes.
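The first-order claims above can likewise be verified numerically. Continuing the previous sketch (again our illustration; the overall factor of $\frac14$ comes from the normalisation of the $\tau_j$ and would be absorbed into the coupling strength of $V_\alpha$), the code below checks that the reduced commutator with $T$ generates an x-rotation of the system, and that, to first order, frame particle 1 changes only its z-spin and frame particle 2 only its y-spin.

```python
import numpy as np

# Setup repeated from the previous sketch
sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
spins = {'x': sx, 'y': sy, 'z': sz}
I2 = np.eye(2)

def kron3(a, b, c):
    return np.kron(a, np.kron(b, c))

EPS = {('x', 'y', 'z'): 1, ('y', 'z', 'x'): 1, ('z', 'x', 'y'): 1,
       ('x', 'z', 'y'): -1, ('z', 'y', 'x'): -1, ('y', 'x', 'z'): -1}
T = sum(sign * kron3(spins[j], spins[k], spins[l])
        for (j, k, l), sign in EPS.items())

tau = {k: I2 / 2 + spins[k] for k in 'xyz'}  # projector onto the +1/2 eigenstate

rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
rho = A @ A.conj().T / np.trace(A @ A.conj().T)  # random system state

state = kron3(rho, tau['y'], tau['z'])  # frame prepared for an x-rotation
comm = T @ state - state @ T

# Effective generator on the system: tr_R [T, rho x tau_y x tau_z] = (1/4)[s_x, rho]
reduced = np.einsum('iaja->ij', comm.reshape(2, 4, 2, 4))  # trace out both frames
assert np.allclose(reduced, (sx @ rho - rho @ sx) / 4)

# Separation: only frame 1's z-spin and frame 2's y-spin change at first order
for k in 'xyz':
    d1 = np.trace(kron3(I2, spins[k], I2) @ comm)  # frame-1 spin change, up to factors
    d2 = np.trace(kron3(I2, I2, spins[k]) @ comm)  # frame-2 spin change, up to factors
    print(k, abs(d1) > 1e-12, abs(d2) > 1e-12)     # True only for k=z (frame 1), k=y (frame 2)
```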
Neglecting an irrelevant global phase, a general single-qubit unitary can be written as $U_S = \exp(-iH)$, where $H = \sum_{k=x,y,z} \alpha_k s_k$. Similarly to Ref. [9], $U_S$ can be implemented by performing a sequence of $N$ small rotations $U_H = \exp(-i\frac{H}{N})$. To implement each $U_H$ we use 6 spin-$\frac12$ reference systems, prepared in a product state $\tau_R$ of states $\tau^{(j|k)}$. The reference system $\tau^{(j|k)}$ is used in small rotations about the k-axis, and only experiences significant changes to its j-angular momentum (i.e., it belongs to the battery for j-angular-momentum).
In particular, we can implement $U_H$ by first applying $V_{\alpha_x}$ to the system and the first two reference spins, then $V_{\alpha_y}$ to the system and the next two reference spins, then $V_{\alpha_z}$ to the system and the last two reference spins. Following a similar approach to Eq. (6), this implements $U_H$ up to corrections of order $\frac{1}{N^2}$. By iterating this procedure $N$ times using the reference frame $\rho_R = \tau_R^{\otimes N}$, we thereby approximately implement the desired transformation $U_S = (U_H)^N$. Defining the full sequence of transformations by $V$, we note that as this is a sequence of $V_\alpha$ transformations, each of which conserves the three components of angular momentum, $V$ satisfies the conservation property $[V, S] = 0$.
The error in implementing $U_S$ is bounded by the sum of the errors from each step, giving a total error of $N \cdot O(\frac{1}{N^2}) = O(\frac{1}{N})$. It follows that the accuracy property holds for sufficiently large $N$. To prove the separation property, first re-order the reference frame systems such that all particles labelled $\tau^{(j|k)}$ are in a separate component of the reference system $\rho^{(j)}$ (the j-angular-momentum battery). Then, Eq. (9) and its equivalents for y and z rotations, together with the fact that there are only $2N$ spins in each component of the reference system, imply that the unwanted changes in each battery are of order $\frac{1}{N}$. For any $\delta$, we can therefore choose a sufficiently large $N$ such that Eqs. (2) and (3) hold. Explicit bounds for the error terms in this section may be found in Appendix A.
These results can be extended to systems composed of any number of spin-$\frac12$ systems, as we argue in the following. First, the above proof shows that we can implement any unitary on a single spin. Then, we can also implement interactions between spins via an interacting unitary such as $\sqrt{\mathrm{SWAP}}$ or $e^{-i\theta\, s^{(1)} \cdot s^{(2)}}$. These commute with all extensive conserved quantities and do not require the use of a reference system. Thinking of our spin-$\frac12$ systems as qubits, we know that the ability to perform all single-qubit unitaries plus any particular interacting two-qubit unitary is computationally universal [17]. Hence we can construct a circuit to approximately implement any unitary transformation on any number of spin-half systems, whilst storing any changes to angular momentum in different batteries.
III. BEYOND QUBITS
We will now consider whether we can separate changes in conserved quantities for higher dimensional systems, with dimension $d > 2$.
Let us first consider angular momentum conservation for spin-$s$ systems, with dimension $d = 2s+1$. In this case, we can use essentially the same procedure as above (now taking $\tau_j$ as the eigenstate of $s_j$ with maximum spin in the j-direction, and $V_\alpha = \exp\{-i\frac{\alpha}{s^2 N}T\}$) to approximately implement any single-system rotation of the form $e^{-i\theta\, n \cdot s}$ on a spin-$s$ system, whilst conserving the three components of total angular momentum $S_x$, $S_y$ and $S_z$. Any changes in angular momentum will be separated into three different 'batteries' as before. Furthermore, we could again implement interacting unitaries between different spins such as $e^{-i\theta\, s^{(1)} \cdot s^{(2)}}$, as this is rotationally invariant.
However, unlike in the case of spin-$\frac12$ particles, spatial rotations of the form $e^{-i\theta\, n \cdot s}$ no longer represent the complete set of local unitary transformations for spin-$s$ particles. It would be interesting to explore what additional transformations are required in order to give a universal gate set, and whether these can be implemented in such a way as to localise any changes to angular momentum.
Moving from angular momentum conservation to more general conservation laws, we would expect a maximum of $K = d^2 - 1$ independent conserved quantities for a $d$-dimensional system, as we need this many operators plus the identity to form an operator basis. An interesting question is whether we can construct a complete basis of conserved quantities, such that arbitrary unitary transformations of a system can be performed whilst separating the changes in all of these conserved quantities into different 'batteries'. In Appendix B, we generalise our previous proof to show that this is possible in any dimension in which there exists a set of $K$ Hermitian operators $O_k$ (for $k = 0, 1, \ldots, K-1$) that are traceless, mutually orthogonal, and closed under commutation. These operators correspond to a complete set of conserved quantities, and together with the identity they form an orthogonal operator basis for the $d$-dimensional system. When we can find such an operator basis, it is possible to implement any unitary transformation on any number of systems with arbitrary accuracy using a fixed reference system, whilst conserving all extensive conserved quantities, and separating any changes in the expectation values of each conserved quantity $O_k$ into a different part of the reference system (up to arbitrarily small corrections).
One example of a case in which this is possible is when the system has dimension $2^n$, when we can take the operators $O_k$ to be tensor products of the spin operators and $I/2$. We provide an extensive discussion of this example in Appendix B. Note that this is not the same as considering $n$ spin-$\frac12$ particles, as there are many more conserved quantities for the $d$-dimensional system than simply the three components of total angular momentum. It remains an open question whether it is possible to find such a basis in other dimensions, or whether a different approach will work in such cases.
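To make the $d = 2^n$ case concrete, the sketch below (an illustration we add here, not taken from the paper) builds the 15 normalised non-identity two-qubit Pauli strings and verifies the three properties required of the $O_k$: tracelessness, orthogonality, and closure under commutation.

```python
import numpy as np
from itertools import product

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [I2, X, Y, Z]

# The 15 non-identity Pauli strings on two qubits, normalised so tr(Ok Ol) = delta_kl
ops = [np.kron(paulis[a], paulis[b]) / 2
       for a, b in product(range(4), repeat=2) if (a, b) != (0, 0)]

# (i) traceless
assert all(abs(np.trace(O)) < 1e-12 for O in ops)

# (ii) mutually orthogonal under the Hilbert-Schmidt inner product
for k, Ok in enumerate(ops):
    for l, Ol in enumerate(ops):
        assert abs(np.trace(Ok @ Ol) - (k == l)) < 1e-12

# (iii) closed under commutation: [Ok, Ol] is zero or proportional to a single Om
for Ok in ops:
    for Ol in ops:
        C = Ok @ Ol - Ol @ Ok
        if np.max(np.abs(C)) < 1e-12:
            continue
        overlaps = [np.trace(Om @ C) for Om in ops]  # expand C in the basis
        assert sum(abs(v) > 1e-12 for v in overlaps) == 1
print("all three properties hold for d = 4")
```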
IV. CONCLUSIONS
We have shown that it is possible to perform an arbitrary unitary transformation on any number of spin-$\frac12$ particles whilst respecting angular momentum conservation, in such a way that any changes in the three components of angular momentum are separated into different 'batteries'. Any errors in this procedure can be made arbitrarily small by making these batteries sufficiently large.
The study of quantum thermodynamics has recently been extended to other conserved quantities besides energy, and this result allows one to consider explicit batteries for the angular momentum. The fact that the different components of angular momentum can be separated in this way is particularly surprising given that they do not commute.
Similar results can be straightforwardly obtained for spatial rotations (but not arbitrary unitaries) of higher spin systems. Two possible approaches to extend these results to arbitrary unitaries for such systems would be (i) to see whether a complete set of single system unitaries could be performed using a sequence of single qubit rotations and rotationally invariant interactions on the system and ancillas (which are used catalytically, and returned arbitrarily close to their initial state), or (ii) to note that the spin operators for any dimension obey the desired properties of the operators $O_k$ described above, and to investigate whether they can be complemented by other operators obeying these properties in order to obtain a complete operator basis.
For systems of dimension $d = 2^n$, a complete set of conserved quantities can be constructed such that arbitrary transformations on the system can be implemented while separating the changes in each conserved quantity into a different battery. An interesting open question is whether this can be achieved in arbitrary dimension, and which non-complete sets of conserved quantities allow for arbitrary unitaries to be performed whilst separating any changes in those conserved quantities into different batteries.

Appendix A: Explicit bounds

Here we provide more detailed technical proofs giving explicit bounds for the $O(\frac{1}{N^2})$ and $O(\frac{1}{N})$ terms in our bounds for spin-$\frac12$ systems, obtained using similar techniques to those introduced in [9].

1. Proof of Equation (6)

To provide an explicit bound for Eq. (6), we must first show that the higher-order terms in the expansion of $V_\alpha$ are suitably small.
Expanding the exponentials, we obtain a series of correction terms; in the fourth line we use the fact that there are 6 non-zero terms in $T = \sum_{j,k,\ell\in\{x,y,z\}} \epsilon_{jk\ell}\, s_j \otimes s^{(1)}_k \otimes s^{(2)}_\ell$, each of which has operator norm $\frac18$ (as $\|s_j\| = \frac12$), giving $\|T\| \le \frac34$. This yields Eq. (A3), a bound of order $\frac{1}{N^2}$ valid for $N \ge 6\alpha$. On the other hand, applying Eq. (D5) of Ref. [9] to our setup yields the bound (A4), where $U_{\alpha,x} = \exp(-i\frac{\alpha}{N}s_x)$. Combining Eqs. (A3) and (A4) through the triangle inequality, we obtain Eq. (6), where we have used the fact that $\mathrm{tr}_R[T, \rho_S \otimes \tau^{(1)}_y \otimes \tau^{(2)}_z]$ generates the desired rotation to first order.

2. Proofs of Equations (7) and (8)

Applying similar arguments to those in Eqs. (A1) to (A3) to the changes in spin given by Eqs. (7) and (8), we obtain the corresponding $O(\frac{1}{N^2})$ bounds.
3. Proofs of Equations (12), (13) and (14)

Let us start by proving Eq. (12). When considering a general small rotation, we must obtain a bound on the correction term $\zeta$. Following a similar argument to that below Eq. (D7) of Ref. [9], expanding the unitaries and collecting terms in $\frac{1}{N^n}$ for $n \ge 2$, we obtain at most $6^n$ contributions (as each of the 6 unitaries could provide each power of $\frac{1}{N}$), with a constant coefficient upper bounded by $(4\alpha_{\max})^n$, where $\alpha_{\max} \le \pi$. Each term also contains the trace norm of an operator with $n$ copies of $T$, acting on different parties, distributed either before or after the initial state. Following the argument above Eq. (A3), such a term is upper bounded by $(\frac34)^n$. Combining all of these observations we obtain Eq. (A9). For $N \ge 36\pi$ we therefore obtain $\zeta \le 648\pi^2 \frac{1}{N^2}$. Combining this with Eq. (D11) of Ref. [9], for $D = 3$, we obtain a per-step bound valid for sufficiently large $N$. Iterating this transformation $N$ times and using the inductive argument in Appendix C of Ref. [9] then yields the desired bound. This completes the proof of Eq. (12).
To now prove Eqs. (13) and (14), we can apply a similar argument as above. This leads to a bound on $\Delta s_j + \Delta s^{(j)}_j$, from which the argument follows.
Appendix B: Separating conserved quantities for higher dimensional systems Here we present a generalisation of the reference frame of Section II, and of the operator T of Eq. (4), to higher dimensional systems. In particular, we give sufficient conditions to be able to construct a complete basis of extensive conserved quantities for such systems, and a reference frame that allows us to perform arbitrary unitary transformations of the system whilst storing any changes in these conserved quantities in different batteries (up to arbitrarily small corrections).
Consider a system S of dimension $d > 2$. In this case there exist $K = d^2 - 1$ possible linearly independent conserved quantities, where without loss of generality we do not include the scalar. In what follows, we will consider the case in which there exists a particular choice for these conserved quantities given by $K$ Hermitian operators $M = \{O_k\}_{k=0,\ldots,K-1}$ with the properties stated in Section III: (i) each $O_k$ is traceless; (ii) the operators are mutually orthogonal, $\mathrm{tr}(O_k O_\ell) = \eta_k \delta_{k\ell}$; and (iii) they are closed under commutation, i.e., $[O_k, O_\ell]$ is either zero or proportional to some $O_m$. Together with the identity these operators form an orthogonal operator basis for the $d$-dimensional system. In (iii), note that the constant of proportionality could depend on $k$ and $\ell$, and that $m \ne \ell$ as $\mathrm{tr}(O_\ell[O_k, O_\ell]) = 0$.

First let us consider how to perform the small unitary transformation $U_{\alpha,0} = \exp(-i\frac{\alpha}{N}O_0)$. To do this, we first prepare a reference frame consisting of $D = K - 1$ particles of the same type as the system, initialised in a product state whose $k$-th factor is of the form $\rho^{(k)} = \frac{1}{d}I + c^{(k)}O_k$, and define a generalisation $T$ of the interaction in Eq. (4) in terms of a fully antisymmetric tensor $f$. Notice that for $d = 2$, $f$ is just the antisymmetric symbol $\epsilon_{ijk}$. In addition, when $d = 2$ and $M = \{s_k\}_{k=x,y,z}$, we recover the interaction defined in Eq. (4) and the batteries defined in Section II.
The main requirement that $T$ should satisfy is $[T, C] = 0$ for any extensive conserved quantity $C$. We will show this in the next subsection, but for now we focus on using it to perform transformations of the system. The operator $T$ allows us to define a global unitary interaction $V$ among $K$ systems, similarly to Eq. (5).
The effective action of this global unitary $V$ on our system is given by $\mathrm{tr}_R[V(\rho_S \otimes \rho_R)V^\dagger]$. Computing this trace, we find that only the term in $T$ containing $f_{012\ldots D}$ gives a non-zero contribution to first order in $\frac{1}{N}$, leading to the desired small rotation. Similarly, we can perform the small unitary transformation $U_{\alpha,r} = \exp(-i\frac{\alpha}{N}O_r)$, generated by an arbitrary operator $O_r$, by preparing the reference frame such that $\rho^{(k)} = \frac{1}{d}I + c^{((k+r)\,\mathrm{mod}\,D)}\,O_{(k+r)\,\mathrm{mod}\,D}$ and applying $V$, such that the only non-zero term in the trace corresponds to $f_{r\,(r+1)\ldots D\,0\,1\ldots(r-1)}$. As in the main paper, by considering a sequence of small transformations generated by each operator $O_r$ we can implement an arbitrary small transformation $\exp(-i\frac{H}{N})$. By iterating this process $N$ times, we can then implement an arbitrary transformation $\exp(-iH)$ on the system. As the error per step is $O(\frac{1}{N^2})$ and there are $O(N)$ steps, the overall error can be made as small as desired by choosing $N$ sufficiently large.
Next, let us see how the conserved quantities are stored in the batteries (i.e., the reference frame). For simplicity, we again consider implementing the small transformation $U_{\alpha,0}$ generated by $O_0$ on the system. The change in the conserved quantity $O_k$ for particle $r$ in the reference frame is governed by $O^r_k := I \otimes \cdots \otimes O_k \otimes \cdots \otimes I$, the operator that is $I$ everywhere except for the $r$-th frame particle, where it is $O_k$. Expanding $V$ to first order in $\frac{1}{N}$ yields Eq. (B5). Now notice that the sum in Eq. (B5) consists of two terms, $\{a = 0, a_r = r\}$ and $\{a = r, a_r = 0\}$, since any other assignment of values to $a$ or $a_r$ will render $f_{a,1,\ldots,r-1,a_r,r+1,\ldots,D}$ zero. Hence, using $\rho^{(r)} = \frac{1}{d}I + \frac{1}{\|O_k\|}O_k$ and $f_{r,1,\ldots,r-1,0,r+1,\ldots,D} = -1$, we obtain the first-order change. So far, we have only used the first two properties of the operators $O_k$. Given the third property, that the operators are closed under commutation, we find that either $[O_0, O_r] = 0$, in which case the $r$-th subsystem in the reference frame accumulates no changes in any conserved quantity (i.e., $\Delta O^r_k = 0$ for all $k$), or $[O_0, O_r] \propto O_m$. In the latter case, the $r$-th subsystem will only accumulate changes in the conserved quantity $O_m$ (i.e., $\Delta O^r_k = 0$ unless $k = m$). As each subsystem accumulates changes in at most one conserved quantity, it is possible to separate all of the conserved quantities into different batteries. A similar result will apply for small rotations generated by any $O_k$, and thus for the overall transformation.
1. T preserves extensive conserved quantities
In this subsection, we show that $T$ commutes with all extensive conserved quantities. Let us begin by revisiting the case with $d = 2$, where our conserved quantities are $s_x$, $s_y$ and $s_z$, as presented in Section II. Here, $T$ acts on three particles as $T = \sum_{a,a_1,a_2} f_{a a_1 a_2}\, s_a \otimes s_{a_1} \otimes s_{a_2}$. Now consider the conserved quantity corresponding to the total angular momentum in the x-direction, $S_x = s_x \otimes I \otimes I + I \otimes s_x \otimes I + I \otimes I \otimes s_x$, which is the sum of the angular momentum of the three individual particles in the x-direction. We will now see how each term in each sum is either zero or cancelled out by a similar term appearing in a different sum but with the opposite sign. First, note that all terms in the final answer must contain the same spin operator on exactly two particles and a different spin operator on the third particle. This is because $a$, $a_1$ and $a_2$ must all be distinct in order for $f_{a a_1 a_2}$ to be non-zero, and the spin operator generated by the commutator is different from the spin operators appearing within it. Pairing these terms, each is cancelled by its partner of opposite sign, so $[T, S_x] = 0$, and likewise for $S_y$ and $S_z$. For the error bounds, we find, using an approach similar to that in the derivation of Eq. (A2) and assuming that $N > 2K!\,\frac{\|O_0\|\|O_1\|\cdots\|O_D\|}{\eta_1\eta_2\cdots\eta_D}\,\alpha$, the bound (B10). Applying now Eq. (D5) of Ref. [9] to this setup yields the bound (B11), and combining Eqs. (B10) and (B11) through the triangle inequality gives the accuracy bound. Finally, following a similar approach we obtain the separation bounds. In the particular case considered above of dimension $d = 2^n$, in which the operators $O_k$ are products of spin-$\frac12$ operators and $I/2$, note that $\eta_k = \|O_k\| = \frac{1}{d}$, and hence these expressions simplify considerably.
Salinity and Water Temperature as Predictors of Bottlenose Dolphin (Tursiops truncatus) Encounter Rates in Upper Galveston Bay, Texas
Bottlenose dolphins (Tursiops truncatus) that inhabit urban estuaries like Galveston Bay, Texas, are exposed to cumulative stressors including pollution, fisheries, shipping, freshwater inflows, and construction operations. With continuing development, it is imperative to understand the key environmental variables that make the Galveston Bay estuary suitable habitat for this protected species. The Galveston Bay Dolphin Research Program conducted monthly photo identification surveys of bottlenose dolphins in a previously understudied 186 km² area in upper Galveston Bay (UGB). To understand occurrence patterns in this region, we calculated monthly encounter rates of dolphins (dolphins/km) for four consecutive years (2016–2019). Using multiple linear regression models, we investigated the relationship between encounter rates, and water temperature and salinity. Monthly encounter rates ranged from 0.00 to 1.23 dolphins/km with an average of 0.34 dolphins/km (SE = 0.05). Over 80% of the variance was explained by the predictor variables water temperature and salinity (R² = 0.820). Water temperature had a positive linear effect on encounter rates at over 23.37°C (SE = 1.42). Accordingly, higher encounter rates occurred during months with warm temperatures (May–September) compared to cooler months (November–April), indicating a predictable yearly movement pattern. Moreover, salinity was a highly significant predictor variable, with encounter rates dropping linearly with decreases in salinity. Higher numbers of dolphins are found in UGB during summer, but an exodus of dolphins occurs with low salinity levels, regardless of the time of year and water temperature. These findings should be considered during infrastructure projects (i.e., flood gate system) that may alter dolphin habitat and prey availability.
However, due to coastal population growth and rapid development, many estuaries are severely impacted by anthropogenic activities (Kennish, 2002). Bottlenose dolphins that inhabit urban and industrialized estuaries are exposed to diverse stressors including pollution, commercial and recreational fisheries, shipping, dredging and construction, algal bloom, and freshwater inflows (Phillips and Rosel, 2014).
Galveston Bay, Texas is one of the most industrialized estuaries in the United States. Its watershed supports half the population of Texas, and it contains the highest concentration of oil refineries and petrochemical plants in the world [Houston Advanced Research Center (HARC), 2020;GBRC, 2021]. These industries rely on the Houston Ship Channel (HSC) for transportation, leading to heavy year-round ship and barge traffic. Severely impaired water quality prior to 1970 placed portions of upper Galveston Bay (UGB) on the list of the Environmental Protection Agency's top 10 most polluted water bodies (Youngblood, 2010). Corrective measures have improved water quality over time, but concerns remain over high concentrations of contaminants in sediment and biota [Phillips and Rosel, 2014; Houston Advanced Research Center (HARC), 2020; and sources therein]. Moreover, proposed large infrastructure projects, involving coastal protection barriers (USACE and TGLO, 2021), could further impact wildlife habitat in Galveston Bay.
Prior to this study, most research on common coastal bottlenose dolphins (herein referred to as "dolphins") in Galveston Bay focused on describing dolphin activity in the southern portion of Galveston Bay, primarily in or adjacent to "Bolivar Roads" (the inlet that connects the Bay to the Gulf of Mexico) (e.g., Henningsen and Würsig, 1991;Bräger et al., 1994;Fertl, 1994;Moreno and Mathews, 2018). This area has been consistently described as having a high concentration of dolphins (Henningsen and Würsig, 1991;Moreno and Mathews, 2018;Ronje et al., 2020). On the other hand, few studies focused on waters of the inner estuary and upper portions of the Bay, and observational surveys conducted in the 1980s-1990s resulted in few or no dolphin sightings in UGB (Jones, 1988;Henningsen and Würsig, 1991;Blaylock and Hoggard, 1994). An apparent "South-North decrease" of dolphins in the Galveston Bay estuary has been attributed to the higher concentrations of prey near the mouth of the estuary, as well as higher levels of contaminants in UGB (Moreno, 2005). Nevertheless, anecdotal data from longterm Galveston Bay users (i.e., boaters, fishers) suggests that dolphin abundance may have increased in UGB over the last few decades (Fazioli et al., 2016). Reconnaissance surveys conducted in 2013-2014 confirmed the frequent presence of dolphins in western UGB and led to the establishment of a long-term monitoring program focused on this previously understudied area.
Increased dolphin activity in this area may reflect the success of efforts to protect Galveston Bay; however, development in the region continues and habitat degradation persists (Youngblood, 2010;Phillips and Rosel, 2014;and sources therein). Accordingly, the Galveston Bay bottlenose dolphin stock [as defined by National Oceanic and Atmospheric Administration Fisheries (NOAA Fisheries)] has been designated as high priority for research and monitoring (Phillips and Rosel, 2014). Information on habitat use, abundance, site fidelity, and stock structure are all needed to better inform management and conservation of these federally protected dolphins (Phillips and Rosel, 2014;Hayes et al., 2019). Moreover, in the face of anthropogenic and climatic changes affecting the Bay, it is imperative to understand the key environmental factors that make the estuary suitable dolphin habitat.
A variety of biotic and abiotic factors influence bottlenose dolphin abundance and distribution. These may include but are not limited to prey and predator distribution, intraspecific social dynamics, topography, turbidity, temperature, and salinity (Irvine et al., 1981;Hastie et al., 2006;Heithaus and Dill, 2006;Mazzoil et al., 2008;Huther, 2010;Hornsby et al., 2017;Moreno and Mathews, 2018). In Galveston Bay, Moreno (2005) observed dolphins further north of Bolivar Roads only June through August, the months with the warmest annual water temperatures. Ronje et al. (2020) indicated a summer increase in overall abundance for Galveston Bay, encountering more dolphin groups in UGB and shallow waters of the estuary during these months, and noting a density shift to deeper channel and Gulf pass habitats during the winter. Anecdotal data gathered from Bay-users also indicated a perceived increase in the number of dolphins present in UGB during summer months (Fazioli et al., 2016). This apparent increase coincides with the months of greater fish diversity in UGB (Bechtel and Copeland, 1970). In terms of salinity, UGB is marginal dolphin habitat, often fluctuating to below estimated 8-11 ppt physiological tolerance thresholds (Ewing et al., 2017;Hornsby et al., 2017;Fazioli and Mintzer, 2020;McClain et al., 2020). A study focused on the effects of Hurricane Harvey, identified a decline in dolphin encounter rates in UGB during the prolonged period of low salinity after the storm (Fazioli and Mintzer, 2020). Herein, we focused on evaluating the relationship between dolphin presence in western UGB and two likely influential abiotic factors: temperature and salinity.
Given the expected continued anthropogenic development in Galveston Bay, we aim to increase knowledge about this understudied dolphin stock to inform future management and conservation efforts. To understand the occurrence patterns of dolphins in UGB, we calculated monthly encounter rates of dolphins (dolphins/km) for four consecutive years. Specifically, we aimed to (1) investigate if dolphins are found year-round in western UGB, and (2) evaluate the effect of temperature and salinity on dolphin presence in UGB.
Study Area and Survey Protocols
This study took place in a 186 km² area in western UGB, with land and the HSC delineating the western and eastern boundaries of the study area (Figure 1). Temperatures in the study area vary in accordance with the subtropical climate, with the lowest temperatures occurring December through February and the warmest in July and August [Houston Advanced Research Center (HARC), 2020]. Freshwater inflow, primarily from the Trinity and San Jacinto Rivers, largely affects the salinity of the study area (Orlando et al., 1993). April through June has been noted as a time of year with high inflow and corresponding low salinity (Orlando et al., 1993). The low flow season has been identified as July through October but can be interrupted with tropical storms (Ward and Armstrong, 1992). Moreover, these patterns are influenced by global-scale climatic processes (e.g., El Niño and the Southern Oscillation; Tolan, 2007).
The Galveston Bay Dolphin Research Program (GDRP) began to conduct boat-based dolphin surveys in UGB in 2013. In 2015-2016, we standardized the primary study area and survey protocols to allow for consistent long-term monitoring of the dolphin population. Herein, we utilized data gathered during continuous monthly boat-based surveys conducted from January 2016 to December 2019 under NOAA Fisheries Scientific Research Permit #18881.
Monthly photo-identification surveys were conducted by 3-5 trained observers from a 7 to 8 m center console boat traveling at 18.5-32 km/h. Meandering survey routes were determined daily based on existing survey coverage and weather conditions Urian et al., 2009) and were designed to achieve balanced coverage of nearshore, open bay and deep channel habitats within the study area ( Supplementary Figures 1-4). Initial route direction was randomized to avoid diurnal biases (Rosel et al., 2011). It typically took 2-3 field days of effort to achieve coverage of the study area (Figure 1). Field days were completed consecutively, or as close together as weather would permit. When a dolphin was spotted, the crew stopped and took photographs of each dolphin's dorsal fin (for individual identification) and recorded data including location, dolphin activity, human interactions, group size, presence of calves, surface water quality (water temperature and salinity), tide, sea state, weather, and sighting conditions. Observers evaluated and rated sighting conditions from excellent to poor, by considering the combination of sea state, glare, and weather to determine the overall likelihood of seeing dolphins, if present, within 0.5 km of the boat. A sighting "group" was defined as all dolphins within approximately 100 m of each other, engaged in similar activities (Irvine et al., 1981). Standard photo-analysis tools and methodology were used to rate photo quality and fin distinctiveness and catalog individual dolphins in each sighting (Würsig and Würsig, 1977;Scott et al., 1990;Adams et al., 2006;Rosel et al., 2011;Urian et al., 2015).
During each survey day, the crew utilized a YSI Pro-DSS or 600 XLM Sonde to collect profiles (at 0.3 m from the bottom, mid-column and 0.3 m from the surface) of water temperature and salinity at "environmental profile stations" (Figure 1). Three to five stations were chosen each survey day to represent the areas surveyed regardless of the presence or absence of dolphins (Fazioli and Mintzer, 2020). To limit possible biases associated with stratification, only the average mid-column temperature and salinity readings were utilized for analysis.
Encounter Rate Regression Analysis
We calculated monthly dolphin encounter rates from January 2016 to December 2019. Monthly encounter rates were defined as the number of individual dolphins sighted per linear kilometer surveyed (dolphins/km or d/km). The number of dolphins for each month was calculated by summing the best estimate, after photo-analysis, of dolphins in each group/sighting within the corresponding month (Supplementary Material 1). Resightings of individual dolphins within a given month were excluded. Overlay and measuring tools in ArcGIS Pro 2.5.0 were used to calculate the monthly linear effort (km) within the study area polygon (Figure 1). Only sightings and linear effort that took place under sighting conditions rated as "good" or "excellent" were included in the calculations (Wells et al., 1996; Fazioli et al., 2006).
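For concreteness, the toy snippet below mirrors this bookkeeping for a single hypothetical month; all records and the effort figure are made up, and the unique-ID count is a simplified stand-in for the photo-identification workflow described above.

```python
import pandas as pd

# Hypothetical sighting records for one survey month
sightings = pd.DataFrame({
    "dolphin_id": ["A01", "A02", "A02", "B07"],           # A02 resighted this month
    "conditions": ["good", "excellent", "poor", "good"],  # sighting-condition rating
})
effort_km = 62.5  # linear km surveyed under good/excellent conditions (made up)

ok = sightings["conditions"].isin(["good", "excellent"])
n_dolphins = sightings.loc[ok, "dolphin_id"].nunique()    # within-month resights dropped
print(f"encounter rate = {n_dolphins / effort_km:.2f} dolphins/km")
```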
Using regression analyses, we investigated the relationship between encounter rates, and water temperature and salinity in UGB. We ran a multiple linear regression model, in Program R (R Core Team, 2020), to predict encounter rates based on mid-column water temperature and salinity. Exploration of the data revealed a non-linear relationship between temperature and encounter rates. It also identified a likely interaction effect between salinity and temperature at high values. Consequently, we transformed the temperature values (temperature²) to meet the linearity assumption of regression analysis, and we included the interaction effect. Thus, the model equation was: encounter rate = β₀ + β₁(temperature²) + β₂(salinity) + β₃(temperature² × salinity) + ε. To further define the relationship between temperature and encounter rates, we also fit a regression model with segmented relationships to identify possible breaking points (Muggeo, 2008). A significance level of 0.05 was applied.
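The model structure is straightforward to reproduce with standard tooling. The authors fitted the model in Program R; the sketch below expresses the same formula (squared temperature, salinity, and their interaction) in Python's statsmodels, with synthetic stand-in data since the underlying monthly records are not reproduced here.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 48 monthly records (2016-2019); the real inputs
# were mid-column temperature, salinity, and the monthly encounter rate
rng = np.random.default_rng(2)
temp = rng.uniform(10, 32, 48)
sal = rng.uniform(2, 24, 48)
rate = np.clip(0.02 * sal + 0.0004 * temp**2 * (sal / 20) - 0.25
               + rng.normal(0, 0.08, 48), 0, None)
df = pd.DataFrame({"rate": rate, "temp": temp, "sal": sal})

# rate ~ temperature^2 + salinity + their interaction, as in the text
# (the formula syntax matches R's)
fit = smf.ols("rate ~ I(temp**2) * sal", data=df).fit()
print(fit.params)     # beta_0, beta_1 (temp^2), beta_2 (sal), beta_3 (interaction)
print(fit.rsquared)
```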
RESULTS
During the 4 years of this study (January 2016-December 2019), we completed 6655 km of survey effort in 105 field days. This resulted in the observation of 2388 dolphins in 355 groups (Figure 1 and Table 1). The warmest average water temperature was recorded in 2017 at 23.59°C (SE = 1.56).
The multiple regression equation was highly significant [F(4,43) = 49.02, p < 0.001]. Over 80% of the variance in encounter rates was explained by the predictor variables water temperature and salinity (R² = 0.820). The interaction effect was significant (p < 0.001), confirming the need to include the interaction effect along with the main predictors (Supplementary Material 1). The segmented linear model identified a breaking point in the relationship between encounter rates and temperature, with temperature having a positive linear effect on encounter rates only above 23.37°C (SE = 1.42, Figure 3). The significant salinity coefficient (p < 0.001) in this model predicted a 0.02 d/km encounter rate increase with every 1.00 ppt increase in salinity (Figure 3 and Supplementary Material 1).
As indicated by the significant interaction effect between the two variables, the simultaneous/joint effect of temperature and salinity on encounter rates was significantly greater than the sum of the parts. The highest dolphin encounter rates in UGB were predicted to occur when both temperature and salinity were at their highest values. This was the case in August 2018 when encounter rates reached the peak of 1.23 d/km (temperature = 29.97°C, salinity = 22.37 ppt; Figure 2). In contrast, encounter rates were below average (<0.34 d/km) in the warm months of June 2016, September 2017, and September 2019 when salinity reached below 3.5 ppt due to heavy precipitation and the corresponding increase in freshwater inflows (Figure 2). This pattern was mirrored in the annual averages when in 2019 the lowest annual salinity (x̄ = 9.44 ppt, SE = 1.26) corresponded to the lowest annual encounter rates (x̄ = 0.28 d/km, SE = 0.06).
DISCUSSION
Our findings indicate that bottlenose dolphins can be found in UGB year-round, but most leave during the cooler months. Annually, encounter rates rise during months with the warmest water temperatures (>23°C). Peak encounter rates will typically occur June-September; however, during periods of low salinity, encounter rates will likely decrease regardless of water temperature. Concurrent high temperature and salinity represent optimal environmental conditions for dolphin presence in UGB.
As endothermic animals, dolphins depend on blubber and internal metabolic processes to maintain a stable body temperature. During the study period, dolphins experienced water temperature ranging from 10 to 32°C. Coastal bottlenose dolphins generally tolerate this range with changes in integument thickness and whole body conductance (Meagher et al., 2008;Carmichael et al., 2012). However, temperature may be a limiting factor for smaller dolphins (i.e., juveniles, calves and their mothers) (Yeates and Houser, 2008;Carmichael et al., 2012). Exploratory analyses showed that mother/calf pairs can be found in the study area year-round, but the proportion of groups with calves was higher in the warm months (>23°C) compared to the cold months (<23°C). Peak calving season for dolphins in Texas coastal waters is in the spring (Fernandez and Hohn, 1998), coinciding with a time when fewer dolphins are present in UGB. Furthermore, sightings of early neonates in UGB are rare (GDRP, unpublished data). If mothers with neonates frequent the study area only during warm months due to their offspring's metabolic constraints, this could, in part, explain the effect of temperature on encounter rates. The drivers of movements and habitat use of mother/calf groups and calving females should be studied. Prey migration is likely an important underlying mechanism for the annual encounter rate patterns related to temperature fluctuations (Irvine et al., 1981;Scott et al., 1990;Wilson et al., 1997). In and near Sarasota, Florida, for example, dolphins are found inside bays year-round, but many shift their distribution toward the Gulf during cooler months (Scott et al., 1990). Mullet migration has been suggested as a primary factor for this change, as mullet migrate from inshore areas to the Gulf to spawn in the fall and return to the bays in the spring (Scott et al., 1990). A similar pattern occurs in Texas where mullet leave the bays, from October to January, to spawn 40-50 miles offshore (Boyd, 2011). For many Galveston Bay fish species, two migration patterns have been recorded: the migration of spawning adults leaving the Bay and the migration of postlarvae and juveniles entering the Bay (Bechtel and Copeland, 1970). Although the exact timing of these migrations varies by species, most correspond with seasonal temperature changes and many enter the Bay as the temperature increases (Bechtel and Copeland, 1970). Additionally, a commercial shrimp fishery operates within the estuary, with trawler activity increasing in UGB during warm months, following shrimp life cycle and migration patterns [TWPD, 2002;Houston Advanced Research Center (HARC), 2020]. The strong association of foraging dolphins with this fishery (Henningsen and Würsig, 1991;Fertl, 1994;Moreno, 2005;Piwetz, 2019) has the potential to affect dolphin movements. Accordingly, dolphins likely return to UGB with rising water temperatures due to a combination of factors related to food availability.
Our results suggest that the UGB study area does not encompass the entire home ranges of the observed dolphins. Many individuals known to frequent UGB have been sighted south of our study area, in lower Bay, during the cooler months of the year (GDRP, unpublished data). However, it is unknown if most remain in the lower Bay or if they leave the estuary to utilize nearby coastal waters, or travel to other estuaries. Travel between Texas bays has been documented in other studies for some individuals (Blaylock and Hoggard, 1994;Maze and Würsig, 1999;Lynn and Würsig, 2002;Ronje et al., 2020). Previous studies have identified changes in abundance aligning with annual temperature fluctuation in studies in Galveston and other Texas estuaries (Shane, 1980;Henningsen and Würsig, 1991;Fertl, 1994;Wilson et al., 1997;Ronje et al., 2020). As in UGB, dolphin abundance peaks during the summer in other northern Texas coastal locations and decreases with cooler temperatures (Fertl, 1994;Wilson et al., 1997;Ronje et al., 2020), while in central and south Texas the opposite pattern has been documented (Shane, 1980;McHugh, 1989). Telemetry studies will be required to map detailed range patterns, but future winter observational studies focused on lower Galveston Bay could help determine if most UGB dolphins remain in the estuary during the cooler months and could identify calving hotspots.
During the study period, most dolphins left the study area during low salinity events (i.e., when salinity dropped below 8-11 ppt) and returned once salinity level had increased. This trend was evident in June 2016 ("Tax Day Flood"), September 2017 (Hurricane Harvey), and September 2019, when heavy precipitation led to salinity levels below 3.5 ppt and encounter rates dropped well below expected levels for the time of year. 2018 was the only year covered in this study with no major low salinity event, and it had the highest annual encounter rate of 0.44 d/km. On the other hand, 2019, a year with an El Niño event, had an exceptionally wet spring and early summer (TWDB, 2021) likely explaining the relatively lower encounter rates during these months and the annual average of only 0.28 d/km. Although more research is needed to understand the mechanisms behind the apparent exodus of dolphins from the study area at low salinity, it is likely that reduced prey availability is a driver since many estuarine fish species emigrate to higher salinity water during freshwater events (Greenwood et al., 2006;Taylor et al., 2014).
Due to major river inflows and weak tidal influence, most of the Galveston Bay estuary may experience prolonged low salinity throughout the water column with heavy precipitation (Du et al., 2019); however, the HSC acts as a conduit for tidal waters. Water stratification in channels can lead to differences as large as 15 ppt between the surface and bottom (Bechtel and Copeland, 1970). After Hurricane Harvey, the average midcolumn salinity in channel habitat was more than 4 ppt higher than in open bay habitat, and habitat-specific encounter rates suggested that dolphins moved toward deeper channels (Fazioli and Mintzer, 2020). However, studies in Pensacola Bay, FL and Barataria Bay, LA found that dolphins exposed to low salinity did not move to areas with higher salinities (McBride-Kebert and Toms, 2021;Takeshita et al., 2021). Future studies should examine fine-scale habitat distribution of dolphins within the Galveston Bay estuary to further evaluate movements in response to flooding.
It is important to reiterate that some dolphins remained in the study area during each freshwater event. Preliminary site fidelity analyses suggest that there is a resident population of dolphins that utilize UGB regularly as a portion of their range (Fazioli et al., 2017). Dolphins that demonstrate high site fidelity within an estuary are known to move within their home range as a response to environmental factors, but are unlikely to abandon it, even in unfavorable conditions (Mazzoil et al., 2008;Wells et al., 2017;McBride-Kebert and Toms, 2021;Takeshita et al., 2021). Dolphins are physiologically adapted to inhabit brackish to oceanic coastal waters with salinities that typically range from 15 to 35 ppt (Ewing et al., 2017;McClain et al., 2020;Booth and Thomas, 2021). Those that remain in an area subject to a low salinity event may suffer from freshwater intoxication due to oral ingestion and/or skin absorption, leading to serious negative health consequences (Ewing et al., 2017;Deming et al., 2020;Duignan et al., 2020;Fazioli and Mintzer, 2020;McClain et al., 2020;McBride-Kebert and Toms, 2021). Effects of freshwater exposure on dolphins can include development of hydropic degeneration and ulcerative or erosive skin lesions (e.g., Wilson et al., 1999;Mullin et al., 2015;Duignan et al., 2020;Fazioli and Mintzer, 2020;McClain et al., 2020;Toms et al., 2020;Townsend, 2020;Takeshita et al., 2021), corneal edema (Deming et al., 2020), and changes in blood chemistry and electrolytes (Ewing et al., 2017;Deming et al., 2020;McClain et al., 2020). Some of these effects were evident when both prevalence and extent of skin lesions increased significantly in the study population after Hurricane Harvey (Fazioli and Mintzer, 2020). Further effects on dolphin health and mortality are likely to occur during freshwater events due to the energetic costs associated with reduced prey availability (Meager and Limpus, 2014;Booth and Thomas, 2021).
Dolphins in UGB are subject to multiple stressors and could be particularly vulnerable to the effects of freshwater (Booth and Thomas, 2021). Epidermal degeneration may heighten exposure to disease and infection, compounded by the increase of pollutants, bacteria and other toxic substances in the water during flood events (Wilson et al., 1999;Kiaghadi and Rifai, 2019;Bacosa et al., 2020;Steichen et al., 2020). Additionally, immunosuppression and adrenal compromise caused by longterm accumulation of toxic pollutants and exposure to petroleum products (Schwacke et al., 2012(Schwacke et al., , 2014 could make dolphins more susceptible to secondary infection and less capable of physiologically adapting to salinity changes in their environment (McClain et al., 2020). More research is needed to understand the population-level effects of freshwater events in Galveston Bay, and to identify which individuals or groups (i.e., age classes and residents) are more susceptible, either physiologically or due to high site fidelity and reluctance to leave the affected area.
In the United States, coastal bottlenose dolphin stocks are protected under the Marine Mammal Protection Act of 1972, and the results of this study have implications for the management of the Galveston Bay stock. Importantly, this study revealed that dolphins use UGB year-round. Continued monitoring is warranted to identify changes in the survival and health of UGB dolphins related to ongoing threats. Seafood advisories, legacy contaminants in sediment, chemical and hydrocarbon spills, and flood events, all make UGB a "high-risk" environment [Houston Advanced Research Center (HARC), 2020; and sources therein]. Heavy precipitation and flood events are expected to increase in intensity due to global climate change (Easterling et al., 2000;Knutson et al., 2010), and as occurred with Hurricane Harvey, these could severely decrease the salinity of Galveston Bay dolphin habitat (Fazioli and Mintzer, 2020). Furthermore, future dredging and infrastructure projects, including the planned widening of the HSC and proposed storm barriers (e.g., USACE and TGLO, 2021), could have considerable short and long-term effects (e.g., noise exposure, increased vessel traffic, and habitat availability). The proposed "Galveston Bay Storm Surge Barrier System" could lead to temporary or permanent changes to salinity and prey assemblages (USACE and TGLO, 2021). The results of our study, emphasizing the year-round presence of dolphins and the importance of salinity, should be considered during the development of these large-scale projects. Mitigation measures will likely be necessary to protect this population, but more information is needed on how dolphins utilize Galveston Bay, the nearshore waters of the Gulf of Mexico, and other Texas bays to identify critical habitats utilized during cooler months and freshwater events.
DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
ETHICS STATEMENT
This work was conducted under the National Oceanic and Atmospheric Administration Fisheries (NOAA Fisheries) Scientific Research Permit #18881. Animals were not handled during our field work, so no further reviews were necessary for this non-invasive work.
AUTHOR CONTRIBUTIONS
VM and KF designed and executed the study and wrote the manuscript. KF collected and managed the data. VM performed the analysis. Both authors contributed to the article and approved the submitted version.
FUNDING
This work was conducted by the Galveston Bay Dolphin Research Program (GDRP), a cooperative agreement between the Environmental Institute of Houston at the University of Houston -Clear Lake and the Galveston Bay Foundation. Parts of this study were completed with grant support from the Gulf of Mexico Alliance, the SeaWorld Busch Gardens Conservation Fund, the SeaWorld Busch Gardens Emergency Fund, Restore America's Estuaries, the Trull Foundation, and individual GDRP donors.
Higher retention and viral suppression with adolescent-focused HIV clinic in South Africa
Objective To determine retention in care and virologic suppression among HIV-infected adolescents and young adults attending an adolescent-friendly clinic compared to those attending the standard pediatric clinic at the same site. Design Retrospective cohort analysis. Setting Government supported, hospital-based antiretroviral clinic in KwaZulu-Natal, South Africa. Participants Two hundred forty-one perinatally HIV-infected adolescents and young adults aged 13 to 24 years attending an adolescent-friendly clinic or the standard pediatric clinic from April 2007 to November 2015. Intervention Attendance in an adolescent-friendly clinic compared to a standard pediatric clinic. Outcome measures Retention in care defined as one clinic visit or pharmacy refill in the prior 6 months; HIV-1 viral suppression defined as < 400 copies/ml. Results Overall, among 241 adolescents and young adults, retention was 89% (214/241) and viral suppression was 81% (196/241). Retention was higher among those attending adolescent clinic (95%) versus standard pediatric clinic (85%; OR 3.7; 95% confidence interval (CI) 1.2–11.1; p = 0.018). Multivariable logistic regression adjusted for age at ART initiation, gender, pre-ART CD4 count, months on ART, and tuberculosis history indicated higher odds of retention in adolescents and young adults attending adolescent compared to standard clinic (AOR = 8.5; 95% CI 2.3–32.4; p = 0.002). Viral suppression was higher among adolescents and young adults attending adolescent (91%) versus standard pediatric clinic (80%; OR 2.5; 95% CI 1.1–5.8; p = 0.028). A similar multivariable logistic regression model indicated higher odds of viral suppression in adolescents and young adults attending adolescent versus standard pediatric clinic (AOR = 3.8; 95% CI 1.5–9.7; p = 0.005). Conclusion Adolescents and young adults attending an adolescent-friendly clinic had higher retention in care and viral suppression compared to adolescents attending the standard pediatric clinic. Further studies are needed to prospectively assess the impact of adolescent-friendly services on these outcomes.
Introduction
In 2013, an estimated 870,000 adolescents and young adults aged 15-24 years were living with HIV in South Africa. [1] Depression/anxiety, perceived stigma, behavioral and conduct problems are common during adolescence and pose potential barriers to HIV care. [2][3][4][5][6] As adolescents age, they face developmental challenges, decreasing parental/guardian support, and experimentation with substance use that may act as additional barriers to care. [5,7,8] Changing care providers, rigid scheduling, increasing responsibilities, and decreasing involvement of adult caregivers may further contribute to the challenges of HIV care for adolescents. [7,9] To sustain viral suppression and achieve clinical benefits while receiving antiretroviral therapy (ART), individuals living with HIV/AIDS must remain engaged in medical care and maintain high adherence to medical regimens. [10] Numerous observational cohort studies in South Africa report significantly poorer retention in care and viral suppression rates for adolescents compared to older adults. [11][12][13][14][15][16][17][18][19][20] The World Health Organization (WHO) recently issued guidelines on provision of adolescent-friendly services to help overcome many of the barriers noted above to improve these outcomes; [21] however, there is little evidence of documented benefit from these services. [22,23] A qualitative study with healthcare providers suggested that peer support and collaboration with healthcare providers may improve care for older HIV-infected adolescents and young adults in sub-Saharan Africa; [24] however, studies with objective outcomes are needed.
In this paper, we present an analysis of retention and viral suppression among adolescents attending an adolescent-friendly clinic compared to adolescents and young adults attending the same site's standard pediatric clinic. The site serves HIV-infected children (up to 12 years) and 241 HIV-infected adolescents and young adults (age 13-24 years). In the standard pediatric clinic, HIV-infected children are seen by the doctor and counselors on weekdays every 1-3 months and obtain medication the same day at an onsite pharmacy.
Adolescent clinic
In March 2009, a Saturday adolescent clinic opened at the Ethembeni Clinic that included ART dispensing, lunch, and scheduled group activities (e.g., dancing, soccer, education, counseling). The clinic was established with the intention of decreasing school absenteeism and stigma and improving peer support and retention in care. Appointments at the adolescent clinic are every two months. Parents or caregivers are not required to attend. Initially, adolescents over age 13 could attend if they were HIV-infected, fully aware of their HIV status, and receiving ART for 6 months. The six-month time frame allowed for prepackaging of weight-based ART medication during the time of anticipated rapid weight gain after ART initiation. Due to limited space and funding, the adolescent clinic was closed to new enrollment after reaching 80 adolescents in November 2012, and subsequent adolescents remained in the standard pediatric clinic until additional space in the adolescent clinic became available (e.g., when an older adolescent transitioned to adult care). Adolescents not attending the adolescent clinic remained in the standard weekday pediatric clinic. The same physician, nurses, and counselors work in both the adolescent clinic and the pediatric clinic at the same clinical site. Clinical personnel working in the adolescent clinic are permitted to subtract hours they worked during the weekend clinic from their regular workweek hours. Additional expenses for each clinic, including the food and activities, were provided by a local non-profit organization and cost $1.25 per adolescent.
Study design
We performed a retrospective cohort analysis of perinatally HIV-infected adolescents and young adults ages 13-24 years receiving at least one prescription of ART at Ethembeni Clinic from April 2007 to November 2015. Adolescents and young adults who initiated ART while hospitalized for tuberculosis and subsequently transferred to alternate facilities at discharge were excluded from the analysis. Any adolescent or young adult who attended the adolescent clinic at least once was considered exposed to the adolescent clinic for the primary analysis.
Study procedures
We obtained demographic data, pre-ART information, medication history, pharmacy refill data, clinic visits, tuberculosis history, CD4, and viral load from medical records. We then compared retention in care and viral suppression among adolescents and young adults attending the two clinics. Retention in care was defined as one clinic visit and/or ART dispensing in the six months prior to data extraction (November 2015). All adolescents and young adults who were not retained were tracked by phone call, home visit, or a National Department of Health laboratory database search to ascertain their current status and to evaluate for undocumented transfers to alternate clinics. Viral suppression was defined as a viral load <400 copies/ml from the most recent results (i.e., within the prior 6 months); missing viral load data were considered as not suppressed. Adolescents and young adults who died were considered not retained in care and not virally suppressed. Patients with documented transfers out of Ethembeni Clinic were considered retained in care if they had a clinic visit or ART dispensed elsewhere, or a viral load within three months of documented transfer. We performed a secondary analysis among all adolescents using the composite outcome of retained and suppressed, defined as one clinic visit, pharmacy refill, and viral load <400 copies/ml within the previous 12 months, in accordance with the South African National Treatment Guidelines. [25] Data were entered into a REDCap database. We used SAS version 9.4 (Cary, NC) to calculate descriptive statistics and conduct univariable and multivariable logistic regression models to assess retention in care and viral suppression. We included age at ART initiation, gender, ART duration, pre-ART CD4 (most recent CD4 prior to ART initiation), history of tuberculosis, and adolescent clinic versus standard clinic in our models because these factors were previously shown to affect mortality, retention in care and viral suppression in similar South African pediatric HIV cohorts. [26][27][28] We performed secondary analyses including: evaluating transfers as not retained at the clinic; retention and viral suppression based on current clinic attendance to account for transfers back to the standard clinic from the adolescent clinic; retention in care over the last 3 months; and viral suppression among all active patients retained in care. In addition, we performed a sensitivity analysis excluding all adolescents and young adults who transferred care or were lost to follow-up within the first 6 months of treatment, since they would not have been eligible to attend the adolescent clinic. To address potential confounding, we calculated a propensity score for adolescent clinic exposure controlling for age at ART initiation, gender, time on ART, pre-ART CD4, and history of tuberculosis. We included the propensity score in additional multivariable models to account for the chance of confounding by indication. [29] Including the propensity score reduces the chance of bias arising from covariates that may predict entry into the intervention group (adolescent clinic).
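The modelling sequence described above, a propensity score for adolescent-clinic exposure followed by an outcome model that includes that score, can be sketched as follows. The original analysis used SAS 9.4; this is a minimal Python/statsmodels illustration with hypothetical column names (adolescent_clinic, retained, and so on), not the authors' code.

```python
# Minimal sketch of the propensity-score adjustment described above.
# Column names and the input file are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cohort.csv")  # hypothetical extract of the study data
covars = ["age_at_art_init", "male", "pre_art_cd4", "months_on_art", "tb_history"]

# Step 1: propensity score = P(attending adolescent clinic | covariates)
ps = sm.Logit(df["adolescent_clinic"], sm.add_constant(df[covars])).fit()
df["pscore"] = ps.predict(sm.add_constant(df[covars]))

# Step 2: outcome model including exposure and the propensity score
X = sm.add_constant(df[["adolescent_clinic", "pscore"]])
outcome = sm.Logit(df["retained"], X).fit()
print(np.exp(outcome.params))      # adjusted odds ratios
print(np.exp(outcome.conf_int()))  # 95% CIs on the OR scale
```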
Ethics statement
The Durban University of Technology Independent Review Committee, KwaZulu-Natal Department of Health and the Partners/Massachusetts General Hospital Research Ethics Board approved this protocol and granted a waiver of informed consent.
Participants included in the analysis
A total of 254 adolescents and young adults ages 13-24 years received at least one prescription of ART from Ethembeni Clinic between 2007 and 2015. Thirteen adolescents were excluded from this analysis because they initiated ART while hospitalized for tuberculosis at Don McKenzie Hospital and transferred to alternate facilities at discharge without attending the Ethembeni Clinic. The remaining 241 adolescents and young adults had a median of 67 months of follow-up (interquartile range [IQR] 40-84), of whom 88 attended the adolescent clinic at least once and 153 adolescents remained in the standard pediatric clinic. Among the entire cohort, a total of 29 (21%) adolescents transferred, 5 (4%) died, and 18 (13%) were lost to follow-up. Through participant tracking, 11 of those lost to follow-up were found in care at an alternate facility, 4 stopped ART (2 re-engaged in care and 2 declined to resume treatment), and 3 remained lost to follow-up; there were no known additional deaths. Comparing the adolescent clinic to the standard clinic, 2% vs. 10% were lost to follow-up, 1% vs. 3% died, and 6% vs. 18% transferred, respectively, as indicated in Fig 1. Of those attending the adolescent clinic, the median time in the adolescent clinic was 30 months (15 visits) with an IQR of 22 to 43 months (11 to 22 visits).
Demographics
As shown in Table 1, the adolescents and young adults attending the adolescent clinic were significantly older at ART initiation (median 11.2 years; IQR 9.4-12.8 years) compared to those attending the standard pediatric clinic.
Retention
Overall, the retention rate was 89% (214/241). Through tracking we were able to determine current status outcomes for all adolescents and young adults except the 3 who remained lost to follow-up. We found significantly higher retention rates in adolescents and young adults attending the dedicated adolescent clinic (95%) versus those in standard care (85%; OR 3.7; 95% CI 1.2-11.1; p = 0.018). Multivariable logistic regression adjusting for age at ART initiation, gender, pre-ART CD4, months on ART, and history of tuberculosis indicated higher retention rates in adolescents and young adults attending the adolescent clinic compared to adolescents and young adults in the standard clinic (AOR = 8.5; 95% CI 2.3-32.4; p = 0.002). Younger age (AOR = 0.8; 95% CI 0.7-0.9; p = 0.010), male sex (AOR = 4.9; 95% CI 1.4-16.3; p = 0.010), and fewer years on ART (AOR = 0.8; 95% CI 0.6-1.0; p = 0.038) were also significantly associated with retention in care in this model (Table 2).
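The unadjusted comparison can be reproduced from the 2 × 2 counts. In the sketch below the cell counts are reconstructed from the reported percentages (about 84/88 retained in the adolescent clinic versus about 130/153 in the standard clinic), so they are approximations rather than figures taken from the paper's tables; the calculation nevertheless recovers OR ≈ 3.7 with a Wald 95% CI of about 1.2-11.1.

```python
# Sketch: unadjusted odds ratio and Wald 95% CI for retention.
# Counts are reconstructed from the reported percentages (approximate).
import math

a, b = 84, 88 - 84     # adolescent clinic: retained, not retained
c, d = 130, 153 - 130  # standard clinic:  retained, not retained

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(f"OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")  # ~3.7 (1.2-11.1)
```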
Viral suppression
Viral suppression results were available for 97% (233/241) of adolescents and young adults. Overall, the viral suppression rate was 81% (196/241). We found higher viral suppression rates among adolescents and young adults attending the adolescent clinic (91%) versus adolescents attending the standard pediatric clinic (80%; OR 2.5; 95% CI 1.1-5.8; p = 0.028). A multivariable logistic regression model adjusting for age at ART initiation, gender, pre-ART CD4, months on ART, and history of tuberculosis indicated higher viral suppression rates in adolescents and young adults attending the adolescent clinic compared to those attending the standard pediatric clinic (AOR = 3.8; 95% CI 1.5-9.7; p = 0.005) (Table 2). No other factors were significantly associated with viral suppression in that multivariable model.
Retained and suppressed within the last 12 months
We found that 73% (177/241) of adolescents and young adults who initiated ART met the composite outcome of being in care with a suppressed viral load within the last 12 months (multivariable results in Table 2).

[Table 2. Unadjusted and adjusted analysis comparing retention in care and viral suppression among adolescents and young adults attending an adolescent-friendly clinic compared to those attending the standard pediatric clinic in KwaZulu-Natal, South Africa.]
Secondary analyses
When considering all transfers as not retained at the Ethembeni clinic, adolescents and young adults attending the adolescent clinic were more likely to be retained in care (AOR 13.0; 95% CI 3.6-47.6; p = 0.0001) compared to adolescents and young adults attending the standard clinic. When considering current clinic attendance (accounting for transfers from the adolescent clinic back to the standard clinic), adolescents and young adults currently attending the adolescent clinic were also more likely to be retained.
Sensitivity and propensity analyses
Compared to the primary findings, we found no significant differences in the sensitivity analysis excluding the 3 adolescents from the standard clinic who were not retained in care for the first 6 months of ART. Additionally, there was no difference in retention or viral suppression outcomes after including the propensity score in the final models.
Discussion
Altogether, adolescents and young adults in this study had high rates of retention (89%) and viral suppression (81%). These findings are similar to a recent meta-analysis indicating an overall retention rate among HIV-infected South African adolescents of 83% and an overall viral suppression rate of 81%. [30] In our cohort, despite having lower pre-ART CD4 counts and older age at ART initiation, adolescents and young adults attending the dedicated adolescent clinic had even higher retention (97%) and viral suppression rates (91%) than have been described in other South African adolescent HIV cohorts. [30] Definitions of adolescent-friendly services vary, and effective mechanisms likely hinge on convenient scheduling and peer support. The WHO recommends that adolescent-friendly services be accessible, acceptable, equitable, appropriate, and effective. [21] Tanner et al. propose that the most important factors in adolescent-friendly services within the United States Adolescent Medicine Trials Network for HIV/AIDS Interventions (ATN) are the physical space and the social environment. [31] In our study, the physical environment and clinical staff were the same in each group. One major difference was in accessibility: the adolescent clinic was open on Saturdays. Adolescent-friendly services, including afternoon and weekend clinic appointments, can mitigate school absenteeism, decrease the stigma of leaving school on a regular basis, and decrease wait times. [32][33][34] Combining medication collection with a social support/adherence group also decreases transportation costs and school absences by limiting trips to the clinic for both services. For older adolescents, adolescent clinics present an opportunity to separate from parents and attend clinic alone in preparation for transition to the adult clinic. However, other settings have seen a higher risk of viral failure in adolescents without parental involvement. [35] Adolescent-friendly clinics also provide a supportive social environment where peers can interact for emotional support. [31] Structured peer group therapy has been shown to decrease negative perceptions of illness and worries about illness, and to improve viral outcomes, particularly after sustained peer involvement. [36] The combination of peer support, youth-focused clinic providers, and convenient scheduling likely contributes to the benefits of adolescent-friendly clinics. [34] Adherence clubs, similar to the model used in the adolescent clinic in our study, have been a popular method to improve adherence and retention in care. Stable, virally suppressed adults attending adherence clubs have shown higher retention in care and higher viral suppression rates than those attending standard clinics. [37,38] Although there are multiple models of adolescent adherence clubs that bring together HIV-infected youth, published results on retention and viral outcomes have been limited. [36] In our cohort, older adolescents and young adults had lower retention in care after adjusting for clinic attendance. This finding indicates that older youth were less likely to be retained in either the standard or adolescent clinics, and highlights the difficulty of preparing for transition to adult care among older adolescents and young adults, who often decrease engagement in care. [39] In our multivariate analysis, males had higher retention in care compared to females, but there was no difference in viral suppression between the sexes.
Sex differences have been seen in other cohorts and are likely multifactorial and related to local socioeconomic factors. [40][41][42][43] In addition, our cohort saw lower initial CD4 counts in adolescents and young adults attending the adolescent clinic compared to the standard clinic. This finding is likely due to temporal changes in the South African National Treatment Guidelines: older adolescents and young adults were started on ART with lower CD4 counts under the older recommendation to initiate ART at CD4 < 200 cells/μl, and these were the adolescents who filled the adolescent clinic. When the guidelines changed, younger adolescents were initiated on ART at higher CD4 counts, but the adolescent clinic had already reached capacity.
Data on the effectiveness of youth-friendly services compared to standard care on patient outcomes are limited. Teasdale et al. saw no difference in retention at 6 and 12 months for newly diagnosed youth utilizing youth-friendly services compared with youth prior to the implementation of youth-friendly services in Kenya. [44] Their study used a pre/post intervention design and only evaluated retention in care at 6 and 12 months after ART initiation. Our study used a retrospective design in which adolescents had been receiving ART for a median of 73 months. The Teasdale study may not have seen differences due to the short follow-up time or temporal differences in the pre/post design. High initial retention and viral suppression rates in adolescents that decrease over time have been previously documented in southern Africa, which could explain why no difference was seen at 6 and 12 months after ART initiation. [45] This study has several limitations. First, it was a retrospective analysis of an adolescent clinical program and was not initially designed as a research study. Because adolescents were not randomized into the clinics, it is possible that unmeasured confounders may have led to higher retention and viral suppression among adolescents attending the adolescent clinic; however, our propensity analysis did not find any significant difference in overall outcomes when adjusting for the propensity score. Another limitation is that adolescents were required to be on treatment for 6 months prior to entry into the adolescent clinic. Our sensitivity analysis excluding the three adolescents not retained in the first 6 months did not find a difference in retention or viral suppression rates, although the small number of adolescents in this situation may have limited the power of this analysis.
Conclusion
Despite lower pre-ART CD4 and older age at initiation, adolescents attending a dedicated Saturday adolescent clinic had significantly higher retention and viral suppression rates compared to adolescents attending the standard pediatric clinic. Further studies, including randomized controlled trials, are needed to confirm our results and explore factors for intervention. Additional studies are also needed to identify factors that facilitate successful delivery of care among HIV-infected adolescents as they prepare to transition to adult care.
|
2018-04-03T04:08:11.427Z
|
2017-12-29T00:00:00.000
|
{
"year": 2017,
"sha1": "4d3b07748010c1e2601ac2c9868d1a0b266cfb5c",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0190260&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d3b07748010c1e2601ac2c9868d1a0b266cfb5c",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
15948203
|
pes2o/s2orc
|
v3-fos-license
|
Prediction for the He I 10830A Absorption Wing in the Coming Event of Eta Carinae
We propose an explanation for the puzzling appearance of a wide blue absorption wing in the He I 10830A P-Cygni profile of the massive binary star Eta Carinae several months before periastron passage. Our basic assumption is that the colliding winds region is responsible for the blue wing absorption. By fitting observations, we find that the maximum outflow velocity of this absorbing material is ~2300 km/s. We also assume that the secondary star is toward the observer at periastron passage. With a toy-model we achieve two significant results. (1) We show that the semimajor axis orientation we use can account for the appearance and evolution of the wide blue wing under our basic assumption. (2) We predict that the Doppler shift (the edge of the absorption profile) will reach a maximum 0-3 weeks before periastron passage, and not necessarily exactly at periastron passage or after periastron passage.
The He I λ10830Å high-excitation line has a complex P Cygni profile, composed of three blue-shifted peaks (Damineli et al. 1998). The emission profile has significant variations over the cycle. The Doppler shifts of the peaks are of relatively low velocities, |v_peaks| < 300 km s⁻¹ (Damineli et al. 2008b). The location of the minimum of the profile (the deepest point in absorption) does not change with orbital phase, and stays at v_obs−m = −570 km s⁻¹. However, the absorption profile does change. In particular, just before periastron a wide blue wing appears in absorption, reaching ∼ −1000 km s⁻¹ a month before periastron and ∼ −1800 km s⁻¹ at periastron, and the maximum equivalent width of absorption occurs 10 days after periastron passage (Damineli et al. 2008b). Damineli et al. (2008b) approximated the average radial velocity of the absorption profile at half intensity, and found it to change from −640 km s⁻¹ before phase zero to −450 km s⁻¹ shortly after phase zero.
An additional He I λ10830Å blue absorption feature, of up to −1000 km s⁻¹, is observed at several arcseconds from the center in the lobes (Smith 2002). In this paper we refer only to the He I λ10830Å ground measurements of Damineli et al. (2008b). We note that other absorption and emission lines of He I can be formed in different regions of the binary system (see also Kashi & Soker 2007b). In particular, some visible He I lines can be formed in the hot winds of the two stars close to their origin, as compared with the He I λ10830Å line that is formed in cooler regions. We show that this cooler region can be the post-shock primary wind. Therefore, different He I lines need not have the same behavior along the orbit.
Because of the winds' very complicated flow structure, when starting this project we limited ourselves to building a toy-model in order to achieve two goals: (1) to show that the orientation where the secondary is toward us at periastron (ω = 90°) can account for the development of a wide blue absorption wing starting several weeks before periastron passage; and (2) to encourage nightly observations of the He I λ10830Å line close to periastron passage.
THE STARS AND THEIR WINDS
The η Car binary parameters used by us are compiled using results from several different papers (e.g., Ishibashi et al. 1999; Damineli et al. 2000; Corcoran et al. 2001, 2004; Hillier et al. 2001; Pittard & Corcoran 2002; Smith et al. 2004; Verner et al. 2005).
The assumed stellar masses are M1 = 120 M⊙ and M2 = 30 M⊙, the eccentricity is e = 0.9, and the orbital period is P = 2024 days. The mass-loss rates and terminal speeds are Ṁ1 = 3 × 10⁻⁴ M⊙ yr⁻¹, Ṁ2 = 10⁻⁵ M⊙ yr⁻¹, v1,∞ = 500 km s⁻¹ and v2,∞ = 3000 km s⁻¹. For these parameters, the half opening angle of the wind-collision cone is φa ≃ 60° (Akashi et al. 2006); because of the orbital motion this angle is not constant, and we calculate it along the orbit in the present paper. For the inclination angle we take i = 41° (Smith 2002, 2006).
The primary's wind speed depends on latitude (Smith et al. 2003). The minimum in the He I λ10830Å line profile suggests that the line-of-sight velocity of the primary's wind toward us is v_obs−m = −570 km s⁻¹. The two winds collide and form a flow structure, schematically drawn in Figure 1. The two winds pass through two respective shock waves and form a contact discontinuity between them. The contact discontinuity asymptotically forms a conical shell surface. The radiative cooling time of the post-shocked primary's wind is short. The post-shocked primary's wind forms a dense flow along the contact discontinuity, which we refer to as the conical shell. Absorption is expected to take place mainly outside the stagnation-point region, and so we approximate the conical shell as an ideal cone.
The He I λ10830Å absorption profile results from He I atoms which absorb from the He I λ10830Å emission line and from the continuum emitted by dust. As we are more interested in the wide wing, the continuum is more relevant to our study. As the system approaches periastron, dust is formed closer and closer to the binary system (Kashi & Soker 2008a). This makes things very complicated, as the conical shell, where the absorbing gas resides according to our model, also changes its size. Some of the dust is formed in the conical shell itself.
Other lines, e.g. Hα (Smith et al. 2003), also show some fast blueshifted absorption wings. Some of these wings disappear when the system approaches periastron. This is not a problem for our model, since those lines originate in different regions than where the He I λ10830Å line does (e.g. the Hα comes from the primary's wind; Smith et al. 2003). We emphasize again that some He I lines (in particular in the visible band) are formed in different regions than the He I λ10830Å line, and their behavior is different as well.
The binary parameter that is most controversial is the orientation of the semimajor axis, i.e., the periastron longitude. Some researchers argue that the secondary (less massive) star is away from us during periastron passages, i.e. an orbital longitude of ω = 270° (e.g., Nielsen et al. 2007; Damineli et al. 2008b); others argue that the secondary is toward us during periastron passages, ω = 90° (Falceta-Gonçalves et al. 2005; Kashi & Soker 2007b); and still different values exist in the literature (Davidson 1997; Smith et al. 2004; Dorland 2007; Henley et al. 2008; Okazaki et al. 2008a). Following our recent paper (Kashi & Soker 2008b) we will take the orientation to be such that the secondary is toward us at periastron (ω = 90°).
THE TOY-MODEL
We suggest that the conical shell is responsible for the absorption of the He I λ10830Å high excitation line. When the two winds meet at the contact discontinuity, part of the shocked fast secondary's wind is mixed with part of the shocked slow wind (Pittard 2007), and accelerates part of it to higher velocities. The conical shell has an asymptotic angle of φ a ≃ 60 • , which we take to be the flow direction of the absorbing gas.
In order to make the model more accurate we will take into account the time dependence of some parameters. The primary's wind velocity profile can be described using the β-profile,

v1(r) = vs + (v1,∞ − vs)(1 − R1/r)^β,    (1)

where vs = 20 km s⁻¹ is the sound velocity on the primary's surface, v1,∞ = 500 km s⁻¹ is the primary's wind terminal velocity, R1 is the primary's radius, and β = 1 is a parameter of the wind model.

The radial (along the line joining the two stars) component of the relative velocity between the secondary star and the primary's wind is v1 − vr, where vr is the radial component of the orbital velocity; vr is negative when the two stars approach each other. The total relative speed between the secondary and the primary's wind is

vrel = [(v1 − vr)² + vθ²]^(1/2),    (2)

where vθ is the tangential component of the orbital velocity.
The radial (along the line joining the two stars) component of the relative velocity between the secondary star and the primary's wind is v 1 − v r , where v r the radial component of the orbital velocity; v r is negative when the two stars approach each other. The total relative speed between the secondary and the primary's wind is where v θ is the tangential component of the orbital velocity. The orbital motion and the variation of the primary wind speed with distance from the primary have a small influence on the conical shell asymptotic angle φ a . We will use the expression given by Eichler & Usov (1993) where We will take into consideration the rotation of the cone relatively to the line connecting the two stars. This rotation occurs due to the orbital velocity of the conical shell, and has a considerable influence close to periastron. We define δφ to be the angle measured from the secondary between the direction to the primary and that to the stagnation point (see Soker 2005 for further details) We find that close to periastron δφ ≃ 56 • . The geometry and different parameters are shown in Figures 1, 2 and 3.
A closely-related geometry was used by Hill et al. (2000) to fit the excess emission observed in the C III λ5696Å line in the spectra of WR 42 and WR 79. Luehrs (1997) also used a somewhat different geometry to fit the excess emission observed in the C III λ5696Å line in the spectrum of WR 79. These two models were purely geometric. In the model we suggest for the absorption, some more considerations have to be taken into account.

[Figure caption: The geometry near periastron (θ = 0), where according to our model the secondary is toward us; an example of a "tube" is also shown. Both θ and δφ are measured in the equatorial plane. At each point along the orbit there is one value for each of the variables θ, δφ, and φa, while we integrate over many tubes on the conical shell, each with its own direction ψ to the observer.]
Although the absorbing gas is a continuous medium, we decompose the conical shell into 'tubes'. We define ζ to be the azimuthal angle on the surface of the conical shell; ζ = 0 in a direction perpendicular to the equatorial plane, and ζ is measured clockwise. Tubes exist from ζ = 0 to ζ = 2π. For each tube on the conical shell we calculate the angle ψ between the tube and the observer as a function of the orbital angle θ (see Figure 2):

cos ψ = cos φa sin i cos(θ − δφ) + sin φa sin ζ sin i sin(θ − δφ) + sin φa cos ζ cos i.    (6)
We take vm, with v0 < vm < vmax, to be the maximum velocity attained by a mass element, dm, in a tube; in each tube there is a large number of mass elements formed by the mixing process of the two winds. By mixing we refer also to shocked secondary-wind segments that have cooled to temperatures low enough to contribute to the absorption at high speeds. They reside near the contact discontinuity as well. Here v0 = v1,∞ = 500 km s⁻¹ is the primary's wind velocity, and vmax is a parameter of the model that is constrained to be vmax ≲ v2,∞ = 3000 km s⁻¹. The projected velocity of a mass element having a velocity vm and residing in a tube with an angle ψ to our line of sight is vD = vm cos ψ.
Consider one of our tubes; it has a length L and a circular cross section with a radius Rt, such that Rt ≪ L. The effective cross section of the absorbing tube is 2RtL sin ψ. Therefore, the contribution of each tube to absorption is multiplied by sin ψ. The amount of primary's wind accelerated to each velocity vm is hard to predict; a numerical simulation beyond the scope of this work would be needed to determine it. We simply take the velocity distribution to be constant, namely, in each tube the fraction of the gas in the velocity interval dv is constant, W(v)dv = CW dvm, where CW is a constant. Changing variable from the velocity along the tube to that along the line of sight, by using vD = vm cos ψ, this weighting function (up to a constant) is

W(vD) dvD ∝ dvD / cos ψ.    (7)

At each orbital angle θ we sum the contribution to absorption at Doppler shift vD over all tubes,

A(vD, θ) = Σζ W(vD) sin ψ ∝ Σζ (sin ψ / cos ψ),   for v0 cos ψ ≤ |vD| ≤ vmax cos ψ.    (8)

The intensity is taken to be

I(vD, θ) = 1 − Ka A(vD, θ),    (9)

where Ka is a constant of the toy-model that takes care of units and the condition 0 < I < 1. We set here Ka = 1/max(A), because we do not compare to the absorption equivalent width.
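The profile calculation itself is simple enough to sketch numerically. The snippet below is a minimal illustration of equations (6)-(9) as reconstructed above, evaluated at periastron (θ = 0): it sums sin ψ/cos ψ over tubes whose projected velocity range covers a given blueshift. The angles and vmax are the values quoted in the text, and the uniform velocity weighting is the toy-model assumption, not a physical prediction.

```python
# Sketch of the toy model (equations 6-9): absorption summed over tubes
# on the conical shell, scaled to an intensity profile at periastron.
import numpy as np

i = np.radians(41.0)       # orbital inclination
phi_a = np.radians(60.0)   # asymptotic half-opening angle of the cone
dphi = np.radians(56.0)    # rotation of the cone axis near periastron
theta = 0.0                # orbital angle (periastron)
v0, v_max = 500.0, 2300.0  # km/s: primary wind speed, fitted maximum

zeta = np.linspace(0.0, 2*np.pi, 720, endpoint=False)  # tubes on the shell
cos_psi = (np.cos(phi_a)*np.sin(i)*np.cos(theta - dphi)
           + np.sin(phi_a)*np.sin(zeta)*np.sin(i)*np.sin(theta - dphi)
           + np.sin(phi_a)*np.cos(zeta)*np.cos(i))     # equation (6)
sin_psi = np.sqrt(1.0 - cos_psi**2)

v_D = np.linspace(-v_max, 0.0, 200)  # blueshifted Doppler grid, km/s
A = np.zeros_like(v_D)
for cp, sp in zip(cos_psi, sin_psi):
    if cp <= 0:
        continue                                   # tube points away from us
    covered = (-v_D >= v0 * cp) & (-v_D <= v_max * cp)
    A[covered] += sp / cp                          # W(vD) * effective area, eq. (8)
I = 1.0 - A / A.max()                              # equation (9), Ka = 1/max(A)
```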
Our model is actually a toy-model. It assumes a simple geometry of the absorbing material, i.e., a rotated conical shell with a varying opening angle, and a mass distribution within each tube that is constant with the velocity. The model contains two types of parameters: (1) Those that are given in the literature. Such are the binary parameters and the conical shape with its opening angle φ a . These parameters are more or less in consensus. The orientation of the semimajor axis (the periastron longitude ω) is controversial, and we take the value from our previous papers ω = 90 • (secondary toward us at periastron).
(2) Parameters that are unique to our toy-model. Such is the value of vmax, which is constrained to be vmax ≲ v2,∞ = 3000 km s⁻¹. For these parameters we find that a good general fit can be obtained for vmax = 2300 km s⁻¹. The value of the absorption coefficient Ka has a small influence on our conclusions, and it serves only to give the general form of the absorption profile.
The assumption that the conical shell reaches its asymptotic opening angle at large distances breaks down as the system approaches periastron. The reason is that the relative velocity of the two stars is no longer much smaller than the primary's wind speed. This causes the winding-up of the conical shell into a spiral structure in the equatorial plane, e.g., as shown in the numerical study by Okazaki et al. (2008b). Only a close region near the binary system reaches this limiting angle. Namely, a smaller region, but much denser, will contribute to the absorption at the blue edge. However, contribution to the continuum near 1 µm comes from the stellar wind and hot dust from closer regions to the binary system (Kashi & Soker 2008a). Therefore, the much smaller region of the conical shell can still absorb a detectable fraction of the continuum. Our toy-model does not allow us to make any quantitative prediction. For that a 3D numerical code is required. Nevertheless, our toy-model does allow us to reach the two goals, as mentioned in the last paragraph of section 1.
THE BLUE ABSORPTION WING
In Figure 4 we present our results as a contour map of the intensity I, as given in equation (9), in the Doppler shift−time plane. The blue Doppler shift vD is given in units of km s⁻¹, and the vertical axis indicates days relative to (before) periastron (phase 0 at t = 0). The levels of the contours are I = 1, 0.95, 0.9, 0.85, 0.8, 0.7, 0.6, and so on, from left to right, where I = 1 indicates the edge of the absorption profile. Despite the constant weighting function W that we used, in reality we expect that the amount of helium blown at high velocities, corresponding to I ≃ 0.9−1, will be too small to be detected. We have also estimated the velocity edges of the absorption wing at four epochs from the Damineli et al. (2008b) observations; these are indicated by four horizontal error-bar lines. The somewhat noisy data made it difficult to pinpoint the exact edge of the wing, and therefore we could only estimate it to an accuracy of ∼100 km s⁻¹. Each line is centered at the approximated edge of the absorption wing, and extends ±50 km s⁻¹ in each direction. At early times there are no noticeable differences between the contour lines in the range I = 0.85−1, and all fit the two early observations equally well. Very close to periastron, the line I = 0.85 fits the observations better. However, during this time we expect the collapse of the conical shell, such that no fresh gas will be accelerated to high velocities. We cannot make an accurate prediction so close to periastron passage. Overall, we consider the lines I = 0.85−1 to be a very good fit to the observations. We modified the parameters in our model to fit the observations of Damineli et al. (2008b) from the 2003 event. With this fitting we learn about two properties of the model.
(1) We find that the orientation we use, where the secondary is toward us at periastron (ω = 90°), can account for the development of a wide blue absorption wing starting several months before periastron passage, if the absorbing material is within this conical shell. (2) According to our model, the maximum observed blue shift of the wing is reached ∼5 days before periastron passage. The observed maximum blue shift might occur at a somewhat different time for three reasons. Firstly, the binary and wind parameters might be somewhat different from those we used. Secondly, we did not take into account the winding of the conical shell; we only considered the approximate direction of its axis by calculating the angle δφ. Thirdly, close to periastron the conical shell is likely to collapse onto the secondary (Soker 2005; Kashi & Soker 2009).
With only the four available observations for the fitting, with the present possible degree of accuracy, and according to our parameters, we expect the maximum blue-shifted absorption to occur 0−20 days before periastron passage, i.e., late December or early January. There are two main reasons why we cannot be more accurate. Firstly, we cannot treat the conical shell properly as the system approaches periastron. Secondly, we expect the colliding winds region to collapse onto the secondary near periastron; we did not consider this process here either.
Our fundamental assumption is that the absorber of the blue wing of the He I λ10830Å line resides in the conical shell formed by the colliding winds. This assumption works quite well with the semimajor axis orientation ω = 90°, where the secondary is closest to the observer at periastron passage. We now show that other orientations that are popular in the literature cannot account, not even qualitatively, for the behavior of the blue wing, if our fundamental assumption holds. We followed the calculations presented in Figure 4, but for three different semimajor axis orientations, as drawn in Figure 5: ω = 0: the semimajor axis is perpendicular to the line of sight and the secondary is closer to the observer before the event. ω = 180°: the semimajor axis is perpendicular to the line of sight and the secondary is closer to the observer after the event. ω = 270°: the secondary is closest to the observer at apastron, opposite to our favorite orientation of ω = 90° (Figure 4). The results for the expected Doppler shifts of the absorption wing are presented in Figure 6, together with the four observations from Damineli et al. (2008b). It is clear that none of the other orientations can account for the observed absorption wing, not even qualitatively, and not even for the two early observations alone.
SUMMARY AND PREDICTION
We study the blue absorption wing of the He I λ10830Å P Cygni profile of η Car. The two winds of the two stars collide, and the post-shocked gas of the two winds flows on both sides of a surface (the discontinuity surface) that has a pseudo-conical structure: the conical shell. The shocked secondary's wind accelerates part of the shocked primary's wind to high velocity (Figure 1). This gas, and the segments of the shocked secondary's wind which cool to a low temperature near the contact discontinuity (Figure 1), are assumed to be responsible for the blue absorption wing. This is the fundamental assumption of our model. We use our previous results and assume an orientation where the secondary is toward the observer at periastron (ω = 90°; see Figure 5). For the conical shell we built a toy model (Figure 2).
The absorption profile, up to a scaling factor, is calculated according to equations (8) and (9), and the results are presented in Figure 4. We are interested in the bluest part of the absorption profile, where the intensity of the absorption is lower; in our scaled units these are the contours in the range I ≃ 0.8−1. Using our fundamental assumption, and a toy model for the conical shell of the colliding winds, we showed that with our orbital orientation we can account for the appearance of the wide blue wing several months before periastron. Other semimajor axis orientations ω cannot reproduce the results under our fundamental assumption (Figure 6). This is our main result, namely, that if the absorber responsible for the blue wing of the He I λ10830Å line resides in the winds collision region, then only ω ≃ 90° can account for the blue wing of this line.
Our results also predict that the Doppler shift vD (the edge of the profile) will reach a maximum 0−3 weeks before periastron passage. Since close to periastron the conical shell starts to collapse onto the secondary, and even before that it experiences the effect of wrapping, it is hard to pinpoint the exact time of this maximum. Nevertheless, this maximum should be observed before the event.

[Figure 6 caption: Same as Figure 4, but for different values of ω, as schematically depicted in Figure 5. None of the other orientations can account for the observed absorption wing under our fundamental assumption that the absorber resides in the conical region formed by the collision of the two winds. In particular, the opposite orientation, ω = 270°, yields behavior entirely opposite to the observations, with minimum absorption at periastron.]
|
2008-12-05T08:06:42.000Z
|
2008-08-29T00:00:00.000
|
{
"year": 2009,
"sha1": "087c9fbac96365590e79d16c92d4ed1c4c88ed25",
"oa_license": null,
"oa_url": "https://academic.oup.com/mnras/article-pdf/394/2/923/3708729/mnras0394-0923.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "89d24380170cc5486b7241c63fb7e66ad91c2c6c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
191164974
|
pes2o/s2orc
|
v3-fos-license
|
Effect of deposition conditions and buffer layers on amorphous or polytype phase formation in Al2O3 thin films by chemical vapor deposition using tri-methyl aluminum
Al2O3 thin films were deposited on (001)Si substrates through a Cr2O3/yttria-stabilized-zirconia (YSZ) buffer layer by a cold-wall type chemical vapor deposition method with tri-methyl aluminum as a raw material. By changing the deposition temperature, different polytypes of Al2O3 thin films were formed. At lower temperatures (1123–1173 K), θ-Al2O3 and amorphous Al2O3 were found in mixture. With increasing deposition temperature, the series of Al2O3 polytypes (γ, κ and α) appeared in the order of decreasing unit cell volume per Al atom. At 1323 K, a single-phase α-Al2O3 thin film was successfully obtained. On the (00l)Cr2O3/YSZ/Si substrate, an epitaxial (00l)α-Al2O3 thin film was grown; however, on the (00l)YSZ/Si substrate, an epitaxial (00l)κ-Al2O3 thin film was formed. This shows that the buffer layer also has much influence on the polytype of the Al2O3 thin film. On the other hand, corresponding grains of polycrystalline α-Al2O3 and Cr2O3 share the same Miller indices (h k l); therefore, a polycrystalline α-Al2O3 thin film was deposited on the polycrystalline Cr2O3 buffer layer in which each Al2O3 and Cr2O3 grain has an epitaxial relation. This epitaxial growth can be explained by both the similarity in crystal structure between the Al2O3 film and the Cr2O3 buffer layer, and also their moderate lattice mismatch.
Introduction
Among the various polytypes of Al2O3, such as the α, κ, γ, δ, θ, η, and χ phases,1) the α phase is the only stable one; the others are metastable, with higher Gibbs formation energies.2) Temperatures higher than around 1473 K are necessary to form α-Al2O3 after the formation of other polytypes or of the amorphous phase.3) As cutting-tool coating materials, metastable κ-Al2O3 and stable α-Al2O3 are used because of their high hardness and heat resistance.4),5) To make these Al2O3 coatings in industry, chemical vapor deposition (CVD) is applied due to its excellent step coverage and large deposition area. In the CVD process, choosing the aluminum source material and controlling its gas-phase oxidation are important.
For aluminum source materials, many materials have been tested, such as aluminum tri-isopropoxide,6) aluminum chloride (AlCl3),4),5),7),8) aluminum acetylacetonate [Al(acac)3],9) and tri-methyl aluminum [TMA, Al(CH3)3].10)–14) AlCl3 has been used with CO2–H2 to deposit α-Al2O3 thin films by thermal CVD in the cutting-tool industry since the 1970s.7) In this process, H2 reacts with CO2 to create a low oxygen-partial-pressure atmosphere and to prevent rapid oxidation of AlCl3 in the gaseous phase.4),5),7) However, Al2O3 thin films deposited at lower temperatures tend to include chlorine residue.8) Recently, much attention has been paid to TMA [Al(CH3)3] as an aluminum source. By atomic layer deposition,11)–13) dense and amorphous Al2O3 thin films can be obtained. By plasma-enhanced CVD,10) γ-Al2O3 thin films were deposited with N2O as an oxidizing agent on Si substrates. By thermal CVD,14) γ-Al2O3 thin films were also deposited with O2 on GaN. However, no report describes α-Al2O3 thin-film deposition on Si substrates with TMA as a raw material.
To fabricate crystalline thin films in CVD or PVD processes, the buffer-layer technique has great potential. Indeed, Cr2O3 thin films (corundum structure) have been grown epitaxially on yttria-stabilized-zirconia (YSZ)-buffered (001)Si substrates using the pulsed laser deposition (PLD) method.15) Similarly, by the sputtering method,16) a polycrystalline α-Al2O3 thin film was deposited on a polycrystalline Cr2O3-buffered Si substrate in which each Al2O3 and Cr2O3 grain has an epitaxial relation. No report, however, has described an α-Al2O3 thin film deposited on a Cr2O3/YSZ/Si substrate using CVD so far. If α-Al2O3 thin films can be deposited on Cr2O3-buffered Si substrates by the CVD method, it will be useful information for next-generation cutting-tool coatings.
In this study, Al2O3 thin-film formation was investigated with a horizontal cold-wall type thermal CVD using TMA as a raw material, by flowing O2 as the oxidizing gas together with Ar over buffered (001)Si substrates. Discussion is made on how the Al2O3 polytype changed depending on the deposition temperature and the buffer layer structure.
Experimental
In this study, single-crystal Si was used as the substrate. Buffer layers were deposited on the (001) plane of Si using PLD. The four types of buffered substrates were YSZ/Si, Cr2O3/YSZ/Si, P-Cr2O3/P-CeO2/Si, and P-Cr2O3/P-YSZ/Si, where P- denotes polycrystalline. The P-YSZ was formed by the following process: after an amorphous YSZ film was deposited on the Si substrate at room temperature (298 ± 5 K) using PLD, it was annealed at 1373 K for 1.8 × 10⁴ s. The other buffer layers were also deposited by PLD using a KrF excimer laser (λ = 248 nm) at a substrate temperature of 1023 K in vacuum with an oxygen partial pressure of 7.33 × 10⁻² Pa. The laser fluence and the repetition rate were approximately 1.2 J cm⁻² and 7 Hz, respectively. The YSZ, Cr2O3, and CeO2 targets for PLD were fabricated by solid-state sintering. That is, YSZ (8 at% Y2O3–92 at% ZrO2), Cr2O3, and CeO2 powders were compacted into pellets of 20 mm in diameter and 5 mm in thickness. The pellets were sintered at 1723 K for 1.08 × 10⁴ s, at 1573 K for 2.16 × 10⁴ s, and at 1573 K for 5.4 × 10⁴ s, respectively. In addition to the buffered Si substrates above, a Cr2O3 sintered body was used as a substrate.
On these substrates, Al2O3 thin films were deposited using the low-pressure, horizontal, cold-wall type thermal CVD. Figure 1 shows a schematic diagram of this process. TMA was purchased from Tri Chemical Lab. Inc., Japan, and was introduced into the CVD chamber without carrier gas. The flow rate of TMA vapor was adjusted by a needle valve. After the substrate was set on an Inconel susceptor (50 mm in diameter × 31 mm, Inconel 601), it was heated by induction heating (RF-5KN4, 20–50 kHz, 5 kW; NEC Tokin Corp., Japan). Table 1 summarizes the CVD deposition parameters. For the temperature calibration, actual substrate temperatures were measured using a K-type thermocouple attached to the upper surface of a 10 × 10 × 0.5 mm³ Si substrate with an Al2O3-based cement paste (Aron Ceramics; Toagosei Chemical Industry Co. Ltd., Japan).
The film thickness was measured with a surface profilometer (Dektek 3; Sloan Technology Corp., USA). The phases and crystallinity of the deposited films were evaluated using an X-ray diffractometer (XRD, MPD; Malvern PANalytical, Netherlands) with Cu Kα radiation. Crystal orientation and reciprocal-space-mapping measurements were done with a thin-film XRD (MRD; Malvern PANalytical, Netherlands) to analyze the epitaxial relations between the film and the buffer layer.
Results and discussion
The effect of the deposition (substrate) temperature on the Al2O3 polytype is shown in Figure 2. This is the case of the Cr2O3/YSZ/Si substrate (both the Cr2O3 and YSZ buffer layers were 30 nm in thickness). In Fig. 2(a), good lattice matching can be seen between (001)YSZ and (001)Si, and also between (001)Cr2O3 and (001)YSZ. The former epitaxial growth is well known.17) The latter epitaxial growth has a twin domain in the c-axis direction.15) Figure 3 shows the relation between the deposition rate of the Al2O3 thin films and the deposition temperature. Since TMA tends to react with oxygen rapidly and forms oxide particles by gas-phase homogeneous nucleation, decomposition of TMA increases with increasing deposition temperature. Simultaneously, the CVD reaction at the substrate surface is enhanced with increasing temperature. As shown in Fig. 3, the deposition rate rises gradually between 1123 and 1173 K, because the gas-phase oxidation and homogeneous nucleation are moderate and because the amount of TMA supplied to the CVD process is close to constant in the lower temperature range. From 1173 to 1223 K, the gas-phase homogeneous nucleation is slightly enhanced; the amount of TMA is then reduced, but the CVD reaction on the substrate is enhanced, so the deposition rate increases. From 1223 to 1323 K, the amount of TMA decreases dramatically because gas-phase homogeneous nucleation occurs rapidly. Based on the discussion above, the Al2O3 deposition rate as a function of the deposition temperature has a maximum, as shown in Fig. 3. It is well known that amorphous Al2O3 decreases the strength of Al2O3-coated cutting tools.18) Therefore, industry needs Al2O3 films containing higher amounts of crystalline Al2O3 polytypes, especially α- or κ-Al2O3. To clarify how much of each Al2O3 polytype is present in the thin films of this study, a semi-quantitative estimation of the amount of each Al2O3 polytype per unit volume of the Al2O3 thin film was carried out. First, the amount of each crystalline Al2O3 polytype was evaluated using the structure factor (F) with Rietveld software (X'Pert HighScore Plus, ver. 4.7; PANalytical). The total amount of crystalline phase was obtained by using the intensity and F values for all detected polytypes, and the amounts of the amorphous Al2O3 phase in the films were then compared.
For the XRD measurements, the accelerating voltage and current were fixed at 40 kV and 30 mA, respectively. The X-ray irradiation areas at the diffraction angles of α(0 0 12), κ(006), γ(333), and θ(333) were adjusted to be almost equal. The integrated intensity of the XRD peaks per unit volume was calculated from the intensities in Fig. 2 and the thicknesses in Fig. 3. The Inorganic Crystal Structure Database was referred to for the Al and O atom positions of each polytype. The measured XRD peak intensity is proportional to F². The multiplicity factor is regarded as unity because of the epitaxial thin film. The volume fraction of α-Al2O3 in a unit volume of thin film must be proportional to I_α(hkl)/(t·|F_α(hkl)|²), where I_α(hkl) denotes the observed integrated intensity for the (h k l) reflection, t expresses the film thickness, and F_α(hkl) stands for the structure factor for the (h k l) reflection. The fraction of each Al2O3 polytype in the unit volume of the Al2O3 thin film can then be described by normalizing these quantities, for example f_α = [I_α(hkl)/(t·|F_α(hkl)|²)] / Σ_i [I_i(hkl)/(t·|F_i(hkl)|²)], where the sum runs over all detected polytypes. Figure 4 shows the effect of the deposition temperature on the volume fraction of each Al2O3 polytype in the unit volume of thin film. The total amount of crystalline Al2O3 polytypes is also shown in the figure. From Fig. 4(a), it is seen that the total amount of crystalline Al2O3 polytypes increases with increasing deposition temperature, and it is almost the same as the α-Al2O3 fraction. On the other hand, we can estimate the amount of amorphous phase at each deposition temperature semi-quantitatively from Fig. 4. This estimation also shows that a small amount of θ-phase and a large amount of amorphous phase were deposited at the lower deposition temperatures (1123 and 1173 K). Similarly, small amounts of κ-phase and large amounts of amorphous phase were deposited at the intermediate deposition temperature (1223 K). At higher deposition temperatures (1273 and 1323 K), larger amounts of α-Al2O3 were deposited on the Cr2O3/YSZ/Si substrate. These results agree well with Fig. 2. As described before, the θ-, γ- and κ-phases are metastable thermodynamically. Also, both the θ- and γ-phases have structural defects in their crystal structures, with the result that these polytypes crystallize at a lower temperature compared with α-Al2O3. Since the Cr2O3 buffer layer has the same corundum structure as that of α-Al2O3,16) a large amount of α-Al2O3 thin film was formed on Cr2O3/YSZ/Si at 1323 K through heteroepitaxial growth. Figure 5 shows the effect of the TMA flow rate on (a) the film deposition rate, (b) the volume fractions of each Al2O3 polytype, and (c) the total amount of crystalline Al2O3 polytypes in the unit volume, calculated in the same way as in Fig. 4. From Fig. 5(a), the deposition rate is approximately constant at TMA flow rates below 4.3 × 10⁻⁹ m³ s⁻¹, and then rapidly increases in the range of 4.7 × 10⁻⁹ to 5.0 × 10⁻⁹ m³ s⁻¹. At the same time, the summation of the crystalline Al2O3 polytypes decreases. Under these deposition conditions, α-Al2O3 is dominant in the crystalline phase, so the amount of α-Al2O3 decreases with increasing TMA flow rate, although κ-Al2O3 formed above 3.8 × 10⁻⁹ m³ s⁻¹ and γ-Al2O3 formed above 4.3 × 10⁻⁹ m³ s⁻¹. Table 2 shows the volume per Al atom in the unit cell of each polytype at 1323 K.
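The normalization described above is a short calculation once the integrated intensities, film thickness, and structure factors are in hand. The sketch below illustrates it with placeholder values; the intensities and |F|² values are invented for illustration and are not data from this paper.

```python
# Sketch of the semi-quantitative estimate: each polytype's amount is
# proportional to I_(hkl) / (t * |F_(hkl)|^2); fractions are normalized.
# All numerical values here are placeholders, not measured data.
intensities = {"alpha": 1.0e4, "kappa": 2.0e3, "gamma": 5.0e2}  # I_(hkl), a.u.
F2 = {"alpha": 180.0, "kappa": 95.0, "gamma": 60.0}             # |F_(hkl)|^2, a.u.
t = 1.0e-6                                                      # film thickness, m

raw = {p: intensities[p] / (t * F2[p]) for p in intensities}
total = sum(raw.values())
fractions = {p: v / total for p, v in raw.items()}              # crystalline fractions
print(fractions)
```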
The unit cell volumes of α-Al2O3, κ-Al2O3, and γ-Al2O3 at 1323 K were calculated using the International Centre for Diffraction Data (ICDD) crystal structure data for α (ICDD 01-070-5679), κ (ICDD 00-052-0803), and γ (ICDD 00-056-0457) at room temperature (298 K), and the thermal expansion coefficient (TEC) data given in the literature.19)–21) Since the TEC data of κ-Al2O3 and γ-Al2O3 were limited to the temperature range below 1323 K, the unit-cell volumes were calculated based on extrapolated TEC data.
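The extrapolation to 1323 K amounts to scaling a room-temperature unit-cell volume with a volumetric expansion. The sketch below assumes a constant linear TEC of about 8 × 10⁻⁶ K⁻¹ and a room-temperature cell volume of about 254.8 Å³ for α-Al2O3; both are typical handbook values, not the specific literature data used in the paper.

```python
# Sketch: extrapolating a unit-cell volume from 298 K to 1323 K with a
# constant linear TEC (volumetric expansion ~ 3 * alpha_linear).
# The alpha-Al2O3 values below are typical handbook numbers (assumed).
def cell_volume_at_T(V_298, alpha_linear, T):
    """Unit-cell volume at temperature T (K) from its 298 K value."""
    return V_298 * (1.0 + 3.0 * alpha_linear * (T - 298.0))

V_alpha_298 = 254.8  # A^3, hexagonal cell of alpha-Al2O3 at ~298 K
print(cell_volume_at_T(V_alpha_298, 8e-6, 1323.0))  # ~261 A^3
```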
When increasing the TMA flow rate, the series of Al2O3 polytypes α-, κ- and γ-Al2O3 appeared in Fig. 5, and finally the amorphous phase may become the main phase, in the order of increasing volume per Al atom in each unit cell. This tendency arises because the higher TMA flow rate made it difficult to form the thermodynamically stable phase, i.e. the dense crystal structure of α-Al2O3, under a high supply rate of the raw material. On the other hand, with increasing deposition temperature, the amorphous phase and the series of Al2O3 polytypes θ-, γ-, κ- and α-Al2O3 appeared in Fig. 4, in the order of decreasing volume per Al atom in each unit cell. Under this hypothesis, the volume per Al atom should be estimated at 1323 K (Table 2). On the other hand, polycrystalline Cr2O3 thin films are known to be formed on gold-coated glass at a temperature as low as 673 K using CVD.22) In fact, Cr2O3 has no polytypes such as the α-, κ-, or γ-types of Al2O3. One reason for this is that chromium has multiple valences (0 to +6).23) Therefore, corundum-type Cr2O3 can be deposited at lower temperatures. However, aluminum shows a single ionic valence (+3), and the formation of crystalline Al2O3 is rather difficult: the structure must be arranged with the stoichiometry of two aluminum atoms to three oxygen atoms. Amorphous Al2O3 forms easily when appropriate ratios of aluminum and oxygen are supplied. This multivalency must also affect the difficulty of α-Al2O3 formation at lower temperatures. Figure 6 shows the asymmetric (229) reciprocal-space mapping (RSM) of the α-Al2O3 thin film grown on the Cr2O3/YSZ/Si substrate. From Fig. 6 and pole-figure analysis (data not shown), one finds that Cr2O3 and α-Al2O3 have good lattice matching, and the YSZ, Cr2O3 and α-Al2O3 thin films were found to have grown epitaxially. Comparing the spacing of the α-Al2O3 (229) planes with that of the Cr2O3 (229) planes, it is seen that the film grew with its own in-plane lattice constant. Figure 8 shows the estimated unit cell configurations and lattice mismatch (f) between the Al2O3 thin films and the top buffer layer at 1323 K, at which the largest amount of α-Al2O3 was formed. The lattice constants of α-Al2O3, κ-Al2O3, Cr2O3, and YSZ at 1323 K were calculated using the lattice constants at room temperature from the ICDD cards and the thermal expansion coefficients given in the literature.19),20),24) For simplicity, we ignored the effects of thermal stress from the TEC difference between the Al2O3 film and the substrate in this calculation. Thermodynamically, α-Al2O3 is the most stable phase among the Al2O3 polytypes. Cr2O3 has the same corundum structure as that of α-Al2O3. κ-Al2O3 has an orthorhombic lattice; YSZ has a cubic lattice. Actual calculation shows that the lattice mismatch between α-Al2O3 and Cr2O3 is −4% (as shown in Fig. 8), which is rather large for normal epitaxial growth of thin films; however, the same crystal structure (corundum) permits epitaxial growth even if the lattice mismatch is large.
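The mismatch quoted above follows from f = (a_film − a_substrate)/a_substrate. The sketch below uses standard room-temperature a-axis values for α-Al2O3 and Cr2O3 (handbook numbers, not the paper's 1323 K lattice constants) and reproduces a mismatch of about −4%.

```python
# Sketch: lattice mismatch f = (a_film - a_substrate) / a_substrate.
# Room-temperature a-axis values (Angstrom) are assumed handbook numbers.
def mismatch(a_film, a_substrate):
    return (a_film - a_substrate) / a_substrate

a_alpha_Al2O3, a_Cr2O3 = 4.759, 4.959
print(f"{mismatch(a_alpha_Al2O3, a_Cr2O3):+.1%}")  # about -4%
```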
Conversely, two κ-Al2O3 unit cells can be deposited on one Cr2O3 unit cell with a lower lattice mismatch of −2.6 and −3.2%, as shown in Fig. 8. In this case, however, the difference in crystal structure may explain why the κ-Al2O3 film was deposited on Cr2O3 only at a limited temperature (1223 K in Fig. 2). Furthermore, this also explains why the (242) κ-Al2O3 diffraction peak appeared for deposition on the polycrystalline Cr2O3 buffer layer even at a higher deposition temperature (1323 K), as shown in Figs. 7(c) and 7(d).
When YSZ is used as the substrate instead of Cr2O3, the lattice mismatch between α-Al2O3 and YSZ is quite large (−20 and −7.6%). Moreover, their crystal systems (hexagonal and cubic) differ, so it is difficult to configure the α-Al2O3 unit cell on the YSZ unit cell with an epitaxial relationship. Perhaps for this reason, α-Al2O3 could not be deposited on YSZ [Fig. 7(b)]. On the other hand, the lattice mismatch between κ-Al2O3 and YSZ is −6.2 and 7.5%, with unit-cell configurations suitable for epitaxial film growth (two κ-Al2O3 unit cells deposited on three YSZ unit cells).
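As a numerical illustration of the mismatch argument, the sketch below thermally expands room-temperature lattice constants to the deposition temperature and evaluates f = (a_film − a_substrate)/a_substrate. The a-axis values are standard room-temperature figures and the TECs are rough assumed values, not the ICDD/literature data used in this work.

```python
# Lattice mismatch f = (a_film - a_sub) / a_sub at the deposition
# temperature, with a(T) = a_RT * (1 + alpha * (T - 298 K)).
# Approximate constants for illustration only.

def a_at_T(a_rt_nm: float, alpha_per_K: float, T_K: float) -> float:
    """Linearly expand a room-temperature lattice constant to T."""
    return a_rt_nm * (1.0 + alpha_per_K * (T_K - 298.0))

T = 1323.0  # deposition temperature (K)

# Approximate a-axis values (nm) and assumed linear TECs (1/K).
a_film = a_at_T(0.4759, 8e-6, T)   # corundum alpha-Al2O3
a_sub  = a_at_T(0.4959, 8e-6, T)   # corundum Cr2O3

f = (a_film - a_sub) / a_sub
print(f"alpha-Al2O3 on Cr2O3 at {T:.0f} K: f = {100 * f:+.1f}%")
# With these inputs f is about -4%, the same sign and magnitude as
# quoted in the text for alpha-Al2O3 on Cr2O3.
```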
From these discussions, it is concluded that the epitaxial growth can be explained by both the similarity in crystal structure and the moderate lattice mismatch between the α-Al2O3 thin film and the Cr2O3-buffered Si substrate, and between the κ-Al2O3 thin film and the YSZ-buffered Si substrate.
Conclusion
Using cold-wall thermal CVD with TMA as the raw material, Al2O3 thin films were deposited on Cr2O3/YSZ-buffered Si substrates. We investigated the effects of the substrate temperature, deposition rate, and different buffer layers on the growth of crystallized Al2O3 films. By changing the deposition conditions, especially the deposition temperature and the buffer layers, films of various Al2O3 polytypes (α, κ, γ, and θ) and amorphous Al2O3 films were obtained. α-Al2O3 thin films were grown epitaxially on the Cr2O3/YSZ-buffered Si substrate at 1323 K; the Cr2O3 buffer layer enabled this epitaxial α-Al2O3 film growth. When a P(polycrystalline)-Cr2O3 top buffer layer was used, P-α-Al2O3 films were obtained on P-Cr2O3/CeO2/Si, P-Cr2O3/P-YSZ/Si, and Cr2O3 sintered bodies. The crystal orientations of the three P-α-Al2O3 films differed slightly, but each was strongly inherited from that of the underlying P-Cr2O3. The Cr2O3 buffer layer thus played a very important role in growing α-Al2O3. In conclusion, the epitaxial growth can be explained by both the similarity in crystal structure and the moderate lattice mismatch between the Al2O3 thin film and the Cr2O3-buffered Si substrate.
Violence in the Hearts of Galaxies - Aberration or Adolescence?
Violent activity in the nuclei of galaxies has long been considered a curiosity in its own right; manifestations of this phenomenon include distant quasars in the early Universe and comparatively nearby Seyfert galaxies, both thought to be powered by the release of gravitational potential energy as material from the host galaxy accretes onto a central supermassive black hole (SMBH). Traditionally, the broader study of the formation, structure and evolution of galaxies has largely excluded active galactic nuclei. Recently however, this situation has changed dramatically, both observationally and theoretically, with the realisation that the growth and influence of the SMBH, the origin and development of galaxies and nuclear activity at different epochs in the Universe may be intimately related. I review the intriguing evidence for causal links between supermassive black holes, nuclear activity and the formation and evolution of galaxies, and describe opportunities for testing these relationships using the next generation of earth-bound and space-borne astronomical facilities. (Abridged)
Introduction
Over the last 50 years, astronomers have been intrigued by enormously energetic objects called Active Galactic Nuclei (AGN), a violent phenomenon occurring in the nuclei, or central regions, of some galaxies with intensities and durations which cannot easily be explained by stars, thus providing some of the first circumstantial evidence for theoretically-predicted supermassive black holes. Despite their intriguing properties, they were largely viewed as interesting but unimportant freaks in the broader study of galaxy formation and evolution, leading astronomers studying the properties of galaxies to exclude the small fraction of galaxies with active centres as irritating aberrations. Here I describe the discovery of AGN and the variety of classifications that followed; I describe some features of unifying models of the central engine that attempt to explain the varied properties of different AGN classes that give rise to the classification. The search for supermassive black holes in AGN and non-active galaxies is discussed along with the developing realisation that all galaxies with significant bulge components might harbour dormant supermassive black holes as remnants of a past adolescent period of quasar activity and therefore possess the potential to be re-triggered into activity under the right conditions, making nuclear activity an integral part of galaxy formation and evolution.
The Early Studies of Active Galactic Nuclei
The discovery of AGN began with the development of radio astronomy after World War II, when hundreds of sources of radio waves on the sky were detected and catalogued (e.g. the Third Cambridge Catalogue (3C), Edge et al. 1959, and its revision (3CR), Bennett 1961), but the nature of these strong radio emitters was unknown. Astronomers at Palomar attempted to optically identify some of the catalogued radio sources; Baum & Minkowski (1960) discovered optical emission from a faint galaxy at the position of the radio source 3C295 and, on studying the galaxy's spectrum, or cosmic bar-code, measured its redshift and inferred a distance of 5000 million light years, making it the most distant object known at that time. Distances can be inferred from Hubble's law, whereby the more distant an object, the faster it appears to be receding from us, due to the expansion of the Universe. Chemical elements present in these objects emit or absorb radiation at known characteristic frequencies and, when observed in a receding object, the observed frequency is reduced, or redshifted, due to the Doppler effect; the same physical process causes a receding ambulance siren to be lowered in pitch after it passes the observer.
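As a rough worked example of this reasoning (a sketch only: the Hubble constant below is an assumed modern value, and the simple v = cz approximation is used, which holds only at low redshift):

```python
# Distance from redshift via Hubble's law: v = c*z (low-z limit),
# d = v / H0. Illustrative sketch with an assumed H0.

C_KM_S = 299_792.458   # speed of light (km/s)
H0 = 70.0              # assumed Hubble constant (km/s/Mpc)

def distance_low_z_mpc(z: float) -> float:
    """Approximate distance in Mpc for small redshift z."""
    return C_KM_S * z / H0

# 3C273's redshift corresponds to ~16% of the speed of light.
z = 0.158
d = distance_low_z_mpc(z)
print(f"3C273: v ~ {C_KM_S * z:,.0f} km/s, d ~ {d:,.0f} Mpc "
      f"(~{d * 3.26 / 1000:.1f} billion light years)")
```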
Attempts to find visible galaxies associated with other strong radio sources such as 3C48, 3C196 and 3C286 failed, and only a faint blue, star-like object at the position of each radio source was found, thus leading to their name 'quasi-stellar radio sources', or 'quasars' for short. The spectrum of these quasars resembled nothing that had previously been seen for stars in our Galaxy, and these blue points remained a mystery until Maarten Schmidt (1963) concentrated on 3C273, for which an accurate radio position was known (Hazard, Mackey & Shimmins 1963). The optical spectrum of the blue source associated with the radio emitter seemed unidentifiable until Schmidt realised that the spectrum could be clearly identified with spectral lines emitted from hydrogen, oxygen and magnesium atoms if a redshift corresponding to 16% of the speed of light was applied. The same technique was applied successfully to 3C48 (Greenstein & Matthews 1963) and demonstrated that these objects are not members of our own galaxy but lie at vast distances and are super-luminous. Indeed, the radiation emitted from a quasar (L ≳ 10¹³ L⊙, where the Sun's luminosity is L⊙ = 3.8 × 10²⁶ W) is bright enough to outshine all the stars in its host galaxy. Such energies cannot be produced by stars alone, and it was quickly realised that the release of gravitational potential energy from material falling towards, or being accreted by, a supermassive black hole at the galaxy centre, ∼100 times more energy efficient than nuclear fusion, was the only effective way to power such prodigious outputs (Lynden-Bell 1969).
A black hole is a region of space inside which the pull of gravity is so strong that nothing can escape, not even light. Two main kinds of black holes are thought to exist in the Universe. Stellar-mass black holes arise from the collapsed innards of a massive star after its violent death, when it blows off its outer layers in a spectacular supernova explosion; these black holes have masses slightly greater than the Sun's but are compressed into a region only a few kilometres across. In contrast, supermassive black holes, which lurk at the centres of galaxies, are 10 million to 1000 million times more massive than the Sun and contained in a region about the size of the Solar System. The emission of radiation from a supermassive black hole appears at first to be contradictory; however, the energy-generating processes take place outside the black hole's point-of-no-return, or event horizon. The mechanism involved is the conversion of gravitational potential energy into heat and light by frictional forces within a disk of accreting material, which forms from infalling matter that still possesses some orbital energy, or angular momentum, and so cannot fall directly into the black hole.
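These size scales follow directly from the Schwarzschild radius, r_s = 2GM/c². A quick numerical check (the masses chosen are illustrative round numbers):

```python
# Schwarzschild radius r_s = 2GM/c^2 for stellar-mass and
# supermassive black holes. SI constants.

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
C = 2.998e8          # speed of light (m/s)
M_SUN = 1.989e30     # solar mass (kg)
AU = 1.496e11        # astronomical unit (m)

def schwarzschild_radius_m(mass_kg: float) -> float:
    return 2.0 * G * mass_kg / C**2

for m_solar in (10.0, 1e8):
    r = schwarzschild_radius_m(m_solar * M_SUN)
    print(f"M = {m_solar:.0e} M_sun -> r_s = {r:.2e} m ({r / AU:.2e} AU)")
# ~10 M_sun gives r_s of a few tens of km; 10^8 M_sun gives ~2 AU,
# i.e. a region of planetary-orbit size, as stated above.
```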
Radiation from AGN is detected across the electromagnetic spectrum and today, nuclear activity in galaxies has been detected over a wide range of luminosities, from the most distant and energetic quasars, to the weaker AGN seen in nearby galaxies, such as Seyferts (Seyfert 1943), and even the nucleus of our own Milky Way.
AGN Orientation -Looking at it from All Angles
After the initial discovery of radio-loud AGN, the advent of radio interferometry soon led to detailed images of these strong radio emitters (e.g., Bridle & Perley 1984;Bridle et al. 1994) which revealed remarkable long thin jets of plasma emanating from a central compact nucleus and feeding extended lobes, often at considerable distances from the AGN, millions of light years in the most extreme cases. The radio emission is synchrotron radiation produced by electrons spiraling around magnetic fields in the ejected plasma; figure 1 shows a radio image of the classic radio galaxy Cygnus A in which the nucleus, jets and lobes are visible. These dramatic jets and clouds of radio-emitting plasma were interpreted as exhaust material from the powerful central engine (Scheuer 1974;Blandford & Rees 1974).
(a) Too fast to believe - the remarkable jets in radio-loud AGN

The sharpest radio images, made repeatedly over many years using networks of radio telescopes spanning the globe, resulted in 'movies' of the motion of material in the jets. The blobs of plasma in these jets were apparently being ejected at many times the speed of light, c, appearing to violate fundamental laws of physics. It was quickly realised that such superluminal motion was an optical illusion caused by the plasma moving at relativistic speeds, i.e. ≳ 0.7c, and being ejected towards us at an angle close to our line of sight (e.g., Blandford & Rees 1978). Relativistic motion appears to be present for jet matter over hundreds of thousands of light years, and the detailed physical driving mechanisms remain an area of active study. The relativistic motion of jet matter has an enormous impact on the appearance of these objects and is possibly the single most important contributor to the variety of observed morphological types. The fast motion of jet material also causes extreme apparent brightening, or Doppler boosting, of the radiation and greatly amplifies any flickering, or variability, in the light levels. Today, the wide range of observed radio structures, brightnesses and levels of variability can be understood in terms of the angle at which we view the high-speed plasma jet. Radio galaxies like Cygnus A are orientated perpendicular to our line of sight, lying in the plane of the sky; they appear rather symmetrical and, as expected, show no variability or superluminal motion. At the other extreme are bright, compact and highly variable BL Lac objects, which are being observed head-on. Figure 2 shows a sketch of this model, in which a jet of plasma is ejected from either side of the central engine at relativistic speeds; object classification depends on the angle of the jet to our line of sight. Objects viewed at intermediate angles are seen as either extended, 'lobe-dominated' quasars or relatively compact, 'core-dominated' quasars (see also Urry & Padovani 1995).
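The geometry of this illusion is captured by one formula: a blob moving at speed βc at an angle θ to the line of sight has an apparent transverse speed β_app = β sin θ / (1 − β cos θ). The short sketch below evaluates this for an illustrative jet speed (the numbers are not tied to any particular source):

```python
# Apparent superluminal motion from relativistic projection:
# beta_app = beta*sin(theta) / (1 - beta*cos(theta)).
import math

def beta_apparent(beta: float, theta_deg: float) -> float:
    th = math.radians(theta_deg)
    return beta * math.sin(th) / (1.0 - beta * math.cos(th))

# A jet at 95% of c appears superluminal when seen nearly head-on:
for theta in (5.0, 10.0, 30.0, 90.0):
    print(f"theta = {theta:5.1f} deg -> beta_app = "
          f"{beta_apparent(0.95, theta):.2f} c")
# Small viewing angles give beta_app > 1 even though beta < 1;
# at 90 deg (in the plane of the sky) no superluminal motion is seen.
```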
(b) Obscuring doughnuts in radio-quiet AGN

Radio-quiet quasars and Seyferts are known to be ∼10 times more common, but 100 to 1000 times weaker at radio wavelengths and significantly less extended, than their radio-loud cousins (Goldschmidt et al. 1999); orientation still has important effects, however, this time on the optical properties. Optical spectroscopy provides a powerful diagnostic tool for the physical conditions in astronomical objects; as described earlier, chemical elements have a characteristic spectral signature, and physical conditions within a gas can be inferred from distortions of this chemical bar-code. In particular, broadening of the spectral lines indicates a spread in gas-cloud velocities, whilst the relative brightnesses of spectral lines indicate the intensity of ultraviolet radiation incident upon the gas.
Measurements of the optical spectra of Seyfert nuclei show spectral lines from gas ionised (i.e. gas in which atoms have been stripped of one or more electrons) by strong ultraviolet radiation that is too intense to be produced by a collection of stars and is instead thought to originate from the accretion disk. All Seyfert nuclei contain a region of ionised gas, the Narrow Line Region (NLR), extending over several hundred light years, where the spectral line-widths correspond to gas velocities of a few hundred km s⁻¹ and densities are moderate (electrons per unit volume n_e ∼ 10³−10⁶ cm⁻³). Closer in, within ∼0.1 light year of the black hole, is the Broad-Line Region (BLR), a much denser region of gas (n_e ∼ 10⁹ cm⁻³) that shows gas velocities up to 10,000 km s⁻¹. Seyferts were originally classified into two types: type-1 Seyferts, which show evidence for both a BLR and an NLR, and type-2 Seyferts, which show only an NLR (Khachikian & Weedman 1971, 1974). The mystery of the missing BLRs in type-2 Seyferts was solved elegantly in 1985 when Antonucci & Miller discovered a hidden BLR in the scattered-light spectrum of the archetypal Seyfert 2 galaxy NGC 1068, which closely resembled that of a Seyfert type 1.

[Figure 2. Top: Radio-loud unification scheme, in which the observed AGN type depends on the observer's viewing angle to the ejection axis of the radio jet and its lobes. Bottom: Sketch of an AGN central engine with a central black hole surrounded by (a) an accretion disc that emits cones of ultraviolet ionising radiation and defines the radio jet launch direction, and (b) a torus of dust and gas that accounts for the different observed kinds of radio-quiet AGN by blocking our view of the accretion disc and of the dense, rapidly-moving ionised gas clouds in the broad-line region (BLR) when viewed edge-on (type 2 objects). Less-dense ionised clouds in the narrow-line region (NLR) lie above the plane of the torus and are visible from all angles.]

This discovery led to the idea that the BLR exists in all Seyferts and is located inside a doughnut, or torus, of molecular gas and dust; our viewing angle with respect to the torus then explains the observed differences between the unobscured, broad-line Seyfert 1s, viewed pole-on, and the obscured, narrow-line Seyfert 2s, viewed edge-on. Hidden Seyfert 1 nuclei can then be seen in reflected light, as photons are scattered into the line of sight by particles above and below the torus acting like a 'dentist's mirror' (Antonucci & Miller 1985; Tran 1995; Antonucci 1993; Wills 1999). The lower panel of Figure 2 shows a sketch of a Seyfert nucleus with the different types of AGN observed as the angle between the line of sight and the torus axis increases. Figure 3 shows an image of the molecular torus in NGC 4151, traced in scattered light (Fernandez et al. 1999) and through an inferred inner ring of neutral hydrogen from absorption measurements (Mundell et al. 2002), surrounding the mini, quasar-like radio jet emanating from the centre of the galaxy, as predicted by the unification scheme. Radio-quiet quasars also have broad and narrow lines and are considered to be the high-luminosity equivalents of Seyfert type 1 galaxies. A population of narrow-line quasars, high-luminosity equivalents to obscured Seyfert 2s, is predicted by the unification scheme but, until now, has remained elusive. New optical and infrared sky surveys are beginning to reveal a previously undetected population of red AGN (Cutri et al. 2001) with quasar type 2 spectra (Djorgovski et al. 1999) and weak radio emission (Ulvestad et al. 2000). A significant population of highly obscured but intrinsically luminous AGN would alter measures of AGN evolution and the ionisation state of the Universe, and might contribute substantially to the diffuse infrared and X-ray backgrounds.

(c) Further unification?
The presence of gas emitting broad and narrow optical lines in radio-loud AGN and the discovery of mini radio jets in Seyferts (e.g. Wilson & Ulvestad 1982) led to further consistency between the two unification schemes. Nevertheless, the complete unification of radio-loud and radio-quiet objects remains problematic, particularly in explaining the vast range in radio power and jet extents, and might ultimately involve a combination of black hole properties, such as accretion rate, black hole mass and spin, and orientation (Wilson & Colbert 1995; Boroson 2002).
Searching for Supermassive Black Holes
Although incontrovertible observational proof of the existence of supermassive black holes (SMBHs) has not yet been found, evidence is mounting to suggest the presence of massive dark objects, or large mass concentrations, at the centres of galaxies. Black holes, by definition, cannot be 'seen'; instead one must look for the consequences of their presence. The presence of SMBHs has been inferred indirectly from the energetics of accretion required to power luminous AGN and explain rapid flux variability and, more directly, from kinematic studies of the influence of the black hole's gravitational pull on stars and gas orbiting close to it in the central regions of both active and non-active galaxies. Theoretical models rule out alternatives to a supermassive black hole such as collections of brown or white dwarf stars, neutron stars or stellar-mass black holes, which would merge and shine, or evaporate, too quickly (Maoz 1995, 1998; Genzel et al. 1997, 2000).
(a) Quasar lifetimes and the black hole legacy
Soon after the discovery of quasars it became clear that they were most common when the Universe was relatively young, with the peak of the quasar epoch at redshift z∼2.5, or a look-back time of 65% of the age of the Universe (see Figure 4); today bright quasars are rare and weaker Seyferts dominate instead. The number of dead quasars, or relic dormant black holes, left today can be estimated by applying some simple arguments to the quasar observations. Soltan (1982) integrated the observed light emitted by quasars and, assuming the power source for quasar light is accretion of material by a supermassive black hole with a mass-to-energy conversion efficiency of 10% and that the black hole grows during the active phase, predicted the total mass in relic black holes today. Knowing the number of galaxies per unit volume of space (e.g. Loveday et al. 1992), if one assumes that all galaxies went through a quasar phase at some time in their lives, then each galaxy should, on average, contain a ∼10⁸ M⊙ black hole as a legacy of this violent but short-lived period (∼10⁷ to ∼10⁸ years). Alternatively, if only a small fraction of galaxies went through a quasar phase, the active phase would have lasted longer (>10⁹ years) and the remnant SMBHs would be relatively rare, but unacceptably massive (>10⁹ M⊙) (e.g. Cavaliere et al. 1983; Cavaliere & Szalay 1986; Cavaliere & Padovani 1988).
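The heart of Soltan's argument is simple energy book-keeping: a quasar of luminosity L shining for a time t with radiative efficiency ε must have grown its black hole by M ≈ Lt/(εc²). A sketch with illustrative round numbers of the kind quoted in this section:

```python
# Relic black hole mass from a quasar phase: M = L*t / (eps*c^2).

C = 2.998e8          # speed of light (m/s)
L_SUN = 3.8e26       # solar luminosity (W), as quoted in the text
M_SUN = 1.989e30     # solar mass (kg)
YEAR = 3.156e7       # seconds per year

def relic_mass_msun(L_solar: float, t_yr: float, eps: float = 0.1) -> float:
    energy = L_solar * L_SUN * t_yr * YEAR   # total radiated energy (J)
    return energy / (eps * C**2) / M_SUN     # accreted mass (M_sun)

# A 10^13 L_sun quasar shining for 10^8 yr at 10% efficiency:
print(f"relic mass ~ {relic_mass_msun(1e13, 1e8):.1e} M_sun")
# Gives a few times 10^8 M_sun, consistent with the per-galaxy
# legacy estimated above.
```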
More complex models including quasar evolution (e.g. Tremaine 1996; Faber et al. 1997) and the effects of galaxy growth (e.g. Haehnelt & Rees 1993) favour short-lived periods of activity in many generations of quasars, or a mixture of continuous and recurrent activity (Small & Blandford 1992; Cen 2000; Choi, Yang & Yi 2001). The complex physics of accretion and black hole growth, however, remain an area of active study (e.g. Blandford & Begelman 1999; Fabian 1999). Nevertheless, the range of black hole mass of interest is thought to be M• ∼ 10^6 to 10^9.5 M⊙, with the lower-mass holes being ubiquitous (Kormendy & Gebhardt 2001).
(b) Irresistible black holes -dynamics of gas and stars
Although the prodigious energy outputs from powerful quasars offer strong circumstantial evidence that supermassive black holes exist, most notably in driving the ejection and acceleration of long, powerful jets of plasma close to the speed of light (Rees et al. 1982), it has not, until recently, been possible to make more direct kinematic measurements of the black hole's gravitational influence. The mass of a central object, the circular velocity of an orbiting star and the radius of the orbit are related by Newton's Laws of motion and gravity. Precise measurements of the velocities of stars and gas close to the centre of a galaxy are then used to determine the mass of the central object.
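The underlying arithmetic is just the circular-orbit relation M = v²r/G. A sketch with round numbers of the order of the Galactic-centre measurements discussed next:

```python
# Enclosed mass from a circular orbit: M = v^2 * r / G.

G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
LY = 9.461e15        # metres per light year

def enclosed_mass_msun(v_km_s: float, r_ly: float) -> float:
    v = v_km_s * 1e3          # orbital speed (m/s)
    r = r_ly * LY             # orbital radius (m)
    return v**2 * r / G / M_SUN

# A star moving at ~1000 km/s at a radius of 0.07 light years
# (illustrative values) implies an enclosed mass of:
print(f"M ~ {enclosed_mass_msun(1000.0, 0.07):.1e} M_sun")
# Of order a few million solar masses, matching the Sgr A* result
# quoted below.
```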
The strongest dynamical evidence for black holes comes from studies of the centre of our own Galaxy and of a nearby Seyfert, NGC 4258. A decade of painstaking observations of a cluster of stars orbiting the mildly active centre of the Milky Way, within a radius of 0.07 light years of the central radio source Sgr A*, suggests a central mass of M• = (2.6±0.2) × 10⁶ M⊙ (Genzel et al. 2000; Ghez 2000). Discovery of strong radio spectral lines, or megamasers, emitted from water molecules in a rapidly rotating nuclear gas disc at the centre of NGC 4258 implies a central mass M• = (4±0.1) × 10⁷ M⊙ concentrated in a region smaller than 0.7 light years (Miyoshi et al. 1995), again small enough to rule out anything other than a black hole (Maoz 1995, 1998). Precision measurements of black hole masses in other galaxies using a variety of techniques, although challenging and still model dependent, have become increasingly common (e.g. Magorrian et al. 1998; Bower et al. 1998; Gebhardt et al. 2000), and now more than 60 active and non-active galaxies have black hole estimates.
Black Hole Demographics -the Host Galaxy Connection
In general, galaxies consist of two main visible components: a central ellipsoidal bulge and a flat disc structure commonly containing spiral arms, together making a structure resembling two fried eggs back-to-back. Elliptical galaxies have no discs and are dominated by their bulges, maintaining their shapes through the random motions of their stars; spiral galaxies, like our own Galaxy and nearby Andromeda, have prominent discs and are supported mainly by rotation, with rotation speeds between 200 km s⁻¹ and 300 km s⁻¹. Some spiral galaxies contain a bar-like structure that crosses the nucleus; the spiral arms then begin at the ends of the bar and wind outwards. If the bar is narrow and straight it is classed as a 'strong' bar, and if oval-shaped (essentially an elongated bulge) it is 'weak'. Dynamical simulations have revealed that in the region of the bar, stars do not travel on circular orbits as they do in the disk, but instead follow more elongated elliptical, or 'non-circular', paths.
With the great progress made recently in measuring the mass of central supermassive black holes in a significant number of active and non-active galaxies, correlations with their host galaxy properties are now possible. Magorrian et al. (1998) confirmed the correlation between the brightness of a galaxy bulge (and hence its stellar mass) and the mass of its central black hole (e.g. Kormendy & Richstone 1995), establishing a best fit to the linear relation M• = 0.006 M_bulge, despite a large scatter. A much tighter correlation was subsequently discovered between the velocity dispersion (σ) of stars in the host galaxy bulge and the central black hole mass (e.g. Gebhardt et al. 2000; Ferrarese & Merritt 2000). The velocity dispersion is a measure of the range of random speeds present in stellar motions and is potentially a more reliable galaxy mass indicator than total starlight; the greater the spread in speeds, the more massive the galaxy bulge. The tightness of the correlation points to a connection between the formation mechanism of the galaxy bulge and the central black hole, although the physics involved is not yet known. The M•−σ relation for a mixture of nearby active and non-active galaxies (Figure 5), measured using a variety of techniques, shows that the relationship between bulge and black hole is very similar for both, although investigations continue to establish the precise form of the correlation and whether it is universal for active and non-active galaxies. If universal, this relationship would provide exciting confirmation that non-active galaxies contain dormant versions of the same kind of black holes that power AGN.
No correlation exists between galaxy disc properties and black hole mass, and disc galaxies without bulges do not appear to contain supermassive black holes (e.g. Gebhardt et al. 2001), suggesting discs form later and are not involved in the process that intimately links the black hole and bulge.
AGN and their Environment

(a) The violent early Universe
The relationships between black holes and their host galaxies are increasingly compelling but unanswered questions remain concerning the relationship between star formation, galaxy formation, quasar activity and black hole creation in the early Universe. Observations of faint galaxies in the Hubble Deep Field suggested a peak in star formation history that matches that of the quasar epoch (e.g. Madau et al. 1996) implying a close link between star formation and quasar activity. More recent measurements, however, suggest that the star formation activity may be constant for redshifts greater than 1 with the onset of substantial star formation occurring at even earlier epochs, at redshifts beyond 4.5 (Steidel et al. 1999). An increasing number of new quasars are also being found at redshifts greater than 4 (Fan et al. 2001;Schneider et al. 2002) providing constraints for cosmological models of galaxy formation and continuing the debate on the relationship between quasar activity, star formation and the creation of the first black holes (e.g. Haiman & Loeb 2001).
The life cycle of an AGN involves a mechanism to trigger the infall of gas to create an accretion disc, and continued fuelling, or replenishment, of this brightly-shining accretion disc. A number of models have suggested that at intermediate to high redshifts it may be moderately easy to trigger and fuel AGN, where galaxies might be more gas-rich, star formation is vigorous and collisions between galaxies are common (Haehnelt & Rees 1993). Kauffmann & Haehnelt (2000) suggest a model in which galaxy and quasar evolution at early times was driven by mergers of gas-rich disc galaxies, which drove the formation and fuelling of black holes and created today's elliptical galaxies, thereby tying together host galaxy and black hole properties. As the Universe ages, a decreasing galaxy merger rate, a diminishing available gas supply and increasing accretion timescales produce the decline in bright quasars.
An alternative hypothesis, linking black hole and bulge growth with quasar activity, involves strong bars in early galaxies (Sellwood 1999); early disc galaxies developed strong bars which were highly efficient at removing angular momentum from disc gas and funnelling it towards the centre to feed and grow a black hole. This represents the bright quasar phase in which the black hole grows rapidly, but on reaching only a few percent of the mass of the host disc, the central mass concentration soon destroys the bar due to an increasing number of stars that follow random and chaotic paths, thereby choking off the fuel supply and quenching the quasar. In addition, the increase in random motion in the disc leads to the creation of a bulge. A disc might be re-built some time later if the galaxy receives a new supply of cold gas, perhaps from a 'minor merger' whereby a small gaseous galaxy or gas-cloud falls into the main disc and is consumed by the disc without causing significant disruption, and without significantly affecting the black hole mass. This scenario nicely accounts for the relationship between black hole masses and bulge properties and lack of correlation with disc properties.
An important unknown parameter in these models is the amount of cold gas in progenitor disc galaxies and how it evolves with time; it is expected that the Universe was more gas-rich in the past (Barger et al. 2001), but observations of neutral hydrogen (Hi) and molecular gas such as carbon monoxide (CO) with new generation facilities, such as Atacama Large Millimeter Array (ALMA), the Giant Metrewave Radio Telescope (GMRT), the Extended Very Large Array (EVLA) and the proposed Square Kilometre Array (SKA), will offer exciting opportunities to measure the gaseous properties of distant galaxies directly to further our understanding of galaxy formation and evolution and its relationship to quasar and star-formation activity.
(b) Re-activating dormant black holes in nearby galaxies

While the most luminous AGN might coincide with violent dynamics in the gas-rich universe at the epoch of galaxy formation (Haehnelt & Rees 1993), nuclear activity in nearby galaxies is more problematic, since major galaxy mergers, the collision of two equal-mass disc galaxies, are less common and galaxy discs are well established; reactivation of ubiquitous 'old' black holes is therefore likely to dominate. Host-galaxy gas represents a reservoir of potential fuel and, given the ubiquity of supermassive black holes, the degree of nuclear activity exhibited by a galaxy must be related to the nature of the fuelling rather than the presence of a black hole (e.g., Shlosman & Noguchi 1993; Sellwood & Moore 1999). Gravitational, or tidal, forces exerted when two galaxies pass close to one another may play a role in this process, either directly, when gas from the companion, or from the outer regions of the host galaxy, is tidally removed and deposited onto the nucleus, or by causing disturbances to stars orbiting in the disc and leading to the growth of structures such as bars, in which stars travel on elliptical paths and drive inflows of galactic gas (e.g. Toomre & Toomre 1972; Simkin, Su & Schwarz 1980; Shlosman, Frank & Begelman 1989; Mundell et al. 1995; Athanassoula 1992).
Numerous optical and IR surveys of Seyfert hosts have been conducted but as yet show no conclusive links between nuclear activity and host galaxy environment.
Neutral hydrogen (Hi) is an important tracer of galactic structure and dynamics and may be a better probe of environment than the stellar component. Hi is often the most spatially extended component of a galaxy's disc, so it is easily disrupted by passing companions, making it a sensitive tracer of tidal disruption (e.g. Mundell et al. 1995). In addition, because gas can dissipate energy and momentum through shock waves, whereas collisions between stars are rare, the observable consequences of perturbing the Hi in galactic bars are easily detectable. However, despite the diagnostic power of Hi, until recently few detailed studies of Hi in Seyferts had been performed (Brinks & Mundell 1996; Mundell 1999).
The strength of a galaxy collision, which depends on initial galaxy properties such as mass, concentration, distance and direction of closest approach, ranges from the most violent mergers between equal mass, gas-rich disc galaxies, to the weakest interaction in which a low mass companion, perhaps on a fly-by path, interacts with a massive primary. In this minor-merger case the primary disc is perturbed but not significantly disrupted or destroyed. Indeed, Seyfert nuclei are rare in strongly interacting systems, late-type spirals and elliptical galaxies (Keel et al. 1995;Bushouse 1996) and sometimes show surprisingly undisturbed galactic discs despite the presence of Hi tidal features (Mundell et al. 1995). Seyfert activity may therefore involve weaker interactions or minor mergers between a primary galaxy and a smaller companion or satellite galaxy, rather than violent major mergers (e.g. De Robertis, Yee & Hayhoe 1998). A key question is whether the gaseous properties of normal galaxies differ from those with Seyfert nuclei and a deep, systematic Hi imaging survey of a sample of Seyfert and normal galaxies is now required.
Unanswered Questions and Prospects for the Future
Studies of galaxies and AGN are being revolutionised by impressive new sky surveys, such as Sloan and 2dF, which have already significantly increased the number of known galaxies and quasars in the Universe. In the next decade and beyond, prospects for understanding AGN and their role in galaxy formation and evolution are extremely promising, given the number of planned new instruments spanning the electromagnetic spectrum.
• We do not yet know whether galaxies grow black holes or are seeded by them; NGST (Next Generation Space Telescope) will find the smallest black holes at the earliest times and allow us to relate them to the first galaxies and stars.

• The amount of cold gas in galaxies through cosmic history is a key ingredient in star-formation, quasar-activity and galaxy-evolution models but is still unknown. The study of gas at high redshifts with ALMA, the GMRT and the EVLA will revolutionise our understanding of its role in these important phenomena and provide powerful constraints for cosmological models.

• Current models of AGN physics - fuelling, accretion discs and the acceleration of powerful radio jets - remain speculative; detailed studies of X-ray emitting gas, e.g. with the highly ambitious X-ray space interferometer MAXIM, might offer valuable new insight into the energetics and physical structure of this extreme region.

• Finally, the detection and detailed study of gravitational waves from massive black holes living in black-hole binary systems, or in the very act of merging, using the space-based detector LISA, will prove the existence of SMBHs and perhaps provide insight into the origin of the difference between radio-loud and radio-quiet AGN.
Pure interstitial dup(6)(q22.31q22.31) – a case report
‘Pure’ interstitial duplication of chr6q is rare. The varying size of duplications encompassing 6q22.31 is associated with the expressivity of dysmorphism and autism. Here, we report a unique case with facial dysmorphism, developmental delay, complex neurological impairment and spasticity unrelated to autism. Genetic analysis by aCGH revealed a 627–971 kb dup(6)(q22.31q22.31) encompassing the TRDN and NKAIN2 genes. The presence of the duplication was confirmed by quantitative PCR in the proband and in the phenotypically normal parents. With the current techniques, we cannot exclude the presence of a deleterious homozygous point mutation in the proband, with one copy inherited from each parent.
Background
Around 3.6% of the duplications observed in DNA are mainly clustered within pericentromeric and subtelomeric regions [1]. Genomic segmental duplications are typically 1–200 kb in size and carry a high probability of encompassing repetitive sequences and coding genes [2]. Segmental duplication has been described for all human chromosomes, with a slightly greater number of cases showing maternal inheritance [3]. Duplication of the long arm of chromosome 6 (chr6q) is rare. Most cases represent the co-existence of an unbalanced translocation with another chromosome (or chromosomes) that leads to a terminal duplication of chr6q together with a partial monosomy of the other chromosome(s). However, 'pure' interstitial duplication of chr6q encompassing a larger segment is reported in only a few cases, which provide the clearly defined phenotypes associated with it [4,5]. Cases involving 6q22.31 duplication together with another segmental aneusomy had phenotypic manifestations more associated with the latter than with 6q22.31 [6,7]. The current case report presents an unusual case portraying facial dysmorphism, severe developmental delay, complex neurological impairment and spasticity, with a 627–971 kb interstitial dup(6)(q22.31q22.31) as the sole observable anomaly, inherited from either of the parents.
Case report
A 4½-year-old boy was the first child, born prematurely at 8 months by vaginal delivery to consanguineous parents who are half-siblings (Figure 1). The mother and father were 27 and 28 years old, respectively, at the time of birth. Severe developmental delay and spasticity in the proband were first noticed at the age of 15 months. Physical examination revealed an asymmetrical face with dolichocephaly, a large and prominent forehead and a high anterior hairline. The eyes were small, with arched eyebrows and scanty eyelashes. Hypertelorism, epicanthal folds and narrow palpebral fissures were also noted. The ears were low-set, with hypoplastic antitragus and lobule. The nose was short and stubby with a wide nasal tip, broad nasal bridge, small anteverted nares, choanal atresia, hypoplastic alae nasi and a thick columella. The philtrum was long and smooth. The upper lip was inverted V-shaped with down-turned corners; there was a wide, open mouth, a thick lower lip, full cheeks, a prominent mid-face, an underdeveloped nasolabial fold, mild retrognathia and a broad jaw (Figure 2). He had short, stubby fingers, with a simian crease observed on the left palm. He was not able to sit, stand or walk without support at the age of presentation. Moreover, he could not recognize his parents. A younger sibling was affected with the same clinical features and died at the age of 1 year.
Conventional G-banding showed an apparently normal chromosomal pattern [46,XY] at 550-band resolution. To identify cryptic genomic imbalances, DNA from the proband and his parents was extracted from peripheral blood using the QIAamp DNA Blood Midi kit (Qiagen, Valencia, CA). DNA concentration and quality were determined with a NanoDrop ND-1000 spectrophotometer and software (NanoDrop Technologies, Berlin, Germany). DNA copy number was assessed by array comparative genomic hybridization (aCGH) following the manufacturer's recommendations, using 60 K oligo probes spaced at approximately 40–100 kb intervals across the genome (Human Genome CGH microarray 60B kit, Agilent™). Sex-matched genomic DNA (Promega Corporation, Madison, WI, USA) was used as a reference. Relative fluorescence intensity data were analyzed with the aCGH analysis software v3.4 (Agilent Technologies Inc., Santa Clara, CA, USA) by applying the Z-score segmentation algorithm with a window size of 10 points to identify chromosome aberrations. We identified a 627–971 kb heterozygous duplication of the 6q22.31 region in the proband [Figure 3]. Furthermore, the duplication was transmitted from either parent, as both parents carried the same duplication, identified by qPCR [Figure 4]. The quantity of genomic DNA from the proband and parents was insufficient to carry out exome sequencing to identify point mutations that would have been missed by aCGH and to test their mode of inheritance. The minimal region affected by this duplication spans chromosome 6 positions 123,581,324 to 124,208,360 [(chr6:123,581,324-124,208,360)(hg18 build36) x3]. This region does not overlap with any known CNVs in the Database of Genomic Variants (DGV) [8]. It encompasses two genes: the entire coding region of TRDN and the first exon of NKAIN2.
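For readers unfamiliar with such analyses, the sketch below illustrates the principle of window-based copy-number calling on log2 ratios using synthetic data. The window of 10 points mirrors the setting above, but the simulation, noise level and Z threshold are all illustrative; this is not the Agilent algorithm itself.

```python
# Toy window-based Z-score calling of a copy-number gain from
# aCGH log2 ratios. Synthetic data; thresholds are illustrative.
import random, statistics

random.seed(0)
# 200 probes: diploid background (log2 ratio ~ 0) plus a 3-copy
# segment at probes 120-139 (expected shift log2(3/2) ~ +0.58).
ratios = [random.gauss(0.0, 0.15) for _ in range(200)]
for i in range(120, 140):
    ratios[i] += 0.58

mu = statistics.mean(ratios)
sd = statistics.stdev(ratios)

WINDOW = 10  # points per window, as in the analysis settings above
for start in range(0, len(ratios) - WINDOW + 1, WINDOW):
    w = ratios[start:start + WINDOW]
    z = (statistics.mean(w) - mu) / (sd / WINDOW**0.5)
    if z > 3.0:  # illustrative gain threshold
        print(f"candidate gain: probes {start}-{start + WINDOW - 1}, "
              f"Z = {z:.1f}")
```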
Discussion
Micro-deletions and micro-duplications are relatively rare events that arise during spermatogenesis or oogenesis. They may be passed down disproportionately for only a few generations. They have been assigned to 'hotspot regions of the genome' and are observed in several genetic disorders such as mental retardation (MR), developmental delay, schizophrenia, autism and neurocognitive disorders [9,10]. Partial duplication of 6q with phenotypic alterations has been reported in only a few cases [5,11]. These reports showed that duplications of 6q co-exist with other chromosomal abnormalities (chr16p and others) [6,7], and that duplications were more common than deletions. Furthermore, the size of the duplication correlates directly with the expression of the clinical phenotypes. Scant reports of 'pure interstitial duplication of 6q' are available in the literature [4,5]. The clinical features reported in most cases were intrauterine growth retardation (IUGR), hypertelorism, moderate facial dysmorphia (flat or depressed nasal bridge and anteverted nares), microcephaly, moderate psychomotor retardation, short fingers and cardiac anomaly, linked to a varying degree of large duplications encompassing 6q22.31 [5]. The present case had a relatively smaller interstitial duplication (~0.62 Mb) and presented almost all of the above clinical features except the cardiac anomaly. Furthermore, the proband had severe developmental and intellectual disability and spasticity; he was unable to sit, speak, stand or walk without external support at the age of presentation. The younger sibling, who showed all of the above phenotypic features, died at the age of 1 year; no investigations were carried out in this sibling. Goh et al. [4] found similar features in dysmorphic siblings who were trisomic for 6q22.1 to 6q23.3, representing a large duplication.
Sanders et al. [6] reported multiple recurrent de novo duplications that were strongly associated with autism. At least 8 patients have been reported to harbour duplications in the 6q22.31 region, ranging from 0.03 to 0.62 Mb, along with other concurrent segmental aneusomies [7]. The patient presented in this report, however, had a duplication of 0.62 Mb (627–971 kb; chr6:123,581,324-124,208,360) close to the previously reported regions. This region spans at least two genes (TRDN and NKAIN2): the entire coding region of TRDN [12] and the first exon of NKAIN2. TRDN (Triadin; OMIM No. 603283), with its alternatively spliced isoforms and differential expression, is involved in excitation–contraction coupling of smooth and cardiac muscles as part of the calcium release complex, in association with the ryanodine receptor. Its functions include (i) ion channel binding, (ii) protein binding and bridging, (iii) protein homo-dimerization activity and (iv) receptor binding. NKAIN2 (Na+/K+ Transporting ATPase-interacting 2; OMIM No. 609758) is a transmembrane protein that interacts with the beta subunit of a sodium/potassium-transporting ATPase. Truncation of NKAIN2 has been described in patients with developmental delay [13] and complex neurological impairment [14]. The interstitial duplication detected in the present case could have been inherited from either of the parents.
Since both parents were phenotypically normal, it is highly likely that the proband has a homozygous deleterious point mutation giving rise to the phenotypic expression of severe developmental and intellectual disability and spasticity. This hypothesis is supported by the observation of Froyen et al., who, in their study of 300 families with X-linked mental retardation (XLMR), identified 6 overlapping duplications of about 320 kb involving four genes (SMC1A, RIBC1, HSD17B10, HUWE1) encompassing Xp11.2 in unrelated males [15]. In addition to the duplication, a point mutation in SMC1A was shown to be associated with Cornelia de Lange syndrome, with facial dysmorphism, mental retardation and growth deficit in childhood [16]. A syndromic form of mental retardation with choreoathetosis was shown to be associated with a silent mutation in HSD17B10 [17]. Moreover, point mutations of the HUWE1 gene leading to dose sensitisation may also be partially responsible for the phenotypes in cases with gene duplications, as shown by Froyen et al. [15].
Thus, the identical phenotype with severe morphological features presented here, associated with a relatively small pure interstitial dup(6)(q22.31q22.31), may additionally reflect a deleterious point mutation imparting a biologically pronounced effect, which may be attributed to the high degree of consanguinity between the parents.
Consent
Written informed consent was obtained from the parents of the patient for publication of this Case Report and accompanying images.
Effects of increasing standardized ileal digestible lysine:calorie ratio for 120- to 180-lb gilts grown in a commercial finishing environment
A 28-d growth trial was conducted to estimate the lysine requirement for 120- to 180-lb gilts. A total of 1,092 gilts (initially 121.7 lb, PIC 337 × 1050) were allotted to treatment diets with standardized ileal digestible (SID) lysine/ME ratios of 1.89, 2.12, 2.35, 2.58, 2.81, and 3.04 g/Mcal. All diets contained 0.15% L-lysine HCl and 3% choice white grease and were formulated to meet or exceed all other requirements. Seven replicate pens per treatment were used; there were approximately 26 pigs per pen. Gilts were vaccinated with 2 doses of commercial porcine circovirus type 2 (PCV2) vaccine while in the nursery. As the SID lysine content of the diet increased, both ADG and F/G improved (linear, P < 0.001), with the greatest values at the SID lysine/ME ratio of 2.58 g/Mcal. Daily SID lysine intake and SID lysine intake per pound of gain increased (linear, P < 0.001) as lysine density of the diet increased. Diet did not influence (P > 0.25) feed cost per pound of gain; however, there was a tendency for improved (linear, P < 0.06) income over marginal feed cost (IOMFC) as SID lysine level increased in the diet. The SID lysine/ME ratio that yielded the greatest IOMFC value, 2.58 g/Mcal, corresponded to the treatment with the greatest growth response. On the basis of this trial, 2.58 g SID lysine/Mcal ME appears to provide the greatest biological and economic response for 120- to 180-lb gilts. (Swine Day, 2008, Kansas State University, Manhattan, KS, 2008)
Introduction
As feed prices continue to increase, producers must optimize feed efficiency to minimize feed costs. Because lysine is the first limiting amino acid in corn-soybean meal-based swine diets, it is essential for nutritionists and producers to utilize the most effective lysine level to maximize efficiency without incurring extra costs. Lysine requirements are often expressed in terms of standardized ileal digestible (SID) lysine or as a ratio of SID lysine to the ME level in a diet. This ratio allows dietary lysine levels to be altered for a variety of feeding situations in which different feed ingredients are used. Lysine requirements need to be routinely re-evaluated as genotype and health status change within the production system. Currently, porcine circovirus type 2 (PCV2) vaccine is used to protect against the performance and economic effects related to porcine circovirus disease. The vaccine has also been shown to increase growth rates.
Therefore, the objective of this experiment was to estimate the lysine requirement of 120- to 180-lb gilts vaccinated with PCV2 vaccine.
Procedures
Procedures in this experiment were approved by the Kansas State University Institutional Animal Care and Use Committee. A total of 1,092 gilts (initially 121.7 lb, PIC 337 × 1050) were used in a 28-d growth trial to estimate the lysine requirement for 120- to 180-lb gilts. Gilts were vaccinated with 2 doses of commercial PCV2 vaccine while in the nursery and housed in a curtain-sided commercial finishing barn located in southwest Minnesota. There were 26 pigs per pen.
All diets were corn-soybean meal based with 0.15% added L-lysine HCl. Soybean meal and corn levels were altered to achieve the desired lysine concentration in each diet. All diets contained 3% added fat in the form of choice white grease. Diets were formulated to meet all other requirements recommended by NRC (1998). The SID lysine/ME ratios for the experimental diets were 1.89, 2.12, 2.35, 2.58, 2.81, and 3.04 g/Mcal (Table 1). During the trial, diet samples were collected and analyzed to validate the calculated amino acid values.
Pens of pigs were allotted to 1 of 6 dietary treatments in a completely randomized design with 7 replicate pens per treatment. Pig weights (by pen) and feed disappearance were measured throughout the trial at 14-d intervals to determine ADG, ADFI, F/G, daily SID lysine intake, SID lysine intake per pound of gain, feed cost per pound of gain, and income over marginal feed cost (IOMFC). Income over marginal feed cost was calculated by assigning a value to the weight gain per pig ($60/cwt) during the trial and subtracting the feed costs incurred per pig. The data were analyzed for linear and quadratic effects of increasing SID lysine:calorie ratios by using the PROC MIXED procedure in SAS, with pen as the experimental unit.
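To make the IOMFC calculation concrete, the sketch below (in Python rather than SAS) applies the $60/cwt gain value to hypothetical pen-level gains, intakes and diet costs; none of these numbers are the trial's actual data.

```python
# Income over marginal feed cost (IOMFC) per pig:
# value of weight gain at $60/cwt minus feed cost incurred.

GAIN_VALUE_PER_LB = 60.0 / 100.0   # $60/cwt -> $/lb of gain

def iomfc(gain_lb: float, feed_intake_lb: float,
          diet_cost_per_ton: float) -> float:
    feed_cost = feed_intake_lb * diet_cost_per_ton / 2000.0
    return gain_lb * GAIN_VALUE_PER_LB - feed_cost

# Hypothetical 28-d pen means for two SID lysine:ME treatments:
for ratio, gain, intake, cost in [(2.35, 55.0, 140.0, 210.0),
                                  (2.58, 57.5, 141.0, 214.0)]:
    print(f"{ratio} g/Mcal: IOMFC = ${iomfc(gain, intake, cost):.2f}/pig")
```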
Results and Discussion
Daily gain and F/G improved (linear, P < 0.001, Table 2) as SID lysine:calorie ratios increased in the diet. The greatest numeric improvements in ADG and F/G were observed up to 2.58 g SID lysine/Mcal ME. No statistical trends were detected (P > 0.70) for ADFI. Therefore, daily SID lysine intake increased (linear, P < 0.001) as dietary SID lysine levels increased. SID lysine intake per pound of gain also increased (linear, P < 0.001) as lysine density of the diets increased. On the basis of the performance results, it appears that approximately 9 g SID lysine were required for each pound of gain. No differences were observed (P > 0.25) for feed cost per pound of gain; however, IOMFC tended (P < 0.06) to increase linearly as SID lysine:calorie ratio increased. The greatest economic response was at 2.58 g SID lysine/Mcal ME, which corresponds to the growth response. These data illustrate that 2.58 g SID lysine/Mcal ME provides the most efficient growth and economic responses for 120- to 180-lb gilts.
Figures 1 and 2 show results from our trial compared with those from a similar trial conducted by Main et al. (2002) in the same southwest Minnesota research facility with the same genetic line of pigs (PIC 337 × 1050). Growth plateaus were reached at slightly higher SID lysine:ME ratios in our trial than in the earlier trial. This higher lysine requirement was not surprising, as we continue to reap the benefits of growth due to genetic advancement as well as improved overall health with PCV2 vaccination. Kansas State University previously recommended using approximately 2.35 g SID lysine/Mcal ME for 120- to 180-lb gilts. The data from this trial show that utilizing a slightly higher value of approximately 2.58 g SID lysine/Mcal ME will help maximize biological and economic responses in healthy pigs with good feed intakes and growth rates.
Imaging compact boson stars with hot-spots and thin accretion disks
In this work we consider the observational properties of compact boson stars with self-interactions orbited by isotropically emitting (hot-spot) sources and optically thin accretion disks. We consider two families of boson stars supported by quartic and sixth-order self-interaction potentials, and choose three samples of each of them in growing compactness; only those with large enough compactness are capable to hold light-rings, namely, null bound orbits. For the hot-spots, using inclination angles $\theta=\{20^\circ, 50^\circ, 80^\circ \}$ we find a secondary track plunge-through image of photons crossing the interior of the boson star, which can be further decomposed into additional images if the star is compact enough. For accretion disks we find that the latter class of stars actually shows a sequence of additional secondary images in agreement with the hot-spot analysis, a feature absent in typical black hole space-times. Furthermore, we also find a shadow-like central brightness depression for some of these stars in both axial observations and at the inclination angles above. We discuss our findings in relation to the capability of boson stars to effectively act as black hole mimickers in their optical appearances as well as potential observational discriminators.
I. INTRODUCTION
The Kerr hypothesis establishes the universality of the celebrated Kerr solution [1] to describe the end-state of full gravitational collapse in terms of the formation of a black hole entirely described, for external observers, by its mass and angular momentum [2]. Analytical and computational investigations upon this background successfully reproduce current observations of gravitational wave profiles out of binary mergers [3,4] as well as the features of the strong light deflection around the supermassive objects at the heart of the M87 [5] and Milky Way [6] galaxies. In this way, the existence of black holes is taken as yet another success of Einstein's General Relativity to accurately describe our Universe and the objects living in it [7,8].
Nonetheless, given the (arguable) impossibility of directly observing the most salient feature of black holes - the event horizon - as well as the theoretical and observational uncertainties and inherent biases in the interpretation of gravitational wave and shadow observations, in the last few years an entire field of black hole mimickers has blossomed (see [9] for a review of their observational status). These black hole mimickers are usually ultra-compact objects potentially capable of disguising themselves as black holes despite not having an event horizon. Among them, boson stars - hypothetical macroscopic Bose-Einstein condensates - hold a special place. This is so because they are supported by complex scalar fields with canonical kinetic and potential terms but, more importantly, because mechanisms for their dynamical generation are known [10][11][12], thus sidestepping the main criticisms applied to other popular black hole mimickers such as wormholes. Furthermore, they allow for a large flexibility in their implementation with scalar and vector fields [13] sustained by different classes of self-interactions, and with important phenomenological repercussions in X-ray spectroscopy [14], the dark matter problem [15,16], and gravitational wave signatures [17] and echoes [18].
While gravitational wave observations currently focus on stellar-mass black holes only (while we await the arrival of LISA and the Einstein Telescope), shadow observations explore an entirely different mass range, namely that of millions of solar masses upwards. Recently many studies have recognized the great opportunity to look for observational hints of black hole mimickers (including boson stars) hidden in shadow images [19-25]. Such images are created by highly-bent trajectories of light rays issued by the accretion disk partnering the compact object, and consist of a wide ring of radiation (the light-ring) enclosing a central brightness depression (the shadow). Such features are strongly linked to yet another salient feature of the Kerr black hole, namely the existence of unstable null bound orbits, identified as light-rings, which are a generic feature of asymptotically flat black holes [26] but can also be sustained by other ultra-compact objects. Whether i) a black hole mimicker without such bound orbits can still mimic the observed light-ring/shadow features in the images of a black hole, and ii) new features may arise that effectively act as observational discriminators between a black hole and its mimicker, are under heavy scrutiny in the literature. The most promising feature for carrying out these tasks is the sequence of additional images into which the secondary images and the light-ring can be decomposed, depending on the properties of the orbiting material [27-29], together with the presence/absence of a shadow, including its (calibrated) size [30,31].
The main aim of this work is to analyze the features above by imaging two families of spherically symmetric compact boson stars supported by quartic and sixth-order self-interaction terms using two methods: the observational properties of hot-spots (bright regions associated to temperature anisotropies of the non-homogeneous accretion flow [32]) orbiting the boson star, and those of optically and geometrically thin accretion disks, emitting isotropically. This analysis brings out the most striking observational features of these objects. Indeed, for the first method we observe a secondary-track, plunge-through trajectory on top of the primary track of the hot-spot in the integrated fluxes of those boson stars, which can be further decomposed into additional secondary tracks for large enough compactness, a feature that becomes more acute for larger observation angles, in agreement with previous results [33]. This neatly distinguishes some of our configurations from canonical black holes and triggers new observational opportunities. Indeed, when using the second method for such very compact boson stars illuminated by well-motivated intensity profiles, we find two interesting features of some of these objects: i) a sequence of new secondary images that do not appear in their black hole counterparts and ii) a shadow-like feature, i.e., a central brightness depression. While the first feature is in agreement with the one found for the hot-spots and may act as a clear observational discriminator between boson stars and black holes, the second allows such boson stars to effectively act as black hole mimickers, even at large observation inclination angles.
This work is organized as follows: in Sec. II we set our theoretical framework, specify the three plus three configurations of boson stars supported by quartic and sixth-order self-interactions with different compactnesses, respectively, develop the equations for geodesic motion and discuss the stability of time-like orbits. In Sec. III, we consider the observational properties of hot-spots, analyzing the integrated fluxes and astrometrical quantities at observation angles θ = {20°, 50°, 80°}. In Sec. IV we consider the observational properties of these boson stars when illuminated by a geometrically and optically thin accretion disk, placing the focus on the multi-ring structure and the shadow-like mimicking features of some of these stars at both axial inclination and the observational inclinations mentioned above. In Sec. V we conclude with a summary and critical discussion of our results.
II. THEORETICAL FRAMEWORK

A. Models and configurations
Let us consider a (complex) scalar field Φ minimally coupled to the gravitational field via the action (a = 0, 1, 2, 3)

S = ∫ d⁴x √(−g) [ R/(16π) − g^{ab} ∇_a Φ* ∇_b Φ − V(|Φ|²) ],   (1)

where g is the determinant of the space-time metric g_µν written in terms of a coordinate system x^a, R is the Ricci scalar, a star (Φ*) denotes a complex conjugate, and V is the scalar potential. We have adopted a system of geometrized units for which G = c = 1. The corresponding field equations are obtained by varying Eq. (1) with respect to the metric g_ab and the scalar field Φ to yield

G_ab = 8π T_ab,   (2)
∇_c ∇^c Φ = (dV/d|Φ|²) Φ,   (3)

where T_ab is the stress-energy tensor of the complex scalar field Φ and it is given by

T_ab = ∇_a Φ* ∇_b Φ + ∇_b Φ* ∇_a Φ − g_ab [ g^{cd} ∇_c Φ* ∇_d Φ + V(|Φ|²) ],   (4)

where ∇_c denotes covariant differentiation. We are interested here in considering static and spherically symmetric boson stars, and thus we consider the ansatz

ds² = −A(r) dt² + B(r) dr² + r² dΩ²,   (5)
Φ(t, r) = φ(r) e^{−iωt},   (6)

where we have introduced the metric functions A(r), B(r), while φ(r) characterizes the radial part of the scalar field, with ω denoting its frequency, and dΩ² denotes the line-element on the two-sphere. Replacing this ansatz into the field equations in Eqs. (2) and (3) leads to the equations of motion of the system (a prime denotes a radial derivative):

A'/A = (B − 1)/r + 8π r B p_r,   (7)
B'/B = −(B − 1)/r + 8π r B ρ,   (8)
φ'' + [ 2/r + A'/(2A) − B'/(2B) ] φ' = B [ dV/d|Φ|² − ω²/A ] φ,   (9)

with the energy density ρ = ω²φ²/A + φ'²/B + V and the radial pressure p_r = ω²φ²/A + φ'²/B − V. This is a highly non-linear system whose resolution demands the employment of suitable numerical methods. To this end, we supply asymptotic boundary conditions by demanding asymptotic flatness of the geometry via a Schwarzschild-like behaviour, and a vanishing radial scalar field at r → ∞, that is

A(r → ∞) = 1 − 2M/r,   (10)
B(r → ∞) = (1 − 2M/r)^{−1},   (11)
φ(r → ∞) = 0,   (12)

where M is the total mass of the boson star. At the origin we demand the metric functions to be normalized to a finite value, and the radial scalar field to a target value φ_c, i.e.,

A(0) = A_c,   (13)
B(0) = 1,   (14)
φ(0) = φ_c.   (15)

In practice, we can always set A_c = 1 through a time reparametrization. This leads to a background that does not asymptotically match Eq. (10). Upon finishing the integration, we rescale the time coordinate (changing ω and A) to obtain the space-time solution in the usual Schwarzschild-like coordinates. Therefore, we integrate the differential equations Eqs. (2) and (3) for static and spherically symmetric backgrounds from the origin using the boundary conditions in Eqs. (13)-(15). In order to do so, we must first specify the scalar potential. In what follows, we consider two well-motivated potentials:

• V = µ²|Φ|² + Λ|Φ|⁴ [34], where µ is the mass term and Λ is a coupling constant. Boson stars (BS) presenting quartic self-interactions can be highly massive objects, as the maximum mass configuration scales with Λ as M_max ∼ Λ^{1/2} m_p³/µ² for large values of Λ. However, the compactness of these solutions saturates for large Λ, with the boson star radius R never being smaller than 6M. This implies that all circular orbits with orbital radii r_o > 0 are stable independently of the value of Λ [35]. We shall denote this class of models as ΛBS.
• V = µ²|Φ|²(1 − |Φ|²/α²)² [36], where α is a constant parameter. Potentials of this type allow for (degenerate) vacuum configurations. This potential is usually labeled as solitonic, and its self-gravitating solutions as solitonic boson stars (SBS), as it is one of the simplest potentials that feature non-topological solitons in the absence of gravity (a flat-space illustration of the corresponding shooting problem is sketched below). In this case, ultra-compact solutions can be achieved in the limit α → 0, with the minimum radius being R ≈ 2.81M [37,38]. Because of their compactness, solitonic BS can have light-rings, making them interesting candidates for spherical black hole mimickers [39].
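Since the solitonic potential supports non-topological solitons already in flat space, the structure of the underlying shooting problem can be illustrated there, where the Klein-Gordon equation (3) reduces to φ'' + (2/r)φ' = (dV/d|Φ|² − ω²)φ. The following is a minimal sketch of the strategy; all parameter values and the bracketing grid are illustrative choices, not those of the configurations in Table I:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (hypothetical, not the paper's configurations):
mu, alpha, omega = 1.0, 0.08, 0.5   # mass term, solitonic scale, field frequency

def dV(phi2):
    # dV/d|Phi|^2 for the solitonic potential V = mu^2 |Phi|^2 (1 - |Phi|^2/alpha^2)^2
    u = phi2 / alpha**2
    return mu**2 * (1.0 - u) * (1.0 - 3.0*u)

def rhs(r, y):
    # Flat-space radial Klein-Gordon equation: phi'' + (2/r) phi' = (dV - omega^2) phi
    phi, dphi = y
    return [dphi, (dV(phi**2) - omega**2)*phi - 2.0*dphi/r]

def classify(phi_c, r_max=60.0):
    """'over' if phi crosses zero, 'under' if it rolls back, 'soliton' otherwise."""
    cross = lambda r, y: y[0]                  # phi = 0, crossed from above
    cross.terminal, cross.direction = True, -1
    turn = lambda r, y: y[1]                   # phi' = 0, field turning around
    turn.terminal, turn.direction = True, 1
    sol = solve_ivp(rhs, (1e-6, r_max), [phi_c, 0.0],
                    events=[cross, turn], rtol=1e-10, atol=1e-12)
    if sol.t_events[0].size: return 'over'
    if sol.t_events[1].size: return 'under'
    return 'soliton'

# Scan phi_c to bracket the ground state, then bisect between under/overshoots.
grid = np.linspace(0.72*alpha, 1.2*alpha, 49)
labels = [classify(p) for p in grid]
if 'over' in labels and labels.index('over') > 0:
    i = labels.index('over')
    lo, hi = grid[i-1], grid[i]
    for _ in range(50):
        mid = 0.5*(lo + hi)
        lo, hi = (mid, hi) if classify(mid) == 'under' else (lo, mid)
    print(f"phi_c/alpha of the ground-state soliton: {0.5*(lo+hi)/alpha:.6f}")
else:
    print("no under-to-overshoot transition in the scanned range; widen the grid")
```

The gravitating problem adds the metric functions A(r), B(r) of Eqs. (7)-(9) to the integration, but the under/overshoot logic on φ_c (or, equivalently, on ω at fixed φ_c) carries over unchanged.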
In this work we consider three candidates belonging to each of the ΛBS and SBS classes, whose chosen parameters and main features are displayed in Table I. We focus on the values Λ = 400 and α = 0.08, picking three solutions for each potential, all of which are linearly stable against radial perturbations. We depict in Fig. 1 the behaviour of the metric and scalar field functions, and in Fig. 2 the corresponding mass-radius relations, highlighting with markers the solutions explored in this paper. As the scalar field φ(r) decays exponentially for r ≫ µ⁻¹ but never vanishes, one can only define an effective radius for the BS, which we define in such a way as to encompass 98% of its total mass. The mass function m(r) can be found through the metric function B(r) via

m(r) = (r/2) [ 1 − B⁻¹(r) ],

and, therefore, the radius is defined by m(R) = 0.98M. Numerical solutions for very small values of α and very large values of Λ are challenging to find with the usual shooting methods, but can be found via alternative semi-analytical approximations [35,37]. Nonetheless, we shall use the full numerical solutions in order to analyze the space-time, with the aid of analytical fits for the image computations.
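Extracting the effective radius from a numerical solution then amounts to inverse interpolation of the mass function at 0.98M; a minimal sketch, using a hypothetical smoothly saturating mass profile as stand-in data:

```python
import numpy as np

# Stand-in data: any monotonically saturating mass profile m(r) -> M will do here.
r = np.linspace(0.0, 40.0, 4001)
M = 1.0
m = M * (1.0 - np.exp(-r/4.0) * (1.0 + r/4.0))   # hypothetical profile with m(inf) = M

# Effective radius R defined by m(R) = 0.98 M, via inverse interpolation of m(r).
R = np.interp(0.98 * M, m, r)
print(f"R = {R:.4f}, compactness M/R = {M/R:.4f}")
```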
B. Time-like circular geodesics, marginally stable orbits and light-rings

Before going into details about the imaging of boson stars, it is instructive to study the geodesic structure of these space-times. The Lagrangian describing orbits at the equatorial plane θ = π/2 (something we can fix without loss of generality due to the spherical symmetry of the system) is given by

2L = −A(r) ṫ² + B(r) ṙ² + r² ϕ̇² = −δ,

where the overdot indicates a derivative with respect to the geodesic affine parameter, ϕ is the azimuthal angle, and δ = 1 (0) for time-like (null) geodesics. Introducing the following definitions for the specific energy and angular momentum per unit mass, i.e.,

ε = A(r) ṫ,   ℓ = r² ϕ̇,

the equation of motion for the radial coordinate is given by the effective balance equation

A B ṙ² = ε² − V_eff(r),

where

V_eff(r) = A(r) [ δ + ℓ²/r² ]   (20)

is the effective potential describing the geodesics. Let us consider the case of time-like circular orbits. We can use the above equations, together with the first derivative of the effective potential, to find the specific energy and angular momentum, obtaining

ε² = 2A²/(2A − r A'),   ℓ² = r³ A'/(2A − r A'),

where r_o is the radius of the circular orbit and all the quantities above are evaluated at r = r_o. Note that both the specific energy and angular momentum diverge at a (possible) orbit for which

2A(r_o) − r_o A'(r_o)   (22)

vanishes. If the above equation is satisfied for real values of r_o, this corresponds to the position of null circular orbits (in the Schwarzschild space-time this results in r_o = 3M). Note that ultra-compact objects without event horizons are known for having light-rings that come in pairs [40,41]. Therefore, it is instructive to track Eq. (22) to search for possible light-rings. We focus on SBS as these are the most compact objects explored in this paper. We display the evolution of this quantity with the ratio r_o/M for the three SBS configurations presented in this work in Fig. 3. Among all models (including the ΛBS), only the SBS3 one presents such light-rings.

Although it is physically possible to place a particle in circular orbital motion close to light-rings, this analysis does not tell us anything about the stability of such orbits. The stability of time-like circular orbits can be analyzed through the sign of the second derivative of the effective potential in Eq. (20), i.e., orbits are stable if ∂²_r V_eff(r_o) > 0, provided that (ε, ℓ) are real for the orbit to exist. By analyzing the models investigated in this paper, we find that for ΛBS all circular orbits are stable, meaning that accretion disks may extend all the way down to the center of the star. For the SBS models, however, we find that there is a window in which the orbits either do not exist or are unstable. This is illustrated in Fig. 4, where we plot the second derivative of the effective potential for the SBS models on a log scale, to display only the stable orbits. For SBS2 and SBS3, the outer marginally stable circular orbit is located very close to 6M, similarly to the Schwarzschild black hole case. This is not surprising, as these boson stars have radii such that R/M < 6. Surprisingly, for SBS1, even though the configuration has a radius larger than 6M, unstable orbits still exist. This illustrates that naively looking at the compactness alone to search for marginally stable circular orbits or light-rings in BSs might lead to wrong results.
In Ref. [21] it was pointed out that having stable time-like circular orbits inside the BS is not enough to determine whether a physical light source may exist in those orbits (see also Ref. [24]). A central point is the existence of a maximum in the angular frequency of the time-like geodesics Ω_o at some radius, which introduces a scale for the inner edge of the accretion disks. The angular frequency for time-like geodesics can be computed through

Ω_o = dϕ/dt = √( A'/(2r) )|_{r=r_o}.

In Fig. 5 we show the angular frequency for the BSs explored in this paper. We see that for the ΛBS cases (left panel of Fig. 5) the maximum of the frequency is located near the center of the star, indicating that it would be difficult for accretion disks to have a Schwarzschild-like structure. However, for all SBS explored in this paper, a maximum in the frequency is observed at a finite radius (right panel of Fig. 5). This indicates that SBS are more likely to produce accretion disk structures similar to those of black holes. Finally, by directly integrating the geodesic equation we can illustrate how strong the lensing can be in the BSs explored in this paper. Let us focus on the most compact models for each self-interaction, i.e., ΛBS3 and SBS3. We consider a single emitter located at either (x, y) = (2M, 0) or (x, y) = (8M, 0), to illustrate the behavior of a source located inside and outside the star, respectively. The result is shown in Fig. 6. For a source outside the star, the ΛBS3 produces a caustic effect inside the star (something that was also observed in other compact stellar models [42]) and the SBS3 model shows strong deflections, consistent with the fact that such a configuration has light-rings. For the source inside the star, we see that the effect in the ΛBS3 is weaker, indicating that possible observational effects from that region should be mostly due to the gravitational redshift. For the SBS3 model, however, we can see strong deflections even for a source located inside the star, which, combined with the strong redshift due to its compactness, makes it a strong candidate to mimic black hole images. We shall further investigate more physical sources in the next sections.
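The circular-orbit quantities reconstructed above can be sanity-checked against the Schwarzschild limit A(r) = 1 − 2M/r, where the light-ring condition (22) must return r = 3M, the marginally stable orbit must sit at r = 6M, and Ω_o must reduce to the Keplerian frequency; a minimal numerical check:

```python
import numpy as np
from scipy.optimize import brentq

M = 1.0
A  = lambda r: 1.0 - 2.0*M/r              # Schwarzschild metric function (sanity check)
dA = lambda r: 2.0*M/r**2

# Light ring: 2A - r A' = 0  ->  r = 3M.
r_lr = brentq(lambda r: 2.0*A(r) - r*dA(r), 2.1*M, 10.0*M)

# Circular time-like orbits: ell^2 = r^3 A'/(2A - r A'); marginal stability occurs
# where d(ell^2)/dr = 0, which for Schwarzschild gives the ISCO at r = 6M.
ell2 = lambda r: r**3 * dA(r) / (2.0*A(r) - r*dA(r))
dell2 = lambda r, h=1e-6: (ell2(r+h) - ell2(r-h)) / (2.0*h)
r_ms = brentq(dell2, 3.5*M, 20.0*M)

print(f"light ring r/M = {r_lr/M:.6f}, marginally stable orbit r/M = {r_ms/M:.6f}")

# Angular frequency Omega = sqrt(A'/(2r)) reduces to the Kepler law sqrt(M/r^3):
r_o = 8.0*M
print(f"Omega(8M) = {np.sqrt(dA(r_o)/(2*r_o)):.6f} vs Kepler {np.sqrt(M/r_o**3):.6f}")
```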
III. ORBITS AND HOT-SPOTS
Let us now analyze the observational properties of hot-spots orbiting a central bosonic star, the latter described by the ΛBS and SBS configurations of the previous section. For this purpose, we recur to the ray-tracing open-source code GYOTO [43], in which we model the hot-spot as an isotropically emitting spherical source orbiting the central object at some constant orbital radius r_0 and at the equatorial plane, i.e., θ = π/2.

Fig. 5 caption: Left panel: for ΛBS all time-like circular geodesics are stable and the angular frequency presents a maximum close to the center of the star. Right panel: for SBS, however, the maximum frequency is always at some finite radius. We stress here that not all of these orbits are stable (as indicated in Fig. 4). Moreover, for the SBS3 orbits there is a forbidden region in between the light-rings.

As a run test, we use this software to ray-trace light trajectories in the two most compact configurations considered in this work, namely ΛBS3 and SBS3, which are depicted in Fig. 6 (left and right plots, respectively). For the sake of this work, we have set the orbital radius to r_o = 8M and the radius of the hot-spot to r_H = M, where M is the ADM mass of the background space-time. Under these assumptions, GYOTO outputs a 2-dimensional matrix of specific intensities I^ν_lm at a given time instant t_k. This matrix can be interpreted as an observed image, where each pixel {l, m} is associated with an observed intensity. The simulation is then repeated through several time instants t_k ∈ [0, T[, where T is the orbital period of the hot-spot, to obtain cubes of data I_klm = Δν I^ν_lm, where Δν is the spectral width. We use these simulated cubes of data to produce three observables, namely:

1. Time-integrated fluxes: ⟨I⟩_lm = Σ_k I_klm ;
2. Temporal fluxes: F_k = ΔΩ Σ_{l,m} I_klm ;
3. Temporal centroids: c_k = (ΔΩ/F_k) Σ_{l,m} r_lm I_klm ;

where ΔΩ is the solid angle of a single pixel and r_lm is a vector representing the displacement of the pixel {l, m} with respect to the center of the observed image. A more popular astronomical observable, the temporal magnitude m_k, can then be obtained from the temporal flux F_k as

m_k = −2.5 log₁₀ [ F_k / max_k(F_k) ].

In Figs. 7 and 8 we show the integrated fluxes for the three ΛBS and the three SBS, respectively, as observed through three different inclination angles with respect to the vertical axis, chosen conveniently as θ = {20°, 50°, 80°}. The magnitudes for the same solutions and observation angles are plotted in Fig. 9, and the corresponding centroids are plotted in Fig. 10. In these two latter figures, we also provide a comparison between the three ΛBS solutions and the three SBS solutions at different inclination angles, in order to observe how an increase in the compactness of the bosonic star configuration affects its observables. In the following subsections, we analyze separately the integrated fluxes and the astrometrical observables.
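Given a ray-traced data cube I_klm, the observables above reduce to simple array contractions; a minimal sketch with synthetic data (the cube dimensions and the pixel solid angle are placeholders):

```python
import numpy as np

# Synthetic data cube I[k, l, m]: K time frames of an L x M pixel image.
rng = np.random.default_rng(0)
K, L, Mpix = 32, 64, 64
I = rng.random((K, L, Mpix))              # stand-in for ray-traced intensities
dOmega = 1.0e-12                          # solid angle of a single pixel (assumed)

# Pixel displacement vectors r_lm with respect to the image centre.
ys, xs = np.indices((L, Mpix)) - np.array([(L-1)/2, (Mpix-1)/2])[:, None, None]

I_int = I.sum(axis=0)                               # time-integrated flux image
F = dOmega * I.sum(axis=(1, 2))                     # temporal fluxes F_k
cx = dOmega * (I * xs).sum(axis=(1, 2)) / F         # temporal centroid, x component
cy = dOmega * (I * ys).sum(axis=(1, 2)) / F         # temporal centroid, y component
m_k = -2.5 * np.log10(F / F.max())                  # temporal magnitudes

print(I_int.shape, F.shape, (cx.min(), cx.max()), m_k.max())
```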
A. Integrated fluxes
The integrated fluxes are depicted in Fig. 7 for the ΛBS configurations, and in Fig. 8 for the SBS ones. For the ΛBS models, one verifies that the results are qualitatively similar to the ones obtained in a previous publication [33] for bosonic stars without self-interactions, a result that is somewhat expected since the space-time properties of these configurations are also similar, i.e., these stars are not compact enough to have either a light-ring or an ISCO, and they do not feature event horizons either. Indeed, for small observation angles one can only observe the primary track of the hot-spot. The secondary track eventually becomes observable as one increases the compactness of the star and/or the observation angle. Such a secondary track features two components, the usual secondary image also observed in black hole space-times, and a plunge-through image corresponding to the photons crossing the interior of the bosonic star before reaching the observer. The latter component is absent in black hole space-times due to the existence of an event horizon and the consequent impossibility for photons to escape from the interior of the space-time.
For the SBS models, more interesting and qualitatively different results arise. For the SBS1 model, the integrated flux images are again qualitatively similar to the ones previously obtained for the ΛBS models, i.e., only the primary track is observed for low inclination angles, and the secondary track, composed of the usual secondary plus the plunge-through components, eventually arises as one increases the inclination angle. The situation drastically changes for the SBS2 and SBS3 models, however. Indeed, for such models not only is the secondary track always present, but several additional tracks can also be observed. For the SBS2 model, the secondary track appears split into its two components, the usual secondary and the plunge-through, for low observation angles, which merge into a single track at larger observation angles. The SBS3 model features an even more complex structure of sub-images: a third additional track and the light-ring contributions can also be observed for low observation angles, and these contributions do not merge as one increases the observation angle.
These results indicate that the qualitative properties of the observed integrated fluxes depend strongly on the compactness of these horizonless compact objects, a feature that was already hinted at by a previous publication on relativistic fluid stars [44]. Three different regimes can thus be identified: i. If the light deflection is not strong enough, the secondary track can only be observed for certain inclination angles, i.e., there is a critical observation angle θ_c^(1) such that if θ < θ_c^(1) the secondary track is absent; ii. For a stronger light deflection, the secondary track is present independently of the observation angle but its two components, the usual secondary and the plunge-through, might be observed as independent tracks for some observation angles, i.e., there is another critical observation angle θ_c^(2), such that if θ < θ_c^(2) the secondary and the plunge-through components are independent tracks, and if θ > θ_c^(2) the secondary and the plunge-through components merge into a single secondary track; iii. For a very large light deflection, there is a further split of the secondary track into three independent tracks, independently of the observation angle.
In this work, the third component of the secondary and the light-ring components are only visible for the SBS3 model. We note, however, that this does not mean that these two contributions always arise simultaneously. Indeed, in the previous work on relativistic fluid spheres referenced above, several examples for which the light-ring contributions are present without the third splitting of the secondary track are provided.
B. Astrometrical properties
The qualitative behavior of both the magnitude m_k (depicted in Fig. 9 for both the ΛBS and SBS configurations) and the centroid c_k (depicted in Fig. 10) is strongly dependent on the sub-image structure of the observation. If a single track, i.e., the primary track, is observable, the magnitude of the observation features a single peak caused by the Doppler shifting due to the orbital motion, whereas the centroid of the observation follows the position of the primary image, as happens for all of the ΛBS models at an observation angle of 20°. The slight difference in the height of these peaks is caused by the differences in the angular velocity of the hot-spot, which is slightly larger for the more compact configurations. If at some point of the orbit a secondary image appears, one observes an increase in the magnitude of the observation caused by the extra photons arriving at the observer from the secondary image, and the centroid of the observation is shifted towards the secondary image. This effect can be clearly observed for the ΛBS3 model at an observation inclination of θ = 50°, as well as for the ΛBS1, ΛBS2 and SBS1 models at an observation inclination of θ = 80°. Note that if the effects of light deflection are strong enough to break the two components of the secondary track into two separated images, the secondary and the plunge-through, then the additional peak in the magnitude breaks into several sub-peaks, corresponding to the instants in which the secondary image appears and splits into two components, then both achieve a maximum of luminosity, and finally they merge and disappear. Depending on the relative intensity of these two components, the behavior of the centroid might follow a more complicated trajectory, as happens for the ΛBS3 model at an observation angle of θ = 80° and for the SBS2 model at both θ = 50° and θ = 80°.
When the light deflection is strong enough to induce the appearance of additional secondary tracks, the complexity of the behavior of the magnitude and the centroid increases. For low-inclination observations for which the additional tracks are present and do not merge, one observes that the centroid still follows an approximately elliptical curve, but this curve is smaller than in the case in which a single primary image is present, as the secondary contributions shift the centroid towards the center of the observation. Furthermore, although the magnitude still features a single peak, the latter is smaller than in the case of a single primary image, as the photons corresponding to the secondary image reach the observer from a trajectory crossing the central object on the side opposite to the primary image, and thus contribute negatively to the Doppler shift. These effects are observed for the SBS2 model at θ = 20° and the SBS3 model at both θ = 20° and θ = 50°.
Finally, it is interesting to note that even though all observable tracks in the SBS3 model are visible independently of the observation angle, the contribution of the secondary tracks to the total flux increases with the observation angle, being particularly relevant in the region of the observer's screen opposite to the primary track. As a consequence, and even though the secondary tracks are always present, one can still observe the appearance of additional peaks in the magnitude and a consequent shifting of the centroid for the SBS3 model at an observation inclination of θ = 80°. The main difference between this situation and the one described previously, for which the secondary image appears at some point in the orbit, splits into two components, merges, and disappears again, is that in that situation one observes three additional peaks in the magnitude, whereas in this case only two additional peaks are present. Note that for all of the SBS3 model observations, both the magnitude and the centroid present a slight noise caused by the light-ring contribution.
IV. ACCRETION DISKS

A. Intensity profiles
Let us now turn to the observational properties of optically-thin accretion disks around the bosonic stars considered previously. For this purpose, we recur to a Mathematica-based ray-tracing code previously used in several other publications [45,46], where the (infinitesimally thin) accretion disk at the equatorial plane is modelled by a monochromatic intensity profile. To model these intensity profiles, we recur to the recently introduced Gralla-Lupsasca-Marrone (GLM) model [47], whose main interest is the fact that its predictions are in close agreement with those of general relativistic magnetohydrodynamics simulations of astrophysical accretion disks [48]. The intensity profile of the GLM model is given by

I(r) = exp{ −[γ + arcsinh((r − β)/σ)]²/2 } / √((r − β)² + σ²),

where γ, β and σ are free parameters controlling the shape of the emission profile, namely the rate of increase, a radial translation, and the dilation of the profile, respectively. These parameters can be adjusted in order to select adequate intensity profiles for the models under study. For the purpose of this work, we select two different GLM models for the intensity profile of the accretion disk, which we motivate in what follows.
• Given that all of the bosonic star configurations considered in this work feature stable orbital regimes close to the center of the star r = 0, and under the assumption that the matter composing the accretion disk interacts only weakly with the fundamental fields composing the star, it is fair to assume that the intensity profile of the accretion disk increases monotonically from infinity downwards and peaks at the center. We denote this as the Central accretion disk model, which is described by the parameters γ = β = 0 and σ = 2M in the GLM model above.
• On the other hand, the SBS configurations feature marginally stable circular orbits at r_MS ∼ 6M (cf. Fig. 4). Given that circular orbits become unstable in a region r_o < r_MS, and provided that the SBS models explored here have a maximum in Ω_o, it is reasonable to consider an intensity profile of the accretion disk which increases monotonically down to r = r_MS, where it peaks, and then abruptly decreases for r < r_MS. This structure should be similar to the ones found in Ref. [21], which shows through hydrodynamics simulations that some accretion disks in boson stars could have inner edges. We denote this as the ISCO accretion disk model, given the similarity to the black hole case, described by the parameters γ = −2, β = 6M, and σ = M/4. Note that r_ISCO for the three SBS configurations is not exactly 6M and differs depending on the model. Nevertheless, to allow for a same-ground comparison of the results between these two models, we take β = 6M for every configuration.
The emitted intensity profiles I e for the Central and ISCO disk models are plotted in Fig. 11. In what follows, the Central disk model is used in the background of all bosonic star configurations, i.e., for both ΛBS and SBS, whereas the ISCO disk model is used only for those configurations which feature an ISCO, i.e., only the SBS ones.
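As a cross-check of the two parameter choices, the GLM profile (in the functional form reconstructed above) can be evaluated directly:

```python
import numpy as np

def I_glm(r, gamma, beta, sigma):
    # GLM profile: exp(-(gamma + arcsinh((r-beta)/sigma))^2 / 2) / sqrt((r-beta)^2 + sigma^2)
    x = (r - beta) / sigma
    return np.exp(-0.5*(gamma + np.arcsinh(x))**2) / np.sqrt((r - beta)**2 + sigma**2)

M = 1.0
r = np.linspace(0.0, 20.0*M, 2001)
I_central = I_glm(r, gamma=0.0,  beta=0.0,   sigma=2.0*M)   # Central disk model
I_isco    = I_glm(r, gamma=-2.0, beta=6.0*M, sigma=M/4.0)   # ISCO disk model

print(f"Central model peaks at r/M = {r[np.argmax(I_central)]/M:.2f}")  # at the centre
print(f"ISCO model peaks at r/M = {r[np.argmax(I_isco)]/M:.2f}")        # near r = 6M
```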
B. Axial observations
The intensity profiles given in Fig. 11 correspond to the reference frame of the emitter I_e, i.e., the accretion disk, where the photons are emitted with a given frequency, say ν_e. In the reference frame of the observer, the observed frequency ν_0 is redshifted with respect to the emitted one, with ν_0 = √(−g_tt) ν_e.
Consequently, the intensity profile in the reference frame of the observer, I_0, is affected by the shape of the background metric and takes the form

I_0(r) = (−g_tt)^{3/2} I_e(r).

The observed intensity profiles for the combinations of accretion disk models and bosonic star configurations outlined previously are given in Fig. 12, whereas the corresponding observed axial images (i.e., as observed from the axis of symmetry of the accretion disk) for the Central and ISCO disk models are provided in Figs. 13 and 14, respectively.
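The redshift factor is straightforward to apply numerically; the sketch below uses a hypothetical horizonless metric function purely as a stand-in (it is not one of the ΛBS/SBS solutions) to illustrate how the (−g_tt)^{3/2} factor can push the observed maximum of a centrally peaked profile away from the centre, i.e., the dimming mechanism discussed below:

```python
import numpy as np

M = 1.0
r = np.linspace(0.0, 20.0*M, 2001)
# Hypothetical horizonless metric function with -g_tt > 0 everywhere (toy stand-in):
A = 1.0 - 2.0*M/(r + 3.0*M)

# Central GLM profile (gamma = beta = 0, sigma = 2M) in the emitter frame:
I_e = np.exp(-0.5*np.arcsinh(r/(2.0*M))**2) / np.sqrt(r**2 + (2.0*M)**2)
I_0 = A**1.5 * I_e   # I_0 = (-g_tt)^{3/2} I_e, following nu_0 = sqrt(-g_tt) nu_e

print(f"emitted peak at r/M = {r[np.argmax(I_e)]/M:.2f}, "
      f"observed peak at r/M = {r[np.argmax(I_0)]/M:.2f}")
```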
For the Central disk model, we verify that for every ΛBS configuration, as well as for the SBS1 configuration, the effects of the gravitational redshift are not strong enough to induce a decrease in the central intensity peak, and thus the observed images for these models present similar qualitative properties, more precisely a central blob of radiation. However, for the SBS2 and SBS3 configurations, one verifies that the effects of the gravitational redshift induce a strong dimming of the central peak, leading to a maximum of intensity away from the center. This dimming of intensity produces a shadow-like feature for these two models, inducing a brightness-depression region in the center of the observed images. Furthermore, one also verifies that for these two models the light deflection is strong enough to produce additional contributions in the observed images caused by photons that have revolved around the central object by more than half an orbit. These are known as secondary images, similar to those already found for the hot-spots above, and produce the additional peaks of intensity visible for the SBS2 and SBS3 configurations. Finally, one can also observe a thin peak in the intensity profile of the SBS3 configuration, corresponding to the light-ring, which is also visible as a thin intense circle in the observed image.
As for the ISCO disk model, given that the intensity profiles in the reference frame of the emitter are truncated at a finite radius, namely r = 6M, all observed images produced with this model feature a central dark region independently of the bosonic star configuration considered as a background. Nevertheless, bosonic stars with different compactnesses and geodesic structures feature qualitatively different behaviours. For the SBS1 configuration, the light deflection is not strong enough to produce secondary images, and thus the observed intensity profile features a single peak, which is translated into the observed image as a single ring and a dark shadow without any additional features. When the light deflection is strong enough to produce a secondary image, additional peaks of intensity start appearing in the observed intensity profiles, which add extra circular contributions to the observed image inside the previous shadow. The number of secondary images depends on the metric chosen as a background, varying from a single secondary image [44] to several, as happens for the SBS2 and SBS3 models. In particular, for the SBS3 configuration, one observes three additional secondary peaks in the observed intensity, as well as the light-ring contribution, which are translated into four additional circular contributions in the observed image, inside the shadow.
The results described above indicate that the ΛBS configurations, along with the SBS1 configuration with the Central disk model, are not compact enough to reproduce the expected observable properties of black hole space-times, more specifically the shadow observed in the images of the supermassive objects at the centers of M87 and the Milky Way (Sgr A*), and thus they do not correspond to adequate models for black hole mimickers in this astrophysical context, provided that the universality of black hole metrics holds. On the other hand, the SBS2 and SBS3 configurations, along with the SBS1 configuration with the ISCO disk model, do produce shadow-like features in the observed images, and are thus potentially suitable candidates for black hole mimickers in this context and deserve a more careful analysis.
C. Inclined observations
For the combinations of accretion disk models and bosonic star configurations deemed more astrophysically relevant as black hole mimickers in the previous section, we have produced additional images considering observers standing at inclination angles of θ = {20°, 50°, 80°}. As a comparison to our analysis here, observed images for the same inclination angles in the background of a Schwarzschild black hole can be found, e.g., in Ref. [44]. The observed images for SBS2 and SBS3 with the Central disk model are given in Fig. 15, whereas the observed images for all SBS configurations with the ISCO disk model are given in Fig. 16.
These results indicate that, even though the SBS2 configuration with a Central disk model presents a central dimming of radiation when observed axially, the contrast between the central dimming and the intensity of the secondary peak smoothens out as one increases the observation inclination, resulting in an observed image at high inclinations that differs drastically from the black hole scenario, see Ref. [44]. The same does not apply to the SBS3 model, for which a dark shadow-like region with a strong contrast with respect to the surrounding region near the light-ring is present independently of the observation inclination. It is worth mentioning, however, that the size of the shadow of the SBS3 configuration is significantly smaller than its black hole counterpart, mainly due to the secondary contributions that appear inside the light-ring, which in turn may trouble the compatibility of such models with calibrated observations of shadow radii [31]. For the ISCO disk model, one again verifies that for the SBS1 configuration, even though it produces a shadow similar to that of a black hole from an axial perspective (since the light deflection is not strong enough to produce a secondary image), the resulting observation at high inclinations differs drastically from that of a black hole. The SBS2 and SBS3 configurations, on the other hand, do produce observed images featuring secondary images, and are thus more closely related to their black hole counterparts. Nevertheless, one can still enumerate several qualitative differences between these models and the black hole scenario, namely the plunge-through image in the absence of a light-ring in the SBS2, a feature similar to what was previously found for boson stars without self-interactions [49], and the several additional secondary tracks inside the light-ring for SBS3, features that can effectively act as observational discriminators between boson stars of this kind and black hole space-times.
V. CONCLUSION
In this work we have analyzed the observational properties of bosonic stars with self-interactions orbited by isotropically emitting sources and optically-thin accretion disks. In particular, we studied bosonic stars with quartic interaction terms (ΛBS models), as well as solitonic boson stars with sixth-order interaction terms (SBS models). The latter models proved to be the most interesting in an astrophysical context, as they more closely reproduce the observable predictions of black hole space-times and thus provide adequate models for black hole mimickers.
Indeed, we have shown that the light deflection effects in the ΛBS models are not strong enough to produce any qualitative differences with respect to the observations from boson and Proca stars without self-interactions, i.e., one finds the same astrometric effects for orbital motion, e.g. the shifting of the centroid and additional peaks in the magnitude when the secondary tracks are present, as well as a weak central intensity dimming for accretion disk models that extend all the way down to the center of these configurations. We thus conclude that these models can hardly be taken as strong candidates to reproduce current observations from the EHT and GRAVITY collaborations, and are therefore not adequate to describe the supermassive compact objects found in galactic centres.
As for the SBS models, we verified that these are potentially relevant in this astrophysical context, provided that they are compact enough. For the least compact of these configurations, namely SBS1, the observational properties are similar to the ones of the ΛBS configurations and of bosonic stars without self-interactions, and thus inadequate to describe the images of the objects at the galactic centres. However, this is not true for the SBS2 and SBS3 models. Indeed, for the latter models one observes a dimming of intensity in the central region of the accretion disk caused by the gravitational redshift, resulting in a shadow-like feature similar to that of a black hole. For the SBS3 model, this dark region is more pronounced and remains visible at any inclination angle, although it is slightly smaller than its black hole counterpart. Furthermore, both the SBS2 and SBS3 configurations present additional secondary images for both the orbital motion of a hot-spot and the optically-thin accretion disk, and the SBS3 configuration also features light-ring contributions.
The qualitative differences between the SBS2 and SBS3 models and the black hole scenario indicate that, even though these models are virtually indistinguishable from a black hole given the lack of resolution of current EHT observations to resolve secondary images within the main ring of radiation, an eventual upgrade of these observatories (via e.g. the ngEHT) and an increase in the quality and resolution of the observed images may allow the detection of these additional contributions to the image, in order to conclusively infer the nature of these supermassive compact objects.
Finally, it is worth noticing the similarities between the observed images for the ISCO model given in Fig. 16 and the integrated fluxes of the hot-spot orbits given in Fig. 8, which support the consistency of the results. Moreover, the GYOTO software used to produce the results for the orbital motion of a hot-spot has been proven to produce high-accuracy results in several contexts [20], which supports the validity of our Mathematica-based ray-tracing code as a high-precision tool for the study of light deflection in the strong-field regime of gravity.
The improved cycling stability of nanostructured NiCo2O4 anodes for lithium and sodium ion batteries
Developing high-capacity anode materials, such as conversion-type metal oxides that possess both Li and Na storage activity, is very practical for high-energy Li-ion batteries (LIBs) and Na-ion batteries (SIBs). Herein, we use NiCo2O4 anodes as a model to investigate the morphology evolution that accounts for their poor cycling performance and to understand the effect of structure optimization on the electrochemical performance. Three NiCo2O4 samples with different morphologies, namely microspheres, nanospheres and nanosheets, are synthesized. Firstly, serious structural degradation of NiCo2O4 microspheres is observed whether they work as a LIB or SIB anode. In addition, a significant difference between the lithiation and sodiation capacities of NiCo2O4 materials reveals that Na+ ions are only partially intercalated into NiCo2O4 and that the conversion reaction is limited by strain. Next, NiCo2O4 nanosheets on Ni foam as a binder-free anode for the LIB are investigated, which suggests the positive effect of 3D nanostructures on morphology stability. As a result, NiCo2O4 nanosheets deliver a high lithiation capacity of 1092 mAh g−1 after 100 cycles at 0.5 A g−1 and an excellent rate capacity of 643 mAh g−1 at 4 A g−1. Finally, NiCo2O4 nanospheres are evaluated as a SIB anode, which indicates that a smaller particle size of the active materials is beneficial to the release of stress and to structure stability during discharge-charge processes. Among the three morphologies, the nanosheet structure shows the best electrochemical performance in terms of capacity retention. A rational design of the electrode architecture is very important for conversion-type 3d transition metal oxide anodes for advanced LIBs and SIBs.
Introduction
The Li-ion battery (LIB) is the most successful rechargeable battery technology in commercial use. The Na-ion battery (SIB), as an attractive next generation, is promising in some cost-sensitive fields to replace the LIB [1-4]. However, the existing low-energy LIB/SIB products cannot keep pace with the rapid economic growth. The urgent need for higher energy is driving the development of high-capacity active materials, given that the energy is proportional to the capacity. Research on cathode materials for the LIB/SIB has yielded good results [5-8]. In comparison, anode materials have received much less attention. Thus, it is very significant to improve the electrochemical performance of anode materials.
Though Na shares similar properties with Li, some anode materials such as graphite, despite good electrochemical activity for the LIB, cannot tolerate Na intercalation, due to the larger radius of Na+ ions and other factors. Hence, the development of high-capacity anode materials which possess both Li and Na storage activity is very practical, especially for cost reduction [9,10]. The 3d metal oxides such as NiCo2O4 are among the most promising candidates, achieving high theoretical capacities via a multielectron-transfer-based conversion reaction [11-15]. Zhang et al. prepared mesoporous Co3O4 materials which possess a theoretical lithiation capacity of 890 mAh g−1 according to the eight-electron conversion reaction (NiCo2O4 + 8Li+ + 8e− → Ni + 2Co + 4Li2O). It is common for conversion-type anode materials that the initial capacity is higher than the theoretical value, due to electrolyte-related side reactions, the formation of the SEI film and the interfacial storage of Li+. This phenomenon is more obvious for nano-structured materials with a greater specific surface area. Interestingly, the mesoporous Co3O4 anode delivered an initial lithiation capacity of 1448 mAh g−1 at a current density of 0.1 A g−1, much more than the theoretical value. The additional capacity was thought to be related to the electrolyte-derived solid electrolyte interphase (SEI) film [16]. Yu et al. designed a NiO/Co3O4/NiCo2O4 heterostructure with a lithiation capacity of 1081 mAh g−1 at 0.1 A g−1 [17]. Qiao et al. constructed an RGO/NiCo2O4@C material with an initial lithiation capacity of 2048 mAh g−1 at 0.3 A g−1 [18].
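The quoted ~890 mAh g−1 follows directly from Faraday's law applied to the eight-electron conversion reaction; a quick check with rounded atomic masses:

```python
# Theoretical specific capacity from Faraday's law: C [mAh/g] = n F / (3.6 M_w),
# with n electrons transferred per formula unit of molar mass M_w.
F = 96485.0                             # Faraday constant, C/mol
M_w = 58.69 + 2*58.93 + 4*16.00         # g/mol for NiCo2O4 (one Ni, two Co, four O)
n = 8                                   # NiCo2O4 + 8 Li+ + 8 e- -> Ni + 2 Co + 4 Li2O
C = n * F / (3.6 * M_w)
print(f"theoretical capacity of NiCo2O4: {C:.0f} mAh/g")   # ~890 mAh/g, as quoted
```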
However, the capacity retention of these conversion-type 3d transition metal oxide anodes is not satisfactory, associated with the collapse of the material structure during continuous lithiation-delithiation processes. There are no host sites for the conversion-type anodes to store lithium. When lithiated, the active materials are wrapped by the generated products and the volume inevitably changes, which impedes Li+ insertion. The utilization of such materials is seriously reduced by the pulverization of the active materials, which leads to the loss of electrical contact [19,20]. Nanostructuring is a promising strategy to address the above issue by releasing the stress [21-24]. A nano-octahedron Ni-Co-Mn oxide anode was synthesized by Ling et al. and retained 78.9% of its initial lithiation capacity after 500 cycles at 1 A g−1, much better than its counterpart (24.1%) [25]. Yet, systematic research on the loss of the original morphology of 3d transition metal oxides as LIB/SIB anodes and on the effect of nanostructuring on their structural stability during electrochemical processes is lacking, although it is meaningful for applications.
For conversion-type anodes, the structural stability is usually evaluated by the morphology change after the electrochemical process and used to understand the cycling stability. Herein, we use NiCo2O4 as a model to investigate the morphology evolution of conversion-type LIB/SIB anodes and the effect of structure optimization on the electrochemical performance. Three NiCo2O4 samples with different morphologies, namely microspheres, nanospheres and nanosheets, are synthesized. Firstly, the Li and Na storage properties of NiCo2O4 microspheres are examined. Serious structural degradation is observed whether the microsphere works as a LIB or SIB anode, which accounts for the poor cycling performance. Next, NiCo2O4 nanosheets on Ni foam as a binder-free anode for the LIB are investigated, which suggests the positive effect of 3D nanostructures on the morphology stability of NiCo2O4 materials. As a result, NiCo2O4 nanosheets deliver a high lithiation capacity of 1092 mAh g−1 after 100 cycles at 0.5 A g−1 and a rate capacity of 643 mAh g−1 at 4 A g−1. Finally, NiCo2O4 nanospheres are evaluated as a SIB anode, which indicates that a smaller particle size of the active materials is beneficial to the release of stress and to structure stability during electrochemical processes. A rational design of the electrode architecture is very important for conversion-type 3d transition metal oxide anodes for the LIB and SIB.
Experimental section
Chemicals (analytical grade) were purchased from Sinopharm chemical reagent Co., Ltd.
The preparation of NiCo 2 O 4 microspheres
Firstly, 4 mmol of nickel sulfate and 8 mmol of cobalt sulfate were dissolved in 50 ml of deionized water to form solution A. 12 mmol of sodium carbonate and 5 mmol of ammonium bicarbonate were dissolved in 50 ml of distilled water to form solution B. Then, solution A was quickly poured into B. After stirring for 120 min, the carbonate sediment was obtained. Finally, NiCo2O4 microspheres were synthesized by drying the carbonate precursor at 50 °C followed by sintering at 450 °C for 120 min in a muffle furnace.
The preparation of NiCo 2 O 4 nanospheres
Firstly, 0.1940 g of Ni(NO3)2·6H2O and 0.3877 g of Co(NO3)2·6H2O were dissolved in 80 ml of isopropanol and then transferred to an autoclave. After being heated at 160 °C for 150 min, the nanosphere precursor was obtained. At last, NiCo2O4 nanospheres were prepared by sintering the precursor at 350 °C for 180 min in air.
The preparation of NiCo 2 O 4 nanosheets on Ni foam
1.9386 g of urea and 0.4933 g of ammonium fluoride were dissolved in 80 ml of distilled water and then transferred to an autoclave with Ni foam. After being heated at 120 °C for 150 min, the nanosheet precursor was obtained. At last, NiCo2O4 nanosheets were prepared by sintering the precursor at 350 °C for 180 min in air.
The preparation of NiCo 2 O 4 anodes and coin cells
NiCo2O4 nanosheets on Ni foam were used as binder-free electrodes. NiCo2O4 nanospheres (or microspheres) were mixed with Super P and PVDF (7:2:1, mass ratio) in NMP to form a slurry, which was then coated on Cu foil. The loading masses of the nanosheets, nanospheres and microspheres are around 1.5 mg cm−2, 2 mg cm−2 and 2 mg cm−2, respectively. The loading of 1.5 mg cm−2 was determined by measuring the mass of a circular piece of Ni foam with a diameter of 12 mm before and after loading the NiCo2O4 nanosheets. The fabricated electrodes were assembled with a separator (Celgard 2500) and Li (or Na) metal to obtain coin cells (CR-2032). The electrolyte for the Li-ion batteries consisted of LiPF6 in EC and DMC; the electrolyte for the Na-ion batteries consisted of NaClO4 in EC and DEC.
TGA of the precursor under an air atmosphere is described in Fig. 2a. A mass loss of about 32% is calculated over the thermal decomposition processes. Based on the DTG diagram, NiCO3 in the Ni-Co carbonate precursor decomposes at about 245 °C with a mass loss of 10%, while CoCO3 decomposes at about 350 °C with a mass loss of 22%. These values agree with the feed ratio of the Ni and Co elements. To improve the crystallinity of the NiCo2O4 materials, the higher temperature of 450 °C is used to synthesize the samples. N2 adsorption isotherms and the pore-size distribution (Fig. 2b and c) suggest the mesoporous characteristics of the NiCo2O4 microspheres. The (111), (220) and (311) lattice planes are identified by interlayer distances of 0.46, 0.29 and 0.24 nm, respectively. In addition, the diffraction rings in the SAED patterns of the NiCo2O4 microspheres reveal a polycrystalline structure (Fig. 2h). Figure 2i depicts the crystal structure of spinel NiCo2O4: Ni occupies octahedral sites and Co occupies tetrahedral and octahedral sites. Obviously, there is no room for Li+ or Na+ ions to intercalate; thus, a volume change of the NiCo2O4 materials during charge-discharge processes is expected.
The NiCo2O4 microsphere electrodes are evaluated as anodes for Li-ion batteries. Figure 3a exhibits the CV curves of the first and second cycles, in which an obvious difference in the reduction peaks at about 0.85 V is observed, associated with the formation and growth of the SEI layer in the initial cycle. Figure 3c shows rate capacities of 1018, 1035, 996, 918, 829 and 654 mAh g−1 based on the galvanostatic discharge/charge test at 0.1, 0.2, 0.5, 1.0, 1.5 and 2.0 A g−1, respectively. The discharge capacities decrease as the current increases, attributed to the transport kinetics of the charge carriers. A rise in the discharge capacity when the current recovers to 0.1 A g−1 is due to the activation of the NiCo2O4 microsphere electrodes. Further, the cycling performance based on the galvanostatic discharge/charge test is described in Fig. 3d. The capacities increase slightly in the initial 50 cycles and then decay significantly in the subsequent cycles. After 100 cycles at 0.5 A g−1, the capacity is 566 mAh g−1. Nyquist plots of the microsphere electrodes after being activated at 0.1 A g−1 for three cycles are plotted in Fig. 3e. R_sei and R_ct of the equivalent circuit are assigned to the ion-transport resistance in the SEI layer and the charge-transfer resistance of the active materials, respectively [32]. The fitted impedance data give 31 Ω for R_sei and 9 Ω for R_ct. Further, Fig. 3f-h describe SEM images of the first lithiated (f), first delithiated (g) and 100-cycled (h) electrodes. The surface of the electrodes in the fully discharged state is covered by the products of electrolyte-related reactions. After being fully charged, the surface layer is partially decomposed and the microsphere-like morphology is identifiable, although the size of the primary particles is larger than in the pristine material. As is well known, lithium intercalation leads to volume changes of NiCo2O4 materials [33]. However, irreversible damage of the material structure is observed for the 100-cycled electrode, due to the stress accumulated in continuous electrochemical processes. The loss of electric contact between active materials results in the capacity fading.
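The roles of R_sei and R_ct in such a fit can be visualised by simulating a generic two-RC equivalent circuit with a Warburg tail; only R_sei = 31 Ω and R_ct = 9 Ω are taken from the fit above, while the series resistance, capacitances, Warburg coefficient and the circuit topology itself are illustrative assumptions, not the exact model of Ref. [32]:

```python
import numpy as np

Rs, Rsei, Csei, Rct, Cdl, Aw = 3.0, 31.0, 1e-6, 9.0, 1e-5, 20.0  # illustrative values

w = 2*np.pi*np.logspace(5, -2, 300)          # angular frequencies, 100 kHz down to 10 mHz
Z = (Rs
     + Rsei/(1 + 1j*w*Rsei*Csei)             # SEI film: R_sei in parallel with C_sei
     + Rct /(1 + 1j*w*Rct*Cdl)               # charge transfer: R_ct in parallel with C_dl
     + Aw*(1 - 1j)/np.sqrt(w))               # semi-infinite Warburg diffusion tail

# Nyquist representation (Re Z vs -Im Z): two semicircles plus a 45-degree tail.
print(f"high-frequency intercept Re(Z) ~ Rs = {Z.real[0]:.1f} ohm; "
      f"semicircle diameters ~ Rsei + Rct = {Rsei + Rct:.0f} ohm")
```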
In addition, the sodium storage performance of the NiCo2O4 microspheres is investigated. The sodiation and desodiation processes are indicated by the CV curves (Fig. 4a) and galvanostatic discharge-charge curves (Fig. 4b), similar to the electrochemical processes of NiCo2O4 microspheres as LIB anodes. Due to the larger radius of Na+ ions, the sodiation suffers from poorer electrochemical activity and more sluggish kinetics [34]. The microspheres show an initial sodiation capacity of 697 mAh g−1 with a coulombic efficiency of 62%. A difference in the reduction potentials of the electrolyte between the LIB and SIB is observed. Figure 4c shows rate capacities of 390, 285, 205, 151 and 133 mAh g−1 at 0.05, 0.1, 0.2, 0.4 and 0.5 A g−1, respectively. Figure 4d describes the cycling performance. After 50 cycles at 0.5 A g−1, a sodiation capacity of 170 mAh g−1 is retained. Further, SEM images of the microsphere electrodes after the first sodiation (Fig. 4e) and desodiation (Fig. 4f) are exhibited. The microsphere-like morphology is destroyed, suggesting a large strain when sodium intercalates into the NiCo2O4 microspheres. Thus, a stable structure is necessary for conversion-type materials as LIB/SIB anodes.
Physicochemical properties and lithium storage performance of NiCo 2 O 4 nanosheets
The nanosheets are employed to explore the effect of a nanostructured design on NiCo2O4 anodes for the LIB. Figure 5a shows the XRD patterns of the NiCo2O4 nanosheets, in which the (220), (311), (511) and (440) planes of spinel NiCo2O4 are identified (JCPDS No. 20-0781). Figure 5b exhibits SEM images and the corresponding EDS element mapping, in which a uniform distribution of the Ni, Co and O elements is observed. The (222) and (422) planes are also identified. Figure 5f-h depict the XPS spectra: Ni 2p3/2 peaks (Fig. 5g) and Co 2p3/2 peaks (Fig. 5h) at binding energies of 856.5 eV and 781.5 eV are identified.
The lithium storage performance of the NiCo2O4 nanosheets is shown in Fig. 6. The second and third CV curves (Fig. 6a) almost overlap, suggesting no obvious side reactions after the second cycle. The diffusion coefficient of Li+ ions (D_Li) based on the third CV curve is calculated according to the equation D_Li = 0.5(RT/Az²F²C_Li)², where A stands for the electrode area and z(Li+) = 1 [35]. As shown in Fig. 6a, the reduction peak at about 1.05 V and the oxidation peaks at about 1.45 V and 2.30 V are marked as peaks i), ii) and iii), respectively. The corresponding D_Li is calculated and exhibited in Fig. 6b, revealing faster kinetics of the cathodic processes in comparison with the anodic processes. Figure 6c shows the discharge-charge profiles of the initial three cycles, which agree with the CV curves. Figure 6d depicts rate capacities of 1522, 1545, 1286, 943 and 643 mAh g−1 at currents of 0.1, 0.2, 1, 2 and 4 A g−1, respectively. The rate retention of the NiCo2O4 anodes is significantly improved by the optimized nanosheet architecture. The data described in Fig. 6e-g imply that the diffusion coefficient decreases with cycling [36]. This is common for metal oxide anodes, which go through a phase transition and SEI-layer evolution during the discharge-charge process [37,38]. The cycling performance of the NiCo2O4 nanospheres as a SIB anode is investigated (Fig. 8d). After 50 cycles at 0.5 A g−1, a sodiation capacity of 230 mAh g−1 is retained. Further, the EIS data of the microsphere and nanosphere electrodes after the first sodiation at 0.1 A g−1 are exhibited in Fig. 8e and f. The fitted impedance data give R_sei values of 75.1 Ω (microsphere electrode) and 24.4 Ω (nanosphere electrode), and R_ct values of 535.9 Ω (microsphere electrode) and 410.9 Ω (nanosphere electrode). What is more, SEM images of the nanosphere electrodes after the first sodiation and desodiation (Fig. 8g and h) and after 50 cycles (Fig. 8i) are exhibited. Compared with the microspheres as a SIB anode (Fig. 4e and f), the cycled nanospheres show a higher stability of the sphere-like morphology.
Conclusions
In this work, the different morphology stabilities of conversion-type NiCo2O4 anodes for the LIB and SIB are studied. NiCo2O4 microspheres show serious morphology loss during discharge-charge processes against Li or Na, due to the accumulation of stress. To maintain the electrode architecture, nanostructures such as nanosheets and nanospheres are employed. NiCo2O4 nanosheets on Ni foam are prepared as LIB anodes and exhibit effectively enhanced cycling performance, with a capacity of 1092 mAh g−1 after 100 cycles at 0.5 A g−1 in comparison with 566 mAh g−1 for the NiCo2O4 microspheres. NiCo2O4 nanospheres are synthesized as SIB anodes and show a higher structural stability during cycling, while the NiCo2O4 microspheres undergo pulverization. In addition, a significant difference between the lithiation and sodiation capacities of NiCo2O4 materials reveals that Na+ ions are only partially intercalated into NiCo2O4 and that the conversion reaction is limited by strain. Structure optimization is an effective strategy for enhancing conversion-type anodes for the LIB and SIB.
Figure 7a-f present SEM (a, b, d and e) and TEM (c and f) images of the precursors (a-c) and NiCo2O4 nanospheres (d-f). Figure 7g-i show the SEM-EDS elemental mapping of the nanospheres.
Table 1. Cycling performance of metal oxide anodes for LIBs (columns: Materials | Cycling performance | Reference).
Individual recognition of opposite sex vocalizations in the zebra finch
Individual vocal recognition plays an important role in the social lives of many vocally active species. In group-living songbirds the most common vocalizations during communal interactions are low-intensity, soft, unlearned calls. Being able to tell individuals apart solely from a short call would allow a sender to choose a specific group member to address, resulting in the possibility to form complex communication networks. However, little research has yet been carried out to discover whether soft calls contain individual identity. In this study, males and females of zebra finch pairs were tested with six vocalization types - four different soft calls, the distance call and the male song - to investigate whether they are able to distinguish individuals of the opposite sex. For both sexes, we provide the first evidence of individual vocal recognition for a zebra finch soft unlearned call. Moreover, while controlling for habituation and testing for repeatability of the findings, we quantify the effects of hitherto little studied variables such as partners’ vocal exchange previous to the experiment, spectral content of playback calls and quality of the answers. We suggest that zebra finches can recognize individuals via soft vocalizations, therefore allowing complex directed communication within vocalizing flocks.
the experiment and during couples' normal calling interactions prior to experimentation. Pairs are known to differ in the quality of their bond (fitness) 21 and calling patterns 10,22 , and these characteristics might be related 11 . Therefore, we also examined whether the strength of the relationship before the experiment had an influence on the number of answers during the playback experiment. Finally, we explored whether differences in spectral features of the playback stimuli influenced the observed response. If soft calls could be used to recognize individuals, they would allow birds to address specific individuals in vocalizing flocks.
Results
Call rate throughout the experiment. First we checked whether the birds habituated to our playback design. We found no consistent decrease in vocal activity during the experiment (see Supplementary Fig. S1): birds continued to call at a similar rate throughout, with no systematic variation in the number of calls. There was no statistical difference in the number of calls between the first and the last bin for males (p = 0.48) or females (p = 0.10), suggesting no habituation to the experimental design.
Responses to playback: latency. Playback is an established method to test whether animal vocalizations contain individual identity, and we expected that different levels of familiarity between caller and receiver elicit different vocal responses.
First, we examined whether the latency of response to the playback calls differed by familiarity. In males we found a single significant difference (Fig. 1a, see Supplementary Table S1): individuals answered faster to stack calls of their mate than to those of non-mates during the second trial (trial B, mean ± SD, mate: 0.52 s ± 0.37 s, familiar: 0.65 s ± 0.42 s, unfamiliar: 0.64 s ± 0.41 s; differences between fitted values, i.e. effect size: stack-m - stack-f 0.19 s faster, p = 0.0018; stack-m - stack-uf 0.19 s faster, p = 0.0018). The difference between the two trials was due to a slower response to both familiar and unfamiliar calls in trial A.

Figure 1. Latency during playback. Latency to the first answering call for different playback series (analysed time interval: 0-1.5 s after the onset of the playback stimulus). Colours represent the type of playback call broadcast; dots indicate individual calls (raw data). Familiarity categories: m = mate of the focal bird; f = familiar individual; uf = unfamiliar individual. For males (a) and females (b) in both trials the computed 95% credible intervals (error bars) as well as the fitted value (black symbols) are shown. Significant differences between mate and the other familiarity categories are marked by black asterisks.
Results regarding the song and remaining four calls were equivocal between trials. However, all statistically significant differences in each trial always went towards the expected direction: the focal individual responded faster to the partner than to a familiar or unfamiliar male.
With kackle calls, responses to the mate were significantly faster than to both familiar and unfamiliar calls only during the second trial (mean ± SD, mate: 0.75 ± 0. …). The answers to songs of the mate were also significantly faster than those to familiar and unfamiliar songs for the first but not for the second trial, despite trends in the expected direction (mean ± SD, mate: 0.66 ± 0. …). Taken together, our results demonstrate that females can recognize individual identity upon hearing males' stack calls. Although inconclusive, our results suggest that partners can also be distinguished from individuals of differing familiarity by the remaining call types.

Response to playback: number of calls per playback series. Second, we considered the total number of calls emitted during each playback series. Overall, the number of answers did not differ between the two trials (probability trial B > trial A; p = 0.60 for females and p = 0.75 for males).
In males, during trial A, the number of answers elicited by stack calls depended strongly on their familiarity. Male zebra finches responded with a higher number of calls to the stack calls of their pair-bonded females (mean ± SD of number of calls, 171.8 ± 152.9) than to those of a familiar (29.7 ± 22.1; probability stack-m < stack-f = 0.0003) or unfamiliar female (69 ± 52.6, probability stack-m < stack-uf = 0.0064; Fig. 2a). No other comparison reached statistical significance in males. In females, the number of answers showed no strong relationship with the level of familiarity of the caller (Fig. 2b). These results suggest that males can recognize individuals by hearing females' stack calls.
Type of answering call. Males and females responded to playbacks using different call types. Females predominantly responded with stack calls (69.2% of the total answers), followed by distance calls (22.4%) (see Supplementary Fig. S2). In males we found a similar pattern, with the majority of their responses being stack calls (50.1%), followed by distance calls (17.6%) and hat calls (11.8%) (see Supplementary Fig. S3). Not only the number of calls or their latency, but also the quality of the answer (the type of call used to answer) might differ when responding to different familiarity levels. However, we did not find any such differences (see Supplementary Tables S3 and S4 for females and males, respectively). Thus, we conclude that the level of familiarity did not influence the type of response to playback.
Habituation within playback series. Because the specificity of the answers to the different familiarity levels could change during the experiment due to behavioural habituation, we compared results obtained during the first and last 30 playback calls of each series with those of the complete series (300 calls). Regarding the number of calls, in males we found only two statistical differences between the datasets for both the first and last playback calls (see Supplementary Figs S4a and S5a), and these were in the expected direction: more calls in response to the mate than to the other familiarity levels. In females, we found only three differences between responses to the first 30 playback calls and the complete dataset, and only one difference between responses to the last 30 playback calls and the complete dataset (see Supplementary Figs S4b and S5b). Thus, for females too, we found limited differences between datasets, all in the expected direction with more answers to mates. Interestingly, considering only the first playback calls in the first trial, we found more answers to mates compared to both other levels of familiarity when answering male songs.
When comparing the full dataset of response latencies to the first 30 calls (see Supplementary Fig. S6 and Tables S5 and S6) and to the last 30 calls (see Supplementary Fig. S7 and Tables S7 and S8), there were very few differences in response to experimental conditions over time. Only in the 30-call subsets did males show significant differences in the direction opposite to our expectations for kackle and hat calls, i.e. slower answers to the mate than to non-pair-bonded females. This might be due to the rarity of elicited answers and the absence of a real pattern, leading to false-positive results.
For females, for which we showed that latency was very important (Fig. 1b), there was only one case out of 24 in each of the first and last 30 playback call datasets where the direction of the difference opposed our expectations. The majority of differences between latency to respond to the mate versus other familiarities observed in the full dataset were also present in the subset data, especially in the first trial (A) (see Supplementary Figs S6b and S7b); a change in response to experimental treatment over time occurred during the last 30 playback calls of the second trial (B), where most of the differences with the full dataset are concentrated (see Supplementary Table 8). These results confirm that the quality of the answer did not change much throughout the experiment and that longer playback series produced more reliable results.
Relation between calling behaviour during baseline and experiment. We investigated whether the calling relationship of mates before the experiment influenced the number of answers during the experiment. We first asked which call combinations showed repeatable patterns during the baseline period and, therefore, might result in a predictable answering rate during the playback experiment (see Supplementary Table S9). We therefore considered only the stack-stack exchanges, whose percentage of answers was consistent during baseline recording for males (repeatability = 0.995 ± 0.004) and females (repeatability = 0.83 ± 0.13). Subsequently, we correlated the percentage of answers during the playback experiment (separately for trials A and B) with the percentage of answers during the baseline (mean of both days). There was no relationship between baseline and experiment in females (trial A p = 0.7905, trial B p = 0.4784), but we detected a negative relationship in males (trial A p = 0.0325, trial B p = 0.0523) (see Supplementary Fig. S8). The relationship between responses during the baseline and experimental periods in males demonstrates that vocal performance during the experiment is related to preceding vocal relationships.

Figure 2. Number of calls during playback. Number of answering calls that focal individuals emitted during the different playback series. Raw data for males (a) and females (b) and both trials (symbols indicate responses of individual birds) and the computed 95% credible intervals (error bars) as well as the fitted value (black symbols) are shown. Colours represent the type of playback call broadcast. Familiarity categories are indicated by letters: m = mate of the focal bird; f = familiar individual; uf = unfamiliar individual. Significant differences between mate and the other familiarity categories are marked by black asterisks.
How the variability of call type spectral features is related to the variation in conspecific response. Finally, we identified the most individually distinct call types and asked whether the individual variability of each call type was a good predictor of the conspecific response (quantified as the number of calls and latency to respond). We aimed to demonstrate that what we interpret as recognition is not a by-product of easier discrimination. According to our analysis, the hat was the most individually distinct call type for both males and females. The stack call, although unambiguously recognised by both males and females, was not the most distinct call type (see Supplementary Fig. S9a). Furthermore, the magnitude of the response (response to the mate minus response to the familiar/unfamiliar) was not correlated with the index of individual distinctiveness (see Supplementary Fig. S9b). Because the most individually distinct call types were not the ones recognised best, we demonstrated that there was recognition beyond discrimination.
Discussion
Females and males unambiguously showed individual recognition of stack calls produced by the opposite sex. We found a differential response to distinct familiarity levels, both in the number and timing of the response. Females only used timing to demonstrate recognition: they vocalized at similar rates when responding to the playback of different familiarity levels, but responded more quickly to the calls of their mate versus non-mates. Intriguingly, males used multiple strategies in different trials to demonstrate recognition: a higher number of calls in response to mates during the first and a shorter latency to respond during the second. Furthermore, in females, consistent differences between answers to their mate and those to at least one other level of familiarity were detected for two call types (hat and distance call). Only the latency and the number of answering calls, regardless of their type, differed between familiarity levels.
Until recently, soft calls have been considered "a background hum in which other calls are embedded (…), not directed at specific individuals and do not stimulate specific replies" 7 . In contrast to this view, growing evidence demonstrates that soft calls are indeed directed at individuals and can elicit specific replies 10,11,22 . In group contexts, addressing specific subjects is a prerequisite for effective communication and we now provide the missing link explaining how this is achieved: we show that soft calls can be assigned to individuals and that the latency of an answer can provide specific information. For stack calls, we estimated a difference between the answer to the mate and familiar or unfamiliar individuals of approximately 188 ms for males and 125 ms for females. This delay is roughly similar to the mean latency of calls used as replies and double the length of stack calls 10 ; therefore, this response gap can be biologically relevant and directly used within a communicating group. Individual vocal recognition using contact calls has rarely been investigated in Passeriformes although different functions have been proposed: for example, Large-billed Crows (Corvus macrorhynchos) can recognize strangers' loud calls 23 , Long-tailed bushtits (Aegithalos caudatus) kin's contact calls 6 , Chestnut-crowned Babblers (Pomatostomus ruficeps) group members' contact calls 24 and Silvereye (Zosterops lateralis) mates' calls 25 . However, in most of the studies it was unclear whether these calls were learned and a large proportion of the typical repertoire remained untested. Therefore, more research is needed to identify common themes in the evolution of recognition of soft vocalizations and to establish whether addressing specific individuals in a vocalizing group is common among Passeriformes.
Females tended to respond differently to their partner's vocalizations in most call categories tested, whereas males' answers only differed when responding to stack calls. We cannot yet explain this result, but differential discrimination abilities 26 and sex-specific roles in the communication process have been proposed 27 . Despite several lines of evidence indicating that females might be able to recognize individual identity from all soft call types, we could only confirm this for hat and stack calls. This may be due to the differing functions of specific call types. For example, recognition of the hat call might be important in identifying the alarming bird, a colony member or an external individual, whereas recognizing the stack call may serve to maintain vocal contact with a mate. Tet and kackle calls, in contrast, are part of the private communication occurring at the nest where other individuals are not present and thus, individual recognition may be less important 12,19 . Breeding calls, such as the kackle, become more common once a couple is nesting 11 . The stack call, on the other hand, is one of the most common call types in non-breeding groups 10, 11 , a situation which resembles the context of our playback experiment, which might explain why stack calls were promptly recognized in our study.
Notably, our results entail a process of comprehension learning of the stack call, which is important because comprehension learning is a prerequisite for the evolutionary origin of vocal learning 10,28. Soft calls are generally used when partners are in close proximity to each other and can therefore see each other 7,12,19; hence, cues from modalities other than the acoustic one are available to facilitate individual recognition. Therefore, identifying individuals is indeed not the sole function of these vocalization types; the encoded identity can be used in communication between specific individuals in a group. Our results suggest that birds are integrating information about call type and call identity to tailor vocalizations and provide the correct answer type and time. Previous observations and multiple independent lines of evidence led us to postulate that our findings agree with the hypothesis that vocal learning is driven by social complexity 29. Learning acoustic parameters is a precursor for any subsequent learned modification of the spectral features of a vocalization 30. Moreover, soft calls are encoded in a high-order telencephalic nucleus of the motor pathway 10, which is fundamental for the control of learned vocalizations, and may facilitate coordination of communication 31,32. Therefore, although vocal recognition is present in many vocal non-learners 5 and comprehension learning may just be a prerequisite rather than a driving force, we suggest that unlearned calls in vocal learners might provide a model to better understand the origin of vocal learning capabilities.
Until now, only part of the repertoire of the zebra finch had been tested for individual recognition. As for many other Passeriformes 33, song has repeatedly been shown to contain individual characteristics that can be used for identification 7,27,34. In most cases this has been proven in simultaneous choice tests 13,35,36. In contrast, a higher vocal response towards the partner's song was only reported once and exclusively when vocalizations emitted towards the speaker were taken into account 36. Our results regarding song are equivocal; females showed a differential response during their first trial, confirming previously published results, but this response did not hold during the second trial, which could partially be explained by habituation to the experimental design for this vocalization type. In addition to song, several studies investigated the distance call of both sexes, attempting to assess whether these vocalizations contain individual information and whether this information is used for identification [13][14][15]37. In our experiment, we found sex-dependent behavioural responses to recordings of distance calls from individuals of different familiarity. Female zebra finches displayed more and faster responses towards their mate's calls than towards familiar and unfamiliar distance calls, thereby confirming previously reported discrimination capabilities 14. Conversely, in males, no such differences were found, despite trends in the expected direction. Our results thus diverge from previous studies, which demonstrated a significantly higher number of answers during the replay of the mate's than of familiar distance calls. However, previous studies either regarded only distance calls as answers 15 or considered only the neuronal response in males' high-order auditory areas (caudomedial nidopallium, NCM) 38. Furthermore, the difference between our and previous studies might be due to differences in the selection method for playback stimuli. We extracted distance calls from the normal communication flow, i.e. calls uttered when both partners remained in the same sound-proof box. In contrast, previous studies used provoked distance calls elicited by visually separating the individuals 14-17, 39, 40. Because social context can influence the acoustic structure of a bird's call 40, it is possible that provoked calls emitted by birds in isolation exhibit enhanced call urgency in order to initiate contact with their partner. This call urgency might in turn increase the motivation of focal subjects to respond to these calls in a playback setup, possibly explaining why males in previous experiments showed a higher vocal response to their partner's calls than in our study.
We used natural rather than synthetic vocalizations for playback to ensure that stimuli contained all necessary acoustic structures, as altered call perception is possible when using artificially created vocalizations 41,42. Additionally, we employed long playback sessions to increase power for our analyses. Most playback studies use very few calls to avoid habituation. Instead, we attempted to mitigate habituation by continuously varying the interval between successive calls. Despite the long duration, calling rate did not decrease during the experiment or during the single series, indicating that the birds did not habituate to the playback. Additionally, differences between the results obtained for the first 30 playback calls compared to those for all 300 calls were negligible. When comparing the last 30 playback calls to the entire dataset, the only noteworthy differences occurred during females' second trials, which may indicate a certain degree of habituation. This is remarkable because we found that low numbers of answers (e.g. familiar kackle and hat in males, see Supplementary Figs S6a and S7a) actually increased the likelihood of improbable and false-positive results. We repeated the entire experiment on two consecutive days to assess whether birds habituated to the playback design. Indeed, we found differences between trials, but not concerning the stack call, which was always answered differentially according to familiarity level. The differences between trials are difficult to interpret and should be considered when planning experiments that contain multiple presentations of the same stimulus. Finally, we did not observe a direct effect of the audience pairs, the social context required for answer specificity 14,15, as their calls did not influence the results. Specifically, backpack microphones worn by the focal birds assured the individuality of the recordings 20, and the effect of the audience, quantified in our models, was limited.
Stress is also a possible confounding factor in our experimental setup. Notably, corticosterone levels in zebra finches increase 24 hrs after the separation of an established pair 43 . Moreover, these hormones are associated with a reduction in vocal discrimination ability 40 . Therefore, mate separation before and during the playback experiment might have increased stress levels in the test subjects, thereby impairing their discrimination abilities. Although we endeavoured to reduce the stress for focal birds by limiting the separation period to one night before the first experimental trial, the different personalities of the test subjects may have led to differences in stress response 44 . In addition, the quality of the pair bond itself might influence the level of stress birds experience when separated from their mate; couples sharing a stronger bond might be more strongly affected by separation than those having weaker pair bonds. This might partially explain why in males, which are highly repeatable in their response 22 , we found a negative relationship between the proportion of answers during playback experiments and the baseline. Unfortunately, the small sample size of our study makes it difficult to generalize these findings; however, the correlation with measurements of pair strength is worth further investigation.
Vocal individual recognition of the so-called "soft calls" has not previously been tested in zebra finches; we provide the first evidence that at least one of these call types, the stack call, contains individual identity despite not being more individually distinct than other soft calls. This finding implies that soft unlearned acoustic signals are sufficient to determine a caller's identity and that visual cues are not required. We have identified a mechanism underlying how birds vocally interact in a group: employing differential latency times when answering different subjects allows a caller to address individuals specifically. Vocal recognition is a fascinating aspect of vocal communication because the ability to recognize individuals in a group of vocalizing conspecifics is a prerequisite for complex communication networks.
Material and Methods
Ethics statement.

Animals and housing conditions. A total of 12 adult zebra finches (six pairs) served as the focal birds for the experiment, plus 14 additional birds which served as the audience (i.e. as company for the focal birds) 15. All pairings were "forced", i.e. couples were formed by randomly selecting unrelated individuals from the breeding facilities of the Max-Planck-Institute for Ornithology, Seewiesen, Germany. Birds were kept on a 13/11 h light/dark cycle, at 24 °C and 60-70% humidity. Food (mixed seeds and "egg food"), fresh water and cuttlebone were provided ad libitum. We performed all experiments with birds from forced pairs in the non-breeding condition. All couples had been together for at least six months, raised at least one brood and had been housed without nesting material for three months prior to the experiment. Zebra finch couples were housed in single pair-cages (123.0 cm × 37.0 cm × 38.5 cm) in two separate rooms with three experimental couples per room. Within each room, couples could see and hear each other, whereas there was no acoustic or visual contact between pairs housed in different rooms, making these two groups "unfamiliar" to each other. Experimental pairs were housed with other breeding pairs, seven of which served as "audience couples" during the playback experiments.
Experimental timeline, sound recording and playback. Zebra finch couples were moved to sound-proof boxes one week before the experiment to allow for acclimatization to the new conditions. Animals were equipped with custom-made light-weight (less than 5% of average body weight) wireless microphone transmitters fitted on their back via a leg-loop harness as previously described 20 which recorded continuously throughout the experiment. To determine the vocal relationship between males and females 10, 22 , we audio recorded each pair for three consecutive days, using the recordings of the first and third day as the baseline for subsequent analysis of calling patterns. Audio was scored for four hours a day (12:00 to 16:00). Each sound-proof wooden box was equipped with a general microphone (TC20; Earthworks, USA) which was used to extract playback stimuli.
To create the familiarity level "familiar" (equivalent to a group member in the wild), we moved a couple from the same housing facility as the focal pair into their cage during the evening of day 3 (end of baseline recording). The two pairs shared a cage for approximately 24 h, separated by a wire mesh allowing acoustic and visual interaction. Their calls served as "familiar stimuli" during the playback experiments 37,38 . During the night of day 4, the "familiar" couple was removed and the male and female of the focal couple were separated. Each focal bird was placed with an unfamiliar, established pair: the "audience couple". This audience provides a social context that increases the specificity of the answers 14,15 and prevents social isolation 40 . Experiments were carried out during the morning and afternoon of days 5 and 6, resulting in one trial per bird per day. The time of testing (morning/ afternoon) was randomized, and audience couples were changed between trials (i.e. in the evening of day 5).
The replay of calls was controlled via a computer connected to an amplifier (CS-PA1, SINTRON Vertriebs GmbH, Germany), and calls were broadcast via a loudspeaker (KFC-1761S, Kenwood Electronics, UK) placed at the back wall of the sound-proof box. The sound level of the experimental signals, measured at a distance of 1 m from the loud speaker (Sound Meter, Model HD600, Extech Instruments, U.S.A), was adjusted to a peak value between 50.03 dB ± 0.87 dB (mean ± SD; minimum for the lowest call type, the tet) and 74.05 dB ± 1.15 dB (corresponding to the loudest call type, the distance call reflecting a typical level of a natural distance call) 39 , and was constant for all three familiarity levels of each call type.
The playback stimuli and the focal bird's calls were recorded (via external and backpack microphone, respectively) synchronously on separate audio channels for subsequent alignment. Each subject was presented with calls of three different individuals of the opposite sex, representing three familiarity levels: "mate" (m; partner of the focal individual), "familiar" (f; known individual), and "unfamiliar" (uf; unknown individual). Six different vocalization types were used for playback experiments: tet, stack, distance call, kackle, hat, and -in the case of females -song (for the original audio files see additional information). Playback calls were extracted from the general microphone recordings during the acclimatization phase and baseline period. In rare cases in which birds did not emit a specific call type during the sampling period, this type was omitted from the playback. Each vocalization was high-pass filtered (freq = 85 Hz), its amplitude normalized to 0.1 dB (maximal sample value) and the stimulus faded in and out to avoid rapid amplitude changes. Playback calls were presented in blocks, each block consisting of three series of playback calls of the same call type. Each series consisted of calls of an individual representing one of the three different familiarity categories (m, f, and uf). We used three randomly selected vocalizations of each call type per individual in order to mitigate pseudo-replication 45 . Within a series, each playback call was repeated 100 times for a total of 300 calls. To ensure that the playback was unpredictable, the inter-call intervals were changed randomly at each emission (within 2 ± 0.5 s, uniform distribution). Playback series within a block were interspaced by 70 ± 10 s of silence, and different call-type blocks by 130 ± 10 s of silence. The total duration of an experimental trial was approximately 2:45 h for males (15 series) and 3:30 h for females (18 series). The order of call-type blocks and of familiarity categories within blocks, as well as the order of single calls within series, were determined semi-randomly.
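As an illustration of the timing scheme just described, the sketch below (our R code with hypothetical stimulus names, not the authors' playback software) builds one 300-call series with a semi-random call order and inter-call intervals drawn uniformly from 2 ± 0.5 s:

set.seed(1)
stimuli <- sample(rep(paste0("call_", 1:3), each = 100))  # 3 exemplars x 100, shuffled
intervals <- runif(300, min = 1.5, max = 2.5)             # s; uniform 2 +/- 0.5 s
onsets <- cumsum(intervals)                               # onset time of each playback call
schedule <- data.frame(stimulus = stimuli, onset_s = round(onsets, 2))
head(schedule)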
Sound analyses and sorting of vocalizations. Sorting of vocalizations from audio files proceeded as previously described 10,20 . Briefly, sounds that exceeded a manually set amplitude threshold were extracted for further analysis. Using custom software written in Delphi Pascal for Windows (SoundExplorer; R. F. Jansen, MPIO, 2000; see ref. 11 for GitHub address), the following parameters of each sound were computed: average frequency, modal frequency, fundamental frequency (first peak), Wiener entropy, duration, and their standard deviations. The subsequent clustering process was based on a k-means clustering algorithm 46 . After noise detection and elimination, the results were refined manually: each cluster was checked and sorting errors were corrected, resulting in a separate cluster for each call type of a bird's repertoire 10,11 . Information about the call type and the timestamp was saved for each vocalization. We used this information to determine the temporal relationship between all possible call type combinations of mates during the baseline 10,11 and during the playback experiments. During the baseline period we considered all calls emitted within 0.5 seconds from the partner's call as answers. For each combination of call types, we calculated the number of answers and the proportion of answers from the total amount of emitted calls of that specific type (see ref. 22 for details on calculations).
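A minimal R sketch of this answer-scoring rule (our reconstruction under the stated 0.5-s window, not the authors' SoundExplorer pipeline; the timestamps and the choice of denominator are illustrative assumptions):

count_answers <- function(partner_t, focal_t, window = 0.5) {
  # a partner call counts as answered if at least one focal call
  # falls within (t, t + window] seconds of it
  answered <- vapply(partner_t,
                     function(t) any(focal_t > t & focal_t <= t + window),
                     logical(1))
  c(n_answers = sum(answered),
    prop_answers = sum(answered) / length(partner_t))
}

# hypothetical timestamps (s): two of three partner calls are answered
count_answers(partner_t = c(1.0, 4.2, 9.7), focal_t = c(1.3, 5.5, 9.9))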
How the variability of call type spectral features is related to the variation in conspecific response. We then explored whether the most individually distinct call types were the ones that were easier to discriminate. We extracted 14 acoustic features 12 from each call sonogram used as a playback stimulus (i.e. for each call type, three calls per individual of each familiarity), which were then used to conduct a principal component analysis (PCA) 47 for each call type. We extracted the mean frequency and its standard deviation, median frequency, skew, kurtosis, spectral flatness measure, entropy, mode frequency, frequency precision of the spectrum, peak frequency, fundamental frequency, dominant frequency, maximum dominant frequency, and duration (R package "seewave") 48. Subsequently, we used the first two principal components as explanatory factors in a linear discriminant analysis (R function "lda") 49 with familiarity level as a predictor. We used the parameters obtained to predict (R function "predict") 49 the proportion of cases in which the calls were assigned to the correct familiarity. We ran this analysis for each focal bird and present the average and SD of the incorrect assignments (see Supplementary Fig. S9). Call types in which individuals are very distinct will have a lower proportion of incorrectly assigned calls. Lastly, we used the within-call-type variability as a predictor of the magnitude of the conspecific response (number of calls and latency; response to the mate minus response to the familiar/unfamiliar; for latency, the average for each individual was taken) in a linear mixed model with random factors as described below.
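The distinctiveness index described above can be sketched in R as follows (with a hypothetical feature matrix; calls are grouped by caller, and misassignment rates were averaged per focal bird):

library(MASS)
set.seed(1)
feats <- matrix(rnorm(9 * 14), nrow = 9)     # 9 calls (3 callers x 3 calls), 14 features
caller <- factor(rep(c("m", "f", "uf"), each = 3))
pc <- prcomp(feats, scale. = TRUE)$x[, 1:2]  # first two principal components
fit <- lda(pc, grouping = caller)            # linear discriminant analysis
pred <- predict(fit)$class
mean(pred != caller)                         # proportion of incorrectly assigned calls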
Statistics. All statistical analyses were performed using R 50 in a Bayesian framework. We used linear mixed-effect models 51 to analyse the effect of different playback stimuli on the total number of response calls emitted during each playback series, and on the latency of the first answering call for each playback series (upper limit 1.5 s) separately by sex. Before interpreting the results, we checked whether model assumptions were met by inspecting the residuals for normality, homoscedasticity, and lack of remaining pattern. Both the total number of responses and latency were square root transformed to approximate normality. Three categorical variables served as fixed effects in both models: familiarity (3 levels: mate, familiar, and unfamiliar), playback call type (5 levels: distance, stack, kackle, hat, and tet; 6 th level for females: song) and trial (2 levels: trial A and trial B), as well as all interactions. We included individual identity (12 levels), audience (i.e. the identity of the audience pair, 7 levels) and playback order (i.e. the order in which playback series were broadcast, 15-18 levels) as random factors. Model structure was based on the study design rather than model selection. Familiarity was expected to influence both outcome variables (total number of response calls and latency of the first answering call). Playback type and trial were also hypothesized to affect the outcome differentially by familiarity in the distinct playback call types. Therefore the interactions of all these variables were included. In order to obtain parameter estimates we used Maximum Likelihood (ML) because we were most interested in fixed effects 52 . We calculated credible intervals (CrI) using the function "sim" from the R package "arm" 53 . A total of 10000 values were simulated from the joint posterior distribution of the model parameters. If the CrI of different playback categories did not overlap, the results were considered significantly different from each other. In cases where CrIs overlapped, but the fitted values differed largely between playback series, a derived calculation from the aforementioned simulated values was performed. For this purpose, simulated values of the two groups of interest were compared (10000 comparisons), and we reported the number of cases in which the value of the first group was larger than the one of the second group. If this condition held true for less than 5% of the cases, the mean response of the first group was regarded as significantly smaller than that of the second.
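A condensed R sketch of this inference scheme (hypothetical data frame and abbreviated model formula relative to the full design described above; coefficient positions depend on factor coding):

library(lme4)
library(arm)
m <- lmer(sqrt(n_calls) ~ familiarity * calltype * trial +
            (1 | bird) + (1 | audience) + (1 | order),
          data = dat, REML = FALSE)      # ML fit, as in the text
s <- sim(m, n.sims = 10000)              # draws from the joint posterior
b <- s@fixef                             # 10000 x n_coefficients matrix
# 95% credible interval for one fixed-effect coefficient:
quantile(b[, 2], probs = c(0.025, 0.975))
# derived comparison: posterior probability that coefficient 2 exceeds coefficient 3
mean(b[, 2] > b[, 3])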
To determine whether birds had habituated to the playback experiment, we investigated changes in calling rate throughout the experiment. First, we counted the events occurring in 500 s bins (roughly the length of a playback series) for each bird and trial. We then performed a linear mixed model with the number of calls as the dependent variable and bins (18 levels for males, 21 for females) as the explanatory variable with individual ID as a random factor. The parameters were simulated 10000 times to estimate fitted values and CrI from the outcome of the model. This allowed an assessment of patterns and enabled us to test whether there was a statistical difference in the number of calls between the first and the last bins.
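In R, this habituation check could look like the following (assuming a hypothetical data frame 'calls' with one row per call and a timestamp column in seconds):

library(lme4)
calls$bin <- factor(floor(calls$time_s / 500))   # 500-s bins
counts <- aggregate(list(n = calls$time_s),
                    by = list(bird = calls$bird, trial = calls$trial, bin = calls$bin),
                    FUN = length)                # calls per bird, trial and bin
m <- lmer(n ~ bin + (1 | bird), data = counts, REML = FALSE)
# fitted values and credible intervals would then be simulated with arm::sim,
# as in the main models, to compare the first and last bins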
We tested whether the familiarity of the playback affected the type of answer. For each trial we scored and counted the number of calls within 0.5 s of every playback call. We then calculated the proportion of each type of answer out of the total answers for each bird. Because very low counts might easily distort the proportions, we considered only playback series that received at least 5 answers (dataset included as additional information, see Supplementary Table 3). For each playback series we compared the proportion of call types across the three familiarities using a non-parametric test (Kruskal-Wallis rank sum test). We ran the test only when there were at least 8 non-null values per series (i.e. the sample size was at least 8 times the number of explanatory variables) 54.
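A minimal R example of this comparison for a single playback series (with hypothetical answer proportions for 9 birds, 3 per familiarity level):

prop_answer <- c(0.70, 0.60, 0.80, 0.50, 0.65, 0.75, 0.55, 0.70, 0.60)
familiarity <- factor(rep(c("m", "f", "uf"), each = 3))
if (sum(!is.na(prop_answer) & prop_answer > 0) >= 8) {
  kruskal.test(prop_answer ~ familiarity)   # non-parametric rank sum test
}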
Additionally, we investigated whether individuals' answering rate could be explained by the pair's vocal relationship established before the experiment (during "baseline"). Among all call combinations used during the baseline, we selected only those in which each bird had used at least 3 calls, in order to rule out inconsistent and rare combinations. For the resulting combinations we calculated a repeatability index 22, because only for a repeatable behaviour can we expect a consistent response during the experiment. We calculated repeatability according to the F ratio: the mean squares among groups divided by the mean squares within groups 55. Finally, for repeatable call combinations, we quantified the correlation between the proportion of answers (i.e. of calls emitted within 0.5 s) during playback and baseline.
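The repeatability index described here (mean squares among groups over mean squares within groups) can be computed in R from a one-way ANOVA; the baseline proportions below are hypothetical values for three birds over the two baseline days:

prop_answers <- c(0.52, 0.50, 0.31, 0.33, 0.70, 0.68)
bird <- factor(rep(c("b1", "b2", "b3"), each = 2))
ms <- anova(aov(prop_answers ~ bird))[["Mean Sq"]]
ms[1] / ms[2]   # MS among birds / MS within birds; higher = more repeatable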
Advances in Understanding the Role of NRF2 in Liver Pathophysiology and Its Relationship with Hepatic-Specific Cyclooxygenase-2 Expression
Oxidative stress and inflammation play an important role in the pathophysiological changes of liver diseases. Nuclear factor erythroid 2-related factor 2 (NRF2) is a transcription factor that positively regulates the basal and inducible expression of a large battery of cytoprotective genes, thus playing a key role in protecting against oxidative damage. Cyclooxygenase-2 (COX-2) is a key enzyme in prostaglandin biosynthesis. Its expression has traditionally been associated with the induction of inflammation, but we have shown that, among other benefits, the constitutive expression of COX-2 in hepatocytes reduces inflammation and oxidative stress in multiple liver diseases. In this review, we summarize the role of NRF2 as a main agent in the resolution of oxidative stress and the crucial role of NRF2 signaling pathways during the development of chronic liver diseases, and finally we relate its action to that of COX-2, with which it appears to act as a partner in providing a hepatoprotective effect.
Introduction
In 1956, Harman described the free radical theory [1]. Since then, reactive oxygen species (ROS) have been proposed as the main cause of aging and inflammatory diseases. ROS include both free radicals and oxygen intermediates, such as the superoxide radical (O2•−), hydrogen peroxide (H2O2), hydroxyl radicals (•OH) and singlet oxygen (1O2). The main source of ROS in vivo is aerobic respiration [2]. However, ROS are also produced by peroxisomal β-oxidation of fatty acids [3], in the metabolism of xenobiotics by cytochrome P450 [4], during the stimulation of phagocytosis by pathogens or lipopolysaccharide (LPS) through nicotinamide adenine dinucleotide phosphate (NADPH) oxidases [5], and/or by the activity of tissue-specific cellular enzymes such as cyclooxygenase [6].
ROS levels are tightly regulated and normally contribute to cellular and tissue homeostasis as signaling molecules. Conversely, an excess of ROS generation and/or impaired antioxidant activity under acute and chronic oxidative stress are associated with metabolic "reprogramming," inflammation, tissue damage/dysfunction, and toxicity, leading to senescence or death [7]. Thus, the free radical theory was replaced by the redox hypothesis, in which oxidative stress is defined as the imbalance between oxidant and antioxidant mechanisms in favor of the former, triggering a disturbance in oxidation-reduction reactions which leads to cellular damage through the oxidation of DNA, RNA, carbohydrates, proteins, and lipids [8].
When considering the dual role of ROS, from their physiological role as second messengers to their involvement in inflammation and tissue damage, it is necessary for cells to have mechanisms that regulate redox homeostasis, maintaining antioxidant, anti-inflammatory and detoxification responses. Nuclear factor erythroid 2-related factor 2 (NRF2) regulates the expression of multiple antioxidant and cytoprotective proteins and enzymes and is considered the main mediator of cellular adaptation to redox stress [9].
Cyclooxygenases (COX-1, COX-2) are key actors in the biosynthesis of prostanoids. PTGS1, the COX-1-encoding gene, is constitutively expressed in many tissues, whereas PTGS2 (prostaglandin synthase), the COX-2-encoding gene, is expressed and induced by different stimuli in several tissues and cell types; in the liver, however, COX-2 expression is restricted to those situations in which proliferation or de-differentiation occurs [10]. Thus, adult hepatocytes only express COX-2 under pathological conditions. Notwithstanding, our group has demonstrated that constitutive COX-2 expression, specifically in hepatocytes, protects from liver injury in several models [11][12][13][14], which has allowed us to hypothesize that the induction of COX-2 plays a protective role as a physiological response against liver injury, in part by reducing hepatic recruitment and infiltration of neutrophils, significantly attenuating oxidative stress and hepatic apoptosis, increasing autophagic flux and decreasing endoplasmic reticulum (ER) stress.
Numerous excellent reviews have been written on how NRF2 controls the cellular redox state (see for example [15]). In this review, we explore key aspects of the control of NRF2 expression and its function as a regulator of the antioxidant response, as well as other aspects beyond this function, focusing on its relevance in liver diseases. In addition, we explore the possible role it may play in the observed hepatoprotective COX-2-dependent response.
NRF2 as a Sensor of Cellular Redox State
NRF2 is continuously produced by the NFE2L2 gene and is immediately degraded through the ubiquitin-proteasome system. This apparently futile mechanism is extremely useful in that it allows cells to respond rapidly to potentially harmful oxidative and electrophilic challenges.
A canonical pathway for NRF2 stabilization and degradation is widely studied and accepted. Under non-stressed conditions, NRF2 localizes to the cytoplasm, where it interacts with the actin-binding protein Kelch-like ECH associating protein 1 (KEAP1). KEAP1 is a homodimeric protein that couples NRF2 to the E3 ligase complex formed by Cullin 3 and RING-box protein 1 (CUL3/RBX1). Under homeostatic conditions, the N-terminal domain of the KEAP1 homodimer binds a molecule of NRF2 that is rapidly degraded by the ubiquitin-proteasome 26S pathway via CUL3/RBX1 [9,16]. Under oxidative stress conditions, KEAP1 acts as a redox and electrophile sensor that undergoes cysteine modification, mainly at C151, C273 and C288, which is critical for its ability to repress NRF2; NRF2, after phosphorylation at serine (Ser) 40 by protein kinase Cδ (PKCδ), is translocated to the nucleus [17] (Figure 1). In the nucleus, NRF2 is a basic region leucine zipper (bZip) transcription factor that forms heterodimers with the small musculoaponeurotic fibrosarcoma (MAF) proteins K, G, and F, and recognizes an enhancer sequence called the antioxidant response element (ARE) [18]. AREs are present in the regulatory regions of more than 250 genes (ARE genes) involved in a wide range of homeostatic mechanisms related to metabolism and redox signaling, inflammation, and proteostasis [9].
The repression of BACH1 is dominant over NRF2 activation and, for HMOX1 transcription (the HO-1-encoding gene), inactivation of BACH1 is a prerequisite for NFE2L2 induction by allowing binding of NRF2 already present in the nucleus [21]. Many of the NRF2 target genes were identified thanks to the use of mice deficient in NRF2 (NRF2 KO) or KEAP1 (KEAP1 KO), or through the administration of small molecules that pharmacologically activate NRF2, usually through kinases that phosphorylate the transcription factor, interfering with its binding to KEAP1 [22].
Regarding NRF2 degradation, an alternative pathway to KEAP1 involves glycogen synthase kinase 3 (GSK-3). This kinase phosphorylates NRF2, targeting it for ubiquitination upon binding to another E3 ubiquitin ligase, the F-box/WD repeat-containing protein 1A (β-TrCP), together with the cullin (CUL) 3/RING-box protein (RBX) complex (Figure 2). Upon activation of the protein kinase B (AKT) pathway, under stress conditions for example, AKT is able to phosphorylate and inhibit GSK-3, thus preventing NRF2 degradation. Insulin and WNT signaling can also trigger the activation of the AKT pathway through the protein tyrosine phosphatase 1B (PTP1B) and the insulin-like growth factor 1 (IGF-1) receptor. Both pathways lead to the phosphorylation of GSK-3, prompting NRF2 activation, as demonstrated both in a model of acetaminophen hepatotoxicity [25] and in a model of cholangiocyte expansion [26]. Prior to oxidative stress, an increase in the adenosine monophosphate/adenosine triphosphate (AMP/ATP) ratio occurs, which can be sensed by the 5′ AMP-activated protein kinase (AMPK) [27]. In this situation, AMPK becomes activated and phosphorylates NRF2 at Ser550, causing its nuclear accumulation (Figure 2). In addition, AMPK inhibits GSK-3β, thus blocking NRF2 degradation [28]. Furthermore, WNT signaling controls the zonal expression of NRF2 in hepatocytes, maintaining a perivenous phenotype [29].
Figure 2. NRF2 regulation pathway independent of KEAP1: stabilization and degradation of NRF2 are mediated by different components in addition to KEAP1. Phosphorylation of NRF2 at Ser550 by AMPK leads to its translocation and stabilization in the nucleus. AMPK is activated when an imbalance in the AMP/ATP ratio occurs in the context of oxidative stress. Oxidative stress also activates AKT, which phosphorylates GSK3, inactivating it and blocking NRF2 degradation. When GSK3 is active, it phosphorylates NRF2 at Ser335 and Ser338, targeting it for ubiquitination through βTrCP binding, along with CUL3 and RBX. Finally, under ER stress conditions, the XBP1/HRD1 complex can also bind NRF2 and mark it for degradation. Abbreviations: AMPK, 5′ AMP-activated protein kinase; AKT, protein kinase B; GSK3, glycogen synthase kinase 3; βTrCP, F-box/WD repeat-containing protein 1A; ER, endoplasmic reticulum; HRD1, E3 ubiquitin ligase synoviolin; XBP1, X-box binding protein 1; P, phosphate. Created with BioRender.com (accessed on 6 July 2023).
Another proposed degradation mechanism for NRF2 involves the inositol-requiring enzyme 1 (IRE1)/E3 ubiquitin ligase synoviolin (HRD1) axis in the ER, whose expression is enhanced by activation of the X-box binding protein 1 (XBP1)-HRD1 arm under conditions of ER stress [30] (Figure 2). Finally, indirect regulation may occur through the modulation of miRNAs controlling KEAP1 and CUL3 [31].
NRF2 and Its Antioxidant Role
NRF2 deficiency causes an increase in ROS and oxidative stress in a cell type-dependent manner. Thus, mouse embryonic fibroblasts (MEFs) isolated from NRF2 knockout (KO) animals do not show increased ROS formation, in contrast to glioneuronal cells [32]. Similar discrepancies in oxidative status can also be found between organs, with the liver having the highest oxidative burden, whereas the aorta is protected from oxidative damage by increased levels of nitric oxide (NO) and mono-nitrogen oxides (NOx) [33]. The relationship between NO and oxidative stress could constitute a compensatory mechanism protecting endothelial cells without NRF2 from oxidative stress damage. The protective effect may also depend on NADPH oxidase (NOX)-4 levels, as occurs in fibroblasts from NRF2 KO mice that do not show increased ROS formation [34].
Mitochondria are responsible for more than 90% of oxygen utilization. Although most oxygen undergoes a complete reduction to water at the level of cytochrome oxidase, partial reduction accompanied by the generation of ROS can also occur, the most common being O2•− [2]. As aforementioned, NRF2 drives the expression of the main antioxidant enzymes in the cell for oxidative stress detoxification. Superoxide dismutase (SOD) catalyzes the conversion of O2•− into H2O2 and molecular oxygen. Subsequently, the enzyme catalase (CAT), glutathione peroxidase (GPx) and/or a thioredoxin (TRX)-dependent peroxiredoxin (PRX) reduce H2O2 to water [35]. Concerning the SOD enzyme, there are three isoforms encoded by three members of the SOD family in humans, mammals and most chordates: SOD1 (cytoplasmic Cu-ZnSOD), SOD2 (mitochondrial Mn-SOD), and SOD3 (extracellular Cu-ZnSOD). SOD1 is responsible for regulating basal levels of superoxide-derived oxidative stress produced in both the cytosol and mitochondria [36]. SOD2 is inducible by oxidative stress, hyperoxia, environmental pollutants, and inflammatory cytokines [37], whereas SOD3 is responsible for protection against exogenous and environmental stresses, which can come from cigarette smoke, traffic exhaust emissions, solar radiation, and even food [38].
Under stress conditions, the ER plays a key role in the generation of ROS that dictate the fate of protein folding and secretion [39]. In peroxisomes, CAT is the main oxidoreductase responsible for the metabolism of H2O2 produced after the action of peroxisomal oxidases and xanthine oxidase, which generate ROS [3]. At the plasma membrane, ROS generation begins with the rapid uptake of oxygen, the activation of NOX, and the production of the O2•− that SOD then rapidly converts to H2O2 [40]. To limit oxidative damage, the protein PKR-like endoplasmic reticulum kinase (PERK) and IRE1 phosphorylate and activate NRF2. In addition, reticulum or mitochondrial stress stimulates activating transcription factor 4 (ATF4), which cooperates with NRF2 to upregulate cytoprotective genes [41,42].
NRF2 beyond Its Antioxidant Role
In addition to its main role in shaping the antioxidant response, NRF2 is involved in many other cellular pathways, at both physiological and pathological levels.
NRF2 in Autophagy and Protein Degradation
Autophagy is a transcriptionally controlled process that ensures the degradation of misfolded, oxidized or altered proteins to maintain cellular proteostasis. The most prevalent form of autophagy is macro-autophagy, and during this process the cell forms a double-membrane sequestering compartment termed the phagophore, which matures into an autophagosome. Following delivery to the vacuole or lysosome, the cargo is degraded, and the resulting macromolecules are released back into the cytosol for reuse. Given the role of NRF2 as a sensor of oxidative stress, it is not surprising that a connection is established between this factor and macro-autophagy. In fact, the cargo protein sequestosome-1/ubiquitin-binding protein p62 (SQSTM1/p62) interacts with KEAP1 by directing it to the phagophore (precursor of the autophagosome), releasing NRF2 from its inhibition. In addition, SQSTM1 contains an ARE element and thus can be transcriptionally regulated by NRF2, creating a regulatory cycle [43,44]. In addition to SQSTM1/p62, NRF2 is able to activate genes linked to autophagy initiation, cargo recognition, autophagosome formation, elongation, and autolysosome clearance [45] (Figure 3).
NRF2 also participates in the control of oxidized protein degradation.These oxidized proteins can be eliminated by a type of autophagy called chaperone-mediated autophagy (CMA), characterized by the presence of the receptor lysosomal-associated membrane protein 2A (LAMP2A) (Figure 3).The control of NRF2 at the transcriptional level is due to the presence of 2 ARE elements in the LAMP2A promoter [47].This regulation is independent of the effect of NRF2 on the macroautophagic activity [45].
Figure 3.
NRF2 beyond its antioxidant role: autophagy is able to potentiate NRF2 activity by tagging KEAP1 for degradation with p62. Furthermore, NRF2 is able to promote both macroautophagy and chaperone-mediated autophagy (CMA) by promoting the expression of genes related to both pathways. From an epigenetic point of view, NRF2 expression is regulated by methylation marks and miRNAs, and NRF2 in turn modulates the expression of HDACs and DNMTs, in addition to miRNAs. NRF2 is highly involved in inflammatory processes: NF-κB potentiates NRF2 expression; both NF-κB and NRF2 compete for CBP/p300 binding; NRF2 is degraded in an NF-κB-dependent manner; and NF-κB is activated under oxidative stress in a KEAP1-dependent manner. Cell-cycle regulation is also mediated by NRF2: its expression peaks during the G1/S phases, whereas it decreases to its minimum in G2/M. The G1/S transition is regulated by the NRF2-driven expression of promoter genes and repression of negative regulators, in addition to the enhancement of the expression of DNA damage detection and repair genes by NRF2. Abbreviations: LAMP2A, lysosomal-associated membrane protein 2A; Me, methylation; miRNA, microRNA; NF-κB, nuclear factor kappa-light-chain-enhancer of activated B cells; CBP/p300, CREB-binding protein; IKKβ, inhibitor of nuclear factor kappa-B kinase subunit beta; IkBα, inhibitor of nuclear factor kB alpha; ROS, reactive oxygen species. Created with BioRender.com (accessed on 11 July 2023).
NRF2, Epigenetics and miRNAs
Epigenetic modifications modulate gene expression, allowing cells to adapt to the environment through histone modifications, DNA methylation, or the modulation of specific levels of miRNAs.These epigenetic changes control transcription, cell cycle, autophagy, DNA repair, stress response, and senescence [48,49].In this context, NRF2 acts by inducing epigenetic changes in the same way that epigenetic changes are capable of modulating their own expression.Oxidative stress can suppress NFE2L2 expression through hypermethylation of CpG islands present in the NFE2L2 promoter, as has been shown in prostate tumors [50].In contrast, in colorectal cancer, NFE2L2 repression is associated with an increased frequency of demethylation [51].Furthermore, miRNAs capable of regulating NFE2L2 were identified in the context of cancer [52] and cardiovascular or neurodegenerative diseases [53,54].Recently, thanks to the analysis of embryonic fibroblasts derived from NRF2-deficient mice, or with the use of activators that modulate the KEAP1 and GSK3 pathways in hippocampal neurons, it was shown that NRF2 can in turn regulate epigenetic mechanisms by controlling the expression of HDAC1, SIRT1, and DNA (cytosine-5)-methyltransferases (DNMTs) that have ARE elements in their promoters [31].The modulation of oxidative stress-associated miRNAs (redoximiRs), such as miR-27c-3p, miR-27b-3p, miR-128-3p and miR-155-5p, was also demonstrated [55].These redoximiRs are able to modulate NRF2 levels [56]; in turn, NRF2 is able to modulate the action of these miRNAs through direct degradation [57] or by modulating their biogenesis [31] (Figure 3).
NRF2 Interaction with NF-κB and Inflammation
Oxidative stress-mediated signaling mechanisms are involved in inflammation and tissue injury.Elevated ROS production as a result of inflammatory signaling can mediate canonical nuclear factor kappa-light-chain-enhancer of activated B cells (NF-κB) activation and inflammatory gene induction; proteasome activity; antioxidant gene transcription; inflammasome activation; and cytokine secretion.There are at least two separate pathways for NF-κB activation.The canonical pathway is triggered by Toll-like receptors (TLRs) and pro-inflammatory cytokines such as tumor necrosis factor alpha (TNFα) and interleukin (IL)-1, leading to the expression of nuclear factor NF-κB p65 subunit (RELA gene), which regulates the expression of pro-inflammatory and cell survival genes [58].The alternative NF-κB pathway is activated by lymphotoxin beta (LTβ), CD40 ligand (CD40L), B-cell activating factor (BAFF), and the receptor activator of nuclear factor kappa-B ligand (RANKL), and results in the activation of transcription factor RelB/p52 complexes (RELB gene) [59].These pathways are characterized by the requirement of the inhibitor of NF-κB (IkB) kinase (IKK) subunits.IKKβ regulates the activation of the canonical pathway through phosphorylation of IkBs and requires the IKKγ/NEMO, whereas IKKα is required for activation of the alternative pathway through phosphorylation of the IKKγ subunit and the processing of p100, the precursor of p52.These events are subjected to redox control through several modes of IkBα regulation [60], but one that has recently been described involves the regulation of IKKβ stability by KEAP1.Like NRF2, KEAP1 binds to IKKβ for ubiquitination and proteasomal degradation.In the presence of ROS, KEAP1 is inhibited and IKKβ is stabilized, phosphorylating IkBα and leading to its degradation and thus the upregulation of NF-κB [61] (Figure 3).Studies using animal models or different cell types, such as microglial cells and monocytes, suggest that upregulation of NRF2 decreases NF-κB-regulated pro-inflammatory and immune responses.Overexpression of NRF2 was also shown to inhibit Ras-related C3 botulinum toxin substrate 1 (RAC1)-dependent NF-κB activation [62].Additionally, NF-κB may promote interaction of HDAC3 with MAF proteins, therefore preventing their dimerization with NRF2 [20].
At the transcriptional level, NF-κB activates NRF2 expression due to the existence of several functional binding sites in the promoter region of the NFE2L2 gene, thus inducing a negative feedback loop [63].In addition, both NF-κB and NRF2 transcription factors compete for binding of the CBP/p300 coactivator.On the other hand, NF-κB binds to KEAP1 and translocates it to the nucleus, thus favoring NRF2 ubiquitination and degradation [64] (Figure 3).Finally, E3 ligase β-TrCP labels both IkBα and NRF2 for proteasomal degradation [20], and therefore may lead to increased NF-κB activity.Thus, NRF2 and NF-κB influence each other to control antioxidant and inflammatory responses.
NRF2 in Cell Cycle
Recently, Lastra et al. showed that NRF2 levels oscillate during cell-cycle progression, reaching a peak in G1/S and being minimal in G2/M [65] (Figure 3).The decrease in NRF2 levels leads to an increase of cells in G1 as they are not ready to enter the S-phase with subsequent cycle arrest at the G1/S restriction point.The role of NRF2 in cell-cycle progression occurs both at the transcriptional level and through mechanisms independent of NRF2-binding to ARE elements in their target genes.Thus, NRF2 is required for the expression of positive regulators (cyclin-dependent kinase (CDK)-2, and the transcription factor Dp-1, TFDP1) and the repression of the negative regulators cyclin dependent kinase inhibitor 1A (CDKN1A), and cyclin dependent kinase inhibitor 1B (CDKN1B) of the G1-S transition.In addition, it is also involved in the optimal expression of genes involved in DNA damage detection (cyclin-G1, CCNG1) and repair (DNA repair protein RAD51 homolog 1, RAD51) [65] (Figure 3).On the other hand, as has been demonstrated in NRF2deficient mice, a reduction in the levels of this transcription factor releases KEAP1 from its binding and allows it to bind to other proteins such as p21 at the G1/S transition [66].Embryonic fibroblasts from NRF2-deficient mice showed decreased cell growth and a shorter half-life compared with MEFs from wild-type mice [67].
NRF2 in Liver Pathology
Liver diseases contribute to more than 3.5% of deaths in developed countries [68].One of the main functions of the liver is the detoxification and elimination of potential harmful xenobiotics.Most of the enzymes in charge of those functions are induced by NRF2 [69]; therefore, the importance of this transcription factor in liver pathophysiology is not surprising.A summary of the role of NRF2 in different models of liver pathology can be found in Table 1.
NRF2 in Liver Inflammation
Activation of the inflammasome has been linked to the oxidative stress associated with various liver pathologies.ROS are necessary for the activation of the NLR family pyrin domain containing 3 (NLRP3) in all its stages, as well as for the formation of pores in the plasma membrane leading to the release of pro-inflammatory cytokines [70].NLRP3 inflammasome is a complex of proteins that assembles in the cytoplasm in response to inflammatory stimuli and leads to the activation of the enzyme caspase-1, which is responsible for the cleavage and activation of the precursor forms of two important inflammatory cytokines, IL-1β and IL-18 [71].In this sense, NRF2 inhibits the expression of pro-inflammatory cytokines by blocking the recruitment of RNA Pol II [72].In addition, increased ROS oxidize thioredoxin that is no longer bound to thioredoxin-interacting protein (TXNIP) which can bind to NLRP3, activating the inflammasome pathway [73].In the liver, NLRP3 upregulates KEAP1, increasing hepatic fibrogenesis by decreasing NRF2 activation, and in turn increasing ROS levels and pyroptosis, further exacerbating fibrosis [74].
NRF2 and Insulin Resistance
It was shown that the enhancement of NRF2 activity in KEAP1 KO mice increased the phosphorylation of AMPK in the liver, as well as insulin-signaling in skeletal muscle, resulting in a substantial improvement of glucose tolerance [75].Moreover, NRF2 KO mice exhibit increased insulin sensitivity, which was attributed to ROS-mediated inhibition of PTP1B that antagonizes insulin-signaling [76].Thus, NRF2 plays a complex role in tissue-specific insulin resistance and additional research is needed to elucidate the full array of NRF2 functions in tissues involved in the control of whole-body glucose homeostasis.
NRF2 and Liver Regeneration
The liver is one of the few adult mammalian organs that retains a remarkable ability to regenerate itself.Resection of up to 70% of the liver mass via partial hepatectomy leads to compensatory growth from the intact tissue and fully restores organ size.NRF2 is required for the timely M-phase entry of replicating hepatocytes by ensuring proper regulation of cyclin A2 and the Wee1/Cdc2/cyclin B1 pathway during liver regeneration [77].Cell regeneration is diminished in hepatectomized NRF2-KO mice, which is associated with increased oxidative stress, reduced insulin/insulin growth factor-1 signaling [78], and reduced expression of the gene encoding a hepatotropic factor, an augmenter of liver regeneration [79].The liver may also regenerate following injury by exogenous and/or endogenous agents (e.g., alcohol, hepatitis B/C viruses, and fatty acids) that cause hepatocyte death.This process is characterized by an inflammatory reaction and extracellular matrix (ECM) synthesis/remodeling.However, if the damaging insult persists, the tissue will be repaired instead of regenerated, resulting in the excessive scarring known as fibrosis.Cholesterol affects the balance between hepatocyte proliferation/regeneration and liver tissue fibrosis in the attempt to restore organ homeostasis [80].Cholesterol induces liver regeneration and activation of NRF2 and hypoxia-inducible factor (HIF)-1α to increase hepatocyte protection against bile acids [81] and induce hepatocyte proliferation.On the contrary, bile acids promote liver injury via their detergent and cytolytic action and by inducing ER stress and mitochondrial damage [82].They also initiate the transdifferentiation of hepatic stellate cells (HSCs) into myofibroblasts, which ultimately leads to fibrosis [83].
NRF2 in Acute Liver Injury
Acute liver failure (ALF) is a serious liver injury characterized by oxidative stress, inflammatory response, and apoptosis produced by numerous factors, such as viral infection, alcohol, drug abuse, and metabolic and autoimmune disorders.Several chemicals and pathogens such as thioacetamide (TAA), LPS, concanavalin A (ConA) or acetaminophen (APAP) commonly induce an acute hepatotoxicity [84].Currently, no effective treatment options are available for ALF except for liver transplantation, so there is an urgent need to find an effective therapy for the treatment of the liver injury.NRF2 is a regulator of cellular defense pathways against the oxidative stress caused by xenobiotics; thus, its pathway might be potentially targeted as a pharmacological approach against ALF.
For example, ConA and TAA induce oxidative stress; therefore, inhibition or deletion of KEAP1, which consequently induces NRF2 activity, is of interest as a protective strategy against liver injury [85-89]. In a different way, APAP induces cell death through apoptosis, necrosis and ferroptosis [90,91]. Ferroptosis, in particular, is triggered by lipid peroxidation caused by the activation of CYP2E1 by APAP, which causes a drop in GSH levels and an accumulation of ROS [92]. In NRF2-KO mouse models, oxidative stress is exacerbated after treatment with both APAP and ConA, in agreement with the crucial role of NRF2 in detoxifying the tissue after ALF [93,94]. The administration of natural components extracted from plants in models of ALF reduced cell death and tissue damage, increasing the hepatic capacity to eliminate xenobiotics by controlling the expression of the multidrug resistance protein 3 (MRP3) transporter [95], upregulating the AMPK/GSK3β/NRF2 signaling pathway [96], or modifying KEAP1 cysteines, thus blocking NRF2 degradation [97].
LPS is a component of Gram-negative bacteria that can stimulate various signaling pathways such as mitogen-activated protein kinases (MAPKs) and NF-κB signaling pathways in Kupffer cells.These pathways ultimately lead to the production of proinflammatory cytokines and chemokines, which enhance local inflammation and immune-cell infiltration in the liver tissue, inducing hepatocyte pyroptosis and liver injury [98].During sepsis caused by LPS, NRF2 sumoylation is inhibited, leading to a decrease in the hepatic GSH levels causing cellular damage [99].
NRF2 in MAFLD/NASH
Metabolic-associated fatty liver disease (MAFLD) is defined as a condition where hepatic fat accumulation exceeds 5% of the liver's weight without alcohol consumption (<30 g per day).It covers a wide spectrum of pathological conditions, extending from simple steatosis (NAFLD, deposit of fat in hepatocytes) to nonalcoholic steatohepatitis (NASH, characterized by the presence of 5% hepatic steatosis and inflammation with hepatocellular damage, with or without fibrosis), cirrhosis, and ultimately leading to hepatocellular carcinoma [100].Insulin resistance seems to play a key role in the initiation and progression of the disease from simple fatty liver to advanced forms due to an increase of hepatic lipogenesis and a reduction of free fatty acid degradation [101].This alteration is followed by the second hit of oxidative stress [102,103], which induces an increase in pro-inflammatory cytokines and the activation of NF-κB, which in turn activates the hepatic stellate cells, the subsequent fibrosis, and damage to the DNA, with a failure in the synthesis of exogenous antioxidants [104].
A microarray analysis of mouse hepatic gene expression revealed that pharmacologic and genetic activation of NRF2 suppresses key enzymes involved in lipid synthesis and reduces hepatic lipid storage [105].NRF2 appears to protect the liver against steatosis by inhibiting lipogenesis and promoting fatty acid oxidation.This may be explained by the activation of ARE-containing transcription factors that regulate adipocyte differentiation and adipogenesis and by the protection against redox-dependent inactivation of metabolic enzymes [9].In the liver, triglyceride synthesis is regulated by the nuclear transcription factor-liver X receptor-α (LXRα) and its downstream gene SREBF1, encoding the transcription factor sterol regulatory element binding 1c (SREBP-1c), which induces the expression of lipogenic genes such as acetyl-coenzyme (Co) A carboxylase (ACACA) and fatty acid synthase (FASN).Some studies have reported that NRF2 activation inhibits LXRα activity and LXRα-dependent liver steatosis through the farnesoid X receptor (FXR)small heterodimer partner (SHP) signaling pathway.Moreover, NRF2 activator inhibits SREBP-1c and lipogenic genes by promoting deacetylation of FXR and inducing small heterodimer partner, which accounts for the repression of LXRα-dependent gene transcription, protecting the liver from excessive fat accumulation [106].
Liao et al. showed that NRF2 activation through the PI3K/AKT signaling pathway significantly enhances hepatocellular antioxidant capacity and relieves mitochondrial dysfunction by inhibiting NOX2 activation in mice fed a HFD, suggesting that PI3K/AKT/NRF2 signal transduction plays a role in the regulation of hepatocellular oxidative damage [107].Furthermore, hesperetin, (3 ,5,7-trihydroxy-4 -methoxyflavanone) a major bioflavonoid in citrus fruits, can trigger NRF2-mediated antioxidative processes and suppress fatty acid-induced ROS overproduction, leading to the attenuation of NF-κB activation and thus the inhibition of hepatic inflammation in NAFLD progression.In addition, hesperetin demonstrates the interrelationship between the antioxidative and anti-inflammatory effects in protecting against NAFLD [108].
Mechanisms underlying liver fibrosis include the activation of both hepatic stellate cells and Kupffer cells, resulting in functional and biological alterations [109].It was also demonstrated that NRF2 deficiency induces the activation of stellate cells and exaggerates the progression of carbon tetrachloride (CCl 4 )-induced hepatic fibrosis in mice [110].NRF2 deficiency in hepatocytes dampens the cellular antioxidant response and allows for the increased expression of pro-inflammatory genes, including IL-6, IL-1b and TNF [111].NRF2 attenuates liver fibrosis due to the disruption of Janus kinase (JAK) 2/Signal transducer and activator of transcription 3 (STAT3) signaling and the higher expression of suppressor of cytokine signaling 3 [110].Furthermore, NRF2-mediated inhibition of the transforming growth factor beta (TGFβ) signaling in stellate cells may help to decrease liver fibrosis [112].
The role of hepatocyte growth factor (HGF)/mesenchymal epithelial transition factor (c-met) axis in liver pathophysiology has been extensively investigated with a particular emphasis on aspects regarding liver regeneration, hepatocyte proliferation, and apoptosis [113].The disruption of c-met functionality aggravates the onset of NASH through the impairment of mechanisms regulating cell sensitivity to lipotoxicity, ROS production, and cell proliferation.In particular, data emerging from genomic array analysis clearly indicated an aberrant regulation of a pattern of genes responsible for increased pro-oxidant environment, amongst them the transcription factor NRF2 [114].The generation of double mutant c-met/Keap1 ∆Hepa mice further demonstrated that re-establishing a functional antioxidant activity completely reversed the accelerated pathological conditions observed in single c-met ∆Hepa mice.In particular, the reduction of oxidative stress was accompanied by a decrease in the above-mentioned pro-oxidant systems, CYP2e1, CYP4a10, and NOX2 expression [115].The amelioration of the redox balance occurred concomitantly with a reduced hepatic accumulation of triglycerides related to the inhibition of the LXR-dependent lipogenic program induced by NRF2 [116].
NRF2 in ALD
The spectrum of alcohol liver disease (ALD), a leading cause of mortality within liver disorders, refers to hepatic steatosis, alcoholic hepatitis, fibrosis, cirrhosis, and eventually hepatocellular carcinoma in some cases [117].Chronic alcohol consumption increases ROS production, ER stress, disruption of lipid metabolism and mitochondria dysfunction, decreases antioxidant levels, and enhances oxidative stress, especially in the liver as the main organ in which alcohol is metabolized.Alcohol dehydrogenase is the enzyme that transforms alcohol to acetaldehyde, a profibrogenic factor that induces GSH depletion; the generation of ROS and acetaldehyde adducts; and lipid peroxidation [118].If dehydrogenase becomes saturated, alcohol continues to be oxidized through microsomal CYP2E1, generating more adducts, ROS, and free radicals [119].Both homocysteine activation and CYP2E1 expression increase the expression of NRF2 and its target genes, especially HMOX1 [120].Thus, NRF2-deficient mice have increased mortality as a result of increased lipogenesis, glutathione depletion, and increased inflammation [121].Paradoxically, NRF2 activation also contributes to the pathogenesis of ALD via the upregulation of hepatic very low-density lipoprotein receptor (VLDLR) levels [122].Ethanol administration decreases mitochondrial glutathione concentrations in NRF2-KO mice but not in mice where NRF2 expression is enhanced [123].Compared with the dramatic phenotype in global NRF2-KO mice, the contribution of hepatic NRF2 seems to be smaller than in the liver-specific NRF2(L)-KO mouse model.There is a clear indication that NRF2 in the central nervous system plays a major role in the sensitivity to ethanol-induced lethality in the global NRF2-KO mice [124].
NRF2 in Hepatocellular Carcinoma (HCC)
The role of NRF2 during HCC development is controversial.A study analyzing several HCC human samples recently reported that mutations in either KEAP1 or NRF2 occur in approximately 12% of all cases [125], implicating that an active NRF2 pathway could induce or drive HCC development.Mutations in the tumor suppressor PTEN cause a downregulation of the PTEN/GSK-3/β-TrCP pathway through increase in phosphatidylinositol-3-kinase (PI3K)-AKT signaling, preventing the proteasomal degradation of NRF2, thus being implicated in NRF2 activation [20].It was shown that persistent NRF2 activation contributes to different pro-oncogenic pathways.First, elevated NRF2 levels may promote cancer-cell proliferation [126].Second, cancer cells with elevated NRF2 levels are less sensitive to chemotherapeutic agents [127] and ionizing radiation [128].However, in an inflammation-driven murine model of liver carcinogenesis (NEMO ∆Hepa ), liver-specific activation of NRF2 (NEMO ∆Hepa /KEAP1 ∆Hepa ) showed reduced apoptosis as well as a dramatic downregulation of genes involved in cell-cycle regulation and DNA replication.Consequently, double KO mice NEMO ∆Hepa /KEAP1 ∆Hepa displayed decreased fibrogenesis, lower tumor incidence, reduced tumor number, and decreased tumor size [74].Therefore, the NRF2/KEAP1 pathway has the role of a double-edged sword, and NRF2 inducers act to protect normal cells from carcinogens, whereas NRF2 inhibitors act to suppress the proliferation of cancer cells that evolved from persistent NRF2 activation due to mutations.
NRF2 in Ischemia-Reperfusion Injury
Ischemia-reperfusion (I/R) injury (IRI) is a pathology that occurs in situations of transplant and/or resection.The tissue remains without oxygen and nutrients, causing dysfunction, injury, and cell death, varying according to the degree and time of ischemia.Revascularization and restoration of blood flow is the only therapeutic approach, but paradoxically, it exacerbates damage to the tissue.Hepatocellular death is characterized mainly by ROS-induced necrosis and innate pro-inflammatory cytokine-mediated apoptosis in the IRI liver [129]; therefore, the role of NRF2 is crucial in this pathology and of high interest as a therapeutic target.
Several experimental animal studies have highlighted the possibility of using NRF2 modulation for IRI attenuation in liver transplantation.Thus, Ahmed et al. have recently shown that NRF2 expression is higher in human liver allografts that exhibit significantly better clinical parameters than NRF2-deficient livers [130].Ex vivo preservation by mechanical perfusion (MPN) offers a unique opportunity not only to preserve allografts, but also to consider the addition of compounds that could improve allograft viability and quality and exert a hepato-protectant effect.The modulation of NRF2 activity in organs connected to MPN decreases vascular inflammation and periportal CD3 + T-cell infiltration; increases cellular vacuolation; improves lactate clearance; and reduces transaminase alterations after MPN, pointing to NRF2 as a predictive biomarker [131].The use of Institut Georges Lopez-1 (IGL-2) preservation solution based on polyethylene glycol 35kDa (PEG35) and GSH improved mitochondrial function and reduced oxidative stress during the cold preservation of livers to be transplanted.The authors demonstrated that the effect of the IGL-2 solution was due to changes in the NRF2/HO-1 pathways, reduction of the NRLP3 inflammasome pathway, and increased mitophagy [132].
In ischemia, mitochondria are the key organelles in ROS generation. Oxygen deficiency increases the reduced state of mitochondria by increasing the NADPH/NADP+ ratio and decreasing ATP generation. NADPH is the main reducing resource of the organism, and many oxidoreduction reactions, such as the reduction of oxidized glutathione (GSSG) and thioredoxin, are carried out by oxidizing NADPH to NADP+. Most cellular NADPH is generated by the pentose phosphate pathway, and small amounts of NADPH are generated by the malic enzyme. NRF2 controls NADPH levels by regulating four of the key genes in NADPH synthesis, glucose-6-phosphate dehydrogenase (G6PD), phosphogluconate dehydrogenase (PGD), malic enzyme 1 (ME1), and isocitrate dehydrogenase (IDH), as well as by promoting metabolite reduction by activating NQO1 and NQO2 [133].
Cell apoptosis contributes to damage during hepatic I/R via Tollip-ASK1-JNK/p38 and through regulation of the inflammatory response and associated apoptosis via MAPK.NRF2 prevents apoptosis by upregulating the expression of anti-apoptotic B-Cell CLL/Lymphoma 2 (BCL-2) proteins and decreasing BCL-2-associated X protein (BAX), cytochrome c release and caspase activation [134][135][136].Moreover, NRF2 also modulates apoptosis through binding to ARE elements in the promoters of genes encoding for the anti-apoptotic proteins BCL-2 and BCL-XL [134,137].
Activation of the NRF2 pathway is able to protect against I/R damage through the activation of yes-associated protein 1 (YAP), which, after accumulation in the nucleus, is able to facilitate the activation of genes involved in regeneration, phase II enzymes, decreased ROS production, and the infiltration of CD68+7-Ly6G+ neutrophils [138].
The Relationship between NRF2 and Cyclooxygenase 2 (COX-2) in Liver Pathology
The COX-2 enzyme catalyzes the first step in the prostaglandin biosynthesis pathway, converting the arachidonic acid present in phospholipid membranes to an unstable intermediate, prostaglandin G2 (PGG 2 ), that will be further converted into the various types of prostanoids [139].Unlike the COX-1 isoform, which is constitutively expressed in almost all cell types in the body, COX-2 is only expressed in specific cells under certain stimuli [140].In adult hepatocytes, COX-2 expression is reduced to situations where proliferation and de-differentiation occurs, as they adopt a fetal phenotype, a stage in which they are able to express COX-2 when exposed to pro-inflammatory stimuli [141].
Despite the low expression of COX-2 in the liver, the use of non-steroidal antiinflammatory drugs (NSAIDs) appears to be effective in reducing inflammation, so COX-2 inhibition may be beneficial in the inflammatory process.Numerous studies with COX-2 inhibitors in various models of liver injury have found positive effects in reducing inflammation [142][143][144][145].However, long-term inhibition of COX-2 or its complete depletion appears to have adverse effects without additional benefit [146,147].Therefore, COX-2 inhibition does not seem to be the best way to treat inflammation, as COX-2 expression may be necessary for resolution of the pathological process.
COX-2 has been widely demonstrated to play a role in reducing apoptosis in different liver pathologies.The pathophysiological role of COX-2 expression was analyzed in studies in which its activity was reduced by chemical inhibitors or by using knockout mouse models.However, very few studies have investigated the effects of the constitutive hepatic expression of COX-2.We generated a transgenic (Tg) mouse model carrying the human (h)COX-2 gene (PTGS2) under the control of the human APOE promoter and its endogenous hepatic control region (hCOX-2 Tg) (Figure 4).In an FAS-induced apoptosis model, COX-2 overexpression, specifically in hepatocytes, correlates with lower caspase activity and BAX/BCL-2 ratios compared to wild-type mice [148].In animals with COX-2 overexpression in different stages of MAFLD, either in hyperglycemic [149], obese [11] or fibrotic [12] models, these apoptotic markers are also reduced.Along with reduced cell death by apoptosis, other damage-causing mechanisms, such as oxidative stress, are reduced.Transgenic hCOX-2 animals subjected to a methionine-choline-deficient diet show lower levels of lipid peroxidation, as well as a decreased oxidized (GSSG) to total (GSHt) glutathione (GSSG/GSHt) ratio compared to wild-type animals [12], revealing a lower generation of ROS or an enhanced antioxidant response.Very similar results are observed when COX-2 is overexpressed in animals subjected to hepatic ischemia-reperfusion [13].In this model, the antioxidant response emerges as the main cause of reduced liver damage, driven by the master regulator of the response, NRF2.Not only is it highly expressed (gene and protein), and properly localized (in the nucleus of COX-2 expressing cells), but several phase II enzymes are highly expressed (SOD1, SOD2, HO-1).In line with the idea that the oxidative stress is the main mechanism causing damage in IRI, it was demonstrated that pre-treatment with NRF2 before ischemia-reperfusion is beneficial [150].
Hepatic human COX-2 expression protected mice from the metabolic disorder and liver injury induced by a high-fat and ethanol (HF+Eth) diet, based on the clinical significance of the coexistence of ethanol drinking and the western diet, by enhancing hepatic lipid expenditure.Hepatocyte COX-2 overexpression protected the mice from HF+Ethinduced fatty liver and metabolic dysfunction.hCOX-2 Tg mice gained less weight, showed improved glucose tolerance, serum and hepatic lipid profiles, and less fatty liver damage.The anti-lipogenic effect of hCOX-2 Tg in the HF+Eth diet animals was mediated by increasing lipid disposal through enhanced β-oxidation via elevations in the expression of the peroxisome proliferator-activated receptor (PPAR) α and γ, and increased hepatic autophagy as assessed by the ratio of the microtubule-associated proteins 1A/1B light chain 3 II and I (LC3 II/I) in hepatic tissue.Various protein acetylation pathway components, including HAT, HDAC1, SIRT1, and SNAIL1, were modulated in hCOX-2 Tg mice in either control or HF+Eth diets [151].
The interaction of NRF2 with COX-2 has already been described, for example via the end metabolite 15-deoxy-Δ12,14-prostaglandin J2 (15D-PGJ2), a non-enzymatic degradation product of prostaglandin D2 (PGD2), which induces NRF2 expression.Inhibition of KEAP1 in IRI models ensures resistance to damage through reduced inflammation, macrophage and neutrophil infiltration, apoptosis, and the promotion of antioxidant-signaling [152].The final metabolite 15D-PGJ2 targets the cysteines of KEAP1 [153,154], allowing NRF2 activation, as do other anti-inflammatory molecules [155].Moreover, 15D-PGJ2, in addition to NRF2 activation, induces PPARγ which will reduce NF-κB-signaling, thereby reducing inflammation [156].The different expression patterns of COX-1 and COX-2 lead to different physiological functions.COX-1 is considered as the homeostatic stabilizer through continuous formation of prostaglandins (PGs) in the liver whereas COX-2-derived PGs are mediators of pathological conditions.Xiao et al. found that the deficiency or inhibition of enzymatic activity of COX-1 exacerbated the severity of CCl 4 -induced acute liver injury, including elevated serum aminotransferases levels, increased necrosis, and apoptosis in the liver, enhanced hepatic oxidative stress, and pro-inflammatory responses.This study of acute liver injury showed that, as observed with COX-2, PGE 2 is the prostanoid involved in protection.However, in this case the pathway in charge of the protection is the 5-lipooxygenase pathway whose metabolites are important mediators in the inflammatory response [157].
COX-2 can also metabolize fatty acids other than arachidonic acid.For example, COX-2 converts the ω-3 fatty acids docosahexaenoic acid (DHA) and eicosapentaenoic acid (EPA) into prostaglandins, and further converts them into 13-electrophilic fatty acid oxo derivative (EFOX)-D6, and 13-EFOX-D5, respectively, via dehydrogenase and nonenzymatic reactions [158], which have anti-inflammatory and antioxidant properties.COX-2-dependent EFOXs can modify cysteines, such as those in the KEAP1 protein, and, like 15D-PGJ 2 , release NRF2 and allow its activation [159].The molecular mechanisms involving COX-2 are not completely known, but the recent findings discussed above suggest that the intrinsic antioxidative system maintained by COX-2 and NRF2/ARE signaling constitute an important interaction in the aging process.Increasing levels of COX-2 expression during aging likely act to maintain antioxidative homeostasis as increases in ROS occur during cellular senescence, acute and chronic inflammation, and carcinogenesis [160].
Moreover, under conditions of oxidative stress, COX-2 expression is dependent on NRF2. In NRF2 KO melanoma cell lines, the PTGS2 gene is highly downregulated, and even with H2O2 challenge, its expression is not stimulated when NFE2L2 is silenced using siRNA [161]. NRF2 does not bind to the PTGS2 promoter. Instead, it is the ER stress-related transcription factor ATF4 that binds to the PTGS2 promoter and induces its expression. Jenssen and colleagues showed that in ATF4 KO cells, the induction of PTGS2 expression by H2O2 is prevented, implying that ATF4 is required for COX-2 expression during oxidative stress [161]. Furthermore, in ATF4 KO cells, NRF2 is also reduced under H2O2 stimulation, indicating that ATF4 and NRF2 regulate each other. When NRF2 is overexpressed under control conditions, COX-2 increases, but in the ATF4 KO it remains unchanged, demonstrating that NRF2-dependent COX-2 expression occurs via ATF4 [162] (Figure 5).
The relationship between ATF4, COX-2 and NRF2 has also been described in models of tunicamycin-induced ER stress, where ATF4 appears to induce NRF2 expression [42]. In lupus nephritis, ER stress is enhanced, inducing ATF4 expression, which in turn drives COX-2 expression; this work also showed that autophagy is enhanced, a fact also observed with COX-2 overexpression in the liver IRI model [162].
Apart from the regulation of COX-2 expression by antioxidant factors, the derived prostaglandins also exert protective functions.In addition to the activation of NRF2 by 15D-PGJ 2 , the prostaglandins PGD 2 and PGE 2 , which initially have a pro-inflammatory role, also assist in the resolution of inflammation.When they bind to their receptors on immune cells such as neutrophils, they produce a metabolic shift that results in the accumulation of cyclic AMP (cAMP) in the cytosol that causes a shift towards an anti-inflammatory profile, thus contributing to the resolution of inflammation [159].
Therefore, the canonical pro-inflammatory role attributed to COX-2 has been challenged, and this direct relationship with NRF2 poses a paradigm shift.Beyond its relationship with inflammation, it collaborates in its resolution, apparently through the antioxidant response.
Conclusions and Future Directions
Nuclear factor erythroid 2-related factor 2 (NRF2) is a regulator of several antioxidant and cytoprotective proteins and is therefore a crucial regulator of cellular responses against oxidants and oxidative stress.It has been demonstrated that in the absence of NRF2, ROS and oxidative stress damage increase inflammation and tissue injury produced by these oxidative stress-mediated signaling mechanisms.Due to this key role in oxidative stress regulation, NRF2 deficiency has been associated with several diseases, including diabetes, hyperglycemia, ischemia, atherosclerosis, acute kidney injury, and liver pathologies.In the context of the liver, NRF2 is also an important player in the induction of detoxification enzymes and transporters that aid in the elimination of harmful xenobiotics.It also has protective functions in cell metabolism, inflammation, and fibrosis.COX-2 exerts its hepatoprotective role through the modulation of antioxidant genes, control of autophagy, amelioration of inflammation, and as demonstrated recently, modulation of mitochondrial function [14].Investigation with a COX-2 overexpression model suggests COX-2 as a potential therapeutic target to treat patients in different liver pathologies.This protection could be partially explained through the interaction with NRF2.Further research is needed to reveal the regulatory mechanisms of the KEAP-1/NRF2 pathway when COX-2 is overexpressed.Furthermore, in addition to the regulation of COX-2 expression by NRF2, the COX-2 derived prostaglandins also have a protective role.On the one hand, 15D-PGJ 2 can activate NRF2, and, on the other hand, the prostaglandins PGD 2 and PGE 2 can enhance the accumulation of cAMP, a key anti-inflammatory mediator in the cytosol that leads to a shift towards an anti-inflammatory profile.Therefore, the synergistic interaction of COX-2 and NRF2 can ameliorate inflammation and contribute to its resolution through an antioxidant response in liver pathologies.
Figure 4 .
Figure 4. Protective effect of COX-2 overexpression in hepatocytes in different liver pathology models: the protection is due to the induction or inhibition of several pathways. Apoptotic markers, such as caspase activity and the pro-apoptotic (BAX) to anti-apoptotic (BCL-2) ratio, are decreased, leading to a decrease in apoptosis. Lipid peroxidation (LPO) and the ratio of oxidized glutathione (GSSG) to total glutathione (GSHt) decrease, which is related to a reduction in oxidative stress. Induction of NRF2 and its derived phase II enzymes enhances the antioxidant response, thus contributing to a reduction in oxidative stress. Abbreviations: HFD, high-fat diet; MCDD, methionine-choline-deficient diet; I/R, ischemia-reperfusion. Data are available in the original articles [11-13,148,149]. Created with BioRender.com (accessed on 11 July 2023).
Figure 5 .
Figure 5. Interaction of NRF2 and COX-2 via ATF4: the end-product of the COX-2 pathway, 15D-PGJ2 (derived from arachidonic acid (AA) metabolism), is able to modify KEAP1 cysteines, thus inducing NRF2 release. EFOX molecules, metabolized by COX-2 from DHA and EPA, are also capable of modifying KEAP1 cysteines. Once free, NRF2 translocates to the nucleus, where it induces ATF4 synthesis. ATF4 can in turn regulate NRF2 expression, in addition to inducing COX-2 expression, establishing a link between NRF2 and COX-2 expression. The ATF4 factor is also induced by the unfolded protein response (UPR), which will induce NRF2 and COX-2 expression. Abbreviations: DHA, docosahexaenoic acid; EPA, eicosapentaenoic acid; EFOX, electrophilic oxo-derivative molecules. Created with BioRender.com (accessed on 11 July 2023).
Table 1 .
Summary of the NRF2 role in liver pathologies.
Envelope Dynamics and Stability with non-linear Space-Charge Forces
We developed a model to calculate the stability of Gaussian beam distributions with non-linear space-charge forces in the presence of random and skew-quadrupole errors. The effect of the space-charge force on the beam matrix is calculated analytically including full cross-plane coupling in 4D phase space, which allows us to perform fast parameter studies. For stability analysis, we find the fixed points of the beam including the space-charge forces and construct a Jacobi-matrix by slightly perturbing the periodic solution. The stability of envelope oscillations is inferred by eigenvalue analysis. Furthermore, we employ envelope tracking as a complementary method and compare the results of the eigenvalue analysis with FFT data from the tracked envelope. The non-linearity of the space-charge force in combination with lattice errors and beam coupling opens up for envelope-lattice resonances and envelope coupling resonances. Hitting these resonances leads to envelope blow-up, causing an effective beam mismatch. Therefore, we finally examine the effect of beam mismatch on the envelope tune-shift and its stability.
Introduction
With the development of high-intensity linacs and accumulator rings, the space-charge force and its detrimental effects on beam dynamics became one of the main study subjects. In terms of beam dynamics, it causes tune-shifts which have the potential to drive the beam into resonances. Furthermore, it contributes to beam halo formation and emittance blow-up, which cause beam losses and a subsequent generation of radiation and heat. A thorough understanding of the underlying dynamical mechanisms is therefore essential.
In order to describe collective space-charge effects, it is common to look at the evolution of the beam envelope. Established techniques are based on envelope equations or numerical methods like multi-particle tracking from which rms values are calculated. Envelope equations are a computationally fast approach to beam envelope dynamics with space-charge and were developed by Sacherer [1]. They use linear approximations for the space-charge force and can be represented by beam matrices, containing the second moments of a beam distribution. They may also include linear coupling terms in two transverse dimensions.
In [2], these envelope equations were used to create stationary, field-generating beam distributions, which were combined with tracking of spectator particles and particle-in-cell (PIC) codes to study the development of beam halo, in particular parametric resonances. PIC codes are usually computationally expensive, depending on grid sizes and the number of interactions taken into account. In [3], the authors use tracking of single particles with space-charge kicks derived from the potential of an upright Gaussian distribution, which itself is derived from the rms quantities of the single particles. Through this calculation sequence, the number of applied space-charge kicks is limited if simulations are to be completed in a limited time-frame. In [4,5], envelope stability was examined by means of eigenvalue analysis based on envelope equations with linear forces and complemented with particle tracking.
Our model is based on constructing a map for the transverse coupled beam matrix that includes the space-charge forces, as described by the Bassetti-Erskine formula [6,7]. We then proceed to determine the fixed points of the map and perform a linear stability analysis around them. From the eigenvalues of the linearized map, we infer the stability of configurations. Finding the equilibrium beam non-perturbatively first and then perturbing it to identify the coherent eigenfrequencies resembles the analysis in [8].
The model we developed is presented in Section 2 and complements the established approaches. In addition, the model's ability to treat cross-plane coupling enables us to simulate cases that include coupled beams and lattices. The section also includes a description of the ring lattice and of the method we use for envelope stability analysis. In Section 3, we present and discuss our findings on envelope dynamics and stability. Finally, we examine the effect of beam mismatch on the coherent envelope tune-shift in Section 3.4.
Space Charge Modeling and Methods
In [6], Bassetti and Erskine derived a closed, analytic expression for the electric field of upright Gaussian beams. Their formula was generalized in [7] to include all correlations within the 4D transverse phase space. The generalized formula contains the positions and angles of the particles, and w(z) is the complex error function [9]. The sum in the exponent of the field expression contains only the indices 1 and 3 of the position coordinates, since the self-force does not depend on the angles. The arguments of the complex error functions follow from the beam matrix elements. We interpret the kick to the beam envelope, represented by the second moments of a Gaussian distribution, as the average kick its force imparts to all of its constituent particles. Through this averaging, we construct a mapping of the space-charge force for the ten beam matrix elements, with K̂ = K Δℓ, where the beam perveance K is determined by the number of protons N, their classical particle radius r0, and the longitudinal beam size σz. The space-charge force is calculated in each longitudinal lattice slice of length Δℓ. The angle brackets in Eq. 3 denote the averaging over the particle distribution, where σ is the full 4D beam matrix, which is identical to the beam matrix used in calculating the electric field. For the averages appearing in Eq. 3, we derived fully analytic solutions in [10]. We are thus able to perform envelope tracking and apply non-linear space-charge kicks in a beam matrix formalism. The inhomogeneous terms in Eq. 3 are collected in a matrix (σ_sc)_ij = σ̃_ij − σ_ij and added to the beam matrix of the preceding step. We point out that Gaussians are not self-consistent distributions under the influence of non-linear space-charge forces. The introduced discrepancies are, however, small in the considered regime of operation, with a maximum incoherent tune-shift on the order of 0.5.
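As an illustration of how such an averaged kick acts on the beam matrix, the following Python sketch applies a kick to the second moments. It is only a rough stand-in: the names (sc_kick, K_hat), the linearised round-beam kick strength and all numbers are our own assumptions, whereas the model described above uses the exact analytic averages of the Bassetti-Erskine field over the fully coupled Gaussian.

```python
import numpy as np

def sc_kick(sigma, K_hat):
    """Averaged space-charge kick applied to a 4x4 beam matrix (x, x', y, y').

    Illustrative linearised round-beam kick, k_u = K_hat / (s_u (s_x + s_y));
    the model in the text instead evaluates the analytic averages of the
    Bassetti-Erskine field over the fully coupled Gaussian (Eq. 3, ref. [10]).
    """
    sx = np.sqrt(sigma[0, 0])
    sy = np.sqrt(sigma[2, 2])
    kx = K_hat / (sx * (sx + sy))
    ky = K_hat / (sy * (sx + sy))
    M = np.eye(4)
    M[1, 0] = kx                      # x' -> x' + kx * x
    M[3, 2] = ky                      # y' -> y' + ky * y
    sigma_new = M @ sigma @ M.T
    sigma_sc = sigma_new - sigma      # inhomogeneous terms of the kick
    return sigma + sigma_sc           # added to the beam matrix of this slice

# toy usage: one kick on a round beam with 1 mm rms size
sigma = np.diag([1.0e-6, 1.0e-9, 1.0e-6, 1.0e-9])
sigma = sc_kick(sigma, K_hat=1.0e-8)
print(sigma[1, 1], sigma[3, 3])       # the angular spread grows slightly
```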
In order to apply the space-charge kick in a symmetric way, we split each slice with transfer matrix R into two halves R̂ such that R = R̂², apply the kick between the half slices, and propagate the beam matrix through each half. The lattice we use throughout this work consists of a ring with 324 m circumference, comprised of 18 identical FODO-cells. We have reconstructed this lattice from information in [3]. The qualitative configuration of one cell is depicted in Fig. 1. We divide the ring into 6480 equidistant slices with length Δℓ = 5 cm. For 1024 turns in the ring, that amounts to more than 6.5 million slices. In each slice, the space-charge kick is applied to the beam. The analytic solutions of the non-linear space-charge kick enable us to complete such a simulation for one value of the particle density dN/dz within a few minutes on a desktop computer. We find the zero-current periodic solution via the parametrization shown in the appendix of [11]. If the beam enters the lattice with its zero-current periodic solution, it experiences an injection mismatch due to the sudden presence of space-charge forces. In order to exclude the injection mismatch from our simulation studies, we find the periodic solution for the beam matrix including the non-linear space-charge force. For that, we propagate the beam one turn through the lattice and search for the minimum of a cost function that measures the change of the beam matrix after one turn, where the beam matrix is arranged as a column vector σ̂ with a selection from the ten independent beam matrix elements. In the reduced uncoupled case, the input parameters are restricted to the block-diagonal beam matrix elements. The stability of the beam matrix and its phase-advance are determined by systematically perturbing the periodic solution by a small value ±Δi/2 and observing the average change εi,N in all other parameters after one turn through the lattice. Each parameter change yields a column vector of change ratios ε/Δ, from which we construct a Jacobi-matrix J. The principle of this construction is outlined in Eq. 9 for all ten independent beam parameters.
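A minimal sketch of the matched-beam search under these assumptions is given below. It reuses sc_kick from the previous sketch; pack/unpack, rot4, one_turn, cost and the constant-focusing toy ring are hypothetical helpers for illustration and do not reproduce the actual FODO lattice or the cost function of the paper.

```python
import numpy as np
from scipy.optimize import minimize

IDX = np.triu_indices(4)              # the ten independent beam-matrix elements

def pack(sigma):
    return sigma[IDX]

def unpack(vec):
    s = np.zeros((4, 4))
    s[IDX] = vec
    return s + np.triu(s, 1).T

def rot4(mu):
    """Toy constant-focusing half-slice: decoupled phase-space rotations."""
    c, s = np.cos(mu), np.sin(mu)
    return np.array([[c, s, 0, 0], [-s, c, 0, 0],
                     [0, 0, c, s], [0, 0, -s, c]])

def one_turn(vec, half_slices, K_hat):
    """Half slice, averaged space-charge kick, half slice, for every slice."""
    sigma = unpack(vec)
    for R_half in half_slices:
        sigma = R_half @ sigma @ R_half.T
        sigma = sc_kick(sigma, K_hat)     # from the previous sketch
        sigma = R_half @ sigma @ R_half.T
    return pack(sigma)

def cost(vec, half_slices, K_hat):
    """Squared change of the beam matrix after one turn; zero at the fixed point."""
    return np.sum((one_turn(vec, half_slices, K_hat) - vec) ** 2)

half_slices = [rot4(2.0 * np.pi * 0.15 / 40.0)] * 20    # toy ring, tune 0.15
vec0 = pack(np.diag([1.0e-6, 1.0e-6, 1.0e-6, 1.0e-6]))  # zero-current matched beam
res = minimize(cost, vec0, args=(half_slices, 1.0e-8), method="Nelder-Mead")
sigma_matched = unpack(res.x)
```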
Its eigenvalues contain one complex-conjugate pair for each oscillating mode.
Additionally, two of the eigenvalues are real and equal to unity. They are the invariants of the system and account for the beam emittances. The phase advances are extracted from the angles of the complex eigenvalues.
Since the system's dynamic variable is the beam size, the angles of the eigenvalues contain twice the phase-advance of the lattice in the zero-current limit.
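The perturbation-based stability analysis described above can be sketched as follows; jacobian and envelope_modes are hypothetical helper names, and the one-turn map is assumed to be a beam-matrix map with space charge such as the one sketched earlier.

```python
import numpy as np

def jacobian(one_turn_map, fixed_point, deltas):
    """Central finite-difference Jacobian of the one-turn beam-matrix map
    about the periodic solution: perturb each parameter by +/- delta_i / 2."""
    n = len(fixed_point)
    J = np.zeros((n, n))
    for i in range(n):
        dp = np.zeros(n)
        dp[i] = 0.5 * deltas[i]
        J[:, i] = (one_turn_map(fixed_point + dp)
                   - one_turn_map(fixed_point - dp)) / deltas[i]
    return J

def envelope_modes(J):
    """Eigenvalue analysis of the linearised map: a modulus above one flags an
    unstable envelope mode, and the eigenvalue angle carries twice the envelope
    phase advance, because the dynamic variable is the beam size."""
    lam = np.linalg.eigvals(J)
    return np.abs(lam), (np.angle(lam) / (2.0 * np.pi)) / 2.0

# usage with the previous sketch (illustrative):
# J = jacobian(lambda v: one_turn(v, half_slices, 1.0e-8), res.x,
#              deltas=1.0e-9 * np.ones(10))
# moduli, tunes = envelope_modes(J)
```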
In the following section, we present and discuss the results of the stability analysis for test cases with undisturbed lattices, quadrupole errors, and skew quadrupoles.
Envelope Stability and Dynamics
In this section, we examine the stability of the coherent envelope modes by means of eigenvalue analysis in the presence of space-charge forces. In our results, we show the fractional part of the eigentunes Q as a function of charge density, together with a corresponding stability plot of eigentunes and absolute eigenvalues, in order to locate resonances and instabilities. We assume a transverse emittance of 1 mm mrad in both planes. For the largest tested charge density, the coherent envelope tune-shift is −0.3 and the maximum incoherent tune-shift is −0.52.
We start the analysis with an undisturbed lattice. Then, we introduce single and later random quadrupole errors, where we expect the coherent envelope modes to react to the created stop-bands. The full cross-plane coupling enables us to examine simulation cases with single or randomly applied roll angles to quadrupoles. Additionally, the space-charge force in our model couples the coherent eigenmodes, which enables resonances between them [12].
Envelope Dynamics in the undisturbed Lattice
Before we examine the envelope stability in the test ring, we test the functionality of the eigenvalue analysis using one cell and show the results in Figure 2. Repeating the simulations with the complete ring, we find that the tunes of the ring, in combination with the tune-shift, cause one mode to be shifted towards an integer tune and, finally, across it. Figure 3 shows the shifted tunes and the stability analysis for this case. The mode which crosses the integer tune exhibits instability, with eigenvalue moduli up to 1.17. This indicates that non-linear space-charge forces cause instability around the integer tune, even in the absence of lattice errors. In [3], crossing the integer tune likewise drives envelope instabilities associated with half-integer resonances; however, these also fall on integer tunes, since they repeat in half-integer tune intervals. During integer tune crossing, the other transverse mode exhibits instability as well. We attribute this to the intrinsic space-charge coupling. This observation is consistent with [3], where the tune spread also increases for that mode. Furthermore, we find a third-order difference resonance and a fifth-order sum resonance, which are space-charge driven.
(Figure caption: Eigenvalue moduli of unstable envelopes. Instability due to integer tune-crossing and space-charge coupling (dotted ellipse), 3rd-order difference resonance (solid ellipse), and 5th-order sum resonance (*).)
Envelope Dynamics with single and random Quadrupole Errors
Now, we examine the envelope stability and tunes for single and random quadrupole errors. They break the symmetry of the lattice and create half-integer stop-bands. At the end of this section, we provide an analysis of how the stop-band contributes to emittance growth and thus to blowing up the envelope.
We start by slightly increasing the nominal strength of a single quadrupole by 1 %. The shift of the lattice tune due to this error is ΔQ ≈ 1 × 10⁻³. Figure 4 shows the shifted envelope tunes and the corresponding moduli of unstable eigenvalues. In comparison to the undisturbed lattice, the eigenvalues of the integer-crossing mode are now locked consecutively on the integer tune (dotted ellipse). The stop-band has a certain width in tune, which means the beam envelope is driven into instability if its tune falls into the stop-band. While the stop-band in one plane is hit, we find clear coupling instabilities in the opposite plane (solid ellipse). Since the beam sizes of both planes influence each other through space charge, rapid growth and beta-beating in one plane induce instability in the other plane.
In the next simulation, we apply a 1 % random variation to the strength of all quadrupoles in the lattice; the results are shown in Figure 5.
The exact width and strength of the stop-band with random quadrupole errors strongly depends on the individual seed.
In Figure 5a in particular, we see instability for three charge densities which are not yet locked onto the stop-band. Whether a stop-band extends above or below a half-integer tune depends on whether it results from an effective focusing or defocusing error. Additionally, the stop-band itself is displaced together with the lattice tune-shift caused by the quadrupole error.
We increase the magnitude of the random errors in order to find the upper limit of the lattice stability. The lattice becomes unstable with random error variations higher than ±3 % for all tested seeds. Using ±3 % random errors with the same seed as in the previous simulation case, we obtain more charge densities shifting the envelope tune into the stop-band, and their eigenvalue moduli slightly increase as well. This is also true for their corresponding coupling instabilities in the other transverse plane. Furthermore, the charge density necessary to shift the envelope tune into the stop-band has increased, since the lattice tune has shifted more due to the stronger error.
In order to understand how the beam size increases when the beam crosses a stop-band, we analyze how stop-bands lead to instability and subsequent envelope blow-up. We first find the fixed-point of a beam in an undisturbed lattice.
The periodic beam envelope is parametrized as σ₀ = ε₀ A₀ A₀ᵀ, where ε₀ is the beam emittance and A₀ contains the respective Twiss-parameters.
The stop-band is created by adding a quadrupole Q with focal length f to a rotation matrix O with phase-advance µ. The resulting full-turn transfer matrix R is decomposed as R = A O(μ̃) A⁻¹, where O(μ̃) is a pure rotation, the matrix A contains the Twiss-parameters, and μ̃ is the phase-advance shifted by the quadrupole error. If the undisturbed phase-advance µ is close to zero or 2π, the tune-shift of the quadrupole error moves the beam onto the integer stop-band and the decomposition of the transfer matrix R causes the matrix A to become complex, which we denote by Ã. Furthermore, the decomposition then yields a rotation matrix O(μ̃) with a complex phase-advance μ̃ = 0 + im, where the real part is locked on zero. The zero real part of the rotation yields the unit matrix and can thus be omitted. We perform the beam propagation σ = R σ₀ Rᵀ in the stop-band using the re-composed parametrization. During the matrix multiplication, all involved imaginary parts create real contributions, which distort the circle in phase-space into an ellipse, thus inducing a mismatch to the beam. Figure 6 shows the deformation of a normalized phase-ellipse for ten subsequent turns through a stop-band. This deformation is the envelope blow-up observed during stop-band crossing.
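A toy illustration of this mechanism: inside the stop-band the effective one-turn map in normalized coordinates is no longer a pure rotation but acquires a hyperbolic part, and repeated application to points on a unit circle stretches it into an ever more eccentric ellipse. The sketch below uses a purely hyperbolic, area-preserving stand-in map with an arbitrary illustrative growth rate m; it is not the exact re-composed map of the text, only a minimal demonstration of the stretching.

```python
import numpy as np

def stopband_map(m):
    """Hyperbolic, area-preserving stand-in for the effective one-turn map
    inside a stop-band (the locked real rotation part is omitted)."""
    return np.array([[np.cosh(m), np.sinh(m)],
                     [np.sinh(m), np.cosh(m)]])

m = 0.05                         # illustrative growth rate per turn (assumption)
M = stopband_map(m)

theta = np.linspace(0.0, 2.0 * np.pi, 200)
ellipse = np.vstack([np.cos(theta), np.sin(theta)])   # normalized unit circle

for turn in range(1, 11):        # ten turns, as in the deformation of Figure 6
    ellipse = M @ ellipse
    ext = np.abs(ellipse).max(axis=1)                  # extent in each coordinate
    print(f"turn {turn:2d}: max |x_n| = {ext[0]:.3f}, max |x_n'| = {ext[1]:.3f}")
```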
Envelope Studies with single and random skew Quadrupoles
We now turn to the effect of transverse coupling in the magnet lattice and model this by rolling one focusing quadrupole by 1°. This causes the periodic beam matrix - the fixed point - to acquire non-zero values for the components describing transverse coupling. Moreover, the stability analysis is now based on finding the eigenvalues of a 10 × 10 matrix, which causes two additional pairs of complex-conjugate eigenvalues to appear. The corresponding simulation results are shown in the following figures. Introducing random roll angles to all quadrupoles along the lattice changes the locations at which coupling resonances appear. As with random quadrupole errors, the lattice coupling may compensate, depending on the seed. We find that the lattice becomes unstable if the variation of the random roll angles exceeds ±1°.
Tune-shift of mismatched Beams
So far, we examined envelope stability by slightly perturbing the periodic beam solution and found that crossing resonances causes the mismatch to grow.
We therefore investigate the effect of a large mismatch and examine how the envelope tune-shift changes with the mismatch. We quantify mismatch by the factor B_mag [13], which describes the amount of non-overlap between two ellipses in phase space and is given by B_mag = ½ (β*γ − 2α*α + γ*β), where α and β are the Twiss-parameters of the periodic solution, the starred Twiss-parameters are those of the mismatched beam, and γ = (1 + α²)/β. We apply the mismatch by manipulating the σ 11 and σ 22 elements of the periodic beam matrix such that the emittance is preserved. The absolute slope coefficients of the linear tune-shift as a function of beam mismatch are shown in Figure 9b. The linear tune-shift is proportional to the integrated space-charge force and thus depends on the beam sizes, i.e. on the beta-functions. Since all mismatched beams stem from the same matched beam sizes, we omit the dependency on the beta-functions, such that the dependency on B_mag is well approximated by ∆Q ≈ ∆Q₀/√B_mag. The approximation agrees well with the simulation results, as seen in Figure 9b. The exact dependency of the tune-shift on beam mismatch varies, since mismatch changes the beta-function of the envelope and thus the integrated space-charge force, which itself couples back into a change in beta-function and tune.
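As a quick numerical check of this scaling, the snippet below evaluates the mismatch factor for a beam whose σ 11 is stretched while σ 22 is shrunk by the same factor (emittance preserved) and applies ∆Q ≈ ∆Q₀/√B_mag. The Twiss values and ∆Q₀ are arbitrary illustrative numbers, and the B_mag expression is the commonly used symmetric form rather than a quotation of this paper's calibration.

```python
import math

def bmag(beta0, alpha0, beta, alpha):
    """Mismatch factor between a matched ellipse (beta0, alpha0) and a
    mismatched one (beta, alpha); B_mag = 1 for a perfectly matched beam."""
    gamma0 = (1.0 + alpha0 ** 2) / beta0
    gamma = (1.0 + alpha ** 2) / beta
    return 0.5 * (beta0 * gamma - 2.0 * alpha0 * alpha + gamma0 * beta)

dq0 = -0.12                       # illustrative matched space-charge tune-shift
for stretch in (1.0, 1.5, 2.0, 3.0):
    # stretch sigma_11 by `stretch`, shrink sigma_22 by 1/stretch (emittance kept);
    # with alpha = 0 this simply rescales the beta-function of the beam.
    b = bmag(beta0=10.0, alpha0=0.0, beta=10.0 * stretch, alpha=0.0)
    print(f"B_mag = {b:5.2f}  ->  dQ ~ dQ0/sqrt(B_mag) = {dq0 / math.sqrt(b):+.4f}")
```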
Conclusions
We presented a non-linear space-charge model and examined the beam envelope dynamics and stability for different lattice configurations including quadrupole errors and skew quadrupoles by means of eigenvalue analysis and as a function of charge density.
First, we tested the principle of the eigenvalue analysis on a single cell, where we compared its envelope tunes to those obtained by Fourier analysis of tracked envelopes. Both methods agree well, showing that the eigenvalue analysis yields correct envelope tunes. In simulations with the complete ring, we found that a mode crossing a zero fractional tune exhibits instabilities despite the absence of lattice errors. We explain this by interpreting the space-charge force as a defocusing quadrupole error, which creates a stop-band at zero fractional tunes. The main qualitative difference between our model and approaches with single-particle tracking is that any form of instability leads to a growing envelope, whereas single particles would either be lost during instability or rearrange themselves, leading to different rms quantities. Furthermore, we found higher-order, space-charge-driven resonances.
Applying a single quadrupole error creates stop-bands. In this case, more eigenvalues lie in the stop-band because its width accepts a broader range of shifted envelope tunes. We found that whenever an envelope is in the stop-band and becomes strongly unstable, the other transverse envelope is affected as well, albeit with weaker instability. This is attributed to the intrinsic coupling between the transverse planes via the space-charge force.
Introducing random quadrupole errors did not lead to broader stop-bands.
However, the exact width of the stop-band depends on the error seed. For the same reason, the triggered stop-band instabilities and coupling resonances change, because they result from the combination of the coherent envelope tune-shift and the error-induced shift in the lattice tune. Increasing the strength of the error within the same seed led to more unstable eigenvalues with larger moduli.
We found the stability limit of the lattice to lie at a random error variation of about ±3 % for the tested seeds. Beyond that, the tune of the lattice becomes imaginary and the lattice is thus inherently unstable.
We presented an analysis of stop-band-induced instabilities and showed that the initially circular phase-ellipse is stretched during stop-band crossing. This deformation is the result of an imaginary phase-advance in the stop-band, which causes a mismatch that leads to an increased beam size.
Our model enables us to treat coupled lattices, where four eigenmodes need to be observed. We have presented two examples of coupling resonances, which appeared after giving a roll angle of 1° to a single focusing quadrupole. In the first example, we found an unstable envelope mode with a flip-flop behavior. This behavior is indicated by an eigenvalue pair lying on the tunes 0 and 0.5, respectively. The unstable flip-flop behavior was confirmed by envelope tracking. Secondly, we found strong coupling resonances whenever two modes exhibit nearly identical tunes. The respective eigenvalue moduli clearly increased, and the envelope tracking confirms simultaneously growing envelopes, with strong growth already within a short time-frame. Introducing random roll angles to all quadrupoles changes the occurrences of coupling resonances, which depend on the individual seed. We found that the lattice becomes unstable if the variation of the random roll angles exceeds ±1°.
Additionally, we examined the effect of beam mismatch on the linear tune-shift by manipulating the σ 11 and σ 22 elements of the periodic beam matrix. We found that the linear tune-shift becomes weaker with larger mismatch, and we showed with a simple proportionality that the ratio between the mismatched and the matched tune-shift is well approximated by ∆Q_mismatch/∆Q₀ ≈ 1/√B_mag.
Acknowledgements
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
|
2019-05-02T10:50:10.000Z
|
2019-05-02T00:00:00.000
|
{
"year": 2019,
"sha1": "bc8bbc223d952b9eb05fdb509626a82cfe1a5a03",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1905.00660",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "ec34e0b7c89cc49c46cc0ef94b7a7e56a9b2ff08",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
139314613
|
pes2o/s2orc
|
v3-fos-license
|
Effect of interlayer bonding quality of asphalt layers on pavement performance
The quality of interlayer bonding at the interfaces between the asphalt layers in flexible pavements affects the overall pavement performance. Lack or partial lack of interlayer bonding between asphalt layers can cause premature pavement failures such as rutting, slippage of the wearing course, cracking or simply a reduction in the calculated fatigue life of the pavement structure. This paper presents case studies of the investigation of actual or potential premature failure of newly reconstructed and constructed pavements in which low quality of interlayer bonding plays a dominant role. In situ and laboratory tests were performed, followed by analytical calculation of the pavement structure in which the layer thicknesses and the maximum shear strengths obtained from the tests were used. During the investigation it was found that a low quality of tack coat, as well as the same aggregate gradation in the bonded asphalt mixtures, were the main reasons behind the weak quality of interlayer bonding. Partial interlayer bonding has a strong influence on the reduction of the calculated fatigue life of the pavement. The summary of the paper includes recommendations on how to avoid low quality of interlayer bonding of asphalt layers.
Background
The problem of lack or partial lack of interlayer bonding quality between asphalt layers is a common reason of dispute between contractor and investor. Poor quality of interlayer bonding causes pavement's premature failures such as rutting, slippage of the wearing course, cracking or simply a reduction in the calculated fatigue life of the pavement structure [1]- [12]. The laboratory test of interlayer shear strength performed in a Leutner device [13] is the standard procedure in Poland for controlling the quality of interlayer bonding. Polish requirements [14] state minimum levels of shear strength for specimens drilled out from pavement. The problem described in this paper concerns three cases of pavements where shear strength obtained for specimens did not fulfill the Polish requirements. The article is focused on the problem of weak interlayer bonding between the asphalt base and the binder course or between two lifts of the asphalt base course. A detailed analysis of fatigue life is presented for one selected case.
The interlayer bonding strength depends on several factors including the quantity of tack coat, the type of bitumen emulsion used for the tack coat, the difference between asphalt mix grading in the respective layers, poor compaction of the upper layer, etc. [7], [15]-[20]. In some cases, when interlayer bonding is limited or does not pass the requirements but there is still some degree of bonding, the interlayer bonding can be expected to increase due to aging, traffic load and simply time [21]-[23].
Objectives
The primary objectives of this study were as follows:
• to reveal how limited interlayer bonding can impact the decrease in calculated fatigue life of the pavement structure,
• to show the investigation of three cases of newly constructed pavements where the quality of interlayer bonding did not fulfill Polish requirements,
• to present some recommendations on how to avoid low quality of interlayer bonding of asphalt layers.
Case studies
Three cases of different road sections were used in the analysis:
• Road 1 is a newly constructed expressway, the highest (express) technical standard (R1S), for heavy traffic, and represents the highest construction standard and investor control.
• Road 2 is a newly constructed national road, the high (highway) technical standard (R2H), for heavy traffic, and represents the highest construction standard and investor control.
• Road 3 is a reconstructed city street, the medium (local) technical standard (R3L), for low to medium traffic, and represents a medium construction standard and investor control.
In roads R1S and R2H the problem of interlayer bonding occurred between two lifts of the asphalt base course made from asphalt concrete AC22P with neat bitumen 35/50. The tack coat was applied properly with the use of bitumen emulsion C60B3ZM (a bitumen emulsion dedicated to tack coats of asphalt layers acc. to PN-EN 13808) in an amount of bitumen within the range of 0.3-0.5 kg/m². In terms of interlayer bonding, the construction process followed good practice. Nevertheless, the bonding strength measured in the Leutner test was below the required value of 0.6 MPa at the temperature of +20°C but higher than 0 MPa (see Figure 1). The reason for the low interlayer bonding strength was the non-homogeneous surface of the bottom base course due to segregation of the mixture (see Figure 3) and the same gradation of aggregate used for the two lifts of the base course. In consequence the interlocking between the two lifts of the asphalt base course was relatively weak.
In the case of road R3L the problem of interlayer bonding occurred between an asphalt base course made from AC22P and a binder course made from AC16W, both with neat bitumen 35/50. In this case the shear strengths between the two layers were much lower than the required 0.7 MPa at the temperature of +20°C. Additionally, 30% of the cored specimens disintegrated during coring, that is, the core of the binder course and the base course separated spontaneously; therefore those specimens were marked with 0 MPa of interlayer shear strength (see Figure 2). Unlike in the case of roads R1S and R2H, the reason for the weak bonding quality in road R3L was not the gradation of aggregate in the asphalt mix, but rather the poor quality of the sprayed tack coat in combination with improper compaction of the upper layer - the binder course. Despite the use of the proper bitumen emulsion C60B3ZM in an amount of 0.3-0.5 kg/m², the sprayed surface was not homogeneous, which is visible in Figure 4. Another problem identified in road R3L was the insufficient thickness of the asphalt base course, which additionally and ultimately contributed to the decrease in fatigue life of the pavement structure.
In all the aforementioned cases, the unsatisfactory bonding strength was the reason of dispute between the contractor and the investor. On the one hand, lower quality of interlayer bonding contributes to a decrease in fatigue life but, on the other, milling of layers and their subsequent reconstruction causes considerable losses for the contractor. The presented analyses were performed in order to assess whether the existing pavement structure can bear the designed traffic or whether its layers need to be re-built.
Pavement structure model
For the sake of analysis the pavement structure is modeled as a multilayer elastic half-space. The scheme of the model is presented in Figure 5. It is assumed that each layer is homogenous, isotropic and elastic. The calculations of stresses, strains and deflections inducted in pavement by a single wheel load were performed in BISAR ® software [24].
The load is a vertical force Q V = 50 kN, applied over a circular contact area with a uniform contact pressure q = 850 kPa. Two cases of load were considered: a load moving with constant speed and with horizontal force Q H = 0 kN; and a load moving with deceleration and with horizontal force Q H = 0.6 Q V = 30 kN. The additional horizontal force was considered in the analysis of shear stresses at the interlayer surface. Material properties of the respective asphalt layers were assumed for two temperatures:
• +20°C for the analysis of shear stress (comparable to the conditions of the Leutner test),
• +13°C for the analysis of fatigue life (the equivalent temperature for Poland [25]).
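Assuming the full 50 kN acts on a single circular area, as stated above, the radius of that contact area follows directly from a = √(Q V /(π·q)); the short check below only reproduces this arithmetic.

```python
import math

Q_V = 50e3        # vertical wheel load [N]
q = 850e3         # uniform contact pressure [Pa]

a = math.sqrt(Q_V / (math.pi * q))   # radius of the circular contact area [m]
print(f"contact radius a = {a * 1000:.1f} mm")   # roughly 137 mm
```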
The detailed mechanical properties of each layer of pavement structure are given in Table 1. BISAR ® software enables modeling of interlayer friction by reduced shear spring compliance ALK [24]. Full bonding between layers occurs when the shear strength of interlayer bonding is close to or a little lower than the shear strength of the used asphalt mixtures, that is when the results of the Leutner test meet the requirements. Full slip between two asphalt layers practically does not occur because of roughness of their surfaces. The modeling of interface parameter for limited bonding requires the performance of a series of calculations with different values of parameters ALK in the range from 0 to 200 m [26], [27], that is, respectively, full bond and full slip (non-bound).
Effect of reduced interlayer bonding on shear stresses
The first stage of the mechanistic analysis considered shear stresses at the interface between the asphalt base and the binder course. Calculations were performed for a temperature of +20°C, the same as in the Leutner test. The results of the calculations are given in Figure 6. It can be concluded that for full (100%) bonding (parameter ALK = 0 m) the shear stresses obtained from the calculations are two times lower than the required 0.7 MPa from the Leutner test. It is also visible in Figure 6 that at low values of bonding, below 30%, the shear stresses are close to 0 MPa. Thus, for places where specimens cored out from the pavement had disintegrated, the bonding was assumed to be 30%, due exclusively to friction at the layer interface. It is visible from a comparison of Figures 2 and 6 that the shear strength for the majority of specimens is lower than the stresses obtained from the mechanistic calculations. This means that the two layers in the pavement will displace horizontally in relation to each other, but the interlayer bonding will bear some part of the shear stress, lower than its strength. It is also possible that when the maximum shear stresses caused by the traffic exceed its strength, the two layers will slip. Slip of layers - especially in the case of the wearing course - results in high displacement between them and causes characteristic semi-circular cracks, leading to a loss of pavement evenness [4]. The risk of slip is especially high for sections with frequent acceleration and deceleration of vehicles, e.g. within the zones of crossroads. However, layer slipping is more often observed in cases of poor bonding quality between the wearing course and the binder course, and it is rather not expected to occur between the binder course and the base or between two lifts of the asphalt base course.
It should be noted that the direct comparison of shear strength derived from the Leutner test with the stresses from mechanistic calculations is a considerable simplification due to different shape of the load. There is still a gap of knowledge pertaining to relation between the results of laboratory tests and the mechanistic calculations. Nevertheless, it is certain that at the level of stress between 0 and 0.3 MPa the interlayer bonding takes on values lower than 100% but the full slip between layers does not occur, and it is expected that interlayer bonding is greater than 30%.
Effect of reduced interlayer bonding on fatigue life of pavement structure
The analyses of fatigue life were performed for properties of asphalt mixtures obtained at the equivalent temperature of +13°C, used for the design of flexible pavements in Poland [25]. Calculations were performed for different levels of interlayer bonding from 0% (full slip) to 100% (full bonding) and for 4 thicknesses of asphalt layers from 18 cm to 24 cm. Two criteria were considered: bottom-up fatigue cracking of asphalt mixtures acc. to AASHTO 2004 [28] and the Asphalt Institute criterion of subgrade permanent deformation [29]. The fatigue life of the pavement structure is represented by the minimum value obtained from the criteria of asphalt fatigue cracking and subgrade permanent deformation. The results of the calculations are given in Table 2 for a thickness of asphalt layers equal to 24 cm, and in Figure 7. The critical point is determined directly under the wheel load at the level where the horizontal strains ε xx in the asphalt layers reach their maximum. For full bonding of asphalt layers the critical point (maximum ε xx) occurs at the bottom of all asphalt layers, in the asphalt base course. With the deterioration of the quality of interlayer bonding between the base and the binder course, strains at the bottom of the asphalt base and strains at the bottom of the binder course increase simultaneously. For a limited level of bonding (parameter 0 < ALK < 200 m) strains at the bottom of the binder course are higher than at the bottom of the asphalt base and, therefore, fatigue cracks will be initiated in the binder course (see Table 2). However, for the considered case of the pavement structure in R3L the fatigue life obtained from the permanent deformation criterion is lower than the fatigue life obtained for bottom-up cracking initiated in the binder course.
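Both criteria reduce to power-law transfer functions that map a critical strain from the mechanistic model to an allowable number of load repetitions, with the design fatigue life taken as the smaller of the two. The sketch below uses the generic form N = k1·(1/ε)^k2·(1/E)^k3; the coefficients and strain values shown are commonly cited Asphalt Institute figures used purely as placeholders, not the calibration of [28] and [29] actually applied in the paper's calculations.

```python
def fatigue_cracking(eps_t, E_mpa, k1=0.0796, k2=3.291, k3=0.854):
    """Allowable repetitions for bottom-up fatigue cracking.
    eps_t: horizontal tensile strain at the bottom of the asphalt [-],
    E_mpa: asphalt stiffness [MPa], converted to psi for the classic AI form."""
    E_psi = E_mpa * 145.04
    return k1 * (1.0 / eps_t) ** k2 * (1.0 / E_psi) ** k3

def subgrade_rutting(eps_z, k1=1.365e-9, k2=4.477):
    """Allowable repetitions against subgrade permanent deformation.
    eps_z: vertical compressive strain on top of the subgrade [-]."""
    return k1 * (1.0 / eps_z) ** k2

# Illustrative strains and stiffness (placeholders, not values from Table 2):
eps_t, eps_z, E = 120e-6, 300e-6, 9000.0
n_fat, n_rut = fatigue_cracking(eps_t, E), subgrade_rutting(eps_z)
print(f"design fatigue life = {min(n_fat, n_rut):.3e} load repetitions "
      f"(cracking {n_fat:.2e}, rutting {n_rut:.2e})")
```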
The critical criteria are marked in Table 2 and in the graphs in Figure 7. Figure 7 also shows the trend of the impact of bonding level and asphalt layer thickness on the type of critical fatigue criterion. It is visible in Figure 7 that a slight decrease in interlayer bonding causes a significant decrease in fatigue life, and this decrease is higher for thicker pavements. For example, with a total asphalt layer thickness of 24 cm, a slight decrease in interlayer bonding from 100% to 70% causes a significant decrease in fatigue life of 50%. A further decrease to 30%, which represents zero shear strength, causes a decrease in fatigue life of almost 85%. It can be concluded from Table 2 and Figure 7 that a lack of interlayer bonding cannot be accepted by the investor and rebuilding of the pavement layers is necessary. A partial lack of interlayer bonding, shown by a shear strength in the Leutner test below the required values but above the maximum shear stress actually present in the pavement, can be accepted, provided that the quality of the asphalt mix and the layer thicknesses fulfill all the other requirements.
Summary and recommendations
As shown by means of mechanistic-empirical analysis, lack or partial lack of interlayer bonding between asphalt layers can cause reduction in fatigue life of pavement structure.
Case studies included analyses of three road sections. Despite proper technical specification and technology of construction the shear strength of interlayer bonding did not fulfill the requirements. Potential reasons of unsatisfactory interlayer shear strength obtained in the Leutner test are as follows: • Lack or low quality of an interlayer tack coat.
• Too low or too high quantity of bitumen emulsion for spraying a tack coat.
• Non-homogenous surface of the lower layer and weak interlocking due to insufficient compaction of the upper layer or the same gradation of bonded layers.
These reasons - in the case of otherwise good practice - imply the main recommendations for contractors willing to reduce the risk of limited or zero interlayer bonding between asphalt layers.
Reduction of interlayer bonding between the asphalt base and the binder course or between two lifts of the asphalt base course will result in:
• Premature initiation of fatigue cracks both at the bottom of the asphalt layers in the base course and in the binder course, which accelerates the distress of the pavement.
• Much faster occurrence of the critical level of asphalt fatigue cracks for thicker pavements.
• Faster occurrence of permanent deformation rather than fatigue cracks for thinner pavements.
• For very low quality of interlayer bonding critical level of permanent deformation of thick asphalt pavement can occur before fatigue cracks.
|
2019-04-30T13:03:17.447Z
|
2017-09-01T00:00:00.000
|
{
"year": 2017,
"sha1": "b479dba307d525789dd80449be09cd8453bc228f",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.1088/1757-899X/236/1/012005/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "f97d1663922a0ca03088be9f0977468d73dd9886",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Physics"
]
}
|
136325510
|
pes2o/s2orc
|
v3-fos-license
|
Attempt to Apply Surface-Conductive PAN as a Precursor for aPAN Ionic Electroactive Polymer Gel Fabrication
Chemically activated polyacrylonitrile (aPAN) displays ionic electro-mechanically active polymer properties. Thin, gel-like fibre is a technically feasible form of aPAN, as it quickly shrinks or swells in response to a variation in electrolyte pH, soaking it in. A prerequisite for direct electrical stimulation of aPAN fibres through electrolysis–produced variations in pH is their electrical conductivity, commonly achieved by complex surface modification of already-formulated aPAN. The paper presents an alternative approach involving the exploitation of electro-conducting surface-modified PAN fibres as a precursor for fabrication of aPAN. The electrical conductivity of precursor PAN fibres was achieved by the chemical formation of a copper sulfide complex covering.
Introduction
The idea of "artificial muscles" - materials that mimic the mechanical performance of natural muscle tissue - is practically realised in electroactive polymers (EAPs). EAPs are plastics and polymeric composite materials which change their geometric dimensions significantly (or other mechanical properties, such as viscosity or elasticity) in response to electrical stimuli [1]. EAP gels (named Ionic Polymer Gels, IPG) comprise a sub-family of ionic EAPs. IPGs are hetero-phase material systems encompassing a solid, cross-linked and partially dissociated polymer (polyelectrolyte) interpenetrated by an interstitial liquid (solvent; usually water), remaining in thermodynamic equilibrium with the polymeric matrix. The special feature of IPG is its substantial reversible volume change in response to pH variations. Placing IPG in the anodic or cathodic region of an electrochemical cell allows changes in the pH of the gel liquid phase to be produced electrically, and thus its shrinkage or swelling to be driven by electrical stimuli. From an application point of view it is highly desirable to induce pH changes directly in the volume of the IPG, possibly by means of electrodes deposited on the surface of the gel or hosted directly in its volume. As the kinetics of the electromechanical reaction of IPG is limited mainly by diffusion in the liquid phase, the volume change rate depends - among other factors - on the dimensions of the gel actuator. In practice, this means that the elements of artificial IPG muscles need to be thin, fibre-like structures.
A cross-linked polymer with attached groups of polyacrylic acid (PAA) displaying an anionic poly-electrolytic character is often cited in the literature as a typical IPG. It is produced from polyacrylonitrile (PAN) in a two-phase conversion process called "activation", proposed by Umemoto [3]. This procedure involves the initial thermal oxidation of PAN, which leads to the creation of a PANox form partially cross-linked by thermally-induced pyridine rings, but still containing a certain amount of non-reactive nitrile -CN groups. The subsequent second stage of the process entails the saponification of PANox in a hot aqueous solution of strong alkali, leading to hydrolysis of the remaining -CN groups to carboxylic -COOH groups of the PAA acid and the formation of a so-called activated PAN (aPAN) network of presumed structure, shown in Figure 1 (see page 30).
The resulting aPAN hydrogel alternates its volume in response to a change in pH: in acidic conditions it shrinks, while in alkaline it swells. This is a result of interactions between -COOH groups and electrolyte-originated protons and of changes in the molecular conformational structure of the aPAN cross-linked network, according to the molecular reaction mechanism proposed by Schreyer [4]. As transferring aPAN strands between acidic and alkaline solutions is not technically feasible in practical application, a change of pH is thus produced by means of a reversible electro-chemical reaction. Fibres of aPAN are located in the direct vicinity of the anode or cathode of the electrolytic cell, where the pH may be electrically varied. However, such an approach is also not perfect as it requires a massive (in relation to aPAN strand dimensions) electrolytic cell and electrodes, which are bulky (e.g. made of metallic mesh) or laborious to make (e.g. by winding a thin Pt wire around aPAN strands). Therefore, for direct electrical stimulation of aPAN fibre its surface must be made electro-conductive in order to produce an electrode that stays in intimate contact with the aPAN gel. Such conductivity is achieved either by post-activation chemical Pt metallisation [1] or by making up aPAN composite structures encompassing Ag, polyaniline or polypyrrole [1]. Hou et al. showed that the electrospinning of a PAN solution containing up to 35% of multiwall carbon nanotubes (MWCNT) surface-modified with carboxylic groups furnishes PAN-MWCNT composite nanofibres approx. 200 nm in diameter with the carbon nanotubes well-aligned along the fibres' length [5]. Gonzalez and Walter exploited such an approach to make aPAN fibres electro-conductive by means of carbon nanotube (MWCNT) and graphite modification. PAN-MWCNT composite nanofibre mats, with significant agglomeration of the conductive additive material, were successfully produced by electrospinning.
However, when subjected to thermal annealing and hydrolysis, the integrity of the aPAN mats was significantly reduced, producing weak, difficult-to-handle structures not applicable as electrically-activated IPG [6]. An approach applying activated carbon to compose electrodes and exploiting the electric double layer effect (as exercised in supercapacitors) for actuation in a supramolecular nanocomposite ionogel containing UV-photopolymerized hydroxyethyl methacrylate interpenetrated with an ionic liquid (1-butyl-3-methylimidazolium tetrafluoroborate) was recently exploited by Liu et al. as a method to produce a non-water-based IPG applicable over a wide temperature range [7].
Treatments used to establish conductivity in aPAN are thus complicated, multistage, and require toxic, explosive or flammable chemical reagents. Moreover such conductive phases are non-replicable, unstable and weakly bonded with the IPG fibre core. As a result their delamination after a very limited number of electrochemically-induced shrink-swell cycles is observed, leading to the disappearance of electrically-driven mechanical action.
The paper presents an experimental attempt to reverse the conventional formulation sequence of PAN → aPAN → conducting aPAN. In such a customized approach, raw PAN fibres were initially made to be surface-conducting and then exploited as a precursor to produce aPAN. A wet xanthate process was adopted to produce surface-conductive PAN by surface modification forming a superficial layer made of copper sulfide complexes bonded to nitrile functional groups of the underlying PAN fibre core. It provided a continuous, highly conductive, robust and strongly PAN-fibre-bonded covering resistant to wear and multiple washing. It is a well-established process developed and commercially employed by the Textile Research Institute. Electro-conducting PAN fibres commercially produced this way are known under the brand name Nitril-static (N-S) and are used to manufacture charge-dissipative yarns and nonwovens [8]. The surface-conducting PAN precursor with a copper sulfide complex covering was then subjected to the classical chemical activation process, involving thermal oxidation and saponification, transforming firm PAN fibres into gel-like aPAN filaments displaying properties representative of IPG.
The xanthate surface modification consisted of wet treatment steps undertaken in a hot water bath containing 5% w/w CuSO4·5H2O and 7% w/w Na2S2O3·5H2O (concentrations given in relation to the mass of the fibres processed). The acidity of the reaction bath was kept at pH 3 by the addition of a suitable amount of formic acid. Figure 2 schematically illustrates the thermal and reactional treatment steps leading to the formation of copper complexes strongly bonded to the PAN surface. Finally, the fibres were thoroughly rinsed in distilled water and hot-air dried. The above-mentioned xanthating process was also applied in order to restitute the electro-conductivity of aPAN gel fibres, which will be discussed later in the text.
In order to obtain aPAN electroactive gel, the PAN precursor fibres were subjected to the traditional two-phase conversion process [4]. First the fibres were thermally oxidised in ambient air at 220 °C for 90 minutes and then slowly cooled down; the oxidation process parameters were chosen according to Schreyer [4]. Loose strands of PANox were then bonded into bundles containing a few hundred fibres each, using a chemo-resistant epoxy resin (460 DP, 3M). Bundles of PANox fibres were then subjected to saponification in a boiling 1 N aqueous solution of LiOH for 30 minutes. The resulting aPAN gel fibre bundles were finally thoroughly rinsed in distilled water for 30 min.
Microscopic examination and fibre composition analysis were made using an SEM microscope (Hitachi S-3400N, Japan) fitted with an EDS analyser. The reaction of aPAN to pH variation was monitored visually using an optical microscope (MMT 800BT, Microlab, Poland). Measurement of the resistance of individual fibres was carried out using an electrometer (Keithley 6517, USA) working at a 10 V test voltage. For this purpose two electrodes made of copper foil adhesive tape (1181, 3M) were affixed to a dielectric substrate and separated by a 10 mm gap. The electrodes were fitted with cable connectors to the electrometer, and single polymeric fibres were then attached between electrode pairs using electro-conducting glue (EP77M-F, Masterbond).
Results and discussion
The electrical surface conductivity in PAN precursor fibres was related to the continuous copper sulfide CuxSy layer strongly attached to the PAN fibre surface, owing to co-ordinate bonding with PAN nitrile functional groups [9]. Such electro-conductive sulfide cladding displayed both good mechanical endurance and resistance to multiple washing. The as-processed PAN precursor fibres had a distinctive, olive-greenish hue and a diameter of approx. 22 µm (Figure 3.a). The surface of the PAN precursor fibres was minimally rough (Figure 3.a inset) and generally homogeneous, with only a negligible number of minuscule dust-like protrusions. EDS-SEM composition microanalysis of the PAN precursor surface revealed C, Cu, S and O as its main atomic components. The determined ratio of Cu:S = 1:1 suggested that the main compound present in the superficial section of the fibre was CuS (which may also be responsible for its olive-greenish coloration). The EDS-SEM line composition microanalysis focused on the two constituents of CuS, i.e. Cu and S, and was also carried out along the cross-section of the fibre (Figure 3.b). The low quality of the SEM image shown in Figure 3.b (and also in Figure 4.b) was due to the so-called environmental mode used for the EDS-SEM analysis, chosen so that its results would not be hindered by the Au metallisation of the PAN precursor specimen used in the standard SEM mode of operation (the environmental mode uses low pressurization of the SEM chamber and thus residual humidity to disperse electron-beam-generated charges, which, however, results in low quality SEM images fuzzed and blurred due to electron beam dispersion). The EDS-SEM line analysis indicated, however, that the sulfide covering was not strictly superficial, because the majority of Cu and S atoms were located to a depth of about 2 µm, corresponding to about 20 % of the diameter of the fibres. This fact explains the good adhesion of the conducting layer to the PAN fibre and its resistance to wear and wash. The experimentally determined resistance of 1 cm of the PAN precursor fibre was approx. 1 MΩ. Assuming that the thickness of the sulfide layer was 2 µm, its mean resistivity was approx. 1.2 Ωcm, which stayed in good agreement with literature data declaring that more than 87% of electro-conducting PAN fibres display a resistivity in the range 1 - 5 Ωcm [10].
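The quoted resistivity can be reproduced from the measured resistance by treating the conducting layer as a thin annular shell on a cylindrical fibre; the snippet below merely repeats that arithmetic using the figures given above (22 µm fibre diameter, ~2 µm layer depth, 1 MΩ per 1 cm).

```python
import math

R = 1.0e6          # measured resistance of a 1 cm fibre segment [ohm]
L = 1.0e-2         # segment length [m]
d_fibre = 22e-6    # fibre diameter [m]
t_layer = 2e-6     # depth of the Cu/S-rich layer [m]

r_out = d_fibre / 2.0
r_in = r_out - t_layer
area = math.pi * (r_out ** 2 - r_in ** 2)   # cross-section of the conducting shell [m^2]

rho = R * area / L                          # mean resistivity [ohm*m]
print(f"mean resistivity ~ {rho * 100:.2f} ohm*cm")   # close to the reported 1.2 ohm*cm
```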
After the completion of the first phase of the activation process, i.e. after thermal oxidation, the PANox fibres became uniformly black. SEM microscopic analysis showed (Figure 4.a) that the thickness of the fibres decreased to approx. 18 µm and that the microstructure of the oxidized fibre surface was not noticeably distorted.
Unfortunately as a result of the oxidation process, fibres lost their initial good electrical conductivity, manifested by a resistance exceeding 1000 MΩ, as experimentally determined for 1 cm long PANox fibres.
EDS-SEM composition analysis made along the oxidized fibre cross-section (Figure 4.b) indicated evident changes in the distribution of Cu and S in the subsurface region of the fibre, as well as a substantial change in the Cu:S ratio in this area. This suggests that the sulfide layer partially diffused towards the fibre core and that some part of the original CuS was presumably also thermally transformed into other Cu-S compounds. Literature sources provide different CuS transformation temperatures: 220 °C [11], 130-330 °C [12], 181-313 °C [13], and the temperature range cited depends mainly on the way the CuS was synthesised. It can therefore be presumed that when the PAN precursor fibres were heated, a thermo-chemical transformation of CuS into chalcocite Cu2S or digenite Cu1.8S may have already started at 130-180 °C. The resulting black colour of the PANox precursor fibres further supports this assumption, as chalcocite is a black mineral. As a result of CuS diffusion and transformation into Cu-rich forms, a significant deterioration of the electrical conductivity was observed. Chalcocite Cu2S is substantially more resistive (5.7 × 10⁵ Ω/sq [14]) than digenite Cu1.8S (346 Ω/sq [14]), and both display p-type semiconducting properties. On the other hand covellite CuS exhibits metallic-like conduction [15] and is characterized by low resistivity (3.6 × 10⁻⁶ - 6.3 × 10⁻⁷ Ωm [16]).
Despite the negative result of the first phase of the activation, i.e. obtaining non-conductive PANox fibres, they were subjected to saponification in order to test whether such altered PANox fibres may be correctly saponified to produce the aPAN form, whether the sulfide surplus may interfere with the saponification process, and whether it is possible to recreate a sulfide covering on such gel aPAN fibre bundles using the xanthating method. Saponification process parameters, including the use of LiOH, were chosen according to Choe [17]. PANox fibres were bonded together with chemo-resistant glue into bundles prior to saponification, because loose PANox fibres tend to form unusable tangles when boiled. As a result of saponification, proper elastic fibres of aPAN were fabricated, which significantly contracted when the pH was reduced from 14 (1 N LiOH) to 7 (distilled water). Strong volumetric variations of the gel aPAN fibres caused by a pH shift were clearly evident in microscopic images (Figure 5). An increase in pH from 1 to 7 was accompanied by over 180% relative thickness expansion of the fibres, from approx. 40 µm (Figure 5.a) to approx. 75 µm (Figure 5.b).
A further increase in pH up to 14 brought about a thickness rise of up to approx. 135 µm (Figure 5.c), almost reaching the 330% relative thickness change related to the pH range 1 -14. Due to the isotropic character of the expansion process, the length change of the fibres was expected to be similar. It was also observed that among the correctly pH-responding aPAN fibres there were some exotic fibres that reacted correctly by compression in acidic conditions but displayed virtually no volumetric change in the pH range 7 -14 ( Figure 5.d). However, the reason for this effect was unidentified.
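The relative thickness figures quoted above follow directly from the measured diameters, as the small check below shows (all diameters are the approximate values read from Figure 5).

```python
d_ph1, d_ph7, d_ph14 = 40.0, 75.0, 135.0   # fibre thickness in micrometres

for label, d in (("pH 1 -> 7", d_ph7), ("pH 1 -> 14", d_ph14)):
    ratio = d / d_ph1
    print(f"{label}: relative thickness = {100.0 * ratio:.0f} % "
          f"(expansion of {100.0 * (ratio - 1.0):.0f} %)")
```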
Finally an attempt was made to re-create a sulfide complex layer on aPAN gel fibres using the same procedure as applied to produce the covering on PAN precursor fibres. However, as a result only a certain fraction of aPAN gel fibres was covered with a layer of Cu x S y (Figure 6.a) and the coating was deposited only along parts of the fibre length. The sulfide layer was not dense and it consisted of porous agglomerates built up on the filament surface (Figure 6.b). The recreated coating was therefore completely different in nature from the covering formed on the original PAN precursor fibres, where the sulfide layer was built in the surface and sub-surface part of the fibre and represented its solid section. As the re-formed covering was non-continu- ous it did not provide a path for electrical current flow, making the aPAN gel fibre non-conductive. The failure in re-creation of the conductive sulfide covering may be identified as related to the minimal amount of -CN functional groups left in the aPAN once the saponification process (turning -CN into -COOH) was terminated. On the other hand, the sulfide formation method required the existence of nitrile groups, which were responsible for creating centers for Cu atom coordination bonding. The areas of aPAN on which the sulfide layer was re-established probably contained residual -CN groups that had not been transformed in -COOH during the course of saponification. Such a flimsy sulfide layer could also have arisen as a result of the precipitation of sulfides from the solution and their physical deposition on the fibre -the spongy structure of the restituted sulfide conglomerates, which was weakly linked to the surface of the fibres, testifying to this assumption. An additional factor which led to the weak coverage of fibres with a properly attached sulfide layer was also related to the necessity to carry out the xanthating process using bundles of aPAN, which seriously hampered the penetration of reagents between individual fibres. As a result, only a minor fraction of fibres was fitted with electrically-conductive surface entities, which translated into the electrical discontinuity of such structures and to the very high electrical surface resistance of aPAN subjected to Cu x S y restitution.
Conclusion
It was experimentally shown that electroconductive PAN can be used as a precursor to form proper aPAN gel, responding to a shift in pH from 1 to 14 with a more than 300% relative thickness change.
Unfortunately the process of thermal oxidation of the electro-conductive PAN precursor leads to the disappearance of its surface electrical conductivity, associated with the diffusion and thermal transformation of CuS into a less conducting sulfide type. An attempt at restitution of this layer on aPAN gel fibres was not effective due to the lack of coordination bonding sites, required for the formation of properly bonded CuS. Further experiments need to be performed in search of an effective CuS deposition technique for aPAN gel fibres, which may include the application of dyes which display the capacity of coordinative bonding to metal sulphides [18].
|
2019-04-29T13:15:58.499Z
|
2016-09-01T00:00:00.000
|
{
"year": 2016,
"sha1": "9c47ddc52824ee0960ae6d12b6b0531026bda21d",
"oa_license": null,
"oa_url": "https://doi.org/10.5604/12303666.1215523",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "447c002fa6159ea25fe44d87b5e06ec907532ae9",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
14412812
|
pes2o/s2orc
|
v3-fos-license
|
Effects of a Sliding Plate on Morphology of the Epiphyseal Plate in Goat Distal Femur
The aim of this study was to observe the effects of a sliding plate on the morphology of the epiphyseal plate in goat distal femur. Eighteen premature female goats were divided randomly into sliding plate, regular plate and control groups. Radiographic analysis and histological staining were performed to evaluate the development of epiphyseal plate at 4 and 8 weeks after surgery. In the sliding plate group, the plate extended accordingly as the epiphyseal plate grows, and the epiphyseal morphology was kept essential normal. However, the phenomenon of the epiphyseal growth retardation and premature closure were very common in the regular plate group. In addition, the sliding plate group exhibited more normal histologic features and Safranin O staining compared to the regular plate group. Our results suggest that the sliding plate can provide reliable internal fixation of epiphyseal fracture without inhibiting epiphyseal growth.
Introduction
The growth characteristics of the epiphyseal plate mean that the treatment of epiphyseal fractures differs from that of ordinary fractures. The ideal treatment method should provide adequate stability to permit early mobilization, preserve or optimize fracture biology without inhibiting epiphyseal growth, avoid serious complications, and achieve these goals in a cost-effective manner [1][2][3]. Fractures of the distal femoral growth plate are remarkable in that they are the third most common epiphyseal fracture in children (after wrist and ankle), yet they carry a risk of growth disturbance in up to 90% of cases [4,5]. The Salter-Harris system is used to classify these injuries [6]. Salter-Harris type-I and II fractures can be treated with closed reduction and percutaneous pin or screw fixation. Salter-Harris type-III and IV fractures are best treated with open reduction and percutaneous or internal fixation, usually with Kirschner wires or screws that do not cross the physis [7,8]. However, complex fractures of the distal femoral growth plate are often encountered in the clinic, and Kirschner wire or screw fixation alone may not be reliable. Regular plate fixation can be relied on for fixation but may retard vertical growth of the bones and lead to femoral valgus deformity and leg-length discrepancy [9]. In order to tackle these problems, we designed a sliding plate, observed its effects on the morphology of the epiphyseal plate, and assessed its feasibility for providing reliable internal fixation of epiphyseal fractures without inhibiting epiphyseal growth.
Designing of sliding plate
In collaboration with Double Engine Medical Material Co Ltd (Xiamen, China), we designed a sliding plate, for which we have obtained a patent (patent number: 200620009578.4). This internal device is made of titanium alloy and is a modification of a regular anatomical plate. To realize the vertical sliding function, the internal device is composed of two parts. The head, 68 mm in length, 13 mm in width and 4 mm in thickness, is used to fix the femoral condyle of the goat, and the body, 63.5 mm in length, 9.3 mm in width and 2.5 mm in thickness, is used to fix the diaphysis. The head has a drawer-like slot along which the body portion can slide (Fig. 1).
Experimental animals and grouping
All animal experimental procedures were approved by and performed in accordance with the Institutional Animal Care and Use Committee of the authors' institution. Eighteen female goats, 8-10 weeks old, were maintained in the animal care facility for 10 days to become acclimated to diet, water, and housing under a 12 hour/12 hour light/dark cycle. All goats were divided randomly into a sliding plate group (n=6), a regular plate group (n=6) and a control group (n=6). Femurs in the sliding plate group were fixed with the sliding plate, those in the regular plate group were fixed with a regular plate, and those in the control group were exposed without internal fixation.
Surgical procedure
All surgeries were performed by the first author under endotracheal intubation and general anesthesia. The femoral condyle was exposed and the cartilage membrane surrounding the physis was protected. A Kirschner wire with a diameter of 1.5 mm was introduced at the lateral condyle, which was located using intra-operative X-ray radiography in order to ascertain the relative location of the Kirschner wire and the physis and to avoid damaging the physis itself. The sliding plate was introduced at the lateral condyle. Using the same method, regular plates were placed in the regular plate group. In the control group, the femur was exposed without internal fixation. At two time points (4 and 8 weeks after surgery), three animals from each group were taken for frontal and lateral X-ray films of the operated femur, and histological staining was performed to evaluate the development of the epiphyseal plate.
Light microscope examination
The fresh bone specimens were fixed with formaldehyde, decalcified, and embedded in paraffin. Each slide was cut to 5 μm thickness, and routine hematoxylin and eosin (HE) staining was performed. Under light microscopy, all the tissue slides were analysed with a VIDAS automatic image analysis system. The thickness of the epiphyseal plate was measured under 40x magnification, and proliferating cells and hypertrophic cells were counted.
Histochemical analysis with Safranin O
Specimens were isolated from the distal femur, fixed with 20% neutral formaldehyde, and decalcified with decalcification buffer. The femoral condyle was cut down from the center. After a series of dehydration procedures, routine slides were cut from paraffin-embedded tissues at 5 μm thicknesses individually. All slides from the total samples (including those taken at different times) were stained under the same conditions and with Safranin O. Safranin O binds to proteoglycan, so a stronger red staining with Safranin O indicates a higher amount of proteoglycan, from which we can find the concentration of cytosolic proteoglycan and study the growth of cartilage cells.
Statistical analysis
The data were analyzed using SPSS 13.0 statistical software. P<0.05 was considered statistically significant, and P<0.01 was considered extremely statistically significant.
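The paper does not name the specific test run in SPSS; purely as an illustration of the kind of two-group comparison reported below, the snippet sketches an independent-samples t-test on made-up epiphyseal plate thickness values. The numbers are placeholders, not data from the study, and the choice of test is an assumption.

```python
from scipy import stats

# Hypothetical epiphyseal plate thickness measurements (micrometres); these are
# placeholder values, NOT the measurements reported in Table 1.
sliding_plate = [512, 498, 505, 520, 515, 508]
regular_plate = [430, 445, 438, 450, 441, 436]

t_stat, p_value = stats.ttest_ind(sliding_plate, regular_plate)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.01:
    print("extremely statistically significant (p < 0.01)")
elif p_value < 0.05:
    print("statistically significant (p < 0.05)")
else:
    print("not statistically significant")
```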
Radiographic analysis
Radiographic analysis of the femurs revealed that the sliding plate could be extended as the femur grew. The epiphyseal morphology in the sliding plate group was kept essentially normal. However, epiphyseal growth retardation and premature closure were very common in the regular plate group (Fig. 2 and Fig. 3).
Light microscope examination
The HE staining examination at 4 weeks and 8 weeks after surgery indicated that cartilage cells in both the sliding plate group and the control group were longitudinally oriented and extended upwards in an orderly fashion. Vigorous mitotic processes were observed in the upper cell column, with abundant hypertrophic cells. In the regular plate group, the cells were narrower and disordered, and the cartilage cells were less ordered in comparison to both the sliding plate group and the control group. At 4 weeks and 8 weeks after surgery, there were significant differences in the epiphyseal plate thicknesses, proliferating cell counts and hypertrophic cell counts between the sliding plate group and the regular plate group (P<0.05), and there was no significant difference between the sliding plate group and the control group (P>0.05). Results are shown in Table 1 and Fig. 4.
Safranin O histochemistry staining
Cartilage cells and cytosol from all the groups were stained with Safranin O, which exhibited a red color. In tissues taken at 4 weeks or 8 weeks after surgery, Safranin O staining was significantly stronger in the sliding plate group than in the regular plate group. There was no significant difference in Safranin O staining between the sliding plate group and the control group at either 4 or 8 weeks after surgery. In the regular plate group, the Safranin O staining exhibited a lighter color at both 4 and 8 weeks after surgery (Fig. 5).
Discussion
Fixation methods employed to treat distal femoral epiphyseal fractures in children include external fixation [10], screws [11], Kirschner wires [11], flexible intramedullary nailing [12], and regular plate fixation [13,14], etc. Each method has its advantages and disadvantages, and some degree of disturbance in longitudinal femoral growth may occur, depending on the severity of the original injury or surgical trauma and the method chosen. A better prognosis is obtained with fixation techniques that do not compress the growth plate.
For complex fractures of the distal femoral growth plate, it is difficult to fix and reduce the fracture using screws, Kirschner wires or flexible intramedullary nailing. Regular plate fixation can provide reliable fixation, but it cannot be removed in time. As the epiphyseal plate grows, the plate produces a vertical force opposite in direction to epiphyseal growth. This mechanical compression force restricts epiphyseal growth and retards normal bone growth [15][16][17]. In order to overcome these drawbacks, we designed the sliding plate, which is intended to slide with epiphyseal growth while simultaneously providing reliable fixation of bone fractures. The sliding plate therefore avoids restriction of epiphyseal growth and greatly reduces the risk of complications such as epiphyseal growth retardation, premature epiphyseal closure and angular deformity.
The sliding plate is a modification of the traditional anatomical plate. To realize the vertical sliding function, the sliding plate is made of two parts. The head part is used to fix the femoral condyle, and the body part is used to fix the diaphysis. The head part has a drawer-like slot along which the body part can slide. The biomechanical design of the sliding plate gives it sufficient strength against compression, bending, and torsion. The strength and stability of the sliding plate come from its firm drawer-like connection, which is strengthened at the overlap of the two parts, and from its stable screw-plate locking design. Examination of the imaging studies performed in the present study indicated that, in contrast to the regular plates, the sliding plates can extend in concert with epiphyseal growth. In addition, using light microscope examination and histochemical analysis with Safranin O, we confirmed that the sliding plates do not suppress epiphyseal growth, thus potentially reducing secondary epiphyseal injury and preventing epiphyseal growth retardation and premature closure.
When applying the sliding plate for distal femoral epiphyseal fractures, attention should be paid to the following aspects: the head part and body part of the sliding plate should be united to the utmost extent to avoid compressive sliding in the vertical direction, which can exert compressive stress on the fracture and impair epiphyseal growth, and the screws of the body part should be embedded completely into the plate so that they do not jam the head part and prevent it from sliding.
Conclusion
It was possible to conclude that the sliding plate allows longitudinal bone growth, without blocking the distal femur growth plate if it is appropriately placed. Although these findings cannot be directly extrapolated to treatment of distal femoral epiphyseal fractures in children, they form a useful basis for future studies.
|
2014-10-01T00:00:00.000Z
|
2012-02-05T00:00:00.000
|
{
"year": 2012,
"sha1": "b9098b616252f1ddb4c37a5ebbee72f7f5b67ec7",
"oa_license": "CCBYNCND",
"oa_url": "http://www.medsci.org/v09p0178.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9098b616252f1ddb4c37a5ebbee72f7f5b67ec7",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Medicine"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
}
|
1059543
|
pes2o/s2orc
|
v3-fos-license
|
Anomalous nonidentity between Salmonella genotoxicants and rodent carcinogens: nongenotoxic carcinogens and genotoxic noncarcinogens.
According to current data, the capacity to cause nonprogrammed or unscheduled cell proliferation in target tissues, a common characteristic of chemical carcinogens, may play a more important role in the development of tumors than does genotoxicity. This paper provides strong support for the validity of this conclusion. Ames-negative nongenotoxicants may be considered to be carcinogenic primarily because of their ability to induce cell proliferation in animal tissues and organs. In addition, such nongenotoxic carcinogens may also provide latent and modest DNA (equivocal) modifications that never lead to Ames-positive events. Conversely, noncarcinogenesis by Ames-positive agents is likely to be linked to a lack of stimulation of cell division. Nongenotoxic and genotoxic carcinogens rely on both cell proliferation and equivocal DNA modification for their full carcinogenicity. Such equivocal DNA modifications do not appear to be formed by tumor promoters. The role of cell proliferation may provide a favorable milieu for the occurrence of genetic instability, give rise to selective "apoptosis-resistant abnormal cells," and then affect clonal expansion of these cells. Therefore, understanding the influence of nongenotoxic and genotoxic carcinogens on cell proliferation capability is a key point in determining the mechanisms of chemical carcinogenesis. Considering the contradictory and common features of genotoxicants and carcinogens, early detection of nonprogrammed cell proliferation is the most effective approach to predict human and rodent carcinogenicity.
Enormous progress has been made within the past several decades in assessing human cancer risk from chemical carcinogens. In fact, attempts to estimate risk potential have given rise to contradictions between Salmonella genotoxicity and rodent carcinogenicity, with the existence of nongenotoxic (Ames-negative) carcinogens and genotoxic (Ames-positive) noncarcinogens (1)(2)(3)(4)(5). The anomaly in question has provided us with a clue to the interpretation of the relationship between genotoxicity and carcinogenicity. In terms of the cause of human carcinogenesis, the focus should now be placed on the role of nonprogrammed cell proliferation caused by exogenous agents rather than on their genotoxicity (6)(7)(8)(9).
Preston-Martin et al. (10) claimed that human cancers are reflections of sustained cell proliferation caused by cell proliferative factors (consisting of chemical agents, hormones, etc.) because nondividing cells in adults, such as nerve cells and cardiomyocytes, never develop tumors. In addition, Croy (11) assumed that estimation of genotoxic effects alone does not provide an accurate assessment of cancer risk to humans from chemical exposure. At present, when newly developed chemicals give rise to Ames-positive events, they are almost always restricted from release into the human environment by government regulation. Ames-positive events, however, do not always equate with carcinogenic events. It is therefore necessary to detect carcinogenicity by examining each chemical's ability to stimulate cell proliferation.
More recently, Mason (12) and Okey et al. (13) reported that some polycyclic aromatic hydrocarbons, which are genotoxic carcinogens, have the capability to increase cell proliferation through dioxin-aromatic hydrocarbon (Ah) receptor ligand complexes. Data also suggest that cell proliferation increased by genotoxic carcinogens is closely related to the development of tumors. This is the basis for the studies presented in this paper, which question to what extent noncarcinogens, as well as nongenotoxic and genotoxic hepatocarcinogens, exert cell proliferative action on hepatocytes in vivo (14)(15)(16). The results show that most of the hepatocarcinogens tested clearly accelerate hepatocyte division whereas the majority of noncarcinogens gave no such effect.
The data suggest that the capacity to cause cell proliferation is common to nongenotoxic and genotoxic carcinogens. The mechanisms underlying this proliferation remain unclear in many cases, but the role of cell division in carcinogenesis is certainly a key point in the development of tumors. This paper reviews issues regarding nongenotoxic carcinogens and genotoxic noncarcinogens and proposes an interpretation in terms of nonprogrammed cell proliferation.
Definition of Nongenotoxic Carcinogens
The status of nongenotoxicants and genotoxicants should be evaluated using the standard Ames test alone, including a liver S9 mix from rats, mice, or hamsters. There are two principal reasons for this: 1) the standard Ames test has hitherto supplied a large number of screening data on existing noncarcinogens as well as carcinogens, which provides the highest overall concordance compared with other established genotoxicity tests (1); and 2) the simply defined terminology facilitates a general understanding among scientists studying mutation, cancer, and other fields. According to this definition of nongenotoxic carcinogens, at least 30% of existing carcinogens can be assigned to this category (1-4). Jackson et al. (17) reported that almost all putative nongenotoxic carcinogens can be shown to be genotoxic when tested with a combination of several genotoxicity tests. The combined test system, however, also includes tests that detect tumor-promoting agents with cell proliferative capabilities. Therefore, the data lead to questions about whether the nongenotoxic carcinogens examined are indeed genotoxic.
In addition, almost all nongenotoxic carcinogens are also believed to induce genotoxicity in Salmonella TA102 (18), which is supersensitive to active oxygen production. Screening data obtained with TA102 have, however, been limited so far. Festing (19) indicated that F344 rats and B6C3F1 mice are resistant to some genotoxicants in U.S. National Toxicology Program carcinogenesis bioassays and has argued for the necessity of a multistrain approach. Many scientists recognize that Ames tests are too sensitive to DNA damage to reliably estimate whether human and rodent carcinogenicity will actually result from long-term exposure.
Based on the definition of nongenotoxic carcinogens presented above, I believe that nongenotoxic carcinogens always result in cell proliferation and that most latent and modest DNA (equivocal) modifications never lead to Ames-positive events. Further details are presented below.
Cell Proliferation Caused by Nongenotoxic Carcinogens and Genotoxic Carcinogens
Most published reports have indicated that induction of carcinogenesis by nongenotoxic carcinogens depends on the capability to produce cell proliferation (20)(21)(22)(23)(24)(25)(26)(27)(28)(29)(30)(31). There is, however, no absolute consensus because of the lack of adequate screening data on the exact relation between increased cell proliferation and nongenotoxic carcinogenicity (32). Therefore, I conducted a comprehensive screening program to examine whether nongenotoxic hepatocarcinogens cause an increase in cell proliferation after a single administration of each chemical to male F344 rats (9 weeks of age) or male B6C3F1 mice (8 weeks of age) at the maximum tolerated dose (MTD) and one-half the MTD (14)(15)(16). After treatments at 24, 39, and 48 hr, hepatocytes were prepared by a collagenase-perfusion technique and then incubated for 4 hr (14); however, it was subsequently found to give weak RDS-positive results (M. Miyagawa and Y. Uno, personal communications). The results showed that almost all hepatocarcinogens tested increased cell division, whether they belonged to the nongenotoxic or genotoxic category, indicating that the potential to cause proliferation is common to carcinogens regardless of genotoxic potency, as suggested by Cohen et al. (33).
The present RDS test has both strengths and weaknesses for predicting the hepatocarcinogenicity of unknown chemicals. Its strength is linked to its short-term nature, and its weakness is the high doses required in comparison with 2-year animal assays. Some samples showed false-positive or false-negative RDS events relative to their established hepatocarcinogenicity (Table 1). The false-positive RDS events may be principally caused by acute hepatotoxicity leading to regenerative cell proliferation when acute MTD levels are applied to animals. False-negative results may occur because exposure was performed by single gavage in these experiments. With longer-term exposure, such as in subacute and chronic toxicity experiments, eventual induction of drug-metabolizing enzymes appears to biotransform highly toxic intermediates; this would not be expected to occur with short-term exposure. A second cause of false-negative results may be differences in drug distribution resulting from the use of a single treatment versus the long-term exposure in 2-year animal assays. Thus, the RDS approach may not give a perfect match for site, sex, or species in carcinogenicity. Similar considerations are applicable to any short-term experiment used to predict chemical carcinogenicity at the whole-body level. The present RDS data reveal that the test is extremely useful for early detection of nongenotoxic hepatocarcinogens.
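To make the predictive-value discussion above concrete, the short Python sketch below computes sensitivity, specificity, and overall concordance for a short-term assay judged against 2-year bioassay outcomes. The example records are hypothetical placeholders, not counts taken from Table 1.

def assay_performance(records):
    # records: list of (assay_positive, carcinogen_in_bioassay) boolean pairs
    tp = sum(1 for a, c in records if a and c)        # true positives
    fp = sum(1 for a, c in records if a and not c)    # false positives
    tn = sum(1 for a, c in records if not a and not c)
    fn = sum(1 for a, c in records if not a and c)    # false negatives
    return {
        "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
        "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        "concordance": (tp + tn) / len(records) if records else float("nan"),
    }

# Hypothetical example: 18 true positives, 4 false positives, 10 true negatives, 3 false negatives
hypothetical = ([(True, True)] * 18 + [(True, False)] * 4 +
                [(False, False)] * 10 + [(False, True)] * 3)
print(assay_performance(hypothetical))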
In considering the potential of chemicals to stimulate cell proliferation, RDS events can be classified into three categories: 1) binding to the steroid-thyroid-retinoic acid receptor, a type not related to liver injury; 2) growth factor binding due to chemical injury to the liver or other organs; and 3) a result of tumor promoters that never cause equivocal DNA modifications. The chemical-binding steroid receptor superfamily is known to include the steroid hormone-like receptor, the peroxisome proliferator-activated receptor (PPAR), and probably the Ah receptor (34-37). Each receptor might bind some of the RDS-positive carcinogens listed in Table 1 as ligands: the steroid hormone-like receptor for dehydroepiandrosterone and 17α-ethynylestradiol; the PPAR for dehydroepiandrosterone, clofibrate, di(2-ethylhexyl)adipate (DEHA), di(2-ethylhexyl)phthalate (DEHP), phenobarbital sodium, tetrachloroethylene, trichloroethylene, and Wy-14,643; and the Ah receptor for benzo[a]pyrene, p,p'-DDT, p,p'-DDE, and polybrominated biphenyls (34,36,37).
Although there is no clear evidence for an increase in cell proliferation through the PPAR receptor (37), some proto-oncogenes promoting cell proliferation, e.g., fos and jun, are known to be activated (34,38,39).
Our RDS data suggest that formation of PPAR-ligand complexes also leads to cell proliferation (Table 1). Thus, steroid-superfamily receptor-mediated cell proliferation can be considered to be principally involved in the early development of hepatocyte RDS induction. In addition, ligands may simultaneously give rise to equivocal DNA modifications. Progression of both phenomena in the same cells may result in effective disruption of cell-cycle controls so that hepatocarcinomas eventually arise. Among these receptors, particular attention is now being paid to the Ah receptor because it can react with a wide range of nongenotoxic and genotoxic agents as a ligand and it principally regulates induction of CYP1A1 to metabolically activate polycyclic aromatic hydrocarbons (13).
Several growth factors are also known to increase cell proliferative events in specific cells in tissues (40), but the relationship between the roles of growth factors and cell proliferation in carcinogenesis is extremely difficult to assess. With regard to hepatocyte cell proliferation, the action of hepatocyte growth factor (HGF) investigated by Nakamura and his co-workers (41,42) must be considered: HGF secreted from liver and other organs in response to chemical injury is known to promote the division of hepatic parenchymal cells by means of paracrine/endocrine mechanisms; moreover, the HGF-binding receptor has been identified as a met proto-oncogene product (43). Other growth factors taken into the liver are epidermal growth factor and transforming growth factor-β1. At present, there is no simple explanation of how growth factors act in combination to bring about such complicated events in vivo.
Of the RDS-positive carcinogens listed in Table 1, at least 13 samples are generally well known to be hepatotoxicants in rats and mice (13,14): aldrin, benzene, carbon tetrachloride, chloroform, pentachloroethane, pentachlorophenol, tannic acid, 1,1,1,2-tetrachloroethane, tetrachloroethylene, thioacetamide, trichloroacetic acid, trichloroethylene, and 1,1,2-trichloroethane. Thus, damage-induced HGF is likely to play an important role in hepatocyte RDS induction. In addition, the ligand may simultaneously give rise to equivocal DNA modifications, as in the case of the steroid-superfamily receptor. Interestingly, hepatocyte RDS events do not always directly reflect hepatotoxicity, either pathologically or biochemically (e.g., when HGF is secreted through paracrine mechanisms). Therefore, increased hepatocyte RDS events might not be due to simple hepatotoxicity to the target cell population.
In vivo liver-tumor promoters such as butylated hydroxytoluene and lithocholic acid, which are classified as noncarcinogens, might cause increased hepatocyte RDS events by some mechanisms (Table 1). Such agents principally act through protein kinase C to stimulate cell division (44). Details of the combination of different parameters involved in hepatocyte RDS induction by particular agents require further attention.
Examination of early RDS inductive events provides a reliable and simple test to determine whether unknown chemicals possess proliferative stimulus potential. The data obtained so far indicate that agents positive in both the RDS and Ames tests are hepatocarcinogens in humans and rodents, with the exception of 4-nitro-o-phenylenediamine, and that RDS-positive, Ames-negative agents are possible hepatocarcinogens. The data also show that RDS-negative, Ames-negative agents are probably noncarcinogens in the liver. Thus, use of the two tests in combination is an effective approach for classification purposes.
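As a rough illustration, the combined classification scheme described above can be encoded as a small Python function. The first three rules follow the text; the handling of the RDS-negative, Ames-positive case is an inference from the paper's overall argument, not an explicit rule stated here.

def classify_hepatocarcinogenic_potential(rds_positive, ames_positive):
    if rds_positive and ames_positive:
        return "probable hepatocarcinogen (genotoxic, proliferation-inducing)"
    if rds_positive and not ames_positive:
        return "possible hepatocarcinogen (nongenotoxic, proliferation-inducing)"
    if not rds_positive and not ames_positive:
        return "probable liver noncarcinogen"
    # Inferred case: genotoxic in the Ames test but lacking a proliferative stimulus
    return "candidate genotoxic noncarcinogen in the liver"

for rds in (True, False):
    for ames in (True, False):
        print(f"RDS={rds}, Ames={ames}: {classify_hepatocarcinogenic_potential(rds, ames)}")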
What is the significance of early hepatocyte RDS induction for hepatocarcinogenesis? The process may lead to cell death for almost all RDS-inductive hepatocytes by means of apoptosis, which involves one of the normal functions of the p53 tumor-suppressor gene (45). There are four points to support this: 1) experimental data show that the maximum peak of early RDS is observed approximately 24 to 48 hr after chemical treatments and it disappears soon afterwards (14,15,23); 2) it is generally considered that normal p53 function is maintained within 48 hr under conditions of RDS induction; 3) it appears that unequivocal DNA modifications formed by genotoxic hepatocarcinogens greatly contribute to apoptosis (45); and 4) there is no direct correlation between hepatocyte RDS incidence and the hepatocarcinogenic potency so that the affected hepatocytes are unlikely to be precursor cell candidates for subsequent hepatic adenomas and carcinomas.
The measured RDS event after a single exposure may simply reflect the early in vivo response to chemical exposure, which is likely to be immediately followed by homeostasis. In 2-year animal bioassays, however, it is likely that such cell division occurs continuously and repeatedly at low levels in target tissues. As a result, affected cells, apoptosis-resistant cells, would be expected to persist and play a role in the generation of malignant tumor cells over long application times.
Differences between Nongenotoxic Carcinogens and Tumor Promoters
Development of tumors is likely to require at least two initial steps: equivocal DNA modifications and nonprogrammed cell division. The mechanisms underlying carcinogenesis are widely understood to involve multistage initiation, promotion, and progression processes. Experiments using two-stage animal models have supported the terminology "tumor-initiating" and "tumor-promoting" agents, leading to confusion. Pure tumor promoters have been characterized as not only incomplete carcinogens but also nongenotoxicants.
Hildebrand et al. (46) and Perera (47) have indicated that the term "tumor promoters" should be limited to the discussion of two-stage model systems in which tumor development is examined after the application of an initiating agent. The term "nongenotoxic carcinogen" should be used to designate an Ames-negative agent that is capable of causing the development of malignant tumors in 2-year bioassays when animals are exposed to that agent alone. Definite differences between nongenotoxic carcinogens and tumor promoters are not likely to be established; therefore, confusion in the application of terminology will remain.
Tumor promoters are often considered to be carcinogenic when they test positive in 2-year animal bioassays, which leads to the interpretation that tumor promoters are equal to nongenotoxic carcinogens. These nongenotoxic carcinogens are sometimes defined in terms of their carcinogenic potency in a range of animal species, strains, sexes, tissues, or organs, in contrast to the more limited promoter case. Experimental data for nongenotoxic carcinogens tend to show that the two groups are equivalent in practice, and tumor promoters may cause cell proliferation that is tissue or organ specific in animals (48). A representative nongenotoxic carcinogen, benzene, which is a rodent carcinogen as well as a human carcinogen, induces tumors in a wide range of animal species and strains, in both sexes, and in many tissues and organs (4). It is questionable, therefore, whether all nongenotoxic carcinogens are simply tumor promoters.
The confusion in use of the terms "nongenotoxic" and "tumor promoters" principally occurs when the cause of human cancers is considered. Current understanding suggests that the capacity of chemicals to cause cell proliferation is more important than the initiating effects of the chemicals. As far as the occurrence of human cancers is concerned, it may not be necessary to make a strict distinction between nongenotoxic carcinogens and tumor promoters as responsible agents. In the experimental field, however, a strict discrimination is always needed for regulatory authorities to properly assess human cancer risk.
My hypothesis is that hepatocarcinogenicity is due to stimulation of cell proliferation and production of equivocal DNA modifications. While the ability of chemicals to induce hepatocyte proliferative division can be determined in RDS experiments, equivocal DNA modifications are difficult to estimate. Overcoming this problem would be of great assistance in assessing human cancer risk. After examination of in vitro/in vivo DNA-binding adducts caused by more than 200 different DNA-binding agents, Hemminki (49) reported on the relationship between ultimate DNA adducts and malignant tumor development. A clear understanding awaits comprehensive in vivo DNA-binding data and sufficient quantitative results with regard to dose dependency of response.
For example, DNA adducts have not yet been demonstrated for the representative human carcinogen benzene. The microsomal oxidation of benzene to phenol and of phenol to catechol and hydroquinone are known to be major pathways in the metabolism of benzene. The nature of the ultimate carcinogenic metabolite, however, has not been resolved, although benzene oxide, catechol, hydroquinone, and benzoquinone have each been proposed as important contributors to its carcinogenicity (50). Of the benzene metabolites reported by Leanderson and Tagesson (51), catechol and hydroquinone are known to form 8-hydroxydeoxyguanosine in vitro (52). Thus, determination of equivocal DNA modifications requires further investigation because it is extremely important for distinguishing so-called nongenotoxic carcinogens from tumor promoters.
In the interpretation of increased RDS events, a key point is whether it is possible to distinguish nongenotoxic carcinogens from tumor promoters. The difference is based on whether the chemical possesses the ability to cause equivocal DNA modifications (e.g., oxidative DNA adducts). With regard to cell proliferation induced by tumor promoters, we can speculate that the inductive RDS events involve the protein kinase C cascade pathway (44) without equivocal DNA modifications, and such cell proliferative conditions might themselves never lead to tumorigenesis. Nongenotoxic carcinogens, on the other hand, produce equivocal DNA modifications, exemplified by the oxidative DNA adduct 8-hydroxydeoxyguanosine (52). Such DNA adducts are considered never to cause genotoxic events in the standard Ames test, but they might contribute to disruption of normal cell division, although any causative significance of oxidative DNA adducts for cell proliferation remains to be proven.
Data on formation of oxidative DNA adducts have been obtained for several carcinogens in both in vivo and in vitro experiments (53). With hepatocarcinogens and experimental animal exposure in vivo, at least four agents, [DEHA and DEHP (54), 2-nitropropane (55), and polychlorinated biphenyls (56)] have been shown to cause 8-hydroxydeoxyguanosine in hepatocyte DNA in rats; of these agents, DEHP and DEHA also increased hepatocyte RDS events (Table 1). Thus, oxidative DNA stress might lead to development of hepatocarcinogenesis in cooperation with cell proliferation.
Some of the nongenotoxic noncarcinogens examined also induced hepatocyte RDS events (Table 1). Such false-positive RDS events might be due to a tumor-promoting action on hepatocytes in vivo. As indicated by Ledda-Columbano et al. (39), differences between nongenotoxic hepatocarcinogen-induced and tumor promoter-induced or mitogen-induced cell proliferation might be distinguished by analyzing the nature of overexpression of proto-oncogenes during early hepatocyte RDS induction. This approach might similarly be important to distinguish nongenotoxic carcinogens from tumor promoters.
A representative promoter without carcinogenic potency, 12-O-tetradecanoylphorbol-13-acetate (TPA), was found to be carcinogenic in a long-term study when it was repeatedly applied to the skin of BALB/c mice (57). Proliferation of keratinocytes is known to be controlled by TGF-β1 (40), whereas hepatocytes respond to HGF (41,42). In addition, keratinocytes in the normal skin of adults are always present as immature, intermediate, and mature cells, whereas hepatocytes in the normal liver of adults are a homogeneous population of mature, differentiated cells. Considering this background, the cell-cycle control mechanisms of keratinocytes probably differ from those of hepatocytes. Namely, skin carcinogenesis by TPA might be due to chronic cell proliferation involving immature keratinocytes, which may be more susceptible to cell-cycle checkpoint disruption than mature cells, leading to genetic instability and therefore skin carcinogenesis. The hypothesis that tumor promoters cause cell proliferation without equivocal DNA modifications may therefore be limited to mature cell cases (e.g., hepatocytes and renal tubule cells). This is of interest in view of the finding that almost all existing nongenotoxic carcinogens induce malignant tumors in the liver and kidney (1,4,5).
Genotoxic Noncarcinogens
Genotoxicity tests have been principally performed with bacteria (the Ames test), mammalian cell lines (the in vitro chromosome aberration test), and hematopoietic cells (the in vivo mouse micronucleus test). The existence of genotoxic noncarcinogens is considered to be a reflection of the properties of the biological indicator cells used in established genotoxicity tests. Each biological indicator cell is independently capable of progressing through the cell cycle and dividing, but this is not necessarily the case for cells in tissues, such as hepatocytes and renal tubular epithelial cells. This is probably due to the functions of gap junctions (58)(59)(60).
The cells used in genotoxicity tests are far more susceptible to conversion of genotoxic events to fixed mutations than cells existing in tissues. The applied systems have a number of defects in other areas, e.g., overapplication of drug-metabolizing enzymes, overdoses, and a relative lack of a detoxication process in vitro. Application of new genotoxicity tests that use mammalian cells with functioning gap junctions and a normal complement of enzymes is thus needed to prevent false-positive data for genotoxic noncarcinogens in the future.
In this respect, Cunningham et al. (27) reported that, although 2,4- and 2,6-diaminotoluene analogs were equally genotoxic in the Ames test, only 2,4-diaminotoluene was hepatocarcinogenic and increased hepatocyte RDS induction (Table 1). Therefore, they argued that induction of liver tumors is likely to require both genotoxic action and a stimulative potential for cell proliferation.
In addition, Goldsworthy et al. (8) reported interesting data on hepatocyte lacI genotoxic events and hepatocyte proliferation in transgenic mice using two alkylating agents, N-dimethylnitrosamine as a representative hepatocarcinogen and methylmethane sulfonate as a nonhepatocarcinogen. Both chemicals are known to be positive in in vivo mouse hepatocyte unscheduled DNA synthesis tests (61) as well as in the three types of genotoxicity tests. The data showed that N-dimethylnitrosamine caused lacI genotoxic events and hepatocyte proliferation in hepatocytes in vivo, whereas methylmethane sulfonate induced neither.
These findings also provide us with evidence that hepatocytes in tissue in vivo need cell proliferation for genotoxic lesions to become fixed and carcinogenesis to result. Thus, determination of proliferative response is considered to be most appropriate to predict hepatocarcinogenicity.
Relationship between Cell Proliferation and Chemical Hepatocarcinogenesis
Nongenotoxic carcinogens and tumor promoters can also act to clonally expand spontaneously occurring, initiated cells. Although clonal expansion by both types of agents may be generally accepted as contributory to development of tumors, at present it is unlikely to be accepted as a theory of tumor development for the reasons described below. Ward et al. (62) reasoned that, if the rate of spontaneously occurring, initiated cells is similar for each specific tissue, then more spontaneous cancers should occur in larger organs. Liu et al. (63) reported that HGF inhibited proliferation in glutathione S-transferase placental form-positive rat hepatocytes (putative preneoplastic foci cells) induced by N-diethylnitrosamine, whereas it stimulated cell division in nonlesion areas. Moreover, Schulte-Hermann et al. (64) claimed that putative preneoplastic foci cells of rat livers exhibit approximately 10-fold higher rates of apoptosis than normal hepatocytes. These points must be taken into account in any explanation based on clonal expansion. With regard to carcinogenesis by genotoxic and nongenotoxic carcinogens, the theory of genetic instability, which involves disruption of growth arrest checkpoints, is now considered to be of great advantage to understanding mechanisms of action.
The following summary of how cell proliferation might act in carcinogenesis takes into account recent information. First of all, an imbalance of the deoxynucleotide pool occurs (65,66), stimulating, for example, dihydrofolate reductase and DNA polymerase α. In the first stage of cell proliferation, this imbalance might be triggered by factors such as steroid-superfamily receptor-ligand complexes or HGF-receptor-ligand complexes, as described above. Some of these factors are known to contribute to overexpression of proto-oncogenes, which leads to further progression of nonprogrammed cell proliferation (34)(35)(36). In the second stage, appreciation of the role of sustained cell proliferation in the carcinogenesis process requires a comprehensive understanding of methylation status (67)(68)(69). Namely, a chronically maintained high rate of cell division can give rise to an imbalance in normal DNA methylation levels involving 5-methylcytosine (67)(68)(69)(70), leading to hypermethylation or hypomethylation of intracellular DNA. Alteration of DNA methylation levels may directly cause genetic instability that can result in spontaneous DNA changes (C to T transitions), which are a type of equivocal DNA modification.
In the third stage, when genes responsible for controlling normal DNA replication such as p53 (71) become inactivated, nonprogrammed cell division can progress without G1 arrest involving the repair of DNA alteration (68)(69)(70)(72)(73)(74)(75). Thus, disruption of cell-cycle control check points increases nonprogrammed cell proliferation with elevated genetic instability, leading to the possibility of malignant tumor cells arising in the future. Namely, in genetic instability theory, an abnormality of p53 functions is considered a key factor in resolving the relationship between cell proliferation and chemical carcinogenesis.
A number of recent studies have provided a detailed understanding of the complexity of cell-cycle control mechanisms. With regard to the initiation of programmed cell proliferation, as reviewed by Taya (76), the normal RB tumor-suppressor gene plays an extremely important role (77). When pRB is phosphorylated at the G1/S border by active cyclin-dependent kinase (Cdk)-G1 cyclin complexes, its linked transcription factor (E2F) (78) is released, which activates genes relating to progression of cell proliferation. Thus, phosphorylation involving Cdk-G1 cyclin complexes is promoted by the products of proto-oncogenes (myc, ras, etc.) and is inhibited by products of tumor-suppressor genes (p15, p16, p27, etc.), thereby controlling normal cell division by means of the actions of "brakes and accelerators." It has been argued that these mechanisms of normal cell division are disrupted in carcinoma cells (76).
With regard to inductive RDS events, namely nonprogrammed cell proliferation induced by chemicals, I have proposed that the initiation step requires biological stimuli that may give rise to an imbalance in the normal DNA methylation status. There are a number of ways in which this might also trigger unscheduled RDS. Taya (76) claimed that cell division is also accelerated when tumor-suppressor gene products (e.g., p15, p16, p27) are directly inactivated by exogenous chemicals. The DNA methylation status is affected by any imbalance between DNA methyltransferase and its demethylase activities in target cells (69). It needs to be clarified whether genotoxic or nongenotoxic carcinogens might induce elevated activity of either DNA methyltransferase or its demethylase. DNA methylation processes are known to require choline and methionine; by administering a diet deficient in both chemicals, hepatocarcinomas can be induced, indicating a role for hypomethylation of hepatocyte DNA (79). An imbalance in the DNA methylation status may thus exert effects without the necessity of sustained cell proliferation.
Finally, hepatocarcinogenesis should be considered from the standpoint of two events that occur with the lesion progression process and with increasing age in experimental animals. In 2-year animal bioassays, it appears that progressively more malignant clones are repeatedly created from background altered populations. Also, with increasing age, the normal function of tumor-suppressor genes tends to become gradually and spontaneously disrupted in various cell types. By means of both continuous processes, apoptosis-resistant, semi-abnormal hepatocytes, which can go through the first gate of the pathway to tumors with accumulated genetic instability, may be selected. Such genetic instability will contribute to stepwise disruption of oncogenes and some tumor-suppressor genes. When tumor-suppressor genes lose their normal function, apoptosis-resistant abnormal hepatocytes may be able to go through the next gates leading to malignancy. This hypothesis is based on the occurrence of apoptosis-resistant abnormal hepatocytes and is supported by the data of Roberts et al. (80), who reported that the majority of the hepatocytes generated during chemical-induced hyperplasia were protected from apoptosis during liver regression. In conclusion, risk assessment of chemical agents should focus on control of early cell proliferation in vivo; the present short-term test is available for this purpose.
|
2014-10-01T00:00:00.000Z
|
1996-01-01T00:00:00.000
|
{
"year": 1996,
"sha1": "6c80a240a834eb8cb65761d06ffc4c16849cbade",
"oa_license": "pd",
"oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.9610440",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6c80a240a834eb8cb65761d06ffc4c16849cbade",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
}
|
255925643
|
pes2o/s2orc
|
v3-fos-license
|
Esculetin Alleviates Nonalcoholic Fatty Liver Disease on High-Cholesterol-Diet-Induced Larval Zebrafish and FFA-Induced BRL-3A Hepatocyte
Non-alcoholic fatty liver disease (NAFLD), defined in recent years as metabolic-associated fatty liver disease (MAFLD), is one of the most common liver diseases in the world, with no drugs yet on the market. Esculetin (ESC) is an active compound found in a variety of natural products that modulates a wide range of metabolic diseases and is a potential drug for the treatment of NAFLD. In this study, we used an HCD-induced NAFLD larval zebrafish model in vivo and an FFA-induced BRL-3A hepatocyte model in vitro to evaluate the anti-NAFLD effect of ESC. Lipid-lowering, antioxidant and anti-inflammatory effects of ESC were revealed, and related gene changes were observed. This study provides a reference for the further study and development of ESC as a potential anti-NAFLD/MAFLD drug.
Introduction
Since nonalcoholic fatty liver disease (NAFLD) and nonalcoholic steatohepatitis (NASH) were first described in the 1990s, they have gone from obscure liver diseases to the most prominent cause of chronic liver disease, with a 25% prevalence worldwide [1]. NAFLD involves a spectrum of lesions ranging from hepatic steatosis to NASH, is characterized by hepatocyte injury, apoptosis, cell death and inflammation, and can further deteriorate into fibrosis and cirrhosis [2]. Recently, an expert group proposed changing the term "NAFLD" to a new definition, MAFLD (metabolic-associated fatty liver disease), considering the primary role of metabolic dysfunction [3,4]. However, owing to the lack of a comprehensive understanding of the multiple pathogenic factors of NAFLD/MAFLD, there are still no Food and Drug Administration-approved drugs on the market for NAFLD.
The pathogenesis of NAFLD is complex. A classic hypothesis, the "two hits" theory, explains the pathogenesis of NAFLD in two respects: hepatic lipid overload (lipid accumulation) and oxidative stress [5,6]. Animal and clinical data indicate that many cases of hepatic steatosis do not develop necrotizing inflammation or fibrosis. Lipid peroxidation, however, is the mechanism that links hepatic steatosis to necrotizing inflammation and fibrosis and constitutes the "second hit" of the "two hits" theory [6]. As new research develops and pathologists gain new insights into NAFLD, the "two hits" theory is no longer sufficient to explain the complex pathogenesis of NAFLD. The new concept of "multiple hits" has emerged, involving insulin antagonism, lipid toxicity, oxidative stress, inflammation, liposomes, bile acids, etc. [7]. What both theories have in common is recognition of the important roles of lipid oxidation and inflammation. Therefore, antioxidant and anti-inflammatory therapies should be considered beneficial for NAFLD.
The zebrafish (Danio rerio) genome is 87% similar to that of humans, and zebrafish have become a widely used disease model in multiple fields in recent years [8][9][10]. The NAFLD larval zebrafish model was comprehensively characterized several years ago [11] and has been widely used as a research or screening model in the study of anti-NAFLD drugs [12,13]. Assessments of pathology, biochemical indices and gene expression changes in NAFLD have been well established in larval zebrafish for compound screening, and the low compound requirements and short experimental time make it a viable option for evaluating active compounds.
Esculetin (ESC) is an active compound found in multiple natural products, such as Cortex Fraxini, Citrus limon Osbeck, Euphorbia lathyris, Artemisia capillaris and Cichorium glandulosum Boiss et Huet. ESC has been reported to ameliorate metabolism-related disorders [14]. Some researchers have studied the effects of ESC on insulin resistance in diabetic model animals and found that ESC increases insulin sensitivity [15]. Other researchers examined the liver-protective effects of ESC in animal models of acute liver injury and found that ESC improves liver fibrosis through mechanisms related to energy metabolism and lipid pathways [15]. In addition, ESC improves oxidative stress levels and immune balance [16]. Although many reports have claimed that ESC acts on various diseases through mechanisms relevant to NAFLD, a specific study of whether ESC can effectively prevent NAFLD is still lacking.
In this study, we used an NAFLD larval zebrafish model in vivo and a hepatocyte model in vitro to investigate the potential of ESC to ameliorate NAFLD.
Lipid Lowering Effect of ESC on an HCD-Induced Larval Zebrafish Model
Esculetin of >99% purity ( Figure 1A) was used for the experiments. To investigate the role of ESC in regulating lipid metabolism, an NAFL larval zebrafish model was established ( Figure 1B). Benzabate (BZT, 10 µM/L) was used as the positive control drug, and three doses of ESC (ESC-L 5 µM/L, ESC-M 10 µM/L and ESC-H 25 µM/L) were used for further testing. The survival rate of each group of zebrafish was recorded ( Figure 1C). Compared with the HCD group, the survival rates in the BZT, ESC-H and ESC-M groups were restored to the level of the control group. Lipid accumulation in whole fish was stained with Nile red ( Figure 1D), and the fluorescence intensity of each fish was quantified. Lipid clearly accumulated in the abdomen of the fish (indicated by red arrows), and lower fluorescence intensity was observed in all drug-treated groups. Further oil red staining showed that both BZT and ESC reduced liver steatosis ( Figure 1E). TG and TC levels were measured with commercial kits, and both the ESC and BZT groups showed a significant reduction in TG and TC ( Figure 1F,G). In HE staining, macrovesicular steatosis was observed in hepatocytes of the HCD group, whereas it was seen much less often in the BZT and ESC-H groups ( Figure 1H). These results show that ESC had a lipid-lowering effect in the NAFL larval zebrafish model, and multiple staining methods confirmed that ESC reversed hepatic steatosis.
Anti-Oxidant Effect of ESC on an HCD-Induced Larval Zebrafish Model
Because lipid peroxidation plays a key role in the "second hit" of NAFLD, we investigated the antioxidant effect of ESC in the HCD-induced larval zebrafish model. The fluorescent probe DCFH-DA was used to detect ROS in vivo in larval zebrafish ( Figure 2A). Compared with the control group, higher fluorescence was observed in the abdomen of the fish (indicated by red arrows) in the HCD group, in the same area where lipid accumulation was seen in Figure 1D. ESC decreased ROS fluorescence dose-dependently compared with the HCD group. Measurements of MDA ( Figure 2B), an end product of lipid peroxidation, showed that ESC significantly lowered MDA levels. In addition, the glutathione peroxidase assay indicated that ESC may act as an antioxidant by enhancing the antioxidant defense system ( Figure 2C). (Figure legends: in Figure 1H, HE staining of larval zebrafish liver with hepatic steatosis indicated by red arrows; for all panels, bars indicate means ± SD; ** p < 0.01, *** p < 0.001 vs. control; n.s., not significant; # p < 0.05, ## p < 0.01, ### p < 0.001 vs. model; p < 0.05 was considered statistically significant by one-way ANOVA followed by Tukey's test; n = 3 experimental replicates.)
NAFLD-Related mRNA Expression Changes Induced by ESC in the HCD-Induced Larval Zebrafish Model
The previous results confirmed that ESC exerts an anti-NAFLD effect through lipid lowering and antioxidant activity in the larval zebrafish model. To further reveal the mechanism of ESC across the multiple pathogenic aspects of NAFLD, we performed real-time quantitative polymerase chain reaction (RT-qPCR) experiments to detect mRNA expression changes related to lipogenesis, lipid metabolism, oxidative stress, fibrosis, and inflammation. As shown in Figure 3A, fatty acid synthase (fasn) mRNA expression decreased in the ESC group compared with the model group. However, there was no significant change in the expression of srebf1, a sterol regulatory element-binding factor. As for lipid-metabolism-related mRNA expression ( Figure 3B), peroxisome proliferator-activated receptor alpha (ppara), carnitine palmitoyltransferase 1A (cpt1a) and peroxisome proliferator-activated receptor gamma (pparg) were significantly increased in the ESC group compared with the model group, whereas the change in acyl-CoA oxidase (acox) expression was not significant. These results reveal potential lipid-regulatory mechanisms of ESC.
In terms of oxidative stress-related genes, mRNA expression of nuclear factor erythroid 2-like 2 (nrf2) was not significantly changed. However, heme oxygenase 1 (hmox-1) expression was significantly higher in the ESC group than in the HCD group ( Figure 3C). These results indicate that an HO-1-related pathway is key to the ROS-reducing effect of ESC. Regarding inflammatory gene expression, the expression levels of interleukin-1 beta (il-1b), tumor necrosis factor alpha (tnf-a), and interleukin-6 (il-6) were significantly lower in the ESC group than in the HCD group.
Effects of ESC on FFA-Induced BRL-3A Hepatocytes In Vitro
The in vivo results demonstrated the anti-NAFLD effect of ESC through lipid lowering and antioxidant activity, and the mRNA expression changes indicated potential mechanisms by which ESC reduces lipid accumulation, exerts antioxidant activity, and dampens the inflammatory response; however, a direct effect of ESC on hepatic cells still needed to be investigated. An FFA-induced BRL-3A model was established and treated with high (ESC-H, 25 µM) and low (ESC-L, 10 µM) doses of ESC. Nile red staining ( Figure 4A) showed that ESC significantly reduced fluorescence intensity in the 25 µM group. TG and ROS levels ( Figure 4B,C) were both decreased in the ESC 25 µM group compared with the model group. Western blot results ( Figure 4D) further showed that ESC increased PPARγ and HO-1 protein expression and decreased SREBP-1c protein expression. Figure 4E summarizes the study.
Discussion
Although the pathogenesis of NAFLD is complex and novel aspects continue to be revealed (indeed, because of this uncertain pathogenesis the disease has been renamed MAFLD), lipid metabolism disorder and oxidative stress remain two key factors, whether one speaks of NAFLD or MAFLD [7]. Results from larval zebrafish in vivo and BRL-3A hepatocytes in vitro showed specific lipid-lowering and antioxidant effects of ESC in NAFLD, suggesting that ESC may be a potential therapeutic agent for further development against NAFLD/MAFLD. However, as an active compound extracted from natural plants, the large-scale production of ESC still needs further development. The NAFLD larval zebrafish is a useful in vivo screening model for compounds that are difficult to produce in quantity [11]; it allowed a systematic in vivo study of ESC and provides reference data for further preclinical studies of ESC in classical rodent models.
PPARγ is one of the peroxisome proliferator-activated receptors (PPARs) and a member of the nuclear receptor transcription factor superfamily that regulates the expression of target genes related to energy regulation [17]. PPARγ regulates pathophysiological processes in various diseases related to metabolism and inflammation and is well known as the target of the insulin-sensitizing thiazolidinedione drugs (TZDs) [18]. PPARγ has been shown to be closely associated with hepatic lipid metabolism and to play important roles in NAFLD [19]. Hepatic PPARγ contributes to the development of fatty liver in NAFLD patients and independently regulates liver lipid accumulation in mice; in recent years, PPARγ expression has also been identified as an additional signal that modulates SREBP-1c to trigger hepatic steatosis. As our results showed, ESC promoted PPARγ gene expression in vivo and protein expression in vitro, indicating that PPARγ may be a key regulator through which ESC modulates lipid metabolism. Sterol regulatory element-binding protein-1c (SREBP-1c) is a key lipogenic transcription factor activated by insulin. SREBP-1c is considered a master regulator of hepatic lipogenesis, with the ability to regulate lipogenic gene expression and fatty acid and TG homeostasis. In addition, the role of SREBP-1c in de novo lipogenesis and NAFLD pathogenesis has been widely recognized, making it a potential therapeutic target for the treatment of NAFLD [20,21]. Consistent with this, expression of the lipid-synthesis gene fasn and of SREBP-1c protein was decreased by ESC in vivo and in vitro, which further explains the lipid-regulating mechanism of ESC. Moreover, ESC increased gene expression of hmox-1 and protein expression of HO-1, a downstream target of Nrf2 with antioxidant and anti-inflammatory effects [22], clarifying how ESC lowers ROS and increases GSH-px and thereby acts as an antioxidant.
Maintenance of Zebrafish and Treatment
Wild-type AB-line zebrafish were reared in filtered circulating water at 28.5 °C with a 14:10 h light:dark cycle. Embryos were produced naturally from mature wild-type AB-line zebrafish. Zebrafish embryos were raised freely in egg water at 28.5 °C until 5 days post-fertilization (dpf), with a 14:10 h light:dark cycle. Larval zebrafish were then randomly divided into six groups (n = 100 for each group).
HCD was given from 6 dpf. The treatment groups were treated with drugs from 8 dpf, while the control group was fed a standard diet. The experiment ended at 21 dpf. All groups were maintained according to the schedule shown in Figure 1B (summarized in the sketch below), and this zebrafish model was regarded as a nonalcoholic fatty liver (NAFL) model for further study. All animal experiments were approved by the Ethical Committee of China Pharmaceutical University (SYXK(SU)2021-0010) and the Laboratory Animal Management Committee of Jiangsu Province, and all experiments followed the Jiangsu Provincial standard ethical guidelines for the use of experimental animals under the ethical committees mentioned above.
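For clarity, the group structure and dosing schedule described here and in the Results section above can be gathered into a single configuration table. The Python sketch below is purely illustrative; the dictionary keys and field names are our own labels, and any detail not stated in the text is an assumption.

study_design = {
    "n_per_group": 100,
    "hcd_start_dpf": 6,         # high-cholesterol diet begins
    "treatment_start_dpf": 8,   # drug exposure begins
    "end_dpf": 21,
    "groups": {
        "control": {"diet": "standard", "drug": None,        "dose_uM": 0},
        "HCD":     {"diet": "HCD",      "drug": None,        "dose_uM": 0},
        "BZT":     {"diet": "HCD",      "drug": "benzabate", "dose_uM": 10},
        "ESC-L":   {"diet": "HCD",      "drug": "esculetin", "dose_uM": 5},
        "ESC-M":   {"diet": "HCD",      "drug": "esculetin", "dose_uM": 10},
        "ESC-H":   {"diet": "HCD",      "drug": "esculetin", "dose_uM": 25},
    },
}

for name, group in study_design["groups"].items():
    print(name, group)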
Oil Red Staining and Histopathology
Zebrafish were anesthetized with 0.05% tricaine before whole-mount staining. Oil red (0.5 mg/mL) was dissolved in isopropyl alcohol and diluted with water (3:2, v:v) to produce the oil red working solution, which was filtered until the supernatant was clear and transparent. Zebrafish fixed in 4% paraformaldehyde were stored at 4 °C for 24 h prior to the experiment. Samples were washed 3 times and then soaked for 5 s in isopropyl alcohol. After preparation, the samples were stained with the oil red working solution for 1 h. Zebrafish blocks were sectioned at 4 µm for HE staining. Photos were taken with a stereoscope (Olympus SZX16, Olympus, Tokyo, Japan).
Fluorescent Staining and Quantification
Nile red and DCFH-DA stains were used in our experiments. Nile red is a photostable lipophilic dye with bright red fluorescence that is commonly used for neutral lipid staining; it has an excitation wavelength of 543 nm and an emission wavelength of 598 nm. DCFH-DA is a cell-permeable probe used to detect reactive oxygen species (ROS): ROS convert the non-fluorescent probe to fluorescent dichlorofluorescein (DCF) in living cells, detected at excitation and emission wavelengths of 480 nm and 525 nm, respectively.
In these assays, zebrafish were incubated in the dark at 28.5 °C for 30 min in 0.5 µg/mL Nile red or 10 µM/L DCFH-DA. After staining, zebrafish were washed three times with egg water, anesthetized with 0.05% tricaine, and then immobilized in 4% CMC-Na. Nile red- or DCFH-DA-stained images were observed and captured using a fluorescence stereoscope (Olympus SZX16). The exposure intensity and time for a given stain were kept consistent to allow comparison.
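The methods do not state how per-fish fluorescence intensity was quantified; the Python sketch below shows one plausible approach, assuming images exported as grayscale or RGB files, an illustrative rectangular region of interest, and a simple percentile-based background subtraction. The file names, ROI and background strategy are assumptions, not details from this study.

import numpy as np
from skimage import io

def integrated_intensity(path, roi=None, background_percentile=5):
    img = io.imread(path).astype(float)
    if img.ndim == 3:                      # RGB image: take the first (red) channel for Nile red
        img = img[..., 0]
    if roi is not None:                    # roi = (row_min, row_max, col_min, col_max)
        r0, r1, c0, c1 = roi
        img = img[r0:r1, c0:c1]
    background = np.percentile(img, background_percentile)
    return float(np.clip(img - background, 0, None).sum())

# Hypothetical usage with made-up file names and ROI coordinates:
# values = [integrated_intensity(f"hcd_fish_{i}.tif", roi=(100, 400, 200, 600)) for i in range(1, 7)]
# print(np.mean(values), np.std(values))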
Biochemical Measurement
Each group contained three samples, each comprising six zebrafish. Fish were euthanized with 0.05% tricaine and homogenized by ultrasonication for biochemical measurements. Levels of triglyceride (TG), total cholesterol (TC), malondialdehyde (MDA), and GSH-px were measured using commercial assay kits (Jiancheng, Nanjing, China) according to the manufacturers' instructions.
Real-Time Quantitative PCR Analysis
Real-time RT-qPCR was used to detect the expression of genes involved in lipid metabolism, oxidation, and inflammation. For each group, 20 larval zebrafish were collected and euthanized, and total RNA was extracted with Trizol reagent. A reverse transcription kit was used for cDNA synthesis (PrimeScript RT Master Mix, Takara, Japan). The corresponding mRNA expression was quantified using a qPCR reagent (SYBR Green, Takara, Japan). All steps were carried out in accordance with the manufacturers' protocols. Primers for qPCR were purchased from Genscript (Nanjing, China) and are listed in Table 1. Expression levels of each target mRNA were calculated by the 2^(−ΔΔCt) method, normalized to GAPDH.
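As a worked illustration of the 2^(−ΔΔCt) calculation, the Python sketch below normalizes target-gene Ct values to GAPDH within each sample (ΔCt) and then to the mean ΔCt of the control group (ΔΔCt). The Ct values shown are hypothetical placeholders, not measurements from this study.

import numpy as np

def relative_expression(ct_target, ct_reference, control_delta_ct_mean):
    delta_ct = np.asarray(ct_target, dtype=float) - np.asarray(ct_reference, dtype=float)
    delta_delta_ct = delta_ct - control_delta_ct_mean
    return 2.0 ** (-delta_delta_ct)        # fold change relative to the control group

# Hypothetical triplicate Ct values for one target gene (e.g., fasn) and GAPDH
control_target, control_gapdh = [24.1, 24.3, 24.0], [18.0, 18.1, 17.9]
treated_target, treated_gapdh = [25.6, 25.4, 25.7], [18.1, 18.0, 18.2]

control_delta_mean = np.mean(np.subtract(control_target, control_gapdh))
print("control fold changes:", relative_expression(control_target, control_gapdh, control_delta_mean))
print("treated fold changes:", relative_expression(treated_target, treated_gapdh, control_delta_mean))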
BRL-3A Cell Culture and Treatments
The Rattus norvegicus hepatocellular cell line BRL-3A (American Type Culture Collection, Manassas, VA, USA) was cultured at 37 °C in DMEM supplemented with 10% fetal bovine serum (FBS) and 1% penicillin-streptomycin in an atmosphere of 5% CO2. BRL-3A cells (1 × 10^6) were seeded in 100 mm plates and grown until 70% confluent. The model group of BRL-3A cells was treated with 1 mM FFA for 24 h. The H-ESC group was treated with 1 mM FFA in combination with 25 µM ESC for 24 h, and the L-ESC group was treated with 1 mM FFA in combination with 10 µM ESC for 24 h. The control group was treated with BSA and NaOH at final concentrations of 1% and 0.2 µM, respectively.
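For readers reproducing these treatments, the Python sketch below applies the standard dilution relationship C1·V1 = C2·V2 to estimate how much stock solution to add per plate to reach the stated final concentrations (1 mM FFA; 10 or 25 µM ESC). The stock concentrations (100 mM FFA, 25 mM ESC) and the 10 mL working medium volume are assumptions chosen for illustration, not values given in the methods.

def stock_volume_ul(final_conc_uM, stock_conc_uM, medium_volume_ml):
    # Volume of stock (µL) such that stock_conc * V_stock = final_conc * V_medium,
    # assuming the added stock volume is negligible relative to the medium volume.
    return final_conc_uM * (medium_volume_ml * 1000.0) / stock_conc_uM

medium_ml = 10.0  # assumed working volume of a 100 mm plate
print("FFA to add (µL):  ", stock_volume_ul(1000.0, 100_000.0, medium_ml))  # assumed 100 mM FFA stock
print("ESC-H to add (µL):", stock_volume_ul(25.0, 25_000.0, medium_ml))     # assumed 25 mM ESC stock
print("ESC-L to add (µL):", stock_volume_ul(10.0, 25_000.0, medium_ml))     # assumed 25 mM ESC stock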
Statistical Analysis
GraphPad Prism (version 8.02; GraphPad Software, San Diego, CA, USA) was used to analyze the data. All data are presented as mean ± SD. One-way ANOVA was used to test for significance, and p < 0.05 between groups was regarded as statistically significant.
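A minimal Python sketch of this workflow (one-way ANOVA followed by Tukey's post hoc comparisons), using SciPy and statsmodels rather than GraphPad Prism, is shown below; the measurement values are hypothetical placeholders rather than data from this study.

import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical triplicate measurements (e.g., TG levels) for three groups
control = [1.0, 1.1, 0.9]
model   = [2.4, 2.6, 2.5]
esc_h   = [1.5, 1.4, 1.6]

f_stat, p_value = stats.f_oneway(control, model, esc_h)
print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

values = np.concatenate([control, model, esc_h])
groups = ["control"] * 3 + ["model"] * 3 + ["ESC-H"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))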
Conclusions
In summary ( Figure 4E), ESC exerted an anti-NAFLD effect by regulating lipid accumulation and through antioxidant and anti-inflammatory actions in the HCD-induced NAFLD larval zebrafish model in vivo and the FFA-induced BRL-3A hepatocyte model in vitro. Further gene and protein expression tests revealed that PPARγ, SREBP-1c, and HO-1 are key regulators of the lipid-lowering and antioxidant activity of ESC. Further pharmacological and mechanistic studies in mature models, or clinical studies, could be performed once large-scale production of ESC is established, and this study provides a reference for developing ESC into a potential anti-NAFLD/MAFLD drug. Institutional Review Board Statement: All animal experiments were approved by the Ethical Committee of China Pharmaceutical University (SYXK(SU)2021-0010) and the Laboratory Animal Management Committee of Jiangsu Province. All experiments followed the Jiangsu Provincial standard ethical guidelines for the use of experimental animals under the ethical committees mentioned above.
Informed Consent Statement:
This study did not involve humans.
Data Availability Statement:
The data used to support the findings of this study are available from the corresponding author upon request.
Trafficking of Mononuclear Phagocytes in Healthy Arteries and Atherosclerosis
Monocytes and macrophages play essential roles in all stages of atherosclerosis – from early precursor lesions to advanced stages of the disease. Intima-resident macrophages are among the first cells to be confronted with the influx and retention of apolipoprotein B-containing lipoproteins at the onset of hypercholesterolemia and atherosclerosis development. In this review, we outline the trafficking of monocytes and macrophages in and out of the healthy aorta, as well as the adaptation of their migratory behaviour during hypercholesterolemia. Furthermore, we discuss the functional and ontogenetic composition of the aortic pool of mononuclear phagocytes and its link to the atherosclerotic disease process. The development of mouse models of atherosclerosis regression in recent years has enabled scientists to investigate the behaviour of monocytes and macrophages during the resolution of atherosclerosis. Herein, we describe the dynamics of these mononuclear phagocytes upon cessation of hypercholesterolemia and how they contribute to the restoration of tissue homeostasis. The aim of this review is to provide an insight into the trafficking, fate and disease-relevant dynamics of monocytes and macrophages during atherosclerosis, and to highlight remaining questions. We focus on the results of rodent studies, as analysis of cellular fates requires experimental manipulations that cannot be performed in humans, but we point out findings that could be replicated in human tissues. Understanding the biology of macrophages in atherosclerosis provides an important basis for the development of therapeutic strategies to limit lesion formation and promote plaque regression.
INTRODUCTION
Atherosclerosis is characterised by a chronic, low-grade inflammation in the arterial wall. As the underlying pathology for myocardial infarction and stroke, it is the leading cause of death worldwide (1). The inflammatory response in the arterial wall is initiated by the hypercholesterolemia-induced subendothelial retention of apolipoprotein (apo)B-containing lipoproteins, mainly low-density lipoprotein (LDL), at sites of non-laminar and low shear stress blood flow. These sites are characterised by a higher abundance of macrophages (2)(3)(4), inflammation-primed endothelial cells (5) and, particularly in humans, a pro-retentive thickened intima rich in smooth muscle cells and altered extracellular matrix (6)(7)(8). The subendothelial retention makes the lipoproteins susceptible to enzymatic and non-enzymatic modification. In particular, oxidation of LDL triggers a sterile inflammatory reaction by activating the endothelial cells to upregulate adhesion molecules and secrete chemokines which attract monocytes and other leukocytes. Modification of lipoproteins also promotes their uptake by macrophages and vascular smooth muscle cells (VSMC), leading to the appearance of foam cells. Additionally, oxidized LDL contains several bioactive molecules, including oxidized phospholipids, which act as damage-associated molecular patterns (DAMPs) and, together with early cholesterol crystal formation, cause an activation of surrounding innate immune cells (9,10). The continuous influx, retention and modification of apoB-containing lipoproteins, together with the defective resolution of inflammation and dysfunctional clearance of dead cells (efferocytosis), fuel the chronic inflammation (11). The persistent inflammatory activity also leads to the generation of autoantigens and involvement of the adaptive immune system at later stages of the disease (12,13).
Resident arterial macrophages play a crucial role in tissue homeostasis and serve as immune sentinels within the tissue. Adventitial macrophages, for instance, are important regulators of collagen production and the arterial tone (14). At areas of low blood velocity and shear stress, macrophages beneath the endothelium survey the environment to detect pathogens or potentially hazardous deposits (15). As such, aortic intima-resident macrophages are among the first cells to encounter trapped apoB-containing lipoproteins at the initiation of hypercholesterolemia (2)(3)(4). Mainly based on their expression of CD11c, these subendothelial phagocytes were initially described as dendritic cells, but recent results have challenged this view and have identified macrophages as the main cell type to first encounter the trapped lipids (16). Furthermore, in mice with a deficiency of monocytes and macrophages, a delayed and almost abolished development of atherosclerotic plaques can be seen (17)(18)(19)(20)(21). This further underlines the importance of the monocyte-macrophage axis in the initiation of atherosclerotic disease. With the development of mouse models for atherosclerotic regression, it has become clear that macrophages are not only important drivers of the disease, but their plasticity and diverse repertoire of homeostatic functions also make them important effectors in atherosclerotic regression (22).
Since the description of the Mononuclear Phagocyte System, the prevailing paradigm was that tissue resident macrophages are continuously seeded from circulating monocytes. In recent years, however, it has become obvious that under homeostasis the tissue macrophage pool is mainly maintained through local proliferation and does not solely depend on monocyte influx (23)(24)(25). Monocyte-independent seeding of resident tissue macrophages starts early in embryonic development. Macrophages originating from the extra-embryonic yolk sac (YS) populate tissues during embryonal development as erythro-myeloid progenitor (EMP)-derived macrophages and persist into adulthood (26,27). Microglia in the central nervous system are, for instance, exclusively derived from YS progenitors, without input from blood monocytes (28,29). However, in most organs, a second wave of monocyte-derived macrophages, originating from definitive haematopoietic stem cells within the fetal liver and bone marrow (BM), co-colonize the tissues (30,31). The question of tissue macrophage ontogeny has critical implications. EMP-derived macrophages migrate to tissues at the time of organogenesis and seem indispensable in various developmental processes (32)(33)(34)(35)(36). This developmental and homeostatic function might prevail in adult life, generating an important link between macrophage ontogeny and function. Indeed, we and others have found that macrophages of different ontogeny perform distinct tissue-specific functions and maintain a specific phenotype (37)(38)(39)(40)(41). Delineating monocyte-macrophage ontogeny and trafficking might improve our understanding of the maladaptive chronic inflammatory response in atherosclerosis development, as well as their role in atherosclerosis regression. Ultimately, this could lead to targeted approaches tackling the high rates of global cardiovascular mortality and morbidity resulting from atherosclerosis.
In this Review, we address the knowns and unknowns of the trafficking, dynamics and fates of vascular monocytes and macrophages in the healthy and atherosclerotic aorta. Analysing these properties in human tissues is complicated by the limited availability of human material and suitable models. Therefore, we will focus primarily on results from the mouse as a model organism, but put these results into human context where possible at the end of this Review.
MONOCYTES AND MACROPHAGES IN THE HEALTHY AORTA
The development and broad accessibility of novel high-dimensional analysis techniques, such as multi-parameter flow cytometry, single-cell RNA sequencing (scRNA-seq) and cytometry by time of flight, has enabled scientists to obtain a clearer picture of leukocyte diversity in the healthy mouse aorta. These studies revealed that myeloid cells, and in particular macrophages, are the dominant immune cell type in the healthy arterial wall (16,37,42,43). Arterial macrophages are primarily located in the fibrous outer arterial layer, the adventitia. Only a small number of macrophages can be found in the innermost layer, the intima, just below the endothelial cells. Based on histological and scRNA-seq data, it is estimated that up to 10% of the arterial macrophages are located in the intima, whereas 90% are positioned within the adventitial layer (16,37,44).
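As an illustration of how such scRNA-seq studies typically arrive at these population estimates, the sketch below outlines a generic Scanpy clustering workflow. The file name, parameter values and marker genes (e.g. Adgre1/F4/80 for macrophages, Lyve1 and Trem2 for subsets discussed later) are assumptions for illustration, not the actual pipeline of the cited studies.

```python
import scanpy as sc

# Load a hypothetical CD45+ aortic single-cell dataset (AnnData format)
adata = sc.read_h5ad("aorta_cd45pos.h5ad")

# Standard preprocessing: filter, normalise, log-transform
sc.pp.filter_cells(adata, min_genes=200)
sc.pp.filter_genes(adata, min_cells=3)
sc.pp.normalize_total(adata, target_sum=1e4)
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000, subset=True)

# Dimensionality reduction and graph-based (Leiden) clustering
sc.pp.pca(adata, n_comps=30)
sc.pp.neighbors(adata, n_neighbors=15)
sc.tl.leiden(adata, resolution=0.8)
sc.tl.umap(adata)

# Inspect macrophage markers across clusters to annotate subsets
sc.pl.umap(adata, color=["leiden", "Adgre1", "Lyve1", "Trem2", "Ccr2"])
```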
The aorta is populated with macrophages early on during embryonic development. Macrophages can be observed in the fetal aorta at embryonic day 16.5 and most likely start inhabiting the niche from embryonic day 9.5 onwards (27,32,44). This prenatal wave of macrophages colonising the aorta is dominated by YS EMP-derived macrophages, which travel to the aorta without a monocyte intermediate. After birth, a brief influx of blood monocytes, which consequently differentiate into tissue resident macrophages, contributes to the aortic macrophage pool (37,44). Despite the monocytic influx, YS EMP-derived macrophages are not replaced by BM-derived macrophages, as had been suggested previously. Rather, the entire adventitial macrophage pool of EMP- and BM-derived macrophages continues to expand in numbers until 45 weeks of age, with YS EMP-derived macrophages being the dominant tissue-resident macrophage population (Figure 1). In aged mice, at around 90 weeks, a general drop of adventitial macrophage numbers, mainly affecting EMP-derived macrophages, can be observed (37). In contrast to adventitial macrophages, macrophages residing in the intima have recently been reported to seed almost exclusively after birth (16). Using various mouse models, including CD115, CX3CR1 and Flt3 reporter mice, Williams et al. showed that the macrophages inhabiting the intimal layer originate exclusively from definitive haematopoiesis. Interestingly, intimal macrophages are primarily found in locations of increased hemodynamic stress, which are predilection sites for atherosclerosis development (2,3). Although we did not specifically investigate this aspect, our results show no site-specific tropism of adventitial macrophages throughout the aortic segments, in contrast to intimal macrophages (37). This puts further emphasis on understanding the origin, dynamics, and function of intima-resident macrophages. Given the critical role of intimal macrophages in atherosclerosis development, it will be interesting to see the results by Williams et al. confirmed with more efficient fate-mapping models, such as the recently generated Rank Cre (45) and Ms4a3 Cre mice (30), in a quantitative approach.
Under homeostasis, the adult arterial macrophage pool shows little dynamic change. Adventitial EMP- and BM-derived macrophages self-maintain with minimal input from monocytes entering the arterial wall. Using irradiation-free chimeras and parabionts, we and others have found that over a period of 9 months only 20% of the arterial macrophages are replenished by monocytes (37,44). In addition, this number seems to be a constant, as we observed a similar 20% monocyte input after a 3-month observation period (37). Contrary to the macrophages residing in the adventitia, the intimal macrophages appear not to be replaced by monocytes under homeostasis (16). Besides the quantitatively limited replenishment of the macrophage pool in the adventitia, monocytes have important homeostatic functions in the vasculature. Non-classical Ly6C low CCR2− monocytes, which derive from Ly6C high CCR2+ monocytes in mice, crawl along the endothelium to survey the cellular integrity and sense dangers, as well as to remove cellular debris (55)(56)(57). Ly6C low monocytes are, however, thought to rarely cross the endothelial barrier into the tissue (57,58). In contrast, classical Ly6C high monocytes are highly mobile and extravasate, guided mainly by their CCR2 expression. A population of transiently sessile monocytes has been found in the lungs and skin of mice in the steady state (59). These 'tissue monocytes' have previously also been identified in the spleen, which acts as a reservoir to quickly mobilize immune cells upon inflammation (60). Contrary to splenic monocytes, the Ly6C high monocytes in lung and skin can survey the tissue environment and transport antigens to lymph nodes (59). Although the question of sessile monocyte existence has not been addressed for the arterial wall, monocytes are readily identified in the healthy arterial wall (3,42,43,61). Their homeostatic turnover and their ability to recirculate into the blood or leave via afferent lymphatics into adjacent lymph nodes, similarly to the surveying monocytes in lung and skin, remain to be determined.
FIGURE 1 | Vascular macrophages and monocytes in the healthy mouse aorta. Influx of EMP-derived macrophages into the tissue during embryogenesis starts around embryonic day 9.5. Macrophages settle within the aortic adventitia and are sustained solely through local proliferation. Around embryonic day 18.5, monocytes from the bone marrow seed the aorta and differentiate within the adventitia (forming a population of BM-derived tissue-resident macrophages) as well as within the intima (forming a separate population of intima-resident macrophages). This recently defined population of intimal macrophages is heavily seeded perinatally but is maintained solely through local proliferation. Thus, the adventitia is colonised with macrophages of dual origin, which self-sustain their numbers through proliferation and, in the case of adventitial BM-derived cells, replenishment from circulating monocytes. The number of cells of each ontogeny varies throughout life, with age-related changes adapted from (37). Monocytes migrating into the steady-state aorta follow several possible fates: differentiation into BM-derived macrophages, further migration towards lymphatics, apoptosis, or migration back into circulation.
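Returning to the turnover numbers above: the constancy of the ~20% monocyte contribution at 3 and 9 months is more informative than it may first appear. A back-of-envelope sketch, assuming simple constant-rate replacement kinetics (an assumption of this illustration, not of the cited studies), shows what continuous replacement would instead have predicted:

```python
import math

# If 20% of macrophages were replaced in 3 months at a constant rate k,
# with fraction replaced = 1 - exp(-k * t):
k = -math.log(1 - 0.20) / 3            # ~0.074 per month

predicted_9m = 1 - math.exp(-k * 9)    # ~0.49
print(f"Predicted replacement at 9 months: {predicted_9m:.0%}")
# Continuous replacement would predict ~49% by 9 months; the observed
# ~20% argues against ongoing replacement and is consistent with a
# limited, quickly saturating monocyte contribution.
```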
The high motility of Ly6C high monocytes comes with the price of potentially spreading infectious agents (62,63). This might in part explain the presence of infectious agents within atherosclerotic plaques (64). A remaining question is the location of vessel wall entry of Ly6C high monocytes. It has not yet been clarified if and to what extent classical Ly6C high monocytes enter the healthy vascular wall via vasa vasorum in the adventitia or from the luminal side. This is of particular interest in advanced plaques, where intra-plaque sprouting of leaky vessels occurs and could drive the chronic inflammation through the constant supply of monocytes (65).
Analogous to the heterogenous homeostatic functions of monocytes, we and others have found that resident adventitial macrophages have a diverse functional outfit. Traditionally, macrophages have been divided into classically activated M1 and alternatively activated M2 macrophages (66). These two states are, however, in vitro-based extremes on opposite poles of a continuum of macrophage functionality. Novel multiparametric analysis methods have established the high plasticity and different nuances in the macrophage functional outfit (67)(68)(69), also in the aortic wall. Recent integrated analyses of scRNA-seq datasets from healthy and atherosclerotic mouse aortas revealed the presence of 5 major macrophage subsets (70,71). As described in more detail below, these subsets are (I) inflammatory, (II) triggering receptor expressed on myeloid cells 2 (TREM2)+, (III) interferon-inducible, (IV) resident-like and (V) cavity macrophages. These five subsets can be found both in the atherosclerotic and healthy aorta, although the complexity of macrophage phenotypes increases in atherosclerotic aortas (70,71). Strikingly, by employing scRNA-seq in Rank Cre Rosa26 eYFP mice we were able to map the transcriptional heterogeneity in adventitial macrophages to their origin. The healthy mouse adventitia harbours a macrophage subset with a homeostatic and anti-inflammatory transcriptional profile that derives almost exclusively from YS EMPs. These macrophages were characterised by a high expression of the hyaluronan receptor-encoding gene Lyve-1, a known marker for resident macrophages, which maps them to the macrophage subset responsible for the regulation of aortic collagen production (14), and to the resident-like macrophages described above. Furthermore, EMP-derived macrophages expressed high levels of stabilin 1 (Stab1) and growth arrest specific 6 (Gas6), both of which are important for efferocytosis, a process crucial for the inhibition of atherosclerosis (37)(72)(73)(74). In contrast, a cluster that lacked eYFP transcript expression and comprised monocyte-derived macrophages expressed gene sets that were associated with pro-inflammatory properties, including Il1b (37), similar to the subset of inflammatory macrophages. Thus, there seems to be a division of labour among arterial macrophage subsets of different ontogeny, where EMP-derived macrophages are responsible for homeostatic processes like collagen production and efferocytosis. BM-derived macrophages, in turn, are in a poised state for defending the arterial integrity against pathogens. Thus, it would not be surprising if macrophages of diverse origins play different roles during atherosclerosis progression and regression, given their distinct sets of functions.
ENHANCED MONOCYTE INFLUX AND MACROPHAGE PROLIFERATION DURING ATHEROSCLEROSIS DEVELOPMENT AND PROGRESSION
The intima-resident macrophages are among the first cells exposed to the increased influx of apoB-containing lipoproteins during hypercholesterolemia. These cells are critical in atherosclerosis initiation. The aorta of mice engineered to lack aortic intima-resident macrophages displays decreased lipid deposition in the early stages of atherosclerosis (4,16). Within days of sustained hypercholesterolemia, the capacity of macrophages to metabolize the accumulating lipids and cholesterol is overwhelmed. This leads to the deposition of lipid droplets within the macrophage cytoplasm, resulting in the typical foam cell appearance, and even macrophage death. Macrophage death and defective clearance are known to be major drivers of the atherosclerotic process (16,75,76).
Initially, foam cells appear to be exclusively derived from resident intimal macrophages in the mouse (16). Of note, in humans, VSMCs also play a role in the early development of foam cells (77). Continuous inflammatory triggering by the persistent influx of apoB-containing lipoproteins causes a substantial monocyte recruitment within the first 1-2 weeks of hypercholesterolemia (16,78,79). The subendothelial inflammatory foci lead to the expression of adhesion molecules on activated endothelial cells and the secretion of chemokines, most importantly of CCL2/MCP-1, CX3CL1 and CCL5 (80). These factors are essential for the infiltration of primarily Ly6C high monocytes into the developing atherosclerotic plaque (81,82). Combined absence of all three chemokine-chemokine receptor pairs results in an almost complete inhibition of lesion development (82)(83)(84). Intravital imaging studies suggest that luminal ('inside-out') recruitment is important in the early phases of plaque development, whereas ('outside-in') recruitment via adventitial vasa vasorum is the main route for myeloid cells to enter advanced plaques (78,85). More quantitative approaches with adoptive transfer of monocytes and bead labelling found that both recruitment routes are already important in early atherosclerotic development and persist in advanced plaques (86,87). The route of plaque-invading monocytes is an important avenue of research, as these cells have been recognized to fuel the inflammatory reaction in developing plaques, and blocking their entry might represent a promising therapeutic target.
In addition to causing local inflammatory responses and recruitment of Ly6C high monocytes into the vessel wall, hypercholesterolemia induces a Ly6C high-dominated monocytosis (81,82). Elevated levels of cholesterol in haematopoietic stem cells foster the formation of lipid rafts and stabilisation of growth factor receptors, which promote their myelopoietic activity and monocytosis (88)(89)(90). Supplementing the enhanced myelopoiesis in the bone marrow, extramedullary haematopoiesis in the spleen contributes to increased production of monocytes and marked recruitment into the developing atherosclerotic lesion (60,91). Other lifestyle-related factors such as hyperglycaemia or stress also have the potential to enhance myelopoiesis and fuel the cycle of monocyte production and entry into the plaque (92)(93)(94). Importantly, the circulating monocytes are poised for pro-inflammatory reactions, with increased levels of surface receptors such as CD86 and TLR4, as well as increased levels of reactive oxygen species, among other features (44)(95)(96)(97). Thus, hypercholesterolemia leads to augmented recruitment and an increased number of circulating monocytes with a heightened inflammatory potential. A topic that warrants further investigation is the role of recently identified monocyte subsets that appear during inflammatory conditions, such as segregated-nucleus-containing atypical monocytes (98), in atherosclerosis development.
The recruited Ly6C high monocytes are thought to primarily differentiate into intimal macrophages (75). Data from developing atherosclerotic plaques are lacking, but it is conceivable that Ly6C high monocytes have alternative fates within the lesion (Figure 2). As has been shown for sterile liver injury, Ly6C high monocytes can exhibit distinct monocyte-specific functions, including the uptake of trapped apoB-containing lipoproteins (99)(100)(101). In this way, monocytes participate in the vicious cycle of cellular apoptosis and necrosis following the metabolic stress of intracellular cholesterol accumulation. Some Ly6C high monocytes might also recirculate into the blood and lymph and present antigens, including de novo generated autoantigens, to the cells of the adaptive immune system (12,13,59). Ly6C low monocytes, on the other hand, show an intensified patrolling behaviour at atheroprone sites, which display increased endothelial damage during hypercholesterolemia (58,102). Despite their main task as patrolling intravascular cells, Ly6C low monocytes can also be found in the atherosclerotic plaque, highlighting their potential to extravasate, although to a lesser extent than classical Ly6C high monocytes (70,82). These cells display an anti-inflammatory transcriptional signature with elevated transcripts for cholesterol efflux and vascular repair, thereby promoting inflammation resolution (58). It is still debated whether the extravasation of Ly6C low monocytes is of importance in the atherosclerotic disease process (58). If so, the anti-atherosclerotic phenotype of Ly6C low monocytes presumably ameliorates the disease process, and enhancing Ly6C low monocyte extravasation might be a potential therapeutic target.
Akin to other chronic inflammatory diseases, the initial, mainly CCR2-dependent, recruitment of monocyte-derived macrophages is a crucial pathomechanism in the development of atherosclerosis (82)(83)(84)(103)(104)(105)(106). However, as the atherosclerotic lesion progresses, monocyte recruitment becomes less important, as evidenced by studies with CCR2- or monocyte-depletion models (21)(107)(108)(109). The limited impact of blocking monocyte recruitment on progression of advanced plaques might be due to a failure of monocytes to penetrate the lesion. A recent report suggests that monocytes cannot migrate deeply into the lesion and only accumulate superficially, similar to tree ring formation (87). This is, however, contradicted by results showing migrating CD11c+ cells, which appear similar to foamy monocytes or macrophages, within atherosclerotic plaques (100,101,110). Consequently, there might be other reasons for the non-reliance on monocyte entry in progressing plaques, such as local macrophage proliferation, as discussed below.
Hypercholesterolemia leads to a substantial influx of monocytes. It has recently been suggested that the perinatally seeded intima-resident macrophages are completely replaced by invading monocyte-derived macrophages within weeks of hypercholesterolemia (16). Although similar results have been observed for liver Kupffer cells during Listeria infection (111), this contrasts with the fate of adventitial macrophages in other models of sterile and non-sterile aortic inflammation. We and others found that, after a transient recruitment of monocyte-derived macrophages in the acute phase, the resident macrophage population prevails even in chronic inflammatory models (37,44). Our results focused on adventitial macrophages, but the different response of intimal macrophages in atherosclerosis is an intriguing characteristic that might contribute to the defective inflammation resolution in atherosclerosis (61,81). Given that in mice the entire adventitial macrophage pool requires approximately one year for a complete cell turnover (44), it is likely that the turnover of intimal macrophages during hypercholesterolemia is accelerated by mechanisms like emigration or cell death, which in turn could fuel atherosclerotic development. Nonetheless, it is possible that intima-resident macrophage numbers also rebound after the cessation of hypercholesterolemia, but this remains to be elucidated.
The fate of the intima-resident macrophages is of particular interest in the context of atherosclerosis, as we know from other inflammatory settings that most monocytes recruited under inflammatory conditions do not stably engraft as resident macrophages. These transient macrophages disappear upon the resolution of inflammation (25,39,112,113), which has also been shown in the aorta (44). Future studies will have to elucidate the macrophage composition and origin within the intimal niche after the cessation of hypercholesterolemia. Studies focussing on the lung and other tissues have also found that some de novo recruited macrophages are not transiently resident but persist even after inflammation resolution. Importantly, these macrophages were shown to acquire an epigenetic memory of the inflammatory situation (inflammation-imprinted resident macrophages), which might have detrimental effects on tissue repair or on responses to repetitive insults (113). Whether the newly recruited atherosclerotic macrophages share fates with inflammatory macrophages in other tissues and vanish after removal of the inflammatory stimulus is the topic of ongoing research.
Despite the increased macrophage apoptosis and necrosis in atherosclerotic plaques, macrophage numbers are stable throughout disease progression (49). It has been estimated that in early atherosclerosis monocyte recruitment accounts for approximately 70% of the macrophage replenishment, while more than 85% of the macrophages in advanced plaques stem from in situ proliferation (49). Interestingly, whereas proliferation of intimal macrophages increases as the atherosclerotic lesion progresses, adventitial macrophages do not proliferate more, as if they were not affected by the ongoing inflammatory process (49). Macrophage loss in atherosclerotic plaques is mainly a result of cell death. In infectious settings, macrophages emigrate from the site of infection either via reverse transendothelial migration or via lymphatics to clear the inflammatory triggers and present them to the adaptive immune system (62,114). Hypercholesterolemia, however, suppresses emigration signals via CCR7 and macrophage migratory capacity, leading to a continuous accumulation of macrophages and increased local cell death with the development of a necrotic core (62)(115)(116)(117)(118). In general, the migration behaviour of plaque macrophages has been characterised as 'dancing on the spot', i.e. macrophages do not migrate within the plaque but only extend and retract their dendrites (87,110,119). This inability to migrate raises the question of whether resident adventitial macrophages are capable of crossing the muscular media and migrating into the developing plaque.
FIGURE 2 | The origin and fate of macrophages in murine models of atherosclerotic plaque formation and regression. Intima-resident macrophages are the first to encounter accumulating apoB-containing lipoproteins but are replaced by recruited macrophages within weeks. It is unknown whether resident adventitial (EMP- and BM-derived) macrophages can invade the intima at any point of atherosclerosis progression or regression. Monocyte recruitment is the dominant source of plaque macrophages during early atherosclerosis, whereas local macrophage proliferation takes over at later stages. In other inflammatory disorders, macrophages can migrate either transendothelially or via lymphatics to clear the inflammatory triggers and present them to the adaptive immune system. However, hypercholesterolemia suppresses emigration, leading to a continuous accumulation of cells, resulting in increased apoptosis and secondary necrosis. Upon the cessation of hypercholesterolemia, the fate of localised macrophages is as yet unknown. Research from other inflammatory disorders has shown that, upon removal of the inflammatory stimuli, a number of macrophages will clear through apoptosis (transient macrophages), but inflammatory imprinting can define a population of surviving macrophages with an acquired epigenetic memory, which might have detrimental effects on disease resolution and recurrence of inflammation. Additionally, a novel wave of monocyte recruitment defines a fresh population of reparatory macrophages, which aid in tissue clearance and tissue repair.
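To make the shift from recruitment- to proliferation-dominated replenishment concrete, consider a deliberately simple toy model of plaque macrophage numbers. All rate parameters below are invented for illustration; only the qualitative behaviour (proliferation overtaking recruitment as the lesion matures) mirrors the estimates in (49):

```python
# Toy model: dM/dt = R + (p - d) * M, with
# R: monocyte recruitment (cells/day), p: proliferation, d: death.
R, p, d = 50.0, 0.04, 0.047        # invented per-day rates
dt, days, M = 0.1, 360, 500.0      # time step, duration, starting cells

for step in range(int(days / dt) + 1):
    if step % int(60 / dt) == 0:   # report roughly every 60 days
        prolif_share = p * M / (p * M + R)  # share of newly added cells
        print(f"day {step * dt:3.0f}: M = {M:5.0f}, "
              f"proliferation share = {prolif_share:.0%}")
    M += (R + (p - d) * M) * dt
# The proliferation share climbs from ~30% early to ~85% late:
# recruitment dominates young lesions, local proliferation dominates
# advanced plaques.
```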
Phenotype and functions of macrophages are governed by transcriptional regulation. It has been suggested that the transcriptional programs of intima-resident macrophages and recruited monocyte-derived macrophages converge on a similar foamy macrophage profile early in hypercholesterolemia (16,71). But developing atherosclerotic plaques harbour many heterogenous subsets of macrophages. As described above, efforts to integrate the various scRNA-seq studies of the murine atherosclerotic plaque have defined 5 distinct macrophage subsets: (I) inflammatory, (II) TREM2+, (III) interferon-inducible, (IV) resident-like and (V) cavity macrophages (70,71). Inflammatory macrophages show elevated expression levels of pro-inflammatory cytokines like interleukin 1b and tumor necrosis factor. The strong pro-inflammatory gene profile and the importance of interleukin 1b in atherosclerotic disease point towards a major role of this macrophage subset in aggravating the chronic atherogenic inflammation. These cells were furthermore characterized by high CCR2 expression and presumably represent transient inflammatory macrophages descended from invading Ly6C high monocytes. Macrophages expressing TREM2 have been identified as the foam cell population in atherosclerotic plaques (120)(121)(122). TREM2 is a transmembrane glycoprotein that can interact with apolipoprotein E, and TREM2+ macrophages show a transcriptomic signature enriched for lipid metabolism pathways, pinpointing their role in lipid and cholesterol handling (38,123). TREM2+ macrophages have previously been shown to possess anti-inflammatory functions (124). The TREM2+ macrophage subset in atherosclerotic plaques is also characterized by dramatically decreased expression levels of pro-inflammatory molecules like interleukin 1b, tumor necrosis factor or NLR family pyrin containing domain 3 (Nlrp3) (120,121). Furthermore, TREM2+ macrophages have been found to express increased levels of CD11c (110), similar to foamy monocytes (100,101). Interestingly, TREM2 expression is found in a variety of disease-associated macrophages, including microglia during neurodegenerative disease and lipid-associated macrophages in obesity (38,125). Even in our dataset of adventitial macrophages during angiotensin II-induced arterial inflammation, we were able to identify TREM2+ macrophages (71). Consequently, TREM2+ macrophages might represent a phenotype that is associated with tissues exhibiting increased lipid deposition and apoptosis. In both scenarios, macrophages capable of handling lipid depositions are required for tissue homeostasis. Additionally, apoptotic cell death, which leads to increased lipid and cholesterol deposition, is associated with anti-inflammatory cell functions (76). It is interesting to note that TREM2+ macrophages do not seem to be related to only one ontogeny but can derive from both YS and BM (38,71,125). The interferon-inducible macrophages are a rather small subset in the atherosclerotic plaque (70). These macrophages are characterised by expression of several interferon-inducible genes, including Isg15 and Irf7. Future studies will have to investigate this so far unknown subset, but given the pro-atherosclerotic role of type I interferon signalling, interferon-inducible macrophages might be detrimental in the course of the disease (126).
The identified resident-like macrophage subset is characterised by high expression of Lyve-1, a gene that is important in resident adventitial macrophages for regulation of collagen production in the arterial wall (14). Similar to our dataset of Lyve-1-expressing macrophages in the healthy aorta (37), resident-like macrophages in the atherosclerotic plaque showed increased gene expression of Gas6 (48,70,121). Consequently, resident-like macrophages might be important in the efferocytotic clearance of apoptotic plaque cells and thus be major influencers of the balance between progressing and regressing atherosclerotic plaques. A drawback of the scRNA-seq studies is that we cannot distinguish between intimal and adventitial macrophages. Thus, the presence of resident-like macrophages in atherosclerotic aortas does not provide evidence for a role of Lyve-1+ resident macrophages in the intima-focussed atherosclerotic disease process. The discovery of a macrophage subset expressing a gene signature reminiscent of 'cavity macrophages' is an interesting aspect in light of recent reports. Mature macrophages from serous cavities like the peritoneum or pericardium have been shown to invade surrounding tissue during sterile inflammation, where they play important roles during tissue repair (127)(128)(129). The presence of cavity macrophages in atherosclerotic aortas indicates that adjacent macrophages, from serous cavities or potentially even the adventitia, can invade the atherosclerotic aorta and intima.
Even though resident adventitial macrophages constitute 90% of the aortic macrophages and are the only arterial subset originating from EMPs, their involvement in the atherosclerotic disease process is unclear. The role of adventitial macrophage subsets, including the YS EMP-derived adventitial macrophages, warrants further investigation. As described above, EMP-derived adventitial macrophages show a distinct transcriptional signature of anti-inflammatory and efferocytic functions that is preserved during chronic arterial inflammation (37). Failing efferocytosis, in particular, has been shown to be a major pathogenic factor in atherosclerotic development (76). Adventitial EMP-derived macrophages seem to be predestined to counteract this failure and inhibit the inflammatory cycle within atherosclerotic plaques, if they invade the growing lesion. As outlined herein, plaque macrophages show diminished migratory behaviour, and adventitial macrophages are not thought to invade the growing plaques. On the other hand, the presence of cavity macrophages suggests that certain macrophage subsets might still be able to invade the developing lesions. Also, a CD11c+ cell subset, which resembles foamy macrophages, has been shown to be actively migrating within the plaque. These results warrant further investigation of the trafficking of adventitial macrophages and macrophages of different ontogenies during the various stages of atherosclerosis development.
MACROPHAGE EGRESS AND MONOCYTE MIGRATION IN REGRESSING ATHEROSCLEROTIC PLAQUES
Atherosclerosis is characterised by a failure to resolve the inflammatory response. The continuous influx and retention of apoB-containing lipoproteins represents a persistent inflammatory stimulus. The lowering of blood lipid levels, in particular cholesterol levels (130), allows the resolution phase to commence. The resolution or regression of atherosclerotic plaques can lead to a reduction of plaque size, but most importantly results in the scarring and stabilisation of advanced lesions, lowering the risk of myocardial infarction and stroke (22). The reduction of plaque leukocyte abundance and a phenotypic switch in plaque cells are important hallmarks during atherosclerosis regression (131). Macrophages are highly plastic cells, and as such are fundamental players in the tissue repair processes seen in atherosclerosis regression.
Traditional mouse models of atherosclerosis, like the LDL receptor and apolipoprotein E knockout mouse, have greatly contributed to our understanding of the atherosclerotic disease process. These models, however, lack the ability to normalise hypercholesterolemia and induce regression. Fortunately, in recent years several mouse models of atherosclerosis regression have been developed (132)(133)(134)(135)(136)(137)(138). The common denominator of these models is the normalisation of cholesterol levels after a phase of hypercholesterolemia to induce advanced atherosclerotic plaques. Examples include the transplantation of atherosclerotic aortic segments into normocholesterolemic mice or the inducible deficiency of the microsomal triglyceride transfer protein, as in the Reversa mouse (132,133). The variety of regression models, as well as their individual limitations, such as surgical inflammation and a lack of lymphatic anastomosis in the transplantation model, might explain the heterogeneous results regarding the fate of macrophages in atherosclerosis regression. A novel approach uses antisense oligonucleotides targeting the LDL receptor to transiently cause hypercholesterolemia and induce atherosclerotic plaques. In this model, regression can be induced either by discontinuing the antisense oligonucleotides or through treatment with sense oligonucleotides for the LDL receptor (136). The LDL receptor antisense method offers a promising approach, as it allows scientists to omit time- and labour-intensive crossbreedings when using transgenic animals in regression studies. Furthermore, due to its limited off-target effects, antisense treatment is even used in human hyperlipidaemic disease (139,140).
A hallmark of atherosclerosis regression is the reduction of the plaque macrophage content (87,117)(141)(142)(143)(144)(145)(146). Macrophage emigration from arteries via afferent lymphatics or reverse transendothelial migration aids the host defence by presenting antigens to the adaptive immune system (62,114). As described above, hypercholesterolemia blunts the CCR7-guided emigration via the expression of neuroimmune guidance cues, including netrin 1 and semaphorin 3E, and by increasing plasma membrane cholesterol content, which affects intracellular signalling as well as other mechanisms (62)(115)(116)(117)(118)(147). Not surprisingly, the reversal of hypercholesterolemia has been shown to induce CCR7 expression in plaque macrophages and, with it, their efflux via afferent lymphatics (117)(142)(143)(144)(148)(149)(150). Whether lesional macrophages leave the regressing plaque via reverse transendothelial migration, as well as the quantitative relevance of macrophage emigration to the overall loss of plaque macrophages, has not yet been clarified. Increased macrophage emigration has been observed in several different models of atherosclerosis regression, including the aortic transplantation model, the Reversa mouse and apoB-targeted antisense oligonucleotide treatment (117)(142)(143)(144)(148)(149)(150), whereas other reports have found no difference in macrophage emigration behaviour during regression (87,145,146). Importantly, emigration of plaque macrophages to lymph nodes might aid the development of the recently described post-resolution phase, although it is unknown whether this establishment of adaptive immunity takes place in atherosclerosis regression (151).
Another common mechanism of leukocyte removal during tissue repair is programmed cell death via apoptosis (152). Effective clearance of apoptotic cells by macrophages avoids secondary necrosis and suppresses inflammation. Additionally, efferocytosis aids tissue repair by inducing a pro-resolving phenotype in phagocytosing macrophages (76). Whereas a recent report identified increased macrophage apoptosis as part of the regression mechanism (149), other studies did not find elevated numbers of apoptotic macrophages in regressing plaques (145,146,150). In order for apoptosis to act as a pro-resolving stimulus, efferocytosis needs to be functional. In atherosclerosis progression, however, defective efferocytosis is an essential pathogenic mechanism (11). The role of macrophage apoptosis in atherosclerosis regression remains elusive, and further studies investigating the presence and functionality of efferocytosis in atherosclerosis regression are warranted.
As macrophage numbers in advanced atherosclerotic plaques are primarily maintained through local proliferation, another means of reducing the plaque macrophage burden is through the suspension of proliferation. Indeed, a decrease in proliferating macrophages can be observed within 3 weeks of regression (145,149). An inhibition of macrophage proliferation upon cessation of hypercholesterolemia is not an unexpected finding, as the retained and modified apoB-containing lipoproteins are potent inducers of M-CSF, contributing to an increase in local macrophage proliferation in advanced plaques (49,153,154).
In addition to macrophage survival and proliferation, monocyte recruitment is another factor influencing plaque macrophage numbers. The reversal of hypercholesterolemia presumably blunts the heightened monocytopoiesis and normalises circulating Ly6C high monocyte levels. However, so far, no difference could be detected in studies evaluating the monocyte frequency even after 4 weeks of regression (145,146). These intriguing results warrant further studies focusing on the timing and return to a steady-state haematopoiesis following the onset of normocholesterolemia. Nonetheless, monocyte extravasation is not only dependent on the number of circulating monocytes, but also on their potential to invade the regressing plaque. Similarly to mechanisms halting proliferation of plaque macrophages, decreased de novo generation of macrophages from immigrating monocytes would result in a reduction of plaque macrophage abundance. Experimentally, several groups have detected a suppressed migration of Ly6C high as well as Ly6C low monocytes into the regressing plaque by using the adoptive transfer of labelled monocytes, as well as by monocyte tracking with fluorescent beads (145,146,155). The quantitative relevance of this effect might, however, be limited. Härdtner et al. estimated that the limited monocyte recruitment accounts for only about 25% of plaque macrophage reduction (145), whereas another report found no suppression of monocyte influx in regressing plaques, despite using similar methods (149). In summary, there are various mechanisms at play reducing the abundance of inflammatory macrophages in regressing atherosclerotic plaques. Presumably, all four mechanisms mentioned herein are relevant for ameliorating the inflammatory burden, likely occurring at various stages of regression. Longitudinal studies of macrophage trafficking, in combination with fate-mapping models and other methodologies capable of tracing the fates of lesional macrophages will hopefully advance our understanding of the cellular dynamics in regression.
The diminished monocyte influx during atherosclerosis regression is an interesting avenue for further research. The resolution and repair phase after myocardial infarction, as well as following sterile injuries in other organs, depends on the continuous influx of monocytes, which consequently differentiate into reparatory and pro-resolving macrophages (156)(157)(158)(159)(160). The importance of monocyte migration into the arterial wall to facilitate inflammation resolution and tissue repair has recently also been established for atherosclerosis regression. Applying the aortic transplantation model in numerous chemokine receptor knockout and reporter mice, Rahman et al. found that inhibiting the entry of Ly6C high, but not Ly6C low, monocytes into the atherosclerotic plaque during normocholesterolemia abrogates atherosclerosis regression (161). Analogous to their phenotype in the steady state, Ly6C high monocytes might not necessarily differentiate into macrophages, but instead participate in tissue repair with their monocyte-specific functions. In a model of sterile liver injury, as well as during the resolution phase after myocardial infarction, recruited classical Ly6C high monocytes performed a phenotypic switch to non-classical Ly6C low monocytes, which was crucial for optimal tissue repair (99,157). The precise functions of circulating Ly6C low monocytes during atherosclerosis regression have not yet been clarified. Given their role in the integrity of the endothelium, it is conceivable that intravascular Ly6C low monocytes participate in the reorganisation of the endothelial layer during plaque size reduction. It will be interesting to see the first results of studies focussing on the role of Ly6C low monocytes during atherosclerosis regression, for instance in a mouse model with a Ly6C low monocyte-specific deficiency (162).
Akin to the inflammation-poised phenotype of monocytes circulating during hypercholesterolemia and atherosclerosis development described above, resolution-dedicated monocyte subsets have been found in the inflammatory resolution of sepsis and colitis (163). However, whether the described reparatory Ym1 (chitinase-like protein 3)+ Ly6C high monocyte subset is also present during atherosclerosis resolution has not been investigated. Nevertheless, Ly6C high monocytes have been found to exhibit an altered surface expression of various proteins during the regression of atherosclerosis (146). This underlines the importance of the quality over the quantity of the monocyte response and offers an explanation as to why atherosclerosis regression continues undisturbed in studies with suppressed monocyte recruitment.
Emerging scRNA-seq studies of atherosclerosis regression have provided insight into the heterogeneity of the remaining and recruited macrophages in regressing plaques (48,149). Interestingly, the same five main macrophage clusters mentioned previously for atherosclerosis have also been observed in regressing plaques (48,70,149). This might be less surprising for the subsets of cavity-like and TREM2+ macrophages. As mentioned above, cavity macrophages have been found to be essential mediators of tissue repair (127)(128)(129). The scRNA-seq studies of atherosclerosis regression, however, are unable to inform us about the location of the analysed macrophages, and thus it is unclear whether these cavity macrophages have invaded the intima or whether they participate in the resolution of the intimal inflammation. TREM2+ macrophages, in turn, are known to be equipped for lipid handling, and the accumulation of extracellular lipids is part of the tissue repair when dead and apoptotic cells need to be cleared by efferocytosis, a process that is increased in regressing plaques (149).
Although these studies identified the same major macrophage clusters in regressing plaques as in atherosclerosis development, there were subtle differences in expression levels, representing a spectrum of activation states (48,149). The subset of inflammatory macrophages, for instance, showed decreased expression levels of Il1b and Nlrp3 compared to atherosclerotic macrophages before the induction of atherosclerosis regression (149). Interestingly, the described interferon-inducible macrophages had increased transcription levels of signal transducer and activator of transcription 6 (Stat6), which is known to induce type 2 or reparatory immune responses (149). Notably, when atherosclerotic aortic segments were transplanted into normocholesterolemic Stat6-deficient mice, atherosclerosis regression was abrogated, which was associated with a pro-inflammatory phenotype of plaque macrophages (161). A question that has not been definitively resolved is whether the already present plaque macrophages can be repolarized by the regressing conditions to adjust their functional program towards a reparatory phenotype, or whether an influx of de novo reparatory macrophages is required. The study by Rahman et al. found that Ly6C high monocyte influx is an absolute requirement for plaque regression and the differentiation of reparatory macrophages (161). This is in line with evidence that inflammatory macrophages cannot be repolarized to reparatory macrophages (164). In other reports, inflammatory macrophages could be repolarized to a reparatory phenotype, although only a limited number of phenotypic markers were assessed (165,166). Furthermore, an elegant in vivo tracking approach found a phenotypic adjustment of individual macrophages from inflammatory, inducible nitric oxide synthase-expressing to arginase-expressing macrophages in a model of chronic central nervous inflammation (167). If and to what extent a local phenotypic switch of macrophages occurs in the regressing atherosclerotic plaque remains elusive and warrants further studies.
Interestingly, when Lin et al. broke down the transcriptional differences in macrophages during progression and regression in more detail, they identified one substantial macrophage subset and 42 distinctly regulated genes that were predominantly present during regression. The macrophage subset was characterised by high expression of Stab1, which is important for efferocytosis. In addition to Stab1, Gas6 represents another upregulated molecule important for efferocytosis (48). Intriguingly, we have previously found that adventitial EMP-derived macrophages are characterised by high expression levels of both Stab1 and Gas6 (37). The presence of a regression-specific macrophage subset expressing a similar signature might indicate a role for EMP-derived macrophages in the tissue repair during atherosclerosis regression, and even hint towards the migration of these prenatally seeded adventitial macrophages into the intima. Relatedly, EMP-derived macrophages are known to be important regulators of tissue repair in the heart (168,169). So far, it was assumed that adventitial macrophages do not cross the media and immigrate into the intima, but future studies will have to re-evaluate the fate of adventitial EMP-derived macrophages during atherosclerotic disease.
HUMAN TRANSLATABILITY
The wide array of available methods, including genetic fate-mapping models, intravital imaging or tracking of adoptively transferred cells, makes the mouse an ideal model system for studying the trafficking behaviour and dynamics of monocytes and macrophages. Although these models allow us to study the trafficking of mononuclear phagocytes in the mouse vasculature, ultimately the goal is to advance our understanding of these features in humans. Since similar experimental manipulations are unfeasible in humans, descriptive studies are used to determine the translatability of results in the mouse to the human situation.
In mice, YS EMP-derived macrophages seed the aorta during early embryonic development. Haematopoiesis is a conserved process between humans and mice, with an initial haematopoietic wave originating in the extra-embryonic YS, followed by a transition to intra-embryonic definitive haematopoiesis (170,171). We and others have previously identified primitive macrophages in the human YS that show a phenotype similar to EMP-derived macrophages in the mouse (27,172,173).
A recent study employing scRNA-seq on human embryonic tissue at different time points of organogenesis found tissue-resident macrophages originating from the YS as well as the fetal liver (174), thus providing evidence for an initial seeding of vascular macrophages during early human embryogenesis and corroborating rodent studies. A second wave of monocyte-derived macrophages presumably follows the initial seeding with YS-derived macrophages in the human. Direct evidence for this in the arterial wall is lacking, but studies in other human tissues were able to translate the results of mouse studies to humans. Langerhans cells in the skin have been shown to be YS-derived resident epidermal macrophages seeding the tissue in the first wave, whereas dermal macrophages are of monocytic origin in mice (31,175). In humans with an inherited severe monocytopenia, a dramatically reduced frequency of CD14+ dermal macrophages but sustained numbers of Langerhans cells could be observed (176,177). The skin is an easily accessible organ with macrophages of different ontogenies that enables investigation of macrophage trafficking in humans. Other future options might include the study of conserved epigenetic marks between the murine and human system, also in the cardiovascular system and especially the arterial wall.
Like in the mouse, macrophages are a major subset or even the dominant immune subset in the non-atherosclerotic arterial wall of humans, although human arteries also contain significant numbers of T lymphocytes (16,37,42,43)(178)(179)(180). Arterial phagocytes can be found in the intima, directly beneath the endothelial layer, mirroring their function as immune sentinels, as well as in the adventitia (2)(180)(181)(182)(183)(184)(185). Arterial resident macrophages are more prevalent in the adventitial layer than in the intima, although the difference is less pronounced compared to the mouse (186,187). As would be expected for immune sentinels, intima-resident macrophages can be found more frequently at atheroprone sites, which show non-laminar and low shear stress blood flow (2,180,183). Studies in other human organs have provided evidence that tissue-resident macrophages self-sustain mainly through local proliferation without monocyte input, although there might be differences depending on the macrophage subset. In studies of sex-mismatched hand allografts, YS-derived Langerhans cells were not replaced by recipient cells but remained of donor origin up to 10 years post-transplantation (188,189). This is in line with results of sex-mismatched heart transplants, where only 31% of presumably BM-derived CCR2+ macrophages were of recipient origin, compared to less than 1% of CCR2−, potentially YS-derived, resident macrophages, after a mean period of 8.8 years post-transplantation (190). The arterial wall of the vessels in the transplanted organs has not been examined separately, but a recent scRNA-seq study of healthy human arterial tissue identified a proliferative macrophage subset (178), hinting towards a self-sustaining arterial resident macrophage population.
Although the human intima harbours subendothelial macrophages and CD11c+ phagocytes that mirror the recently identified aortic intima-resident macrophages of the mouse (16,181,191), there are important differences in the intimal composition between mouse models and humans that need to be considered. The human intima is thickened and comprises abundant VSMCs and extracellular matrix at sites prone to atherosclerotic development. Additionally, VSMCs are very plastic cells and, in addition to being producers of extracellular matrix components, can be phagocytic and develop into foam cells. The discrimination of VSMC and macrophage foam cells is complicated by the fact that VSMCs can express macrophage markers like CD68, whereas macrophages have also been found to express VSMC lineage markers (51)(192)(193)(194)(195). Lineage-tracing studies in mice have shown a varying degree of foam cells originating from VSMCs, ranging from 16% to 70% (53)(195)(196)(197). In humans, it has been estimated through the analysis of histone marks that 18% of CD68+ plaque cells originate from the VSMC lineage (51). Future studies will have to determine to what extent VSMCs and macrophages contribute to the foam cell pool at different phases of the atherosclerotic process.
The inflammatory reaction following lipoprotein influx, retention and modification leads to a continuous recruitment of human monocytes into the growing atherosclerotic lesion (198, 199). Evidence for an important role of monocyte recruitment in the development of human atherosclerosis derives from studies showing an association of monocyte counts with atherosclerotic plaque development over several years of follow-up (200-202). Human monocytes can be distinguished into three different subsets: (I) classical CD14+CD16−, analogous to the Ly6Chigh mouse population, (II) non-classical CD14dimCD16+, aligning with the murine Ly6Clow subset, and (III) intermediate CD14+CD16+ monocytes (203). Emerging results from multiparametric analyses have identified further subsets, and it will be interesting to determine their functional relevance in atherosclerosis (204-206). The non-classical CD14dimCD16+ monocytes fulfil similar endothelial surveillance functions as in the mouse, whereas the role of intermediate monocytes in atherosclerotic progression is not yet clear (55-57, 203). The classical CD14+CD16− monocytes are thought to mainly enter the growing atherosclerotic lesion (207), as this subset preferentially migrates into tissues and differentiates into macrophages (208-211). Consequently, it has been shown that higher numbers of circulating CD14+CD16− monocytes predict cardiovascular events (212, 213). Interestingly though, classical CD14+CD16− monocytes do not associate with a more high-risk plaque phenotype in patients with advanced atherosclerosis (214). This may reflect a more important role of local macrophage proliferation than monocyte recruitment in advanced atherosclerosis, similar to what has been observed in mice. In line with this, advanced atherosclerotic plaques contain a significant fraction of proliferating macrophages (71, 215-219). Another striking similarity between human and mouse plaque macrophages relates to their phenotype. An integrated analysis of mouse and human scRNA-seq subsets revealed a conserved phenotype between the two species, with detection of (I) inflammatory, (II) foamy TREM2+, (III) resident-like and (IV) interferon-inducible macrophages (71).
In summary, there are important differences between human and mouse atherosclerosis, as exemplified by the presence of a thickened VSMC-rich intima in the human arterial wall.
Nonetheless, studies in rodent models have been instructive in examining basic principles of the trafficking of mononuclear phagocytes and will continue to provide valuable insight. Novel techniques, such as spatial transcriptomics (220), hold great promise for translating murine results to the human situation.
CONCLUSION AND OUTSTANDING QUESTIONS
Monocytes and macrophages are key effector cells during all phases of atherosclerotic disease. Their trafficking in and out of the arterial wall directly influences the disease process. Although we have gained substantial insight into these processes during atherosclerosis development, there are still major gaps in our knowledge. For instance, it is currently unknown whether invading monocytes persist in a non-differentiated state within the plaque or whether their only fate is differentiation into plaque macrophages. Answering this question is complicated by the phenotypic similarities of monocytes and macrophages. The combination of newly developed fate-mapping models with novel methodologies, such as spatial transcriptomics, represents a promising avenue for future investigations of cellular fates within the plaque. Along these lines, the recently identified subset of intima-resident macrophages illustrates the potential of such approaches for deciphering macrophage dynamics within the arterial wall.
Nonetheless, it is unclear whether intima-resident macrophages vanish entirely upon the onset of hypercholesterolemia or can rebound once cholesterol levels are normalised. Another remaining question relates to the dynamics of the replacement of intima-resident macrophages by recruited macrophages. Does the resident subset die, emigrate or simply stop proliferating?
Another major remaining question is the role of adventitia-resident macrophages in atherosclerosis. During atherosclerosis development, perinatally seeded intima-resident macrophages are quickly replaced by recruited inflammatory macrophages. The replacing cells are presumably transient macrophages, which do not engraft after inflammation resolution. As mentioned earlier, it is currently unclear whether a small subset of intima-resident macrophages prevails during atherosclerosis progression, and whether these cells are capable of rebounding following the cessation of hypercholesterolemia. Consequently, adventitial macrophages might be the only long-term resident macrophages in the aorta during atherosclerosis development. In contrast to the recruited inflammatory macrophages in the intima, adventitial macrophages do not show increased proliferation during atherosclerosis development, resulting in largely stable macrophage numbers despite the continuous inflammation in the local environment (49). This, and the low migratory capacity of plaque macrophages, could argue for a limited role of adventitia-resident macrophages in the atherosclerotic process. On the other hand, regressing atherosclerotic plaques contain a subset of macrophages possessing a transcriptional signature that is reminiscent of homeostatic and pro-resolving EMP-derived adventitial macrophages. Future studies will have to evaluate the role of adventitial macrophages in atherosclerosis and investigate whether these cells are capable of migrating into the intima. Given their pro-resolving phenotype, adventitial EMP-derived macrophages and their migration into the atherosclerosis-affected intima also represent a potential therapeutic target.
In general, we are lacking studies that quantitatively examine the recruitment and the different fates of monocytes and macrophages during the different phases of atherosclerosis progression, and in particular during atherosclerosis regression. Novel fate-mapping and conditional gene deletion models, such as the Ms4a3Cre (30), CCR2Cre (221-223) and RankCre (45) mice, together with high-dimensional analysis approaches, will aid in deepening our understanding of these processes.
In this Review, we have mainly focused on results from mouse models but have summarized evidence for similarities as well as differences between the rodent and human arterial wall. Many aspects pertaining to the trafficking of monocytes and macrophages are difficult to corroborate in humans, given that fate-mapping techniques are not feasible. Nonetheless, the development of novel methods, including scRNA-seq and spatial omics technologies, will continue to expand the possibilities of analysing monocyte and macrophage dynamics in humans. Although several findings in the mouse can be translated to humans, there are differences in the pathological mechanisms, which call for an increased effort in performing human studies.
AUTHOR CONTRIBUTIONS
LT and CS conceived the idea and article structure. FP and LT wrote and edited the manuscript. FP created the illustrations and CS revised the manuscript and provided oversight. All authors have made a substantial, direct and intellectual contribution to the article and approved the submitted version.
FUNDING
LT is supported by a Walter Benjamin fellowship of the German Research Foundation (DFG). This study was supported by the DFG, SFB 1123 project A07 to CS.
Comparative genomics reveals the in planta‐secreted Verticillium dahliae Av2 effector protein recognized in tomato plants that carry the V2 resistance locus
Summary

Plant pathogens secrete effector molecules during host invasion to promote colonization. However, some of these effectors become recognized by host receptors to mount a defence response and establish immunity. Recently, a novel resistance was identified in wild tomato, mediated by the single dominant V2 locus, that controls strains of the soil-borne vascular wilt fungus Verticillium dahliae that belong to race 2. With comparative genomics of race 2 strains and resistance-breaking race 3 strains, we identified the avirulence effector that activates V2 resistance, termed Av2. We identified 277 kb of race 2-specific sequence comprising only two genes encoding predicted secreted proteins that are expressed during tomato colonization. Subsequent functional analysis based on genetic complementation into race 3 isolates and targeted deletion from the race 1 isolate JR2 and race 2 isolate TO22 confirmed that one of the two candidates encodes the avirulence effector Av2 that is recognized in V2 tomato plants. Two Av2 allelic variants were identified that encode Av2 protein variants differing by a single amino acid. Thus far, a role in virulence could not be demonstrated for either of the two variants.
Introduction
In nature, plants are continuously threatened by potential plant pathogens. However, most plants are resistant to most potential plant pathogens due to an efficient immune system that becomes activated by any type of molecular pattern that accurately betrays microbial invasion (Dangl and Jones, 2001; Cook et al., 2015). Throughout time, different conceptual frameworks have been put forward to describe the molecular basis of plant-pathogen interactions and the mechanistic underpinning of plant immunity. Initially, Harold Flor introduced the gene-for-gene model in which a single dominant host gene, termed a resistance (R) gene, induces resistance in response to a pathogen expressing a single dominant avirulence (Avr) gene (Flor, 1942). Isolates of the pathogen that do not express the allele of the Avr gene that is recognized escape recognition and are assigned to a resistance-breaking race. In parallel to these race-specific Avrs, non-race-specific elicitors were described as conserved microbial molecules that are often recognized by multiple plant species (Darvill and Albersheim, 1984). The recognition by plants of Avrs and of non-race-specific elicitors, presently known as pathogen- or microbe-associated molecular patterns (P/MAMPs), was combined in the 'zig-zag' model (Jones and Dangl, 2006). In this model, P/MAMPs are perceived by cell surface-localized pattern recognition receptors (PRRs) to trigger pattern-triggered immunity (PTI), while effectors are recognized by cytoplasmic receptors that are known as resistance (R) proteins to activate effector-triggered immunity (ETI) (Jones and Dangl, 2006). Importantly, the model recognizes that Avrs function to suppress host immune responses in the first place, implying that these molecules, besides being avirulence determinants, act as virulence factors through their function as effector molecules (Jones and Dangl, 2006). A more recent model, termed the invasion model, recognizes that the functional separation of PTI and ETI is problematic and proposes that the corresponding receptors, collectively termed invasion pattern receptors (IPRs), detect either externally encoded or self-modified ligands that indicate invasion, termed invasion patterns (IPs), to mount an effective immune response (Thomma et al., 2011; Cook et al., 2015). However, it is generally appreciated that microbial pathogens secrete dozens to hundreds of effectors to contribute to disease establishment, only some of which are recognized as Avrs (Rovenich et al., 2014).
IPRs encompass typical R genes, which have been exploited for almost a century to confer resistance against plant pathogens upon introgression from sexually compatible wild relatives into elite cultivars (Dodds and Rathjen, 2010; Dangl et al., 2013). Most R genes encode members of a highly polymorphic superfamily of intracellular nucleotide-binding leucine-rich repeat (NLR) receptors, while others encode cell surface receptors (Dangl et al., 2013). Unfortunately, most R genes used in commercial crops are short-lived because the resistance that they provide is rapidly broken by pathogen populations, as their deployment in monoculture-based cropping systems selects for pathogen variants that overcome immunity (Stukenbrock and McDonald, 2008; Dangl et al., 2013). Such breaking of resistance occurs upon purging of the Avr gene, sequence diversification, or by employment of novel effectors that subvert the host immune response (Stergiopoulos et al., 2007; Cook et al., 2015).
Verticillium dahliae is a soil-borne fungal pathogen and causal agent of Verticillium wilt on a broad range of host plants that comprises hundreds of dicotyledonous plant species, including numerous crops such as tomato, potato, lettuce, olive, and cotton (Fradin and Thomma, 2006; Klosterman et al., 2009). The first source of genetic resistance toward Verticillium wilt was identified in tomato (Solanum lycopersicum) in the early 1930s in an accession called Peru Wild (Schaible et al., 1951). The resistance is governed by a single dominant locus, designated Ve (Diwan et al., 1999), comprising two genes that encode cell surface receptors, of which one, Ve1, acts as a genuine resistance gene (Fradin and Thomma, 2006). Shortly after its deployment in the 1950s, resistance-breaking strains appeared that were assigned to race 2, whereas strains that are contained by Ve1 belong to race 1 (Alexander, 1962). Thus, Ve1 is characterized as a race-specific R gene, and resistance-breaking strains have become increasingly problematic over time (Alexander, 1962; Dobinson et al., 1996). With comparative population genomics of race 1 and race 2 strains, the V. dahliae avirulence effector that is recognized by tomato Ve1 was identified as VdAve1, an effector that is secreted during host colonization (de Jonge et al., 2012). As anticipated, it was demonstrated that VdAve1 acts as a virulence factor on tomato plants that lack the Ve1 gene and that, consequently, cannot recognize VdAve1 (de Jonge et al., 2012). Recent evidence demonstrates that VdAve1 exerts selective antimicrobial activity and has the capacity to manipulate local microbiomes inside host plants as well as in the environment (Snelders et al., 2020). Whereas all race 1 strains carry an identical copy of VdAve1, all race 2 strains analysed to date are characterized by complete loss of the VdAve1 locus (de Jonge et al., 2012; Faino et al., 2016). Intriguingly, phylogenetic analysis has revealed that VdAve1 was horizontally acquired by V. dahliae from plants (de Jonge et al., 2012; Shi-Kunne et al., 2018), after which the effector gene was lost multiple times independently, presumably due to selection pressure exerted by the Ve1 locus that has been introgressed into most tomato cultivars (Faino et al., 2016).
Despite significant efforts, attempts to identify genetic sources for race 2 resistance in tomato remained unsuccessful for a long time (Baergen et al., 1993). Recently, however, a source of race 2 resistance was identified in the wild tomato species Solanum neorickii (Usami et al., 2017). This genetic material was used to develop the tomato rootstock cultivars Aibou, Ganbarune-Karis and Back Attack by Japanese breeding companies, in which resistance is controlled by a single dominant locus, denoted V2 (Usami et al., 2017). However, experimental trials using race 2-resistant rootstocks revealed resistance-breaking V. dahliae strains that, consequently, are assigned to race 3 (Usami et al., 2017). In this study, we performed comparative genomics combined with functional assays to identify the avirulence effector Av2 that activates race-specific resistance in tomato genotypes that carry V2.
Identification of Verticillium dahliae strains that escape V2 resistance
To identify Av2 as the V. dahliae gene that mediates avirulence on tomato V2 plants, we pursued a comparative genomics strategy by searching for genomic regions that are absent from all race 3 strains. To this end, we performed pathogenicity assays with a collection of V. dahliae strains on a differential set of tomato genotypes, comprising (I) Moneymaker plants that lack V. dahliae resistance genes, (II) Ve1-transgenic Moneymaker plants that are resistant against race 1 and not against race 2 strains (Fradin et al., 2009), and (III) Aibou plants that carry Ve1 and V2 and are therefore resistant against race 1 as well as race 2 strains (Usami et al., 2017) (Fig. 1A). First, we aimed to confirm the race assignment of eight V. dahliae strains that were previously tested by Usami et al. (2017) (Table 1). Additionally, three strains that were previously assigned to race 2 were included (de Jonge et al., 2012) as well as V. dahliae strain JR2 (race 1) because of its gapless telomere-to-telomere assembly (Faino et al., 2015).
At 3 weeks post inoculation, all strains caused significant stunting on the universally susceptible Moneymaker control (Fig. 1A and B), while all strains except for the race 1 strain JR2 caused significant stunting on Ve1-transgenic Moneymaker plants (Fig. 1A and C), corroborating that, except for strain JR2, none of the strains belongs to race 1 and that a potential containment on Aibou plants cannot be caused by Ve1 recognition of the VdAve1 effector. Importantly, all of the strains that were used by Usami et al. (2017) and that were previously assigned to race 2 did not cause significant stunting on Aibou, whereas all of the strains that were assigned to race 3 caused clear symptoms of Verticillium wilt disease (Fig. 1, Table 1; Usami et al., 2017). The previously assigned race 2 strain DVDS26 (de Jonge et al., 2012) caused no significant stunting on Aibou plants, confirming that this remains a race 2 strain, while strains DVD161 and DVD3 caused significant stunting, implying that these strains should actually be assigned to race 3. As expected, the race 1 strain JR2 did not cause stunting on Aibou plants, which can at least partially be attributed to VdAve1 effector recognition by the Ve1 gene product in these plants. However, the finding that a transgenic VdAve1 deletion line (JR2ΔAve1; de Jonge et al., 2012) caused significant stunting on Ve1-transgenic Moneymaker and not on Aibou plants indicates that the JR2 strain might also encode Av2. Currently, it is not known whether this is the case, or whether basal defence is simply enhanced in the absence of Ave1. After all, we previously showed that the virulence of the VdAve1 deletion strain on tomato is severely compromised (de Jonge et al., 2012), which can also be observed on Moneymaker plants in our assays (Fig. 1B). This observation, combined with the observation that stunting on Aibou plants by any race 3 strain is generally less than stunting on Moneymaker plants (Fig. 1B and D), could indicate that basal defence against Verticillium wilt is enhanced in Aibou plants, and thus that incompatibility of the VdAve1 deletion strain may be due to enhanced basal defence rather than to V2-mediated recognition of the JR2 strain.
Comparative genomics identifies Verticillium dahliae Av2 candidates
Besides the gapless genome assembly of strain JR2 (Faino et al., 2015), genome assemblies were also available for strains DVDS26, DVD161 and DVD3, albeit that these assemblies were highly fragmented as they were based on Illumina short-read sequencing data (de Jonge et al., 2012) (Table 1). In this study, we determined the genomic sequences of the race 2 strains TO22, UD1-4-1, GF1207 and GFCA2, and the race 3 strains GF-CB5, GF1192, VT2A and HOMCF with Oxford Nanopore Technologies (ONT) sequencing using a MinION device (Table 1). For each strain, ~2-4 Gb of sequence data was produced, representing 50-100x genome coverage based on the ~35 Mb gapless reference genome of V. dahliae strain JR2 (Faino et al., 2015). Subsequently, we performed self-correction of the reads, read trimming and genome assembly, leading to genome assemblies ranging from 18 contigs for strain UD1-4-1 to 69 contigs for strain GF1207 (Table 1).
Based on the genome sequences, we pursued comparative genomics analyses by exploring two scenarios. The first scenario is that Av2 is race 2-specific and thus present in race 2 lineage sequences while absent from race 3. The second scenario is that Av2 is present in isolates that belong to race 1 and race 2, but that the resistance phenotype against race 2 is masked by Ve1 resistance directed against Ave1. In scenario I, comparative genomics was performed making use of race 2 strain TO22 (Usami et al., 2017) as a reference, while in scenario II race 1 strain JR2 (Faino et al., 2015) was used (Table 2). To this end, self-corrected reads from the V. dahliae race 3 strains were mapped against the assembly of V. dahliae strain TO22 (scenario I) or strain JR2 (scenario II) and regions that were not covered by race 3 reads were retained (Table 2). Next, self-corrected reads from the race 2 strains were mapped against the retained reference genome-specific regions that are absent from the race 3 strains, and sequences that were found in every race 2 strain were retained as candidate regions to encode the Avr molecule. Sequences that are shared by the V. dahliae strain TO22 reference assembly and all race 2 strains, and that are absent from all race 3 strains, were mapped against the V. dahliae strain JR2 genome assembly, and common genes were extracted. Sequences that did not map to the V. dahliae strain JR2 genome assembly were de novo annotated and signal peptides for secretion at the N-termini of the encoded proteins were predicted to identify potential effector genes.
Our strategy identified, for scenario I, 563 kb of race 2-specific sequence containing 110 genes, of which six encode putative secreted proteins (Table 2). For scenario II, 222 kb of sequence that is absent from race 3 strains was identified, containing 40 genes of which only two are predicted to encode secreted proteins: XLOC_00170 (VDAG_JR2_Chr4g03680a) and evm.model.contig1569.344 (VDAG_JR2_Chr4g03650a, further referred to as Evm_344). Intriguingly, both these genes were previously recognized as being among the most highly expressed effector genes during colonization of Nicotiana benthamiana plants (de Jonge et al., 2013; Faino et al., 2015).

Fig. 1. A. Typical appearance of V. dahliae infection by strain JR2, TO22 and HOMCF as representatives of race 1, 2 and 3, respectively, on Moneymaker (MM) plants that lack known V. dahliae resistance genes, Ve1-transgenic Moneymaker plants that are resistant against race 1 and not against race 2 or 3 strains, and Aibou plants that carry Ve1 and V2 and are therefore resistant against race 1 as well as race 2 strains, but not against race 3 strains, at 21 days post inoculation (dpi). B-D. Measurement of V. dahliae-induced stunting on wild-type Moneymaker plants (B), Ve1-transgenic Moneymaker plants (35S:Ve1) (C) and Aibou plants (D) at 21 dpi. The graphs show collective data from four different experiments indicated with different symbols (circles, squares, triangles and plus symbols), and asterisks indicate significant differences between V. dahliae- and mock-inoculated plants as determined with an ANOVA followed by a Fisher's LSD test (P < 0.01).

Only two of the Av2 candidates are expressed in planta

We anticipate that the genuine Av2 gene may not necessarily be expressed in N. benthamiana (de Jonge et al., 2013) but should be expressed particularly in tomato. Real-time PCR analysis on a time course of tomato cultivar Moneymaker plants inoculated with the V. dahliae JR2 strain revealed that the two candidate genes are expressed during tomato colonization, with a peak in expression around 7 days post inoculation, whereas little to no expression could be recorded upon growth in vitro (Fig. 2A). Both genes are similarly expressed in V. dahliae strain TO22, albeit that the expression peaks slightly later, at 11 dpi (Fig. 2B). However, whereas the expression level of both genes is similar in V. dahliae strain JR2, Evm_344 is expressed at a higher level than XLOC_00170 in V. dahliae strain TO22. Importantly, none of the four additional avirulence effector gene candidates that were identified in comparative genomics scenario I is expressed in planta in V. dahliae strain TO22 (Fig. 2B). Thus, based on the transcriptional profiling, these four avirulence effector genes can be disqualified as Av2 candidates, and only two genes that display an expression profile that can be expected for a potential avirulence effector gene remain: XLOC_00170 and Evm_344.

To identify which of the two candidates encodes Av2, a genetic complementation approach was pursued in which the two candidate genes were introduced individually into the V. dahliae race 3 strains GF-CB5 and HOMCF. Subsequently, inoculations were performed on a differential set of tomato genotypes, comprising Moneymaker plants, Ve1-transgenic Moneymaker plants (Fradin et al., 2009), and Aibou plants (Usami et al., 2017). As expected, the non-transformed race 3 strains GF-CB5 and HOMCF as well as the complementation lines containing XLOC_00170 or Evm_344 caused clear stunting of the universally susceptible Moneymaker as well as of the Ve1-transgenic Moneymaker plants (Fig. 3A and B). Interestingly, non-transformed race 3 strains GF-CB5 and HOMCF and the Evm_344 complementation lines caused clear stunting on Aibou plants, whereas the XLOC_00170 complementation lines did not induce disease symptoms and stunting on these plants (Fig. 3A and B). As such, these complementation transformants of the race 3 strains GF-CB5 and HOMCF behaved essentially as the race 2 strain TO22 (Fig. 3A and B). Thus, these findings suggest that XLOC_00170 encodes Av2. All visual observations of stunting were supported by quantifications of fungal biomass by real-time PCR (Fig. 3C). These measurements revealed that fungal biomass levels were only reduced on Aibou plants when inoculated with the race 2 strain TO22, and with the race 3 strains GF-CB5 and HOMCF that were complemented with XLOC_00170. Thus, our data confirm that reduced symptomatology is accompanied by significantly reduced fungal colonization and indicate that XLOC_00170 encodes the race 2-specific avirulence effector Av2.

Fig. 2. Expression of V. dahliae candidate avirulence effector genes in vitro and during colonization of tomato plants. To assess in planta expression, 12-day-old tomato cv. Moneymaker seedlings were root-inoculated with V. dahliae strain JR2 (A) or strain TO22 (B), and plants were harvested from 4 to 14 days post inoculation (dpi), while conidiospores were harvested from 5-day-old cultures of V. dahliae on potato dextrose agar (PDA) to monitor in vitro expression. Real-time PCR was performed to determine the relative expression of XLOC_00170, Evm_344 and the race 1-specific effector gene VdAve1 as a positive control (de Jonge et al., 2012) for strain JR2, using V. dahliae GAPDH as reference (A). Similarly, the relative expression of XLOC_00170, Evm_344 and six additional avirulence effector genes was determined for strain TO22, using V. dahliae GAPDH as reference (B).
To further confirm that XLOC_00170 encodes Av2, targeted gene deletions were pursued in race 2 strain TO22 as well as in the JR2ΔAve1 strain, and inoculations were performed on Moneymaker plants, Ve1-transgenic Moneymaker plants (Fradin et al., 2009), and Aibou plants (Usami et al., 2017). All V. dahliae genotypes caused clear stunting on wild-type and Ve1-transgenic Moneymaker plants, except for wild-type JR2 on Ve1-transgenic Moneymaker (Fig. 4A and B). Interestingly, whereas V. dahliae strains TO22 and JR2ΔAve1 were contained on Aibou plants, the XLOC_00170 deletion strains caused stunting of these plants in a similar fashion as the race 3 strains GF-CB5 and HOMCF (Fig. 4C). All visual observations were supported by quantification of fungal biomass by real-time PCR (Fig. 4C). Collectively, our data unambiguously demonstrate that XLOC_00170 encodes the Av2 effector that is recognized on V2 tomato plants.
Av2 does not seem to contribute to virulence

It has been widely recognized that the intrinsic function of Avrs is to support host colonization by acting as virulence determinants (Jones and Dangl, 2006; Rovenich et al., 2014; Cook et al., 2015). Thus, we assessed the virulence of the complementation lines alongside their wild-type progenitor genotypes on wild-type and Ve1-transgenic Moneymaker plants (Fig. 3). However, no significant increase in symptomatology nor in fungal colonization could be recorded upon Av2 introduction. Similarly, no significant decrease in symptomatology, nor a decrease in fungal colonization, could be recorded upon Av2 deletion from V. dahliae strain TO22 on these tomato genotypes and upon Av2 deletion from JR2ΔAve1 on wild-type Moneymaker plants (Fig. 4), suggesting that Av2 is not a major contributor to virulence on tomato under the conditions of our assays.

Fig. 3. … and Aibou plants that carry Ve1 and V2 and are therefore resistant against race 1 as well as race 2 strains of the pathogen (Usami et al., 2017), inoculated with the race 3 WT strains GF-CB5 and HOMCF, two independent genetic complementation lines that express XLOC_00170 or Evm_344, and the race 2 strain TO22. B. Quantification of stunting caused by the various V. dahliae genotypes on the various tomato genotypes as detailed for panel (A). Each combination is represented by the measurement of five plants. C. Quantification of fungal biomass with real-time PCR determined for the various V. dahliae genotypes on the various tomato genotypes as detailed for panel (A). Each combination is represented by the fungal biomass quantification in five plants. Asterisks indicate significant differences between V. dahliae- and mock-inoculated plants as determined with an ANOVA followed by a Fisher's LSD test (P < 0.01).
To assess Av2 distribution in V. dahliae, presence-absence variation (PAV) was determined in a collection of 52 previously sequenced V. dahliae strains (Fig. 6; de Jonge et al., 2012; Faino et al., 2015; Fan et al., 2018; Gibriel et al., 2019), revealing that Av2 occurs in 17 of the isolates, including the four race 2 isolates that were sequenced in this study (Fig. 6). To assess the phylogenetic relationships between strains that carry Av2, a phylogenetic tree was generated, showing that the strains can be grouped into three major clades, two of which comprise strains that contain Av2. However, within these clades closely related strains occur that have lost Av2, suggesting the occurrence of multiple independent losses (Fig. 6). Overall, no obvious phylogenetic structure is apparent with respect to effector presence within the V. dahliae population.
Next, we investigated the genomic organization surrounding Av2 based on the gapless genome assembly of V. dahliae strain JR2 (Faino et al., 2015). Interestingly, Av2 resides in close proximity to Evm_344, separated by only two additional genes, in a lineage-specific (LS) region on chromosome 4 (Fig. 7). Furthermore, as typically observed in LS regions that are enriched in repetitive elements (de Jonge et al., 2013; Faino et al., 2016), Av2 is surrounded by repetitive elements such as transposons that mostly belong to the long terminal repeat (LTR) retrotransposon class (Fig. 7). Typically, LS regions are characterized by abundant PAV. As expected, the flanking genomic regions (100 kb) are highly variable between V. dahliae strains (Fig. 7).
As many Avr effectors are under strong selection pressure and thus often display enhanced allelic variation (Stergiopoulos et al., 2007), we assessed allelic variation among the 17 Av2 alleles identified in this study. We identified only two allelic variants within the 17 Av2 alleles, which differ by a single nucleotide polymorphism (SNP) in exon 3 that leads to a polymorphic amino acid at position 73. Whereas 10 isolates carry a glutamic acid at this position (E73), seven others carry a valine (V73) (Fig. 8). Interestingly, strains carrying V73 cluster in the same branch, suggesting that a single event caused this polymorphism (Fig. 6). We noticed that all isolates carrying E73 carry an extra transposable element of the DNA/Tc-1 Mariner class in the upstream region of the Av2 gene (Fig. 8). Intriguingly, as strains GF-CA2, TO22, UD-1-4-1, DVDS26 and GF1207, which encode the Av2 variant with V73, as well as JR2ΔAve1, which encodes the variant with E73, are contained on Aibou plants, we conclude that both allelic variants are recognized by V2. Moreover, the Av2 deletion strains of TO22 (with V73) and of JR2ΔAve1 (with E73) are not compromised in aggressiveness on wild-type Moneymaker plants when compared with the TO22 or JR2ΔAve1 progenitor strains, indicating that neither allele makes a noticeable contribution to V. dahliae virulence.

Fig. 4. Targeted deletion confirms that XLOC_00170 encodes the avirulence effector Av2 that is recognized in V2 plants. A. Top pictures of Moneymaker plants that lack known V. dahliae resistance genes (MM), Ve1-transgenic Moneymaker plants that are resistant against race 1 and not against race 2 strains of V. dahliae (35S:Ve1), and Aibou plants that carry Ve1 and V2 and are therefore resistant against race 1 as well as race 2 strains of the pathogen (Usami et al., 2017), inoculated with the race 3 strains GF-CB5 and HOMCF, the race 1 WT strain JR2, the deletion line JR2ΔAve1, two independent knock-out lines of XLOC_00170 in JR2ΔAve1, the race 2 WT strain TO22, and two independent knock-out lines of XLOC_00170 in TO22. B. Quantification of stunting. C. Quantification of fungal biomass with real-time PCR caused by the various V. dahliae genotypes on the various tomato genotypes as detailed for panel (A). Different symbols (empty circles, filled circles and triangles) refer to five plants from three different experiments. Asterisks indicate significant differences between V. dahliae- and mock-inoculated plants as determined with an ANOVA followed by a Fisher's LSD test (P < 0.01).
Discussion
Historically, the identification of avirulence genes has been challenging for fungi that reproduce asexually, as genetic mapping cannot be utilized. However, since the advent of affordable genome sequencing, cumbersome and laborious methods to identify avirulence genes, which include functional screening of fungal cDNAs or protein fractions for the induction of immune responses in plants (Takken et al., 2000; Luderer et al., 2002), have been supplemented with comparative genomics and transcriptomics strategies (Gibriel et al., 2016). Less than a decade ago, we identified the first avirulence gene of V. dahliae, known as VdAve1 for mediating avirulence on Ve1 plants, through a comparative population genomics strategy combined with transcriptomics, utilizing race 1 strains that were contained by the Ve1 resistance gene of tomato and resistance-breaking race 2 strains (de Jonge et al., 2013). In this study, we used a similar approach based on comparative population genomics of race 1 and 2 strains with race 3 strains to successfully identify XLOC_00170 as the Av2 effector that mediates avirulence on V2 plants. Intriguingly, besides VdAve1, XLOC_00170 has been identified previously as one of the most highly induced genes of V. dahliae during host colonization (de Jonge et al., 2013). Ve1 and the V2 locus are the only two major resistance sources that have been described in tomato against V. dahliae thus far (Fradin et al., 2009; Usami et al., 2017). Since its initial introduction from a wild Peruvian tomato accession into cultivars in the 1950s (Deseret News and Telegram, 1955), Ve1 has been widely exploited, as it is incorporated in virtually every tomato cultivar today. Even though resistance-breaking race 2 strains emerged soon after the introduction of these cultivars, first in the United States (Robinson, 1957; Alexander, 1962) and soon thereafter also in Europe (Cirulli, 1969; Pegg and Dixon, 1969), Ve1 is still considered useful for Verticillium wilt control today. An important factor that contributes to the durability of resistance is the fitness penalty for the pathogen upon losing the corresponding avirulence factor (Brown, 2015). The VdAve1 effector contributes considerably to V. dahliae virulence on tomato, which explains why race 2 strains that lack VdAve1 are generally less aggressive (de Jonge et al., 2012). Based on our current observations that differences in aggressiveness between race 2 and race 3 strains on Moneymaker plants are not obvious (Fig. 1), that genetic complementation of race 3 strains with Av2 did not lead to a striking increase in aggressiveness on Moneymaker plants (Fig. 3), and that targeted deletion of Av2 from race 2 strains did not lead to a striking decrease in aggressiveness on Moneymaker plants (Fig. 4), we conclude that the contribution of Av2 to V. dahliae virulence under the conditions tested in this study is modest at most.
Thus far, V2 resistance has scarcely been exploited compared with Ve1, as it has only been introduced in a number of Japanese rootstock cultivars since 2006 (Usami et al., 2017). Previously, V2 resistance-breaking race 3 strains have been found in several Japanese prefectures on two separate islands (Usami et al., 2017). Intriguingly, our genome analyses demonstrate that race 3 strains that lack Av2 are ubiquitous and found worldwide, as our collection of sequenced strains comprises specimens that were originally isolated in Europe, China, Canada, and the United States. Arguably, most of these race 3 strains arose in the absence of V2 selection by tomato cultivation. It is conceivable that, similar to Ve1 homologues that are found in other plant species besides tomato (Song et al., 2017), functional homologues of V2 occur in other plant species as well, which may have selected against the presence of Av2 in many V. dahliae strains. However, as long as V2 has not been cloned, this hypothesis cannot be tested.
Like VdAve1, Av2 also resides in an LS region of the V. dahliae genome, albeit in another region on another chromosome. Typically, these LS regions are gene-sparse and enriched in repetitive elements, such as transposons, causing these regions to be highly plastic, which is thought to mediate accelerated evolution of effector catalogues (de Jonge et al., 2013; Faino et al., 2016; Cook et al., 2020). We previously demonstrated that VdAve1 has been lost from the V. dahliae population multiple times, and to date only PAV has been identified as a mechanism to escape Ve1-mediated immunity (de Jonge et al., 2012, 2013; Faino et al., 2016). Similarly, our phylogenetic analysis reveals that Av2 has been lost multiple times independently, and although we identified two allelic variants, both variants are recognized by V2. Consequently, PAV remains the only known mechanism to overcome V2-mediated immunity thus far. Despite the observation that PAV is the only observed mechanism for V. dahliae to overcome host immunity, pathogens typically exploit a wide variety of mechanisms, ranging from SNPs (Joosten et al., 1994) to altered expression of the avirulence gene (Na and Gijzen, 2016). Nevertheless, avirulence gene deletion to overcome host immunity is common and has been reported for various fungi, including C. fulvum (Stergiopoulos et al., 2007), Fusarium oxysporum (Niu et al., 2016; Schmidt et al., 2016), Leptosphaeria maculans (Gout et al., 2007; Petit-Houdenot et al., 2019), Blumeria graminis (Praz et al., 2016) and Magnaporthe oryzae (Pallaghy et al., 1994; Zhou et al., 2007).
It was previously demonstrated that frequencies of SNPs are significantly reduced in the area surrounding the VdAve1 locus when compared with the surrounding genomic regions (Faino et al., 2016), which was thought to point toward recent acquisition through horizontal transfer (de Jonge et al., 2012). However, we recently noted that enhanced sequence conservation through reduced nucleotide substitution is a general feature of LS regions in V. dahliae (Depotter et al., 2019). Although a mechanistic underpinning is still lacking, we hypothesized that differences in chromatin organization may perhaps explain this phenomenon. Interestingly, while DNA methylation is generally low and only present at TEs, only TEs in the core genome are methylated, while LS TEs are largely devoid of methylation (Cook et al., 2020). Furthermore, TEs within LS regions are more transcriptionally active and display increased DNA accessibility, representing a unique chromatin profile that could contribute to the plasticity of these regions (Faino et al., 2016; Cook et al., 2020). Possibly, the increased DNA accessibility contributes to the high in planta expression of genes residing in these regions, and VdAve1 as well as Av2 belong to the most highly expressed genes during host colonization (de Jonge et al., 2013).

Fig. 6. Phylogenetic tree of sequenced V. dahliae strains with indication of presence-absence variation for the Ave1 and Av2 effectors. Strains that were phenotyped and included in the comparative genomics (Table 2) are shown in bold. Presence of the avirulence genes VdAve1 and Av2, and the race designation based on the presence or absence of these genes, are indicated. Phylogenetic relationships between sequenced V. dahliae strains were inferred using Realphy (Langmead and Salzberg, 2012), and branch length represents sequence divergence.
Our identification of Av2 concerns the cloning of only the second avirulence gene of V. dahliae. This identification may permit its use as a functional tool for genetic mapping of the V2 gene. Typically, V. dahliae symptoms on tomato display considerable variability, and disease phenotyping is laborious. Possibly, injections of heterologously produced Av2 protein can be used to screen tomato plants in genetic mapping analyses, provided that such injections result in a visible phenotype such as a hypersensitive response. Similar effector-assisted resistance breeding has previously been used successfully to identify resistance sources in tomato against the leaf mould pathogen Cladosporium fulvum (Lauge et al., 1998; Takken et al., 1999) and in potato against the late blight pathogen Phytophthora infestans (Vleeshouwers and Oliver, 2014; Du et al., 2015). The identification of Av2 can furthermore be exploited for race diagnostics of V. dahliae to determine whether cultivation of resistant tomato genotypes is useful, but also to monitor V. dahliae population dynamics and race structures. Based on the identification of avirulence genes, rapid in-field diagnostics can be developed to aid growers to cultivate disease-free crops.

Fig. 7. Presence-absence variation in the region surrounding the two candidate Av2 genes. (A) Genomic region flanking the Av2 candidate genes in the 17 isolates detailed in Fig. 3. The matrix shows the presence (black/grey) and absence (white) in 100 bp non-overlapping windows for Av2 variant E73 (black) and Av2 variant V73 (grey). On top, annotated genes are displayed in black and repetitive elements in green, while Av2 is displayed in red and Evm_344 in blue. (B) Read coverage for V. dahliae strain JR2, which encodes Av2 variant E73, and strain DVD-S26, which encodes Av2 variant V73, depicting a transposable element deletion in isolates that produce the V73 variant.
V. dahliae inoculation and phenotyping
Plants were grown in potting soil (Potgrond 4, Horticoop, Katwijk, the Netherlands) under controlled greenhouse conditions (Unifarm, Wageningen, the Netherlands) with a day/night temperature of 24/18°C for 16-h/8-h periods, respectively, and a relative humidity between 50% and 85%. For V. dahliae inoculation, 10-day-old seedlings were root-dipped for 10 min as previously described (Fradin et al., 2009). To test for significant stunting, an ANOVA was performed that tests for significant differences in canopy area between mock-inoculated and V. dahliae-inoculated plants. Outliers were detected based on the studentized residuals from the ANOVA analysis. All datapoints with studentized residuals below −2.5 or above 2.5 were classified as outliers and removed. In total, approximately 1.8% of the datapoints were classified as outliers.
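For illustration, the outlier-screening step described above can be sketched in Python; this is a minimal sketch of the stated procedure (ANOVA, then removal of points with |studentized residual| > 2.5, then refitting), not the original analysis script, and the column names canopy_area and treatment are hypothetical.

```python
# Minimal sketch: one-way ANOVA on canopy area, outlier removal on
# externally studentized residuals, and a refit on the retained points.
# Assumes a complete DataFrame without missing values.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

def remove_outliers_and_test(df: pd.DataFrame) -> pd.DataFrame:
    model = ols("canopy_area ~ C(treatment)", data=df).fit()
    # Externally studentized residuals, one per data point.
    studres = model.get_influence().resid_studentized_external
    kept = df[abs(studres) <= 2.5]  # drop outliers beyond +/-2.5
    refit = ols("canopy_area ~ C(treatment)", data=kept).fit()
    print(sm.stats.anova_lm(refit, typ=1))  # treatment effect significance
    return kept
```

A post-hoc test (Fisher's LSD in the article) would then compare mock- and V. dahliae-inoculated groups on the retained data.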
High-molecular weight DNA isolation and nanopore sequencing

Conidiospores were harvested from potato dextrose agar (PDA) plates, transferred to Czapek Dox medium and grown for 10 days. Subsequently, fungal material was collected on Miracloth, freeze-dried overnight and ground to powder with mortar and pestle, of which 300 mg was incubated for 1 h at 65°C with 350 μl DNA extraction buffer (0.35 M sorbitol, 0.1 M Tris-base, 5 mM EDTA pH 7.5), 350 μl nuclei lysis buffer (0.2 M Tris, 0.05 M EDTA, 2 M NaCl, 2% CTAB) and 162.5 μl sarkosyl (10% w/v) with 1% β-mercaptoethanol. Next, 400 μl of phenol/chloroform/isoamyl alcohol (25:24:1) was added, shaken and incubated at room temperature (RT) for 5 min before centrifugation at 16,000 g for 15 min. After transfer of the aqueous phase to a new tube, 10 μl of RNase (10 mg ml−1) was added and incubated at 37°C for 1 h. Subsequently, half a volume of chloroform was added, shaken and centrifuged at 16,000 g for 5 min at RT, after which the chloroform extraction was repeated. Next, the aqueous phase was mixed with 10 volumes of 100% ice-cold ethanol and incubated for 30 min at RT, and the DNA was fished out using a glass hook, transferred to a new tube, and washed twice with 500 μl 70% ethanol. Finally, the DNA was air-dried, resuspended in nuclease-free water and incubated at 4°C for 2 days. DNA quality, size and quantity were assessed by Nanodrop, gel electrophoresis and Qubit analyses.
Library preparation with the Rapid Sequencing Kit (SQK-RAD004) was performed according to the manufacturer's instructions (Oxford Nanopore Technologies, Oxford, UK) with 400 ng HMW DNA. An R9.4.1 flow cell (Oxford Nanopore Technologies) was loaded and run for 24 h. Base calling was performed using Guppy (version 3.1.5; Oxford Nanopore Technologies) with the high-accuracy base-calling algorithm. Adapter sequences were removed using Porechop (version 0.2.4 with default settings; Wick, 2018). Finally, the reads were self-corrected, trimmed and assembled using Canu (version 1.8; Koren et al., 2017). Sequencing data are available in the NCBI SRA database under accession number PRJNA639910.
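A minimal Python sketch of this read-processing chain, shelling out to the named tools, is shown below. The directory layout, the Guppy configuration name and the assumption of a single merged FASTQ per step are illustrative, not taken from the article; Canu performs the self-correction, trimming and assembly in one invocation.

```python
# Sketch of the described basecalling -> adapter trimming -> assembly chain.
# Paths and the Guppy config name are assumptions.
import subprocess

def assemble_strain(fast5_dir: str, workdir: str, prefix: str) -> None:
    # Base calling with Guppy's high-accuracy model (config name assumed).
    subprocess.run(["guppy_basecaller", "-i", fast5_dir,
                    "-s", f"{workdir}/basecalled",
                    "--config", "dna_r9.4.1_450bps_hac.cfg"], check=True)
    # Adapter removal with Porechop at default settings (merged FASTQ assumed).
    subprocess.run(["porechop", "-i", f"{workdir}/basecalled/reads.fastq",
                    "-o", f"{workdir}/trimmed.fastq"], check=True)
    # Canu self-corrects, trims and assembles; ~35 Mb genome size taken from
    # the JR2 reference mentioned above.
    subprocess.run(["canu", "-p", prefix, "-d", f"{workdir}/assembly",
                    "genomeSize=35m", "-nanopore-raw",
                    f"{workdir}/trimmed.fastq"], check=True)
```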
Comparative genomics and candidate identification
Self-corrected reads from V. dahliae race 3 strains were mapped against the reference genome using BWA-MEM (version 0.7.17; default settings; Li, 2013). Reads with low mapping quality (score < 10) were removed using Samtools view (version 1.9; setting: -q 10), and reads mapping in regions with low coverage (<10x) were discarded using Bedtools coverage (version 2.25.0; setting: -d) (Quinlan and Hall, 2010). Self-corrected reads from the race 2 strains were then mapped against the retained reference genome-specific regions that are absent from the race 3 strains. Retained sequences shared by the reference and every race 2 strain, while absent from every race 3 strain, were retained as Av2 candidate regions.
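The interval logic behind this candidate selection can be illustrated with a short Python sketch using pysam: reference windows that fall below the coverage threshold in every race 3 strain, but are covered in every race 2 strain, are retained. The BAM paths and the 1 kb window size are illustrative assumptions; the published pipeline used the command-line tools named above.

```python
# Sketch of the candidate-region logic described above. Windows lacking
# race 3 coverage in all race 3 strains, but covered by all race 2 strains,
# are retained as candidates.
import numpy as np
import pysam

MIN_MAPQ, MIN_COV, WIN = 10, 10, 1000  # thresholds from the text; WIN assumed

def covered(bam_path: str, contig: str, length: int) -> np.ndarray:
    """Per-window boolean: does this strain cover the window at >= MIN_COV?"""
    with pysam.AlignmentFile(bam_path) as bam:
        acgt = bam.count_coverage(
            contig, 0, length, quality_threshold=0,
            read_callback=lambda r: r.mapping_quality >= MIN_MAPQ)
        depth = np.sum(acgt, axis=0)  # total depth per base
    n_win = length // WIN
    per_win = depth[: n_win * WIN].reshape(n_win, WIN).mean(axis=1)
    return per_win >= MIN_COV

def candidate_windows(contig, length, race2_bams, race3_bams):
    absent_race3 = ~np.logical_or.reduce(
        [covered(b, contig, length) for b in race3_bams])
    present_race2 = np.logical_and.reduce(
        [covered(b, contig, length) for b in race2_bams])
    return np.flatnonzero(absent_race3 & present_race2) * WIN  # window starts
```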
The previously determined annotation of V. dahliae strain JR2 (Faino et al., 2015) was used to extract genes when JR2 or TO22 were used as alignment references. To this end, retained sequences shared by the TO22 reference assembly and race 2 strains, absent from race 3 strains, were mapped against the JR2 genome assembly, and genes in the shared sequences were extracted. The remaining sequences that did not map to the V. dahliae strain JR2 genome assembly were annotated using Augustus (version 2.1.5; default settings; Stanke et al., 2006). SignalP software (version 4.0; Petersen et al., 2011) was used to identify N-terminal signal peptides in predicted proteins.
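As an illustration of the final filtering step, a small parser for SignalP 4 short-format output might look as follows; the assumption that the tenth whitespace-delimited field carries the 'Y'/'N' signal-peptide call reflects that format as I understand it and should be checked against the actual output.

```python
# Sketch: collect identifiers of proteins with a predicted signal peptide
# from SignalP 4 short-format output (column layout is an assumption).
def secreted_ids(signalp_short_output: str) -> set[str]:
    ids = set()
    with open(signalp_short_output) as fh:
        for line in fh:
            if line.startswith("#"):
                continue  # skip header lines
            fields = line.split()
            if len(fields) >= 10 and fields[9] == "Y":
                ids.add(fields[0])  # sequence name in the first field
    return ids
```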
Real-time PCR
To determine expression profiles of Av2 candidate genes during V. dahliae infection of tomato, 2-week-old tomato (cv. Moneymaker) seedlings were inoculated with V. dahliae strain JR2 or TO22, and stems were harvested up to 14 dpi. Furthermore, conidiospores were harvested from 5-day-old PDA plates. Total RNA extraction and cDNA synthesis were performed as previously described (Santhanam et al., 2013). Real-time PCR was performed with primers listed in Table 3, using the V. dahliae glyceraldehyde-3-phosphate dehydrogenase gene (GAPDH) as endogenous control. The PCR cycling conditions were as follows: an initial denaturation step at 95°C for 10 min, followed by 40 cycles of denaturation for 15 s at 95°C, annealing for 30 s at 60°C, and extension at 72°C.
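Relative expression against an endogenous control such as GAPDH is conventionally computed with the 2^-dCt method, which can be sketched as follows; the Ct values in the example are hypothetical, and the article does not state which variant of the calculation was used.

```python
# Sketch of the 2^-dCt normalization against the GAPDH endogenous control.
def relative_expression(ct_target: float, ct_gapdh: float) -> float:
    """Expression of the target relative to GAPDH."""
    return 2.0 ** -(ct_target - ct_gapdh)

# Example with hypothetical Ct values for one sample:
print(relative_expression(ct_target=24.3, ct_gapdh=21.1))  # ~0.109
```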
Genome mining
In total, 44 previously sequenced V. dahliae strains and eight strains sequenced in this study were mined for Av2 gene candidates using BLASTn. Gene sequences were extracted using Bedtools (setting: getfasta) (Quinlan and Hall, 2010) and aligned to determine allelic variation using Espript (version 3.0; default settings) (Robert and Gouet, 2014). Similarly, amino acid sequences were aligned using Espript (Robert and Gouet, 2014).
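The sequence-extraction step (the Bedtools getfasta equivalent) amounts to fetching a subsequence from an indexed assembly at the BLASTn hit coordinates; a minimal Python sketch with hypothetical file names and coordinates:

```python
# Sketch: pull an Av2 candidate locus out of an assembly given hit
# coordinates. File name, contig and coordinates are hypothetical.
import pysam

def extract_locus(fasta_path: str, contig: str, start: int, end: int) -> str:
    # FastaFile uses (and builds, if absent) a .fai index for random access.
    with pysam.FastaFile(fasta_path) as fa:
        return fa.fetch(contig, start, end)  # 0-based, end-exclusive

av2_seq = extract_locus("TO22_assembly.fasta", "contig_12", 154000, 154700)
```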
To determine the genomic localization of XLOC_00170 and Evm_344, the V. dahliae strain JR2 assembly and annotation were used (Faino et al., 2015), together with coverage plots from reads of race 3 and race 2 strains as described in comparative genomics approach IV (Table 2), generated with R (version 3.6) scripts using the kpPlotBAMCoverage function of the karyoploteR package. The schematic representation of the genomic region on chromosome 4 with XLOC_00170 and Evm_344 was generated using Integrative Genomics Viewer (IGV) software v2.6.3 (Robinson et al., 2011) and the R (version 3.6) package Gviz (Hahne and Ivanek, 2016).
Presence-absence variation analysis
Presence-absence variation (PAV) was identified by using whole-genome alignments for 17 V. dahliae strains. Paired-end short reads were mapped to V. dahliae strain JR2 (Faino et al., 2015) using BWA-MEM with default settings (Li and Durbin, 2009). Long reads were mapped using minimap2 with default settings (Li, 2018). Using the Picard toolkit (http://broadinstitute.github.io/picard/), library artefacts were marked and removed with MarkDuplicates, followed by SortSam to sort the reads. Raw read coverage was averaged per 100 bp non-overlapping window using the BEDtools multicov function (Quinlan and Hall, 2010). Next, we transformed the raw read coverage values into a binary matrix by applying a cut-off of 10 reads for short-read data: ≥10 reads indicate presence (1) and <10 reads indicate absence (0) of the respective genomic region. For long-read data, a cut-off of 1 read was used: ≥1 read indicates presence (1) and <1 read indicates absence (0). The total number of PAV counts for each of the 100 bp genomic windows within 100 kb upstream and downstream of the candidate effectors was summarized.
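The binarisation described here reduces to thresholding a windows-by-strains count table; a minimal pandas sketch follows, with the table layout and the strain grouping as assumptions.

```python
# Sketch: binarise per-window read counts with the stated cut-offs
# (10 reads for short-read strains, 1 read for long-read strains),
# then sum presence calls per window.
import pandas as pd

SHORT_READ_CUTOFF, LONG_READ_CUTOFF = 10, 1

def pav_matrix(counts: pd.DataFrame, long_read_strains: set[str]) -> pd.DataFrame:
    """counts: rows = 100 bp windows, columns = strains, values = read counts."""
    binary = pd.DataFrame(index=counts.index)
    for strain in counts.columns:
        cutoff = LONG_READ_CUTOFF if strain in long_read_strains else SHORT_READ_CUTOFF
        binary[strain] = (counts[strain] >= cutoff).astype(int)
    return binary

# Presence counts per window, e.g. within 100 kb of the candidate effectors
# (strain grouping here is hypothetical):
# presence_per_window = pav_matrix(counts, {"TO22", "UD1-4-1"}).sum(axis=1)
```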
Agrobacterium tumefaciens-mediated transformation (ATMT) was performed as described previously (Ökmen et al., 2013) with a few modifications. A. tumefaciens was grown in 5 ml minimal medium (MM) supplemented with 50 μg ml−1 kanamycin at 28°C for 2 days. After subsequent centrifugation at 3,000 g (5 min), cells were resuspended in 5 ml induction medium (IM) supplemented with 50 μg ml−1 kanamycin, adjusted to OD600 0.15 and grown at 28°C for a minimum of 6 h until OD600 0.5. Simultaneously, conidiospores of V. dahliae race 3 strains GF-CB5 and HOMCF were harvested with water after 1 week of cultivation on PDA plates, rinsed, and adjusted to a final concentration of 10⁶ conidiospores ml−1. The A. tumefaciens suspension was mixed with V. dahliae conidiospores in a 1:1 volume ratio, and 200 μl of the mixture was spread onto PVDF membranes in the centre of IM agar plates. After 2 days at 22°C, membranes were transferred to fresh PDA plates supplemented with 20 μg ml−1 nourseothricin and 200 μM cefotaxime and incubated at 22°C for two weeks until V. dahliae colonies emerged. Transformants that appeared were transferred to fresh PDA supplemented with 20 μg ml−1 nourseothricin and 200 μM cefotaxime. Successful transformation was verified by PCR and DNA sequencing.
V. dahliae inoculations were performed as described previously (Fradin et al., 2009). Disease symptoms were scored 14 days after inoculation by measuring the canopy area to calculate stunting when compared with mock-inoculated plants. Outgrowth of V. dahliae from stem slices was assessed as described previously (de Jonge et al., 2012). For biomass quantification, stems were freeze-dried and ground to powder, of which ~100 mg was used for DNA isolation. Real-time PCR was conducted with primers SlRUB-Fw and SlRUB-Rv for tomato RuBisCo and primers ITS1-F and STVe1-R for V. dahliae ITS (Table 3). Real-time PCR conditions were as follows: an initial denaturation step at 95°C for 10 min, followed by 40 cycles of denaturation for 15 s at 95°C, annealing for 30 s at 60°C, and extension at 72°C.
Metrological Characterization of an Aerosol Exposure Chamber to Explore the Inhalation Effects of the Combination of Paraquat and TiO2 Nano-objects
Agriculture emits a significant quantity of airborne contaminants, and the prospective environmental release of nanopesticides, a new type of agrochemical that employs engineered nanomaterials (ENMs) as either active substances or additives in a pesticide formulation, raises concerns about inhalation risks that are still unknown. Although the adverse effects of pesticides have been studied extensively, the potential synergistic toxicity between these substances and ENMs has rarely been investigated. To this end, toxicological models are essential for estimating the health consequences of such aerosols. Thus, to assess the respiratory hazards of titanium dioxide nano-objects (specifically, AEROXIDE® TiO2 P25 nanopowder [nTiO2]) in combination with paraquat (PQ), we developed a dynamic whole-body exposure chamber for rodents in compliance with guidelines for inhalation toxicity testing (Organization for Economic Cooperation and Development (OECD)) and animal welfare. First, we metrologically characterized the generated test aerosols by determining their mass and number concentrations, size distributions and atmospheric homogeneity in the laboratory. Then, we evaluated the reproducibility and proper functioning of the chamber during a preliminary field campaign, which validated the consistency of the aerosols' mass and number concentrations between the laboratory characterization and the rodent exposure sessions. Finally, we examined the inhalation effects on the rodents.
INTRODUCTION
Epidemiological studies highlight converging evidence suggesting a link between professional and residential exposure to specific pesticides and chronic illnesses such as neurodegenerative diseases and cancers (INSERM, 2013; Kim et al., 2017). Pesticides represent an occupational risk for farmers, pesticide applicators and manufacturers, but also for the general population, who may be indirectly exposed to pesticides during their application or because of their persistence in the air (Lu et al., 2000; Coscollà et al., 2013; Brouwer et al., 2017; Mattei et al., 2019). Engineered nanomaterials (ENMs) are generally defined as novel materials designed at the nanoscale, with external or internal structures under the size of 100 nm (e.g., ISO/TR 18401:2017). ENMs can be used either as active ingredients or as co-formulants in nanopesticides (NPEs), a reformulation of conventional pesticides. They are expected to enhance the efficiency of agrochemicals and to mitigate the environmental footprint of modern agriculture. Despite the lack of data on their current use, the number of granted patents associated with ENMs in agriculture grew significantly starting in the 1990s, reaching 1,254 patents in 2016 (Kah et al., 2019).
Moreover, some NPEs have already been identified on the market (Kumar et al., 2019), showing that these products are likely to emerge in the near future. As the fate of NPEs is still not understood, some authors have already emphasized the knowledge gap surrounding the environmental impact of these emerging products (Kah et al., 2018), and some research agencies have made these new compounds a healthcare priority (USDA, 2015).
Several studies analyzed the presence of pesticides in the atmosphere (Coscollà et al., 2010;Estellano et al., 2015;López et al., 2017;Désert et al., 2018). Their air concentrations ranged from few picograms to hundreds of nanograms per cubic meter. Nevertheless, the air concentration of pesticides can be locally much higher in treated parcels during the seasonal spreading. For instance, a study with paraquat (PQ) sprayers showed that the PQ mass concentration can reach 125 µg m -3 in the breathing zone during work (Morshed et al., 2010). Another study assessed the size distribution of the pesticide residues in a French rural area (Coscollà et al., 2013). Most pesticides were accumulated in the particulate phase, especially in the fine size range (0.1-1 µm), which represent an inhalable fraction, whose size distribution enables particles to be deposited in the respiratory tract. Accordingly, inhalation toxicology studies appear necessary to assess the risk associated with NPE exposure. However, to our knowledge no toxicological data are available on NPE inhalation effects, involving both nano-objects and their aggregates and agglomerates (NOAAs), and pesticides.
A validation of the inhalation facility with characterized aerosol parameters is needed to conduct animal inhalation studies that will provide toxicity data for the risk assessment of nanopesticides. Among the variety of inhalation devices used for animal experimentation, there are two main exposure modalities, namely nose-only and whole-body devices (OECD TG 403;Yeh et al., 1990). The nose-only strategy facilitates the control of the exposure dose for each animal. Animals are not able to move during the aerosol exposure, as they are restrained in a tube to be exposed individually by limiting other routes of exposure, such as the dermal and oral routes (Dorato and Wolff, 1991). However, the stress induced by restraint during gestation could be problematic, notably in case of repeated exposures. By contrast, in whole-body devices, subjects are immersed within the chamber atmosphere, which reflects environmental or occupational exposure scenarios, which are less restraining for animals since they are able to behave naturally. Whole-body exposure chambers encompass various types of chambers, using single-animal holders or largesized compartments for long-term studies using numerous subjects (Wong, 2007). They are particularly suitable for chronic studies or exposure during the gestation to minimize the stress induced by treatments. It must be noted that animals may be exposed by skin, as they collect particles on their fur during the whole-body exposures (Griffis et al., 1979), which can lead to the particle ingestion during grooming. Nevertheless, as inhalation and dermal routes are considered as the main occupational exposure pathways to pesticides (Damalas and Eleftherohorinos, 2011), this additional source of exposure could be relevant to consider.
To comply with the Organization for Economic Cooperation and Development (OECD) test guideline 413 on subchronic inhalation toxicity (OECD TG 413), the physicochemical characterization of ENMs is necessary prior to the aerosol exposures. Currently, AEROXIDE® TiO2 P25 nanopowder (nTiO2) is one of the most studied ENMs in toxicology, and it has already been characterized in a previous study in terms of crystalline structure, primary particle size, specific surface area and chemical composition (Motzkus et al., 2014). Additionally, nTiO2 appears to be a good candidate for integration as a nano-additive in NPEs owing to its valuable properties in agriculture (Wang et al., 2016), such as its photocatalytic activity, which can be used to degrade PQ (Florêncio et al., 2004). PQ is one of the most widely used herbicides worldwide, despite being suspected of an association with the occurrence of Parkinson's disease among farmers (Kamel et al., 2007; Costello et al., 2009; Tanner et al., 2011). PQ and nTiO2 in the form of a colloidal suspension may therefore constitute a model NPE with interesting features, based on the technological benefits brought by nTiO2 in agriculture and on its ability to decrease the environmental half-life of PQ in soils, which can otherwise reach several years (Sartori and Vidrio, 2018).
Given the fact that the nanopesticide inhalation effects are still not addressed in the literature, the present study is the first necessary step to establish a proof of concept of a transdisciplinary methodology combining aerosol metrology and toxicology to address properly this question. The present paper is dedicated to the metrological characterization of an aerosol exposure chamber to explore the inhalation effects of the herbicide PQ combined with nTiO2 used as reference materials. The second step consisting in the neurotoxicological assessment of these aerosols, will be addressed separately in upcoming dedicated articles. Thus, a new whole-body exposure chamber was developed to explore the inhalation effects of aerosols. It is a simple device which can fit easily in any animal facility to be operated safely by conforming with animal welfare. As a complete metrological set-up cannot be implemented permanently in an animal facility for longterm studies, the characterization had to be made prior to the animal exposure. The characterization was made in the LNE (Laboratoire National de métrologie et d'Essais) laboratory, in terms of size distribution (Scanning Mobility Particle Sizer [SMPS] + Aerodynamic Particle Sizer [APS]), mass concentration (Tapered Element Oscillating Microbalance [TEOM] + indirect measurements of the sampled mass on 47-mm filters), number concentration (Condensation Particle Counter [CPC]), temperature, pressure and relative humidity. In addition, based on the particle size distribution, the chamber geometry and the airflows, computational fluid dynamics (CFD) modeling was used to investigate the homogeneity of the particle concentration within the cage. The main goal for the CFD simulation was to investigate the fill-up process of the chamber, the possibility of error induced by the relative positions of the injection tube and the measurement sampling tube. These parameters were assessed in accordance with the standard method on the reproducibility measurement (ISO 5725-2:2020).
Because it is essential to validate the functioning of the characterized aerosol exposure chamber on the field, here we also report the comparison of the measurements made in the LNE metrological lab (without animal) in regard to the first preliminary data obtained in the animal facility (with animals) for each of the three aerosols. During the animal exposure, a reduced monitoring of the aerosol was used to check the proper aerosol generation, to enable a precise metrological follow-up of the exposures. The mass concentration and the particle number concentration were therefore used to validate animal exposure sessions in accordance with the metrological characterization phase.
Inhalation Exposure Chamber
The exposure chamber is composed of a rodent cage used as a whole-body exposure system (Fig. 1). The chamber was sealed and customized with antistatic silicon pipes to be integrated within the generation device. This cage, initially dedicated to rodents hosting (rats, mice, hamsters or others) in animal facilities, is made out of polysulfone (internal volume of 19.8 L, overall dimensions of 395 × 346 × 213 mm [W × D × H]; GR900; Tecniplast). It was selected for the chemical resistance and the transparency of this material (Tuttle et al., 2010), which allowed a visual follow-up of animals during the exposure sessions. Coupled with this chamber, an aerosol nebulizer (Model 3076; TSI Inc.) was operated to generate three aerosols of interest, i.e., nTiO2, PQ and nTiO2 with PQ produced from daily prepared colloidal suspensions and solutions with bulk powders of nTiO2 (75% anatase, 25% rutile; AEROXIDE P25; Evonik [Reference 718467 Sigma-Aldrich]) and PQ dichloride hydrate (Reference 856177; Sigma-Aldrich, France). All colloidal suspensions and solutions were prepared using ultrapure water (18.2 MΩ cm resistivity; Milli-Q; Millipore) with nTiO2 and PQ concentrations of 3 g L -1 and 28 mg L -1 respectively and were kept under constant stirring during the nebulization process using a magnetic stirrer. These concentrations were used in order to produce aerosols achieving the target concentrations. Concerning the nTiO2 use in nanopesticides, it is not possible to know the precise mass concentration to which people could be exposed in link with professional or residential exposures. Nevertheless, some authors investigated the mass concentrations of nTiO2 in Korean production sites, thanks to personal sampling and real-time monitoring using SMPS (Lee et al., 2011). As a result, they highlighted that TiO2 mass concentrations can reach 4.99 mg m -3 , involving a particle size range of 15-710.5 nm. In addition, to be comparable with other studies dealing with nTiO2 inhalation that showed relevant toxicological endpoints using rodents (Bermudez et al., 2004;Disdier et al., 2017;Chézeau et al., 2019), a target concentration of 10 mg m -3 was chosen.
Concerning PQ, the minimum lethal mass concentration in rats from a single 4-h inhalation exposure was reported to be 0.6-1.4 mg m -3 (McLean et al., 1985). Moreover, a 3-week inhalation study in rats reported the lowest observed adverse effect level (LOAEL) of 100 µg m -3 , based on the histopathological changes in the upper respiratory tract (Grimshav et al., 1979). Thus, the target concentration of 100 µg m -3 corresponding to the National Institute for Occupational Safety and Health (NIOSH) occupational limit was chosen to avoid pain and distress. This concentration is also in line with the 125 µg m -3 occupational exposure reported by Morshed et al. (2010). The mixture being composed of both nTiO2 and PQ aerosols, its target concentration was assumed to be 10.1 mg m -3 . The doses were chosen to be both relevant doses and realistic to cause slight adverse effects in the lungs due to inflammation and oxidative stress, in order to investigate any potential synergy effects due to the mixture. Briefly, these concentrations can be extrapolated to human beings based on the aerosol characterization using the Multiple-Path Particle Dosimetry Model (MPPD), and they are comparable with the NIOSH recommendations for nTiO2 and PQ. The target concentration choice was made to highlight any possible cocktail effects between nTiO2 and PQ within a toxicology protocol involving chronic exposures.
The nebulizer (Model 3076; TSI Inc.) produced stable particle concentrations using compressed air (Pujalté et al., 2017b). It was operated at 2.4 bar which supplied a constant flowrate of 3.2 L min -1 . The compressed air was supplied by a silent air compressor coupled with a filtration and drying device (Model 3074B; TSI Inc.). The produced airflow enabled a controlled dilution of the generated aerosols entering the exposure chamber. It was also dedicated to the dilution of a CPC to prevent the instrument upper range (10 5 particles cm -3 ) to be exceeded. The instrument airflow was checked for each aerosol generation to adjust concentrations with the proper dilution factor. Two downstream diffusion dryers were used to obtain aerosols with a steady relative humidity during the assays. The aerosols were injected horizontally thanks to a four-branched input in the top of the chamber in order to optimize the dispersion of particles. APS, SMPS and TEOM, which were present only during the characterization phase, needed a total flowrate of 5.8 L min -1 . SMPS and TEOM airflows were not diluted while a dilution of the APS flowrate was set at 2.5 L min -1 . The sample outlet flowrate of the exposure chamber was therefore set at 7.8 L min -1 thanks to the global instrumental flowrate (5.8 L min -1 ), the filter holder sampling (1.7 L min -1 ) and the CPC (0.3 L min -1 ). In absence of the instruments used for the metrological characterization phase, a 7.5 L min -1 flowrate was set for the filter-holder sampling line. To comply with OECD TG 413 guidelines, temperature, pressure, relative humidity and airflows were monitored in the chamber as environmental parameters during the characterization.
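The flow bookkeeping above can be checked with a few lines of arithmetic. The sketch below, with flow values taken from the text but helper names of our own choosing, verifies that the chamber outlet flow balances the sampling lines in both phases and shows how a diluted CPC reading would be corrected back to a chamber concentration; the dilution-factor definition and the example reading are generic illustrations, not values from the study.

```python
def total_outlet_flow(sampling_flows_lpm):
    """Total flow (L/min) the chamber outlet must supply to feed all sampling lines."""
    return sum(sampling_flows_lpm.values())

# Characterization phase: SMPS + APS + TEOM together (5.8 L/min), 47-mm filter
# holder (1.7 L/min) and CPC (0.3 L/min) -> 7.8 L/min at the outlet.
characterization = {"instruments": 5.8, "filter_holder": 1.7, "cpc": 0.3}

# Exposure phase: no SMPS/APS/TEOM, so the filter-holder line is raised to 7.5 L/min.
exposure = {"filter_holder": 7.5, "cpc": 0.3}

assert abs(total_outlet_flow(characterization) - 7.8) < 1e-9
assert abs(total_outlet_flow(exposure) - 7.8) < 1e-9


def dilution_factor(sample_flow_lpm, dilution_flow_lpm):
    """Generic dilution factor: multiply a diluted reading by this value to
    recover the undiluted chamber concentration (not a paper-specific formula)."""
    return (sample_flow_lpm + dilution_flow_lpm) / sample_flow_lpm

# Example with made-up values: a CPC reading of 8.0e4 particles/cm^3 behind a
# hypothetical 1:3 dilution (0.3 L/min sample + 0.6 L/min clean air)
# corresponds to ~2.4e5 particles/cm^3 in the chamber.
print(8.0e4 * dilution_factor(0.3, 0.6))
```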
Aerosol Characterization and Monitoring
To characterize particle number size distributions (PNSDs), mass and number concentration, a multi-instrumental set-up composed of a real-time monitoring and off-line aerosol samplings was used ( Fig. 1, part surrounded by a red dotted line). PNSDs of submicronic aerosols were measured thanks to an SMPS (TSI Inc.) composed of a differential mobility analyzer (DMA; Model 3081; TSI Inc.) and a CPC (Model 3775; TSI Inc.). An APS (Model 3321; TSI Inc.) monitored the 0.6-20 µm particle size range. At the top of the exposure chamber, a CPC (Model 3007; TSI Inc.) was set up to measure the total particle number concentrations in real time during the metrological characterization and during the exposure sessions. Total mass concentrations were measured in real time using a TEOM (Model 1400; Thermo Fisher Scientific Inc.) set at 50°C during the characterization, and a 47-mm filter holder allowed a gravimetric monitoring of aerosols during the exposure phase. This latter gravimetric measurement was performed to assess the mass concentrations, by using the mass difference between pre-and post-sampling filters, to be compared with the real-time aerosol mass concentrations assessed with TEOM. The aerosol filtration was made by a filter holder with 47-mm filters (Pallflex® Emfab TX40HI20WW; Pall Corp.) made out of borosilicate glass microfibers reinforced with woven glass cloth and bonded with PTFE. The filter holder was used to enable a complementary indirect gravimetric measurement during the characterization phase as well as the exposure phase. Gaie-Levrel et al. (2018) reported the use of these filters, which filtration efficiency was measured to characterize the most penetrating particle size (MPPS). For the size range of 100 nm particles, the filtration efficiency was 99.97%. Consequently, the sample's fraction of particles on 47-mm filters is representative of the total mass concentration. This filter holder was also coupled with a downstream high-efficiency particulate air (HEPA) filter to prevent any potential particle release during the experiment. Filters may adsorb water, which would alter the quality of particle mass concentration assessment. Nevertheless, our experimental procedure was compared with the reference values of mass concentrations measured with TEOM during the metrological characterization phase to check the consistency between both measurement techniques. A maximum deviation of 3% was obtained by comparing TEOM results with gravimetric analysis on filters.
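As a rough illustration of the indirect gravimetric measurement, the following sketch (our own helper, with made-up filter masses rather than values from the study) converts the pre/post filter mass difference and the sampled air volume into a mass concentration that can be compared with the TEOM readings.

```python
def gravimetric_mass_concentration(pre_mg, post_mg, flow_lpm, duration_min):
    """Aerosol mass concentration (mg/m^3) from the filter mass gain and the
    sampled air volume."""
    sampled_volume_m3 = flow_lpm * duration_min / 1000.0  # L -> m^3
    return (post_mg - pre_mg) / sampled_volume_m3

# Illustrative numbers only (not measurements from the study): a 2.0 mg mass
# gain over a 2-h characterization run sampled at 1.7 L/min corresponds to
# about 9.8 mg/m^3, i.e. close to the 10 mg/m^3 target for the nTiO2 aerosol.
print(gravimetric_mass_concentration(120.0, 122.0, 1.7, 120))
```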
The TEOM mass concentration measurements were not possible to achieve on a daily basis during the exposures of animal. Therefore, thanks to the 47-mm filter holder, the mass concentration was measured for each experiment, based on the mass difference between pre-and post-sampling filters. These concentrations were used to check the steady functioning of aerosol generation during animal exposure, using as a reference the TEOM characterization data made in real time during the characterization.
For the metrological characterization, 10 experiments of 2 h were performed over five days without animals in order to characterize each generated aerosol, i.e., PQ, nTiO2 and PQ + nTiO2. To assess the reproducibility of the generation method, the relative reproducibility standard deviation (SR) according to the ISO 5725-2 standard (ISO 5725-2:2020) was reported. Due to the containment requirements of the animal facility (biosafety levels 1, 2 and 3), all objects entering the building must be sterilized and only workers accredited in animal experimentation can use the facility. As a consequence, it was technically impossible to use all the metrology instruments in situ within the animal facility. Therefore, the characterization phase was performed prior to animal experimentation, without animals, so that the average number and mass concentrations could be used as tracking parameters to validate proper aerosol generation during the animal exposures. The exposure campaign, in the presence of mice, consisted of 11 exposure sessions of 1.5 h for each aerosol, during which the mass and number concentrations were measured to assess the reproducibility of the aerosol generation procedure and to validate the operating device in the field.
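A simplified stand-in for the reported reproducibility figure is sketched below: each session's mean concentration is treated as one result and the relative standard deviation across sessions is computed. This is only an approximation of the full ISO 5725-2 procedure, and the session values used here are illustrative, not the study's data.

```python
import numpy as np

def relative_reproducibility_sd(session_means):
    """Relative spread (%) of per-session mean concentrations: sample standard
    deviation of the session means divided by the grand mean (a simplified
    stand-in for the ISO 5725-2 reproducibility standard deviation)."""
    x = np.asarray(session_means, dtype=float)
    return 100.0 * x.std(ddof=1) / x.mean()

# Illustrative values (not the study's raw data): eleven nTiO2 sessions
# scattered around the 10 mg/m^3 target.
sessions = [10.1, 9.8, 10.4, 10.0, 10.3, 9.7, 10.2, 10.0, 10.5, 9.9, 10.2]
print(f"SR = {relative_reproducibility_sd(sessions):.1f} %")
```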
Aerosol Dispersion Modeling
To study the atmosphere homogeneity in the exposure chamber, simulations of the aerosol dispersion were performed for PQ and nTiO2 aerosols using COMSOL Multiphysics software (version 5.4). A computer-aided design (CAD) model of the test chamber was built and imported in COMSOL. Using the symmetry of the device, half of the chamber was modeled in order to reduce computational time. A preliminary simulation was run to characterize the flow behavior inside the exposure chamber without particles. Reynolds number in the injection tube is evaluated around 1000; hence, a laminar flow formulation of Navier-Stokes equations was used. As the exposure chamber is small, and the ambient temperature is maintained for the duration of the test, it can be assumed that Brownian diffusion will not play an important part in the dispersion of the particles inside the chamber. An order of magnitude computation using the AeroCalc spreadsheet based on Willeke and Baron (2001) shows that diffusion losses are expected to reach a maximum of 2%, given chamber dimensions, flow rate and ambient parameters. It must be noted that such a computation gives a general order of magnitude of the diffusion contribution, which can still be significant at small spatial scale and yield local non-uniformities. Similarly, the small size of the particles leads to negligible inertial effects. Hence, the dispersion analysis was conducted using a passive scalar method. Navier-Stokes equations are resolved on a tetrahedral unstructured mesh with dedicated boundary layers meshing. The mesh includes around 568 k elements, with an average element quality (skewness) of 0.67. Initial conditions are set with no velocity and no particles inside the chamber. At time point T = 0, the fluid flow is ramped up on 3 s, with a target velocity of 0.64 m s -1 at both inlets using the experimental flow rate and an input concentration of 1000 particles m -3 . The outlet was set at a constant pressure difference of -60 Pa. The simulation was run for 900 s (15 CPUs, 32 GB RAM) of simulated time with a total computation time around 80 min.
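The laminar-flow assumption can be sanity-checked with the usual pipe-flow Reynolds number. In the sketch below, the 0.64 m/s inlet velocity comes from the text, but the injection-tube diameter is not reported, so the 25 mm used here is purely an assumed value chosen to illustrate how a Reynolds number of order 1000 arises.

```python
def reynolds_number(velocity_ms, diameter_m, kinematic_viscosity_m2s=1.5e-5):
    """Pipe-flow Reynolds number Re = v*d/nu; nu defaults to air near room temperature."""
    return velocity_ms * diameter_m / kinematic_viscosity_m2s

# 0.64 m/s is the inlet velocity quoted in the text; the 25 mm tube diameter
# below is an assumption for illustration only (not reported in the paper).
re = reynolds_number(0.64, 0.025)
print(f"Re ~ {re:.0f} -> well below the ~2300 laminar/turbulent transition")
```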
Paraquat Stability
Thanks to a UV-visible spectrophotometer (LAMBDA 25; PerkinElmer), suspensions of PQ with nTiO2 were analyzed at different times, i.e., T = 0, 1, 2 and 24 h, to assess the possible photocatalytic degradations of PQ due to nTiO2. For PQ, the absorbance peak was selected at 257 nm and byproducts of degradations can be potentially observed in the 200-230 nm range (Cantavenera et al., 2007). For instrumental calibration, aqueous solutions of PQ with concentrations ranging from 6.25 to 40 mg L -1 were used. As nTiO2 absorbs in the UV-visible wavelength, suspensions were filtered (pore diameter of 0.22 µm; Millex-GS Syringe Filter Unit; Millipore) prior to analysis in order to prevent nTiO2 aggregates to interfere with measurements.
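The quantification step amounts to a linear calibration of the absorbance at 257 nm against the PQ standards and inverting it for the filtered samples. The sketch below illustrates this with made-up absorbance values; only the standard concentrations (6.25-40 mg/L) come from the text, and a roughly linear Beer-Lambert response is assumed.

```python
import numpy as np

# PQ calibration standards (mg/L) from the text; the absorbances at 257 nm
# below are made up for illustration.
standards_mg_per_l = np.array([6.25, 12.5, 25.0, 40.0])
absorbances = np.array([0.13, 0.26, 0.52, 0.83])

slope, intercept = np.polyfit(standards_mg_per_l, absorbances, 1)

def pq_concentration(absorbance_257nm):
    """Invert the linear calibration to estimate a PQ concentration (mg/L)."""
    return (absorbance_257nm - intercept) / slope

print(pq_concentration(0.50))  # ~24 mg/L for this illustrative absorbance
```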
Environmental Parameters
Temperature (T, °C) and relative humidity (RH) were monitored for 2-h sessions within the exposure chamber during the characterization phase (Fig. S1 in the supplementary data). Average T°C and RH were characterized to be 24.5 ± 0.4°C and 20.1 ± 0.7% respectively with external laboratory conditions of 22.5 ± 1.2°C and 42.5 ± 7.8%. The difference in temperature was induced by the compressor operating temperature, which was comprised between 50-70°C, explaining the slight increase of temperature along the duration of nebulization, and the temperature difference with the lab conditions. At the outlet, a vacuum pump enabled a slight depression in the chamber (-60 ± 4 Pa) preventing the aerosol leakage in the laboratory. Based on the outlet airflow and the chamber volume, the air renewal was 23 volumes h -1 .
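The air renewal rate and the time needed to approach steady state follow directly from the chamber volume and outlet flow under an ideal well-mixed assumption, as in the sketch below (our own simplification; the chamber is of course not perfectly mixed, as the CFD results show).

```python
import math

CHAMBER_VOLUME_L = 19.8   # internal volume of the polysulfone cage
OUTLET_FLOW_LPM = 7.8     # total sampling/outlet flow

air_changes_per_hour = OUTLET_FLOW_LPM * 60.0 / CHAMBER_VOLUME_L   # ~23.6 h^-1

# Ideal well-mixed chamber: C(t) = C_ss * (1 - exp(-t / tau)) with tau = V/Q,
# so the time to reach 95 % of steady state is T95 = -tau * ln(0.05) ~ 3 * tau.
tau_min = CHAMBER_VOLUME_L / OUTLET_FLOW_LPM
t95_min = -tau_min * math.log(0.05)

print(f"air renewal ~ {air_changes_per_hour:.1f} volumes/h, T95 ~ {t95_min:.1f} min")
# ~23.6 volumes/h and ~7.6 min, the same order as the simulated T95 values below.
```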
Particle Number Size Distribution
Figs. 2(A) and 2(B) present the average PNSDs measured in the exposure chamber during the characterization phase using SMPS and APS. They were unimodal and broadly ranged from 10 to 200 nm for PQ and from 15 to 2000 nm for nTiO2 and PQ with nTiO2. The count median mobility diameters (CMMDs) were 50 ± 1 nm, 220 ± 7 nm and 173 ± 2 nm for the PQ, nTiO2 and PQ with nTiO2 aerosols respectively, with uncertainties corresponding to the reproducibility standard deviation (SR) in agreement with the ISO 5725-2 standard. Modal diameters for each aerosol were respectively 56 ± 1 nm, 264 ± 15 nm and 191 ± 5 nm; mean diameters were respectively 56 ± 1 nm, 264 ± 6 nm and 214 ± 2 nm. PNSDs were stable over 2 h of aerosol generation, as shown in Fig. 2(C), which presents the average PNSD evolution over time for the PQ with nTiO2 aerosol. PNSDs reached an equilibrium state around 20 min (T20) after the beginning of aerosol generation and dropped at 120 min (T120), which corresponds to the end of aerosol generation. In the 0.8-20 µm particle size range monitored using APS (Fig. 2(B)), no particles were detected for the PQ aerosol.
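For readers reproducing these statistics from raw SMPS data, the count median diameter and geometric standard deviation can be obtained from a binned number size distribution as sketched below (generic aerosol statistics, applied here to an illustrative lognormal distribution rather than the measured one).

```python
import numpy as np

def count_median_and_gsd(diameters_nm, counts):
    """Count median diameter and geometric standard deviation of a binned
    particle number size distribution (standard aerosol statistics)."""
    d = np.asarray(diameters_nm, dtype=float)
    n = np.asarray(counts, dtype=float)
    cum = np.cumsum(n) / n.sum()          # cumulative count fraction
    cmd = np.interp(0.5, cum, d)          # diameter at the 50 % point
    ln_d = np.log(d)
    mean_ln = np.average(ln_d, weights=n)
    gsd = np.exp(np.sqrt(np.average((ln_d - mean_ln) ** 2, weights=n)))
    return cmd, gsd

# Illustrative lognormal-like distribution (not the SMPS data of the study),
# roughly mimicking the PQ aerosol with CMMD ~50 nm and GSD ~1.7.
bins = np.geomspace(10, 200, 40)
counts = np.exp(-0.5 * ((np.log(bins) - np.log(50.0)) / np.log(1.7)) ** 2)
print(count_median_and_gsd(bins, counts))
```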
Particle Number Concentration Monitoring
The temporal evolution of the average particle number concentrations during both phases is presented in Fig. 3. The real-time monitoring is shown in Figs. 3(A) and 3(B), while the evolution of the average concentrations for each aerosol generation is presented in Figs. 3(C) and 3(D). The real-time measurements during the metrological characterization (Fig. 3(A)) showed that average concentrations were 258 × 10^3 particles cm^-3 ± 11%, 205 × 10^3 particles cm^-3 ± 10% and 292 × 10^3 particles cm^-3 ± 8% for PQ, nTiO2 and PQ with nTiO2 respectively. Comparatively, these concentrations were respectively 249 × 10^3 particles cm^-3 ± 7%, 202 × 10^3 particles cm^-3 ± 8% and 308 × 10^3 particles cm^-3 ± 9% during the animal exposure phase (Fig. 3(B)). The fact that the particle number concentration of the mixture does not correspond to the sum of the PQ and nTiO2 number concentrations could be explained by internal mixing between both components during the aerosol nebulization process. Number concentrations in both conditions reached a steady state between 5 and 10 min and remained stable over time until the end of the aerosol generation. By taking the metrological characterization phase as reference, the real-time measurements during the first step of the animal exposure phase allowed the validation of the aerosol generation for each inhalation exposure. The results and their SR are summarized in Table 1.
Fig. 4 presents the average mass concentrations measured by TEOM in the exposure chamber during the metrological characterization phase with their corresponding SR. The total mass concentrations reached a steady state 10 min after the beginning of generation and remained stable over the 2 h of aerosol generation. For PQ, nTiO2 and PQ with nTiO2 they were respectively 98.7 ± 3.2 µg m^-3, 10.2 ± 0.9 mg m^-3 and 10.2 ± 0.4 mg m^-3. These values comply with the target concentrations of 100 µg m^-3 and 10 mg m^-3 for PQ, nTiO2 and PQ with nTiO2. Comparatively, during the exposure sessions, the average mass concentrations were 10.1 ± 0.4 mg m^-3 for nTiO2 and 9.9 ± 0.8 mg m^-3 for PQ with nTiO2.
Fig. 4.
Average total mass concentration in the exposure chamber using TEOM for 2 h of aerosol generation without animals. Results are presented as averages ± standard deviations (dotted lines) as a function of time.
Aerosol Dispersion Modeling
According to the PNSD values assessed during the prior characterization, no difference was observed by simulation in the particle dispersion between PQ and nTiO2 (data not shown). Consequently, the nTiO2 aerosol was selected according to its characterized mean diameter (264 nm). Fig. 5(A) presents the chamber and its associated CAD model used for the simulation. Fig. 5(B) shows the particle number concentration simulation at T = 10 min and T = 15 min. At T = 10 min, the atmosphere was still heterogeneous, since the top area of the exposure chamber presented a slightly higher particle concentration, whereas a lower concentration was observable in the middle zone and at the bottom corner below the injection site. This atmosphere heterogeneity decreased over time and the atmosphere became mostly homogeneous at T = 15 min. Such a result is expected as the simulation was made using a passive scalar method (no diffusion, no inertia). As the characterization was made using aerosol sampling from the top of the chamber, the atmosphere homogeneity was assessed to check whether the aerosol characterization reflected what the animals were really breathing during exposures. To this end, the simulated number concentrations were specifically obtained in the transverse plane near the chamber floor (at a height of 1 cm) corresponding to the breathing area of mice. In addition, particle size distributions were compared using SMPS in both the top and floor areas and no difference in the particle size distribution was visible between sampling points (data not shown), highlighting the spatial homogeneity of the aerosol. Fig. 5(C) compares the simulated and the experimental average number concentrations, which highlights the consistency between both datasets. In the exposure chamber, the breathing area of mice was in the range of 1-10 cm height, while the CPC measurement spot was located at the top of the chamber (20 cm height). To check if the sampling location could affect the particle concentrations, three transverse planes were defined inside the chamber at different heights (1 cm, 10 cm and 20 cm). Based on the simulated data, average number concentrations were calculated for each plane and the corresponding T95 values were respectively 382 s, 475 s and 446 s (Fig. 5(D)). At 8 min, a plateau corresponding to the atmosphere steady state was reached, showing the representativity of the aerosol sampling location at the top of the chamber compared to the mice breathing area at the bottom.
Paraquat Stability
Regarding the stability of the PQ + nTiO2 suspension, no significant decrease of the PQ concentration was detectable up to 24 h after the solution preparation (Fig. S2). The concentrations at T0, T120 and T24 h were respectively 24.0 mg L^-1, 23.6 mg L^-1 and 23.1 mg L^-1. This slight decrease correlated with an increase of absorbance in the 200-230 nm range corresponding to by-product formation (Cantavenera et al., 2007). A decrease in PQ concentration would also have been expected from the filtration step if PQ had adsorbed onto nTiO2, which was not the case.
DISCUSSION
In this article, an experimental set-up complying with OECD TG 413 guidelines, dedicated to inhalation toxicology studies with rodents is presented. The nanopesticide inhalation effects are still not addressed in the literature. Besides the novelty of this field of research, the insights of the article rely mainly in the fact that no whole-body chamber has already been used and characterized in this context. The cocktail effects associating nanomaterials and toxic substances have already been reported in numerous ecotoxicological studies, for instance as reviewed by Naasz et al. (2018), but not in toxicological studies. Thus, it seems essential to extend this novel topic of research to human health, by beginning to investigate the underlying mechanisms of toxicity, namely on the nervous system which is known to be a specific target of some nanosized toxics (Bencsik et al., 2018). Consequently, the protocol was established by generating inhalable particles whose size is characterized, in order to model the deposit fraction in the airways in upcoming article involving this device, to provide health data related to characterized exposure concentrations and conditions, for regulatory and toxicology purposes. In addition, nanopesticides represent a variety of different products, requiring a case-by-case toxic assessment depending on the conditions of use. The facility could be used to investigate other kind of nanosized substances, on the condition that a minimal characterization of aerosol is reported (size distribution, mass and number concentrations).
Stringent requirements are needed for chronic studies, like stated in the OECD TG 413 guidelines, as this facility is meant to be able to perform subchronic inhalation study lasting more than 30 days. The choice of a nebulizer reflects agricultural use of pesticides, which are typically sprayed using hydraulic nozzles (Hilz and Vermeer, 2013). The humidity control is essential to avoid the generation of too-large aggregates and agglomerates, and to avoid the proliferation of microorganisms within the cage. In comparison to dry powder generation process, wet aerosol generation is more similar to sprayings. Two diffusion dryers were required to obtain a steady humidity for generations of 2 h, by making a fast evaporation of the droplets before entering the exposure chamber. Nevertheless, this is indeed a model of generation and not an exact representation of the droplet size distribution generated with a hydraulic nozzle. In addition, some authors suggested that this type of nebulizer could represent a most versatile choice, suitable for inhalation toxicology studies (Schmoll et al., 2009;Pujalté et al., 2017b). The stability of the exposure conditions and the test atmosphere were characterized in order to comply with the OECD requirements for the testing of chemicals by inhalation. Environmental parameters (exposure chamber airflows, temperature and relative humidity) and test atmosphere were characterized in terms of number size distribution, number and mass concentrations and spatio-temporal aerosol stability. To be consistent between both phases of this study, the animal exposure session includes a real-time concentration monitoring and a gravimetric sampling that are used in a complementary manner, in order to confirm the generation stability of aerosols. As a result, no significant difference was found between both phases. Between all aerosol generations, the chamber average concentrations did not deviate from the mean by more than ± 20%. Moreover, the time to reach the chamber equilibration (T95) was around 7 min, which is short in comparison with the total duration of exposures. Three different aerosols were generated with a homogeneous and unimodal number size distribution, characterized by a CMMD of 50 nm for PQ, 173 nm for nTiO2 and 220 nm for PQ with nTiO2, with respective geometric standard deviations (GSDs) of 1.7, 2.1 and 2.2. The CMMD differences between the mixture and the compounds alone could be explained by the internal mixing between PQ and nTiO2 during the aerosol nebulization process. This can be clearly observed on the corresponding size distribution which is a convolution of PQ and nTiO2 size distributions. The reproducibility standard deviation was comprised between 3-9% for mass concentrations and 7-11% for particle number concentrations. The size of primary nTiO2 particles used in the study is 22 nm as previously described (Motzkus et al., 2014). However, the generation in the aerosol phase of individual primary particles is impossible due to high Van der Waals forces which do not allow dissociation of aggregates. Thus, the nebulization process enables a deagglomeration but not a disaggregation, explaining the mean size of 264 nm, as it was already previously discussed (Gaie-Levrel et al., 2020).
The characteristics comparison of existing exposure chambers may be inappropriate, because each system has its own requirements, depending on the protocol to achieve. The device presented in this article is in accordance with our expectations and the system was validated for our experimental conditions. It has general advantages; it is easy to transport, economical, compact, and thus adaptable to the constraints of animal facilities. It is also simple to operate and easily put in place, and it offers versatility to work with various rodent species (mice, rats, hamster or others). Moreover, the atmosphere homogeneity was assessed temporally and spatially, and both experimental and simulated data were consistent during the characterization and the exposures. These important features (mass concentration and size distribution) enabled a precise assessment of the exposure dose of animals, which is an essential requirement in chronic inhalation toxicology studies. Regarding the accuracy of data, our device compares advantageously to others. Cosnier et al. (2017) reported the generation of P25 nTiO2 aerosols using a rotation brush generator coupled with a nose-only device. Based on the target mass concentration of 10 mg m -3 , the calculated CMMD was 347 nm with a GSD of 2.29. The mean intra-experiment precision was comprised between 14% and 22% over six inhalation campaigns. Focusing on studies involving nTiO2 nebulizers, Pujalté et al. (2017a) generated aerosols at a mass concentration of 15.57 mg m -3 (min-max of 9-21 mg m -3 ) using a six-jet collision nebulizer with PNSD characterized by a GSD of 1.87 and a geometric mean diameter of 76.91 nm. Using also a six-jet collision nebulizer, Grassian et al. (2007) generated aerosols of nTiO2 using a whole-body device for different exposure scenarios, which involved a subacute exposure (10 days, 4 h per day) at a mass concentration of 8.88 ± 1.98 mg m -3 with PNSD characterized by a geometrical mean diameter of 128 nm (GSD of 1.7). Yi et al. (2013) used an innovative nebulizer to produce aerosol of nTiO2 with PNSD characterized by a CMMD of 145 nm and a GSD of 2.3 in a 500 L whole-body inhalation exposure chamber. In their article, the atmosphere spatial homogeneity was assessed, by measuring the concentrations at different spots within the chamber, thereby showing a spatial maximum relative deviation from the mean concentration < 6%.
In this study, specific care was taken concerning animal welfare to reduce stress. This is a matter of prime importance, because a stress undergone during gestation is critical for the fetal development, and then prenatal stress can have long-term consequences for the offspring (Weinstock, 2005). Consequently, the maternal stress in rodents, possibly due to restraint of poor exposure chamber design, may exacerbate negative developmental effects (Rasco and Hood, 1995). In this regard, we selected a whole-body inhalation device to expose animals, as this cage allowed daily exposures with a direct visual follow-up without constraint. It also enables animals to move freely, which simulates a physical activity enhancing the pulmonary ventilation of mice to reflect more accurately occupational exposures. Glass and stainless steel are the classical material used in whole-body devices, because of their ability to age slowly compared to plastics (Dorato and Wolff, 1991). Some authors used stainless steel exposure chamber with a vertical injection flow (Kimmel et al., 1997) or a combination of vertical and horizontal flows input/output to reach a uniform aerosol concentration (Oldham et al., 2004). In our study, we chose a polysulfone cage dedicated to rodent hosting, because it is an inexpensive alternative, which is easy to accommodate in a rodent facility. It can be used to expose mice or other species continuously in chronic toxicity studies because this product is standard-equipped to provide water and food. Moreover, it is easily washed and it can be used under a hood if necessary. A large number of animals can be exposed at the same time in a convenient way, which can significantly save time. Complications could arise from an important animal loading in the cage, which can increase the temperature, the relative humidity and the amount of air pollutants inside the chamber. Indeed, the ammonia coming from animal feces may be an issue since ammonia could affect animals and may also react with the test aerosols (Barrow and Steinhagen, 1982). According to Silver et al. (1946), the overall animal volume should not exceed 5% of the chamber volume to prevent this phenomenon (OECD TG 403). Therefore, we chose an important air renewal (23 volumes h -1 ) and a low animal loading (< 1% of chamber volume) to prevent particles and metabolites to be accumulated in our device. However, while air renewal within the cage is a key factor in the aerosol dispersion and homogenization, it must be limited to avoid a cold stress.
Some studies also took into consideration the environmental conditions associated with their exposure chambers (Kimmel and Kirk, 1997;Jeon et al., 2002;Oldham et al., 2004;Lucci et al., 2019), while other studies did not (Barrow and Steinhagen, 1982;Cheng et al., 1989;Phillpotts et al., 1997;Bhaskar and Upadhyay, 2003;O'Shaughnessy et al., 2003). It is indeed necessary to perform a deeper aerosol characterization, before following limited metrological endpoints during animal exposures. This characterization is necessary to enable an accurate assessment of the exposure concentration delivered to each animal. This assessment is based on the simulated deposited dose of particles in the lungs, which is a function of the aerosol mass concentration and size distribution. In addition, a homogeneous atmosphere is required, in order to minimize the variability of deposited dose between animals. For all these reasons, the minimum requirements are the mass and number concentrations, the size distribution, and also the environmental parameters assessments (temperature, air renewal, humidity and pressure), which are mandatory to get the experimentation authorization from the ethics committee. Moreover, it is important to note that only few studies validated their exposure chamber thanks to aerosol dispersion modeling (Kimmel et al., 1997;Oldham et al., 2004). The homogenization process results mainly from an adequate air renewal within the chamber, with proper inlet flows and chamber geometry (Kimmel et al., 1997). The analysis of the chamber structure and mass transfer forces such as diffusion is crucial to understand the test material distribution. Rajabi-Vardanjani et al. (2019) suggested that numerical simulation techniques should be used to predict flow velocity and aerosol dispersion in function of the chamber geometry in order to optimize an exposure chamber and the associated aerosol generation. Kimmel et al. (1997) found a clear agreement between assessment of chamber performance by CFD modeling and the analysis of performances by more conventional methods, which is in accordance with the results of this study. Nevertheless, their apparatus was much larger than in this work (700 L versus 19.8 L), which consequently required a more complete CFD modeling to be conducted. Cheng et al. (1989) andO'Shaughnessy et al. (2003) assumed that spatial and temporal uniformity were independent from each other in their whole-body exposure chambers. Oldham et al. (2004) did not make this assumption, taking in consideration the size of their chamber (20 L), which is relatively small in comparison with others, but similar to this work. The results we reported tended to conform with their statement. Uniformity assessment seemed not as good for whole-body chamber compared to those reported in noseonly exposure systems (Yeh et al., 1990;Cheng and Moss, 1995). However, it is consistent with the characterization reported for other large stationary whole-body exposure chambers (Schreck et al., 1981;MacFarland, 1983;Yeh et al., 1986;Cheng and Moss, 1995;O'Shaughnessy et al., 2003). The aerosol uniformity in several transverse planes (± 5%) was judged acceptable, and since exposures occur with animal moving freely in the totality of the exposure chamber, the stability requirements are less stringent compared to nose-only devices. Nonetheless, variations may be caused by changes in small inward air leaks, mice movements and induced thermal convection, or other any unknown factors.
CONCLUSIONS
To expose rodents to submicrometer-sized airborne particles, we developed a simple and versatile whole-body inhalation chamber that can be used with various exposure testing protocols and animal species. In this chamber, stable and reproducible generations of PQ, nTiO2 or PQ-nTiO2 aerosols were characterized in terms of particle number size distributions, mass and number concentrations with reproducibility standard deviations comprised between 1-11% for up to 2 h in the laboratory. We then validated the reliability of these measurements by conducting inhalation exposure sessions in the field, which exhibited a reproducibility standard deviation of 4-9%. These results confirmed the homogeneity of the chamber's atmosphere and hence the accuracy of the exposure assessment. Our project facilitates the collection of toxicological data, which are essential to evaluating the health risks of emerging substances, such as NPEs.
|
2021-05-04T22:05:15.339Z
|
2021-04-08T00:00:00.000
|
{
"year": 2021,
"sha1": "34c2a42b0e00259de1e7778eab78a0c873870412",
"oa_license": "CCBY",
"oa_url": "https://aaqr.org/articles/aaqr-20-11-oa-0626.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "0ad53308637635c7698db4fb13db517e462457f9",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
257771288
|
pes2o/s2orc
|
v3-fos-license
|
Adaptive Voronoi NeRFs
Neural Radiance Fields (NeRFs) learn to represent a 3D scene from just a set of registered images. Increasing scene size demands more complex functions, typically represented by neural networks, to capture all details. Training and inference then involve querying the neural network millions of times per image, which becomes impractically slow. Since such complex functions can be replaced by multiple simpler functions to improve speed, we show that a hierarchy of Voronoi diagrams is a suitable choice to partition the scene. By equipping each Voronoi cell with its own NeRF, our approach is able to quickly learn a scene representation. We propose an intuitive partitioning of the space that increases quality gains during training by distributing information evenly among the networks and avoids artifacts through a top-down adaptive refinement. Our framework is agnostic to the underlying NeRF method and easy to implement, which allows it to be applied to various NeRF variants for improved learning and rendering speeds.
I. INTRODUCTION
Neural Radiance Fields (NeRF) [1] and their derivatives have become one of the most promising areas of research in learning 3D scenes from images. Their key to success lies in using the flexibility of MLPs to learn a volumetric function, guided by the pixel colours of images taken in the scene. For this, pixels are assumed to be the result of integrating a volumetric function over samples from a ray traced through the scene. However, computing even a single pixel requires sampling many times along a ray, i.e. many evaluations of the function learning the scene. With larger or more detailed scenes, the complexity of the function must also increase, resulting in slower queries for a point. While other approaches often try to improve sampling strategies, our approach takes geometric complexity into account and thus allows for faster convergence and real-time evaluation strategies. Our method can be used as a complement to most other techniques and is based on combining three key observations: 1) Evaluating a complex function at k points can be much slower than evaluating one of m less complex functions k times, if choosing the right function to evaluate is cheap enough. 2) Learning a lightweight representation of a scene is a good indicator of which parts of a scene are geometrically complex, while also providing guidance on the coherence of the scene. 3) Voronoi diagrams can partition a volume into convex cells that can be evaluated easily and offer a high degree of flexibility; cells can be easily adjusted to optimise a desired objective. We propose to first learn a simple scene representation that is fast to evaluate for training. This representation is then used to optimise a Voronoi diagram that divides the scene into multiple subsections with roughly the same geometric complexity. The initial simple scene representation then also serves as the initial representation for the subsections defined by the Voronoi diagram. With this global prior, we are able to avoid the coherence problems of approaches that focus on fast evaluation through subdividing a scene, problems which can otherwise only be circumvented with more costly training. This allows us not only to have the speed advantage of distributed scenes for evaluating a learned scene, but also to bring that speed advantage to learning the scene itself. As a result, for a more complex scene, we can use multiple small functions instead of increasing the complexity of one large function. As each sample along a ray through the scene gets evaluated by one function, and we only introduce more low-complexity functions instead of making the function more complex, we can obtain more precise information for a ray sample point without increasing the number of parameters to evaluate for it. This important characteristic is key for large-scale datasets or possibly even real-time approaches in domains such as autonomous driving.
As our geometry-aware approach does not alter any of the underlying maths, it is in general compatible with most common variants of NeRFs. While these qualities make our approach particularly effective with larger amounts of data, it also improves small-scale scenes. It can be implemented as an extension within a few lines of simple code, and will be released on GitHub.
II. RELATED WORK
While learning 3D scene representations from images is a topic with a wide array of classic techniques [2], [3], our work builds on techniques that train neural networks to learn a volumetric scene representation. While there are different neural network-based techniques [4]- [6], we focus on Neural Radiance Fields.
Neural Radiance Fields (NeRF)
as first introduced by Mildenhall et al. [1] learn to produce novel views of a scene from photographs with intrinsic and extrinsic camera parameters. For this, they use a multilayer perceptron (MLP) that should map a position in a scene together with a view direction to a colour and a density value. Treating each pixel as the result of a ray going through the scene, they train the MLP such that volume ray marching on samples of the ray returns the correct pixel colour. Barron et al. [7] propose mip-NeRF, adapting this formulation to treat pixels as the result of cones going through the scene instead of rays. This better captures volume and helps to combat aliasing effects, synthesising more realistic and sharper views, in particular for datasets that contain images with varying distances to objects. Mip-NeRF 360 [8] further improves performance and quality on unbounded scenes by using a non-linear scene parametrisation and by efficiently learning a prior for sampling. Similar scene parametrisations can be found in [9], [10]. To deal with sparse sets of images, e.g. pixelNeRF [11] utilizes additional features collected by a CNN. Further work specialises on different aspects: Large-Scale Distributed NeRFs try to reduce the parameter count of NeRFs for large scenes by breaking the scene into multiple different local NeRFs. This technique enables learning areas spanning multiple hundreds of metres in planar directions [12] and can be parallelised during training [13]. Other approaches employ disentanglement to better capture large regions, either by disentangling environment parameters [14], disentangling the scene itself [15], [16], or by reparameterising scenes [17], [18].
Interactive Framerates for NeRFs can be achieved by learning a conventional NeRF and then storing the learned opacities and specular features in a sparse voxel grid structure [19], [20]. Thereby, spherical harmonics can be used as a view independent feature representation [21]. While these approaches enable real-time inference rendering by removing costly querying of large networks, they require training a large NeRF beforehand. Rebain et al. [22] use a differentiable Voronoi diagram as scene decomposition into many small MLPs to decrease inference time. KiloNeRF [23] learns a grid of tiny MLPs, enabling real-time rendering. A distillation step from a conventional NeRF is used to avoid artifacts. Kurz et al. [24] use an efficient sample placement to reduce the number of NeRF evaluations, resulting in real-time capabilities. Fast Training of NeRFs is a major task in NeRF research, as training can take up to multiple days to reach reasonable output quality. Point-NeRF [25] generates a point cloud of image features near the surface geometry by using a depth estimation during preprocessing. These local features are then used as prior for fast NeRF training. Other approaches utilise networks trained on tasks like depth prediction to speed up training [26], [27]. Sun et al. [28] learn a voxel grid of latent features combined with a shallow network to bring down convergence time into the order of minutes. This could be further improved by storing the latent features in a learnable hash table [29].
Voronoi Diagrams can partition a volume into cells, with points being easily assignable to the according cell. While traditionally used in geometry, optimising Voronoi diagrams can be done e.g. to re-create images or shapes [30]- [32]. Such a partitioning offers a compact and structure-sensitive representation of the underlying information with a high degree of flexibility. The underlying flexibility can also be used to partition a scene for NeRFs, as is done by [33].
Applications
With widespread use, the applications for NeRF become ever more complex, increasing the demand for faster training. Examples for complex applications are Text-to-NeRF approaches [34]- [36] or NeRF-based Text-to-Video approaches [37]. Other applications include e.g. classic tasks from robotics, like localisation and mapping [38]. These applications show the need for techniques to consider both speed in training and inference. Our work uses Voronoi diagrams to combine the advantages of approaches that provide interactive framerates at inference together with ideas for large-scale distributed NeRFs.
III. ADAPTIVE VORONOI NERF
We introduce Adaptive Voronoi NeRFs, a geometry-aware approach that brings the fast inference times of distributed NeRFs to their training as well. By exploiting the geometric information implicitly learned by the NeRF, we achieve better results in training time, inference time, and quality. While distributing the scene among multiple networks, we neither require interpolation between them nor a (possibly not ideal) human-given partitioning of the space. Pixel colour in NeRFs is the result of integrating over samples of a ray going through the volume of a scene, where the samples are evaluated by a neural network that learns to represent a radiance field of the scene. The time-consuming procedure of running every ray sample through a large neural network is what our approach accelerates: instead of learning one large network, we learn many smaller ones, as assigning a point to the corresponding network and querying a smaller network is much faster. To understand our idea of accelerating NeRFs with geometric tools, we first recap the basics of Voronoi diagrams. a) Voronoi Diagrams: are a flexible way to partition a space into convex cells. A Voronoi diagram V partitions space by assigning every point to the cell V_i with the closest cell centre v_i ∈ V, in our case measured in Euclidean distance. An example can be found in Fig. 2. Formally, a point p is assigned to cell V_i with i = argmin_j ||p − v_j||_2. Compared to approaches like octrees or BSP trees, Voronoi diagrams are flexible in partitioning a space, e.g. allowing elongated cells, while also offering cheap point assignment to a cell by simply computing the distance to all cell centres. We can nest Voronoi diagrams by subdividing the content of each cell of an existing diagram into multiple new cells. Assigning a query point to a cell can then be done hierarchically by first finding the correct cell in the first Voronoi diagram, and then finding the correct cell in the Voronoi diagram that subdivides that cell further. This nesting can be done multiple times in a recursive fashion.
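A minimal sketch of this (nested) cell lookup is given below; the function names and data layout are our own, and the point assignment is simply the nearest-centre rule described above. Each leaf path then maps to one small NeRF, so every ray sample can be routed to its cell's network.

```python
import numpy as np

def voronoi_cell(point, centres):
    """Index of the Voronoi cell containing `point`: the closest centre in
    Euclidean distance."""
    return int(np.argmin(np.linalg.norm(centres - point, axis=1)))

def nested_lookup(point, centres, children):
    """Hierarchical assignment for nested Voronoi diagrams.

    `children[i]`, if present, holds the (sub_centres, sub_children) pair of
    the diagram that further subdivides cell i. Returns the path of cell
    indices, e.g. (2, 5) for a nesting depth of 2.
    """
    idx = voronoi_cell(point, centres)
    if idx not in children:
        return (idx,)
    sub_centres, sub_children = children[idx]
    return (idx,) + nested_lookup(point, sub_centres, sub_children)

# Example: a 4-cell diagram where cell 0 is subdivided into 3 sub-cells.
root = np.random.rand(4, 3)
children = {0: (np.random.rand(3, 3), {})}
print(nested_lookup(np.array([0.1, 0.2, 0.3]), root, children))
```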
Based on the three observations made in Section I, we propose the following approach to adaptively learn a (nested) Voronoi diagram holding a scene (see Figure 4 for our visualised pipeline): 1) A lightweight NeRF learns the global scene representation for a few iterations. 2) A part of the sample points from regions with dense information is kept, weighted by the error of the ray they are on and their contribution to its final colour. 3) An initially random Voronoi diagram partitions these points and is then optimised to spread the weight evenly, i.e. distributing the information of the scene evenly among the cells. 4) Each Voronoi cell receives its own new neural network, inheriting the parameters of the global NeRF learned so far.
Training continues with all nets simultaneously, but now evaluates every sample point along a ray with the respective NeRF belonging to the Voronoi cell the point lies in. We can apply this adaptive refinement procedure multiple times if necessary, creating new Voronoi diagrams that further partition cells. This top-down approach is agnostic to the technical details of the NeRF, e.g. it works both when considering pixels as rays [1] or as cones [7] through the volume of the scene, and for approaches with different sampling procedures [24].
A. Estimating Scene Complexity
We start with training a smaller scene representation, e.g. the original NeRF [1], to learn a rough representation of the entire scene. We use the underlying NeRF architecture, but simply reduce the number of channels in the MLP. To get an estimate of the scene geometry in order to partition it, we extract a fraction of the most important ray sample points during an epoch, i.e. the ray samples that contribute the most to the resulting image, for every batch. When only considering one colour channel for simplicity, recall that for a ray r, its pixel colour c_r, and its e.g. 128 samples s_i, i ∈ N_0^{<128}, the resulting loss for updating the NeRF becomes L_r = (c_r − Σ_i w_i c(s_i))^2, where w_i is the contribution of a sample s_i along the ray to the final pixel colour and c(s_i) is the colour predicted at that sample. As the weight to find the most important, i.e. heaviest, samples we use the product of a sample's contribution to the pixel colour and the error of that pixel, i.e. w_i · E(r). While this implicitly takes the density of a point into consideration, it also avoids taking ray samples from inside objects, as only the point directly on the surface will contribute much to the ray's colour. It also does not over-represent simple but dense regions, e.g. a flat white wall, as these tend to have a low error value. We extract 10000 ray sample points S per epoch, taking an equal amount of the heaviest points per batch. We then choose k random ray samples from S, e.g. k = 16, as initial Voronoi centres for cells V_k.
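A possible implementation of this heavy-sample selection is sketched below, assuming the per-sample contributions w_i and per-ray errors E(r) are already available from the volume-rendering pass; the tensor shapes and names are our own, not the paper's.

```python
import torch

def select_heavy_samples(positions, contributions, ray_errors, keep):
    """Pick the ray-sample points carrying the most scene information.

    positions:      (R, S, 3) sample coordinates for R rays with S samples each
    contributions:  (R, S)    per-sample weights w_i from volume rendering
    ray_errors:     (R,)      per-ray colour error E(r)
    Returns the `keep` points with the largest w_i * E(r), and their weights.
    """
    weight = contributions * ray_errors[:, None]     # (R, S)
    flat_w = weight.reshape(-1)
    flat_p = positions.reshape(-1, 3)
    top = torch.topk(flat_w, k=keep).indices
    return flat_p[top], flat_w[top]

# Per batch, a fixed number of heaviest points is kept and accumulated until
# ~10000 points are collected for the epoch; k (e.g. 16) of them can then be
# drawn at random as the initial Voronoi centres.
```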
B. Finding Ideal Voronoi Diagrams
When formulating a large function as a composition of k smaller functions, making the smaller functions roughly equal in complexity is beneficial: for us, this means that an equal distribution of scene complexity over the Voronoi diagram cells leads to roughly equal amounts of information being stored in the respective NeRFs. To optimise the Voronoi centres for this objective, we suggest a simple two-step algorithm: first, we assign each sample to its respective Voronoi cell centre, then compute an update for each Voronoi cell centre, and repeat. For the update of the cell centres, we compute update directions that reduce the differences between the total weights W_i of the sample points in each Voronoi cell V_i, meaning to even out the amount of information per cell. For this, we shift light cell centres, i.e. cells with little or trivial geometric information in them, towards heavier neighbours, i.e. cells with complex geometric information. Likewise, we shift heavy cell centres away from light ones. For the Voronoi cells, this takes away area from heavy cells and gives it to lighter cells. As the updated position v'_i for every cell centre v_i ∈ R^3 and its closest 8 Voronoi cells N(v_i), we compute v'_i = v_i + α · 1/|N(v_i)| · Σ_{v_j ∈ N(v_i)} (W_j − W_i) / max_{v_k ∈ N(v_i)} W_k · (v_j − v_i) / ||v_j − v_i||. In effect, this computes an update vector for each cell centre by averaging over the weighted directions towards each neighbour. Normalising the weight by dividing by the largest weight in the neighbourhood avoids pushing a cell centre arbitrarily far in one direction. We iterate this optimisation process in parallel for every Voronoi centre, using 500 steps and α = 0.05. The positions assigned to each Voronoi cell are not changed anymore after setting them in place. An example of the resulting Voronoi cells can be seen in Fig. 3.
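The sketch below implements one reading of this two-step balancing loop; the neighbourhood size, step count and α follow the text, while the implementation details (including the exact form of the weighting) are our own.

```python
import numpy as np

def balance_voronoi_centres(centres, points, weights, steps=500, alpha=0.05, k=8):
    """Shift Voronoi centres so the total sample weight per cell evens out."""
    centres = centres.copy()
    for _ in range(steps):
        # (1) assign every heavy sample point to its nearest centre and
        #     accumulate the per-cell total weight W_i
        d = np.linalg.norm(points[:, None, :] - centres[None, :, :], axis=-1)
        cell = d.argmin(axis=1)
        W = np.array([weights[cell == i].sum() for i in range(len(centres))])
        # (2) move each centre along weighted directions towards its k nearest
        #     neighbouring centres: light cells drift towards heavy neighbours
        #     and heavy cells drift away from light ones
        new_centres = centres.copy()
        for i, v in enumerate(centres):
            nbrs = np.argsort(np.linalg.norm(centres - v, axis=1))[1:k + 1]
            w_max = max(W[nbrs].max(), 1e-12)
            dirs = centres[nbrs] - v
            dirs /= np.linalg.norm(dirs, axis=1, keepdims=True) + 1e-12
            step = ((W[nbrs] - W[i]) / w_max)[:, None] * dirs
            new_centres[i] = v + alpha * step.mean(axis=0)
        centres = new_centres
    return centres
```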
C. Initialising Cells
The high quality that NeRFs achieve stems in part from an underlying prior of the network architecture: multilayer perceptrons (MLPs) learn smooth functions better than noisy ones, and hence are prone to fall into an optimum that is a coherent scene. For multiple MLPs, each MLP in itself has a prior for smoothness, but a combined function of multiple MLPs has no such prior anymore. Hence, it is key to use some sort of prior to ensure scene coherence, as initialising every cell in a Voronoi diagram with a new, randomly initialised NeRF produces 'ghosting'-like artifacts as shown in Fig. 7. In the case of distributed functions that are independent of each other, each function will try to improve the ray's colour even when the object is placed in another cell. While this helps the training objective, it creates complicated, fractured scene representations that perform poorly when evaluating an unseen camera pose (see Fig. 7). For every new Voronoi cell, we thus initialise the respective NeRF with the parameters of the initially learned, lightweight global scene. Initialising each cell with a NeRF that has learned a larger area gives the cell a prior for shape coherence, so the optimisation process can generally avoid bad optima. In addition, each cell converges a bit faster with this initialisation and the boundaries of the cells already fit together, strongly reducing visible seams even before convergence. Note that the points are always put into the respective NeRF of a cell in global coordinates, as each NeRF inherited the parameters of the NeRF trained with global coordinates. For inference, the only additional burden is the assignment of the correct cell for each sample point, i.e. finding the respective NeRF, which is negligible compared to evaluating a much larger NeRF. Optimising multiple NeRFs at once instead of a single NeRF is, however, more costly for backpropagation, but still worth the time saved on the forward pass. We also experimented with a stochastic version of interpolation, not always taking the closest cell centre as the responsible Voronoi cell, but instead sampling from the vector of distances to the cell centres. While this occasionally avoided some small visible seams early in training, it had no more effect after a few epochs of training, as the initial prior from inheriting parameters of a global network was enough.
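Parameter inheritance itself can be as simple as copying the coarse global network into every cell; a short illustrative sketch:

```python
# Sketch of parameter inheritance: every cell's NeRF starts as a copy of the
# coarsely trained global NeRF (illustrative; assumes identical architectures).
import copy

def init_cells_from_global(global_nerf, n_cells):
    # The copied parameters act as a shape-coherence prior and keep neighbouring
    # cells consistent at their boundaries right from the start.
    return [copy.deepcopy(global_nerf) for _ in range(n_cells)]
```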
D. Nested Voronoi Diagrams
Initially, a learned scene may not cover every area in much detail, hence partitioning a scene into a large number of cells right away can be impractical and drives up the cost of assigning samples to the right cell. Hence, we propose partitioning every Voronoi cell itself, applying the same steps as before: gather all samples that fall into a cell while training, optimise a Voronoi diagram to partition the underlying information evenly, and then give each cell its own neural network that inherits parameters from the parent cell. To decide which cells to subdivide, we have two options: we can either choose to always subdivide the Voronoi cell that is performing worst, i.e. accumulates the most error, or simply subdivide every cell at once. As we distribute the cells not only based on density, but also on error, we never experienced cases where one cell in an already partitioned scene was performing significantly worse than another cell. Hence, for simplicity, we subdivide all of the Voronoi cells in parallel. For all our test scenes, a nesting depth of 2, i.e. subdividing the scene once and then subdividing every resulting cell again, was enough. Note that we do not move any existing cell centres once we have started training them, as moving them could lead to them covering an area for which they do not have the correct prior.
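With nesting, the cell lookup simply becomes hierarchical; a sketch of a two-level lookup, where the assumed data layout (one list of child centres per parent cell) is an illustration rather than the authors' code:

```python
# Sketch of a two-level (nested) Voronoi lookup: first find the parent cell, then
# the child cell within it. `child_centres[p]` holds the centres of parent cell p;
# the data layout is an assumption for illustration.
import torch

def nested_cell_id(point, parent_centres, child_centres):
    p = torch.cdist(point[None], parent_centres).argmin().item()
    c = torch.cdist(point[None], child_centres[p]).argmin().item()
    return p, c   # indexes the NeRF responsible for this sample
```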
IV. EVALUATION
Our proposed method is built to be an independent extension for existing methods that speeds up training and inference time without sacrificing quality or causing artifacts. We first evaluate possible hyperparameter choices, then provide an ablation to highlight the impact of all our components (Section IV-A). We then provide experiments and argumentation for why our approach is agnostic to, e.g., the underlying sampling strategy, the underlying NeRF architecture, and the ray formulation (Section IV-B). Then, we discuss our approach in comparison to other work in terms of evaluation speed, quality, and benefit for large-scale scenes (Section IV-C). Throughout this evaluation, we focus on training speed, accuracy on test images, and inference speed. For fairness, we always measure performance over training time, as measuring with epochs alone would not take the computational overhead of our method, i.e. assigning points to their cells, into consideration. We would also gain an unfair advantage from our much shorter inference times for the test set. Due to limited resources, our experiments are run on 400-by-400 pixel versions of the NeRF datasets [1] (the Lego model, if not specified otherwise) with a single GeForce RTX 2080 Ti. We use 256 channels (one single network) versus 64 channels (Voronoi variants, resulting in 12 cells with slightly fewer total parameters) for the different NeRF architectures, and train for 24 hours. Note that we tested our approach with simple PyTorch code, with no refined performance boosts or specific CUDA optimisations, for better comparability. We limit ourselves to a single level of subdivision for simplicity, although we did observe the best performance when using multiple levels (see Fig. 10). We always measure the PSNR of the mean squared error on the whole test set for plots and otherwise give the average PSNR per image over the test set. In general, we show that our approach improves convergence speed and quality, as can be seen in Fig. 5. Limitations While our approach is only an extension and hence inherits weaknesses from the underlying NeRFs, it has an additional weakness stemming from the short training time of the global scene: our approach struggles with scenes with initially unclear geometry, e.g. many translucent objects, as the initially learned scene will struggle to provide a meaningful geometric prior and may lead to a bad subdivision that cannot be changed anymore over the course of training.
A. Hyper Parameter Choices and Ablation
Subdivision strategies Optimising Voronoi cells is key to the success of our approach: we want to distribute information evenly between the cells to make the best use of each network for fast convergence and high quality. Large differences between the total weights of our cells would indicate a bad distribution, and similar total weights would indicate a good distribution. We compare variants with different subdivision times and numbers of cells, including cells subdivided into further cells (16 in total), each with roughly the same number of total parameters. As can be seen in Fig. 9, subdividing early is no issue, but can cause tiny artifacts that do not impact the scores, while subdividing too late wastes computation time. Similarly, Fig. 10 shows that too few or too many cells slow convergence.
Initialisation
Training a distributed network without the right prior can lead to a failure of generalisation. The result is local optima that prevent generalisation and can cause artifacts from novel views (see Fig. 6 and Fig. 7). These artifacts result from improving training quality by creating dense regions where there should not be any. Inheriting network parameters from a cell with a larger global context prevents this effectively, while not requiring any form of distillation as in e.g. KiloNeRF [23]. We also observe no issues with the learned geometric representation that this prior affects; at times the scene geometry is even learned better, as can be seen from the visualisation of the total density for each ray, see inset (left: mip-NeRF, right: Voronoi mip-NeRF).
B. Independence of Underlying Architecture
Our approach introduces a geometrically inspired, dynamically learned decomposition of the NeRF problem into many localised, smaller problems. Since the underlying mathematical formulation is not altered, this speedup from decomposition can be easily combined with many other improvements introduced subsequently to the initial NeRF. We demonstrate the effectiveness of our approach on different NeRF variants by investigating different sampling strategies, different approaches to the ray formulation, and by exploring further approaches that have recently gathered attention. First, we show that our Voronoi NeRF approach improves results regardless of the sampling strategy used. For this, we compare results of stratified and hierarchical sampling with either mip-NeRF [7] or NeRF [1], showing clear improvement for both stratified (see Table I) and hierarchical sampling strategies (see Table II). In more detail, we then demonstrate that Voronoi NeRF as an extension for different architectures, namely NeRF [1] or mip-NeRF [7], outperforms its counterparts on various datasets, see Table II. We also show that our approach can work with the small network architecture proposed by KiloNeRF [23] in Section IV-C. In summary, we provide improvements in particular early in training, converging much faster while having fewer fluctuations during training, as can be seen in Fig. 5. We further discuss how our approach is agnostic to other architectures and approaches: mip-NeRF 360 [8] and DONeRF [9] use a non-linear contraction function to restrict the coordinate range of points far from the origin. Furthermore, additional simplified NeRF-like networks are often used for more feasible sampling [1], [8]. These extensions are in no way in conflict with our approach; in fact, one might even accelerate the sample-predictor networks by localisation as well. Inference accelerations such as precomputed opacity grids, empty-space skipping, and early ray termination [23] are also complementary to our approach, as they do not change the actual NeRF mechanism. Interpolating the outputs of multiple cells can be used to smooth transitions between neighbouring cells [12] or improve cell placement [22]. While we explored this and saw early-stage improvements regarding occasionally visible, very tiny seams, the prior given through initialisation with a global NeRF is enough to avoid any visible seams after the first few epochs following a subdivision.
C. Comparison to Others
As discussed in Section IV-B, our approach is compatible with many improvements suggested for NeRFs when it comes to training speed and inference quality. However, the faster training times obtained through an approach exploiting geometric knowledge can also be used both for faster inference and for learning larger scenes faster and in better quality. KiloNeRF [23] provides faster inference by first training a large NeRF that is then distilled into many smaller NeRFs arranged in a grid fashion. They obtain further speed by optimising the sampling process, e.g. skipping largely empty sections, terminating rays early, and optimising CUDA code. As all of these options are available to our approach as well, we only discuss the qualitative results. Analogous to their distillation, our parameter inheritance works in a top-down fashion while training the network, avoiding the extra step of distilling. For comparison, and as another example of the flexibility towards the underlying architecture, we trained a Voronoi NeRF with 16 cells and subdivided each cell into another 16 cells, obtaining 256 cells in total. We compare to their approach with 512 uniformly distributed cells, trained for the same duration. We use the same architecture as they do, and use our inheritance initialisation instead of their distillation. Effectively, this can be considered an on-the-fly distillation process from coarse to fine. With the same inference speed, no need for the extra distillation step, and a geometry-sensitive cell distribution, our approach can outperform KiloNeRF in terms of quality at half the number of cells, see Table III. We attribute this to our representation making better use of its parameters by making sure every cell is filled with about the same amount of information, whereas KiloNeRF can place cells entirely inside an object or in thin air. As can be seen in Fig. 5, our approach trains faster later on through multiple smaller networks, while particularly the early convergence benefits from having only one small network that learns the scene. Our top-down parameter inheritance can thus boost performance particularly early for scenes that are far from convergence. These two qualities make our approach particularly valuable for large datasets like the ones proposed by MegaNeRF [13] or Block-NeRF [12]: as the scene grows larger or more detailed, only our number of networks increases, while individual ray sample evaluations, except for the relatively cheap assignment of each sample to a cell, do not become any more expensive. We further argue that, as we have shown, dynamically partitioned space leads to better performance. In particular, our results indicate that adaptively subdividing a scene more than twice will bring even more relative improvement than for the tested small-scale scenes. In summary, our approach significantly speeds up training, allows for fast inference, and is invariant to the underlying architecture and other degrees of freedom within the NeRF formulation. In contrast to other distributed approaches, it does not require any expensive precomputation [23], requires no interpolation between cells [12], [13], [33], and does not require previous knowledge of the scene, e.g. as a human-given specification of the layout of the learned partition [12], [13], [23].
V. CONCLUSION
We propose an easy-to-use extension for Neural Radiance Fields that allows us to bring the quicker inference times of distributed approaches to training. We achieve this by considering the scene to be a (nested) Voronoi diagram that is adaptively refined through the training process. We build this diagram by exploiting the geometric information learned while training and reduce artifacts by obtaining a prior for a coherent shape from passing down parameters from a global to a local scale. Our approach achieves high speed from subdividing the scene into networks that are fast to query, achieves high quality from geometry-sensitive adaptive space partitioning, and uses inheritance initialisation to avoid artifacts. As this solution is agnostic to architecture, sampling, and even conceptual differences in aspects like considering rays or cones, our approach works with many different NeRF variants. It can improve approaches that are built for speed and push the qualitative boundaries of existing approaches. With its dynamic adaptivity in refining detail, it offers flexibility and speed for e.g. large datasets, an area where NeRFs usually require costly hardware and hand-tailored solutions. With our approach, we provide a simple extension to bring the field closer to the masses on a hardware level, while its applicability to many different kinds of NeRF approaches and its simplicity make it accessible not only to experts in the field. For future work, we see possibilities to use our geometry prior even more adaptively: dynamic scenes with an existing initialisation would become easy to adapt to, as only the Voronoi cell(s) containing the ongoing change would need to be updated. We also see that our approach could be used for formulations that learn e.g. signed distance functions.
Health Information Management: Implications of Artificial Intelligence on Healthcare Data and Information Management
Summary Objective: This paper explores the implications of artificial intelligence (AI) on the management of healthcare data and information and how AI technologies will affect the responsibilities and work of health information management (HIM) professionals. Methods: A literature review was conducted of both peer-reviewed literature and published opinions on current and future use of AI technology to collect, store, and use healthcare data. The authors also sought insights from key HIM leaders via semi-structured interviews conducted both on the phone and by email. Results: The following HIM practices are impacted by AI technologies: 1) Automated medical coding and capturing AI-based information; 2) Healthcare data management and data governance; 3) Patient privacy and confidentiality; and 4) HIM workforce training and education. Discussion: HIM professionals must focus on improving the quality of coded data that is being used to develop AI applications. HIM professionals' ability to identify data patterns will be an important skill as automation advances, though additional skills in data analysis tools and techniques are needed. In addition, HIM professionals should consider how current patient privacy practices apply to AI application, development, and use. Conclusions: AI technology will continue to evolve, as will the role of HIM professionals, who are in a unique position to take on emerging roles with their depth of knowledge on the sources and origins of healthcare data. The challenge for HIM professionals is to identify leading practices for the management of healthcare data and information in an AI-enabled world.
Introduction
Health information technology has greatly impacted the health information management (HIM) profession. HIM professionals are part of the allied health team and they support efforts to ensure the availability, accuracy, integrity, and security of healthcare data. The digitizing of healthcare data has greatly impacted the responsibilities and work of HIM professionals requiring many to take on more technical roles related to the collection, storage, and use of healthcare data.
The digitizing of healthcare data, as well as advancements in computer processing and data storage, has also enabled the development of advanced algorithms in the form of Artificial Intelligence (AI). As of 2011, the U.S. Agency for Healthcare Research and Quality (AHRQ) had compiled over 17,000 algorithms and computer programs for healthcare evaluation, treatment, and administration [1]. In a recent white paper on AI in Radiology, the Canadian Association of Radiologists stated "In the next 5 years, Canadian radiologists will see more competent AI applications incorporated into PACS workflows, especially for laborious tasks prone to human error such as detection of lung nodules on x-rays or bone metastases on CT." [2].
Multiple factors are driving the development of AI in healthcare. In the United States (U.S.), legislative pressures are mounting to keep pace with other coun-tries regarding AI developments [3]. There are financial pressures on the healthcare industry globally, with increasing demands due to growing and aging population. The industry needs labor-saving technology and techniques to better understand the health of the population while managing the health of a greater number of people and saving money [4]. AI, whether or not it eliminates the need for a person to fill a job, can make the workforce more efficient [5][6][7][8][9]. Accenture estimates that "key clinical health AI applications" can create $150 billion in annual savings for the U.S. healthcare economy by 2026 [10]. Even if a fraction of that figure is realized, that is a powerful incentive for adopting AI solutions.
Beyond economic concerns, an additional driver of AI technology is the sheer volume of healthcare data. Healthcare is experiencing an information boom. "The rapid expansion of scientific knowledge and pace of technological development have resulted in an overwhelming sea of data that is difficult to decipher and apply." [11]. Physicians are drowning in data that requires ever more sophisticated interpretation, yet are still expected to perform efficiently. The promise that AI "augments decision making by clinicians by uncovering clinically relevant information hidden in a massive amount of data" [5] is extremely enticing, particularly now, when there are clinician shortages worldwide. The needs-based shortage of healthcare workers globally is estimated at approximately 17.4 million [12]. According to the Canadian Association of Radiologists, "…there is evidence that AI can improve the performance of clinicians and that both clinicians and AI working together are better than either alone" [2]. Indeed, AI technology is necessary to achieve the goal of "precision medicine". Precision medicine is an emerging medical model where medical decisions and treatments are tailored to the patient. "Precision medicine presupposes the availability of massive computing power and algorithms that can learn by themselves at an unprecedented rate" [5].
Predictions on when healthcare will experience widespread deployment of disruptive AI applications vary widely. Though AI is developing rapidly, and there are current and imminent uses of AI in healthcare, it is still largely immature. According to witnesses who testified before the U.S. Subcommittee on Information Technology of the House Committee on Oversight and Government Reform at a series of hearings on AI held in 2018, "narrow" AI, i.e. systems focused on specific tasks, is commonly used today but more general systems that can work across multiple tasks are underdeveloped [13]. However, given the pace of development, the timeline for AI in healthcare is years, not decades [14].
Presuming AI will eventually be widespread and affordable, there are implications for the management of healthcare data and information in an AI-enabled world which can greatly impact the HIM profession. The purpose of this paper is to describe the results of a literature review and the findings from interviews with key HIM leaders. The paper explores the relationship of the HIM profession and AI, focusing on the following key aspects: 1) Changes in HIM practices for specific HIM use cases, including automated medical coding and management of AI-based information; 2) Changes in management of healthcare data and the need for evolving data practices and data governance; 3) Legal, ethical, and regulatory data challenges; and 4) Changes in the HIM workforce, including foreshadowing new roles and skills that are required. The conclusion presents steps the HIM profession can take now to help advance the development of reliable AI applications and to respond to their use in healthcare.
Changing Health Information Management Practices
A core responsibility of the HIM profession is ensuring the right information is provided to the right people to enable quality patient care [15]. Increased adoption of AI-enabled applications and more sophisticated use of AI systems by healthcare providers at the point of care have significant implications for HIM practices. These include practical implications both for common HIM processes, such as medical coding, and, more generally, for the core HIM responsibility to manage health data and information. This section explores the impact of AI systems on HIM practices for the following use cases:
• Automated medical coding;
• AI-based diagnosis specificity;
• AI-based early detection information.
Each use case includes examples of the anticipated use of AI, discusses the associated impact to current HIM processes and practices, and explores new opportunities and challenges to adapt HIM practices.
Automated Medical Coding
A systematic literature review of published studies evaluating the performance of automated coding and classification systems indicated that automated coding systems have been in use since at least the mid 1990's [16]. Computer-assisted coding (CAC) is the term that refers to the automated generation of medical codes reported on healthcare claims that are derived from clinical documentation. CAC applications have been available since the early 2000's [17] with adoption rates increasing markedly in recent years. According to a report available through Research and Markets, the global market for CAC software is projected to reach $4.75 billion by 2022 at a compound annual growth rate of 11.5% [18]. North America is seeing the largest growth followed by Europe, Asia-Pacific, and the rest of the world.
CAC applications use natural language processing (NLP) to read and interpret clinical documentation in patient health records and suggest applicable diagnosis and procedure codes. Typically, a person reviews the suggested codes to determine the final code selection. This computer-assisted approach to the medical coding process is becoming more common and has been credited with measurable gains in coder productivity [19,20]. However, productivity impacts vary widely, depending on the specific deployment. Some studies reported a drop in productivity when medical coders were forced to validate, and frequently eliminate, a large number of suggested codes. Still, a Cleveland Clinic study found that CAC increased their coder productivity by over 20% without reducing quality when suggested codes were reviewed and edited by a medical coder [19]. The same study found, however, that CAC alone, without the intervention of a credentialed coder, had lower recall and precision rates.
Adoption of CAC requires reengineering the medical coding workflow to fully integrate the CAC tool in the process and gain optimal efficiency [21]. Early adopters of CAC in the U.S. reported that CAC had "…improved medical coding workflows, increased medical coding accuracy, and balanced medical coding resources to focus on more volume and complex cases" [22]. Not all hospitals however have experienced these benefits [23]. Some implementations have failed entirely. Effective implementation of a CAC application requires interfaces to work properly so the application can read all documents relevant for coding. In addition clinical documents must comply with a consistent format dictated by the CAC vendor [24]. And where CAC has been most effective, a new role has emerged to fine tune the rules and train the system to adapt as the code sets and reporting requirements change.
As the technology advances, and machine learning techniques improve the capabilities of CAC tools, the medical coding workflow will further evolve. A WinterGreen market shares research report released in 2017 stated that as much as 88% of medical coding in physician offices for billing purposes could occur automatically without human review [25]. This report requires independent validation and more research is needed on the accuracy of these systems to rely on them, but advancements in CAC are poised to further augment the medical coding process. Medical coding is a significant responsibility of many HIM professionals currently and this role will continue to evolve.
There are significant opportunities for medical coding professionals as CAC advances to increase coding efficiency. The fully automated coding workflow requires reengineering and a focus on data quality, which medical coders, with their intimate knowledge of the code sets and reporting requirements, are uniquely qualified to address. In addition to assigning or validating codes on complex cases, medical coders could also focus on validating aberrant coded data patterns across large groups of cases. For example, a medical coder has the knowledge to question the use of a code for an acute phase of a condition repeatedly for a patient, when the more likely data pattern would be the acute code followed by codes for the chronic phase or sequela. This code-specific pattern recognition is key in validating accurate reporting for risk-scoring payment methodologies for example. Clearly, HIM professionals' ability to identify data patterns to enhance business intelligence or improve compliance with code reporting requirements will be an important skill as automation advances.
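As a concrete illustration of such code-specific pattern checks, the following sketch (a hypothetical rule table; the ICD-10 codes are examples only, not an endorsed rule set) flags patients for whom an acute-phase code is reported repeatedly without any of the expected chronic or sequela codes:

```python
# Illustrative sketch of a code-specific pattern check: flag patients for whom an
# acute-phase code recurs without any of the expected chronic/sequela codes.
# The rule table and ICD-10 codes are examples only, not an endorsed rule set.
from collections import Counter

ACUTE_TO_FOLLOWUP = {
    "I63.9": {"I69.30", "Z86.73"},   # e.g. acute cerebral infarction -> sequela/history codes
}

def flag_repeated_acute(patient_codes, threshold=2):
    """patient_codes: chronologically ordered list of reported diagnosis codes."""
    counts = Counter(patient_codes)
    flags = []
    for acute, follow_ups in ACUTE_TO_FOLLOWUP.items():
        if counts[acute] >= threshold and not follow_ups & set(patient_codes):
            flags.append(acute)      # repeated acute code, no chronic/sequela code seen
    return flags
```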
Diagnosis Specificity
AI systems are expected to assist healthcare providers with diagnosis accuracy and specificity. Medical specialties that utilize images for diagnosis (e.g. radiology, pathology, dermatology, ophthalmology) are particularly amenable to AI-aided diagnoses. AI machine learning (ML) is very good at detecting anomalies in images; for example, it has been proven effective in detecting lung nodules on a radiologic image [2,6,9] and congenital cataract as well as diabetic retinopathy on ocular image data [6,26]. The sensitivity and specificity of deep learning algorithms in detecting diabetic retinopathy through retinal fundus photographs, for example, are both over 90%, which is "competitive against experienced physicians in the accuracy for classifying both normal and disease cases" [6]. An algorithm that can identify skin cancer by analyzing images of skin lesions has also performed as well as board-certified dermatologists [26,27]. It has been suggested that what might take an experienced radiologist 30 years of radiology-pathology correlation to master may only take an AI system hours or days to analyze and learn in the future [28].
Code reporting guidelines for using diagnostic test results to add specificity to a diagnosis code vary by country. As AI systems become more adept and are proven reliable in visual diagnosis, physicians may no longer need to read images, perhaps doing so only by exception. This change in responsibilities could result in either a decrease in code specificity or less consistency of international diagnosis code data, depending on a country's code reporting guidelines and how the guidelines are adjusted to account for AI. For example, currently in the U.S., "code assignment is based on the documentation by patient's provider (i.e., the physician or other qualified healthcare practitioner legally accountable for establishing the patient's diagnosis)" [29]. U.S. guidelines specifically state that clinically significant "laboratory, x-ray, pathologic, and other diagnostic results" can be used for coding only if the test has been "interpreted by a physician" [29]. In the U.K., the NHS National Clinical Coding Standards, while less explicit than U.S. guidelines, also imply that a physician has to interpret diagnostic test results [30]. In contrast, the Canadian Coding Standards are much more amenable to AI development. Canadian medical coders are directed to use diagnostic results "when they clearly add specificity in identifying the appropriate diagnosis code for conditions documented in the physician/primary care provider notes" [31]. In Canada, there is no specific requirement that the test itself has to be interpreted by a physician. Based on this varying guidance, in the instance where a physician has documented a diagnosis, additional specificity of that diagnosis in images interpreted by an AI system alone (without a physician over-read) would be lost in diagnosis data in the U.S. and possibly the U.K., whereas specificity would not necessarily be lost in Canada.
Medical coding and reporting guidelines and standards will need to be adjusted to account for AI applications. There are multiple points to consider, including whether reporting of diagnosis specificity using diagnostic test results should vary depending on the AI application itself. Some method is needed to demonstrate that the AI application meets the same degree of accuracy as physicians. For example, reporting guidelines might depend on whether the AI application is approved or credentialed in some manner. Reporting specificity based on AI results might also depend on whether the AI application is employing supervised versus unsupervised ML techniques. Unsupervised ML is well known for feature extraction, whereas supervised ML, which goes through a training process to determine the best outputs, is more suitable for predictive modeling and is generally considered to provide more clinically relevant results [6]. Thus, the type of AI and how the AI application is used in the clinical workflow (e.g. whether AI-generated interpretations are validated or certified as equally accurate compared to physicians) could potentially be factors in determining future reporting requirements for diagnosis code specificity.
Early Detection Information
AI systems are expected to assist healthcare providers with early detection of likely or impending conditions, allowing for faster intervention. ML algorithms are proving effective in making inferences about specific health risks and predicting health events. For example, neural network algorithms have proven effective in detecting strokes. Input variables analyzed by the algorithm include stroke-related symptoms such as paresthesia of the arm or leg, acute confusion, vision alteration, problems with mobility, etc. This input data is analyzed to determine the probability of stroke [6]. There are other examples of healthcare data being used to detect and predict future events including hospital readmissions, sepsis, and surgical complications [32][33][34].
Coding guidelines and standards for reporting suspected or impending conditions also vary from one country to the next. In the U.S., coders are directed to report a condition that remains "suspected and/or impending" at the time of discharge as if it existed or was established for a hospital inpatient admission, but not to code it on an outpatient encounter [29]. For outpatient cases, the condition is coded to the highest degree of certainty [29]. Similarly, NHS National Coding Standards instructions are to code the diagnosis being "treated or investigated", and an example is given of a "probable myocardial infarction" reported with the code for an acute unspecified myocardial infarction [30]. According to the Canadian Coding Standards, however, impending or threatened conditions are coded only when indexed as such in the Canadian version of the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD), ICD-10-CA. In addition, unconfirmed diagnoses in Canada are reported with a specific "Q prefix" to denote the uncertainty associated with the code [31]. This variability and the inability in some countries to qualify reported diagnoses as unconfirmed or uncertain are concerning. Consider, for example, if an AI system triggers an alert for suspected sepsis on a patient and the healthcare team takes immediate action, thus intervening and preventing severe sepsis, the coding and reporting of this circumstance may be missed, or inconsistently reported at best. Coding guidelines and standards will need to be revised to capture this sequence of events and support AI developments in early detection of likely or impending conditions. This has broad implications and will require an interdisciplinary team to address the issue fully, including standards developers and members of the healthcare team as well as HIM professionals.
One solution is to capture qualifiers to diagnoses. If the functionality were built into Electronic Health Records (EHRs), the Health Level 7 (HL7) Fast Healthcare Interoperability Resources (FHIR) standard framework could potentially be leveraged to qualify diagnoses [35]. For example, the FHIR code system verification status defines codes as provisional, differential, confirmed, and refuted [35]. A status could potentially be added to reflect AI as the source for a condition or diagnosis. Alternatively, diagnosis qualifiers could also be addressed by the clinical terminology or classification system itself, which is demonstrated in SNOMED CT. Prefixes, such as Canada's Q prefix, could be defined and appended to ICD codes. Perhaps ICD-11 extension codes could be defined to characterize the degree of certainty of a condition (e.g. unconfirmed, impending) or identify the source for the diagnosis (e.g. clinician, AI system, patient). Again, there are multiple factors to consider. Use of a status, prefix, or extension to a code would require some mechanism to ensure it remains linked with the base code. Otherwise, data validity would be a major concern; for example, an "impending" stroke could be identified as an actual stroke because the "impending" qualifier was lost. Implications for insurance coverage or payment policy also have to be considered. As the industry continues to refine what is deemed clinically relevant data/information, medical coding standards and guidelines will need to align with such data standards.
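For illustration, a FHIR R4 Condition resource carrying such qualifiers might look like the following Python dictionary; the verification-status code system shown is the standard one, while the extension URL naming an AI system as the diagnosis source is hypothetical:

```python
# Minimal sketch of a FHIR R4 Condition resource (as a Python dict) carrying a
# verification status plus a hypothetical extension naming an AI system as the
# diagnosis source; the extension URL is illustrative, not a defined standard.
condition = {
    "resourceType": "Condition",
    "verificationStatus": {
        "coding": [{
            "system": "http://terminology.hl7.org/CodeSystem/condition-ver-status",
            "code": "provisional",
        }]
    },
    "code": {
        "coding": [{
            "system": "http://hl7.org/fhir/sid/icd-10",
            "code": "A41.9",
            "display": "Sepsis, unspecified organism",
        }]
    },
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/diagnosis-source",  # hypothetical
        "valueCode": "ai-system",
    }],
}
```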
Changing Data Management Practices
Increased adoption of AI-enabled applications, and more sophisticated use of these AI applications by healthcare providers at the point of care, holds practical implications for managing the data. HIM professionals have an opportunity to help develop, implement, and manage the policies and procedures related to governing healthcare data, as well as to support the development, deployment, and assessment of AI models to ensure that the technology can be trusted to improve care and support greater efficiency.
New and more varied data types are generated by AI-enabled applications affecting data practices and data governance. Today, healthcare data is almost entirely encounter-based. Healthcare data is collected during an encounter with specific interaction with a care provider. However, healthcare data also includes streams of data collected remotely and automatically from multiple data sources. As the Internet of Things (IoT) expands further into healthcare, it is necessary to develop infrastructures to support the proliferation and use of these data streams.
IoT is a connection of physical objects with network connectivity that are used to collect and exchange data. In IoT, 'Things' refers to a device which is connected to the Internet and transfers the device information to other devices. "The future's data will not be collected solely within the health care setting. The proliferation of mobile sensors will allow physicians of the future to monitor, interpret, and respond to additional streams of biomedical data collected remotely and automatically" [7]. Such applications have been in development for several years. More than five years ago, a blood pressure cuff that connects to a smartphone and transmits data to a care provider was already available [36]. Devices are also available that measure glucose levels, provide electrocardiogram readings, or even collect measures of people's cognition and emotional health [37]. As wearable sensors improve, they will increasingly allow specific health parameters to be tracked constantly and discreetly. They may replace commonly worn items such as a watch, may be worn under regular clothing, or even be built into "smart" clothing [38]. These types of devices would conceivably transmit data back to a healthcare provider, potentially directly into an EHR, which presents numerous challenges. It will be critical to track the source of this data, as the accuracy, value, and clinical significance may be uncertain. In addition, today's data practices are entirely oriented toward an episode of care. In AI-enabled healthcare, the underlying organizing schema for health data needs to shift from dates of service to the patient. This may require a completely different data architecture to collect, store, process, validate, interpret, and potentially retrieve non-episodic ongoing streams of patient-specific data.
Manogaran and colleagues [39] proposed a framework to support the collection, transfer, and storage of data from multiple data streams. They emphasized that the security of data must occur at numerous stages including during the collection of data from devices, the transfer of data between devices, the storage of data, and during the application and use of the data. Additionally, how the data is received from various streams and integrated into a single system poses a challenge. Data streams may include structured, semi-structured, or unstructured data and for integration to occur there is a need for standardization. Initiatives such as International Standard for Metadata Registries (ISO/IEC 11179) aim to support what is referred to as 'semantic interoperability' between data that may be expressed differently across devices and technologies [40]. Semantic interoperability is intended to support the unambiguous exchange of data. One method for standardization is to create globally unique cross-reference identifiers for data elements that are semantically equivalent using eXtensible Markup Language (XML) standards, even though the data elements may have different names [40]. The Open Data Element Framework (O-DEF) was developed by The Open Group and can support the categorization, naming, and indexing of data using a controlled vocabulary that associates data elements with structured unique identifiers so that equivalencies and similarities between data can be easily determined [41]. These identifiers can be the basis of an indexing schema where a data element from one device can be integrated with a data element from another device because they both share the same equivalent content evidenced by the same structured unique identifier. O-DEF works well for collaborating enterprises, but may not serve the purpose of integrating data from disparate systems and organizations. Alternatively, other frameworks such as those from the World Wide Web Consortium (W3C) that focus on data integration of web-based data like RDF (Resource Description Framework), OWL (Ontology Web Language), and SKOS (Simple Knowledge Organization System) may be more useful [42]. Data integration challenges will require an interdisciplinary team to address the issue. HIM professionals can seek to examine how existing information models can be leveraged within an organization to support a data governance framework that accommodates multiple data streams. The utilization of existing vocabularies may serve to accelerate the collection and use of data from non-episodic sources.
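A toy sketch of such identifier-based cross-referencing, in the spirit of the O-DEF approach described above, is shown below; the identifiers, device names, and field names are invented purely for illustration:

```python
# Toy sketch of identifier-based cross-referencing of semantically equivalent data
# elements from different devices, in the spirit of the O-DEF approach described
# above; the identifiers, device names, and field names are invented for illustration.
ELEMENT_INDEX = {
    "urn:example:ode:systolic-blood-pressure": {"device_a": "sysBP", "device_b": "SBP_mmHg"},
    "urn:example:ode:heart-rate":              {"device_a": "hr",    "device_b": "pulse_bpm"},
}

def harmonise(record, device):
    """Map a device-specific record onto the shared unique identifiers."""
    return {uid: record[names[device]]
            for uid, names in ELEMENT_INDEX.items()
            if names.get(device) in record}

# e.g. harmonise({"SBP_mmHg": 128, "pulse_bpm": 72}, "device_b")
```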
An additional challenge is the need for quality healthcare data. ML techniques require substantial amounts of data to ensure algorithms work accurately and are applied appropriately to their targeted goals. "ML algorithms are highly data hungry, often requiring millions of observations to reach acceptable performance levels" [14]. Thus researchers and developers need access to large sets of health data from thousands of patients. The reliability of an AI application is dependent upon the quality of the data that was used to develop and train it. "At its core, AI is reliant upon data. If the data itself is incomplete, biased, or skewed in some other fashion, the AI system is at risk of being inaccurate" [13]. However, it's widely recognized in the U.S. that data in EHRs and claims databases need "careful curation and processing before they are usable" [14]. Healthcare data are highly heterogeneous, ambiguous, noisy, and incomplete [26]. Data curation (i.e., managing data to make it more useful) requires significant financial investment and without investing resources to support data curation the healthcare industry risks producing ML models based on factually inaccurate data [8]. The adoption of data governance principles can help organizations ensure that the people, processes, and systems involved in AI initiatives are held accountable for ethical use and deployment, the process is transparent, the result has integrity, the information is protected, the approach is compliant with organizational and legal practices, the technology is available, the method of AI development is retained, and when appropriate the healthcare data is disposed of properly [43]. These principles can help support the use of AI models that minimize the risk to patients, providers, developers, and healthcare organizations.
Evolving data governance principles are necessary and must be a priority for all healthcare organizations. Developing clear, consistent, and standardized policies and procedures for creating and managing current and emerging sources of data is a key enabler to development of AI applications. Data sources can include EHR data, lab data, imaging data, claims data, various types of master data (e.g., enterprise master patient index), patient-generated data, and metadata as well as a real-time streaming data from medical devices. Several issues need to be managed, such as data sparsity, redundancy, and missing values [26]. Data governance, including data modeling, data standards and definitions, data mapping, data auditing, data quality controls, and data quality management, must keep pace with evolving data types and data uses. For example, data quality management in healthcare organizations today focuses on assuring data is fit for use for the organization's business operations, decision-making and planning. More focus is needed on detecting, assessing, and fixing data defects in a systematic way. Data governance has never been a higher priority in healthcare as it "empowers users to trust the predictions of analytics models in their decision-making because there is certainty that the data and algorithms can be trusted" [44].
As advances in AI enable precision medicine, HIM professionals will need to develop practices to enable precision HIM. Treating all healthcare data and information the same will no longer be practical or efficient in an era of big data. More robust data analytics and processes need to be established to identify data patterns and trends and address data outliers. "Precision medicine attempts to ensure that the right treatment is delivered to the right patient at the right time by taking into account several aspects of patient's data, including variability in molecular traits, environment, EHRs and lifestyle" [26]. Precision HIM attempts to ensure the right data and information is delivered to the right person at the right time by taking into account the data source and the people, processes, and technology that interface with that data to ensure it is used and reused appropriately.
Legal, Ethical, and Regulatory Data Challenges
The use of healthcare data to develop AI applications has introduced substantial legal, ethical, and regulatory challenges. Patient privacy is a key concern affecting how AI is developed and tested. Development of AI applications may require updates to privacy and confidentiality laws and regulations, which vary widely. In the U.K., protection of health information centers on obtaining explicit consent from the patient in order to share information with any third party that is not in a direct care relationship with the patient. Researchers must apply to the Health Research Authority's Confidentiality Advisory Group (CAG) for approval to access confidential patient information without patients' consent [45]. In the U.S., government regulation is less strict. Privacy and confidentiality of protected health information are addressed in the Health Insurance Portability and Accountability Act (HIPAA). HIPAA provides data privacy and security provisions for safeguarding medical information and allows for sharing protected health information without patient consent specifically for the purposes of "treatment, payment and operations" [46]. How the U.K. or U.S. approaches will be interpreted in cases related to data sharing for AI development is largely undetermined. The U.K. consent requirement, and the definition of a "direct care relationship," were challenged in 2017 in a published case study. The case study alleged that a technology company, Google DeepMind, did not have a direct patient care relationship with every patient included in the data shared and thus "held data on millions of Royal Free patients and former patients since November 2015, with neither consent, nor research approval" [47]. This case study underscores the need to examine current privacy laws and regulations to determine how they may apply to AI applications. The U.S. Subcommittee on Information Technology recommends that federal agencies conduct such a review and, where necessary, update existing regulations to account for the addition of AI [13]. HIM professionals are involved with developing and implementing organizational policies regarding privacy and security of health information, training staff, and ensuring compliance. Therefore, with the access and use of health information for the development and deployment of AI models, HIM professionals should explore current privacy practices, considering how they might apply to AI applications and how they might be amended to account for AI technology.
In addition to data privacy and protection, another looming legal issue is liability and accountability for the use of AI applications. Questions on who is ultimately liable for patient care decisions based on, or aided by, an AI application are yet to be answered. Should healthcare providers be held fully responsible for decisions suggested by algo-rithms they cannot understand? Will physicians use a system they cannot understand? Can the developer be held responsible? The problem is complicated since the reasoning in an AI application is difficult, often too complex to understand [48]. AI applications evolve and change constantly in unforeseeable ways as they are "learning" from data [9]. Though mechanisms to ensure AI applications are safe and effective are still being formulated, prevailing approaches include an expectation that algorithms can be inspected. "Each algorithm should be able to explain its output" [13]. To advance deployment and acceptance of AI applications, developers will need to be able to produce the algorithm for inspection, support why the algorithm works, and ensure the application can meet expected outcomes in testing or certification procedures. Product master data, which includes data about the components that make up the product, may include information on the algorithm deployed. In the future, individual patient health information may include the algorithm that was applied to the patient's data in order to validate or authenticate healthcare decisions. In addition, there may be a need to audit AI events for reporting purposes. HIM professionals can establish the necessary data governance principles that must be adopted for AI applications to be implemented successfully within healthcare organizations.
Another aspect that deserves attention is the need to balance the financial incentive to make processes more efficient with the ethical and legal uses of health information. For example, the financial motivators to adopt CAC for the sole purpose of coding a higher level of care must be tempered by ethical considerations. HIM professionals involved in the clinical coding process can greatly impact the amount of funding provided to a healthcare organization. Hoyle [49] and Shepherd [50] argued that HIM professionals are positioned as advocates for the ethical use of technology and data. HIM professionals must urge healthcare organizations to consider the ethical frameworks and practice guides not just deemed appropriate for health information professionals, but also for CAC and AI technologies. These activities will provide support for the HIM professionals in healthcare organizations to go about the business of "providing the clinical truth in their coding and resisting the perverse incentives" [50]. Therefore, with the access and use of health information for the development and deployment of AI models, HIM professionals should be involved to ensure policies and procedures are being developed, amended accordingly, and followed to account for the influence of AI technology. Although HIM professionals are just beginning to work with AI technology, there have already been notable impacts on the HIM workforce.
Response of the Health Information Management Workforce
Healthcare technology has greatly impacted the way care is approached and delivered. The digitizing of healthcare data has supported efforts to automate processes that were previously done manually. These processes have inevitably impacted the healthcare workforce, including the HIM profession. There is a greater need for employees who have technical skills to better collect, manage, and use healthcare data. Sandefer and colleagues [51] evaluated data from a workforce survey that yielded responses from 6,475 healthcare professionals, who were largely from HIM. The survey asked respondents to rate the percentage of their time they spent on current tasks and how much time they anticipated spending on these tasks 10 years in the future. The findings of the study suggested that many HIM professionals spent significant time on diagnostic and procedural coding and records processing, but they expected these tasks to decline the most in the future, while leadership, teaching, and informatics tasks were expected to increase. Historically, the HIM profession has focused on medical records and coding. However, the profession has evolved into more diverse roles and continues to change with technological advances. Today, many HIM professionals find themselves in diverse roles related to healthcare leadership, teaching, technology, compliance, quality, and informatics [51,52].
In 2018, Sandefer [53] evaluated data from a workforce survey of 274 senior-level professionals within clinical (e.g., hospitals, clinics) and non-clinical (e.g., software vendors, consulting firms) organizations. The goal of the survey was to identify the needed job skills, competencies, and education required by HIM professionals to meet future workforce needs. Seventy-two percent of clinical respondents reported that at least half of coding functions will be automated, and 50 % reported that more than half of the coding functions will be automated in the near future. The paper suggests that the application of natural language processing combined with the quality of voice to text translation will support improvements in extracting meaning from unstructured data, which will greatly revolutionize the healthcare industry.
Automation is also expected to impact the HIM workforce beyond just influencing how diagnostic and procedural coding is approached. Data analytics has been more prolific across the profession. More professionals are moving into roles to evaluate data related to financial, operational, and clinical performance [54]. HIM professionals are becoming more involved with developing solutions for healthcare organizations to better manage and use data. For instance, HIM professionals are actively participating in the development of policies, procedures, and best practices to ensure data are being used ethically and abiding by the required laws when research or data reporting is being adopted [55]. However, in the future, HIM professionals are going to need to be more involved in developing similar policies and procedures to accommodate AI developments. To date, there is very little attention on the needs for data governance to support AI. Without having a workforce to support AI data governance, there will likely be barriers to widespread adoption and use. For instance, past efforts to implement ICU mortality risk scores have been met with reluctance due to a lack of trust in the technology, despite the obvious benefits the technology may serve [56]. By engaging more stakeholders in the development of the technology, including HIM professionals, a culture of acceptance may be achieved by adopting principles of data governance that offer enterprise-wide technology support.
The evolving use of healthcare data for AI applications is already impacting the roles and responsibilities of HIM professionals. HIM professionals are finding themselves in more leadership roles that govern healthcare data and technology, and more technical roles that involve the access and use of healthcare data for reporting and evaluation purposes [51]. With some tasks being automated, there will likely be continuing opportunity for HIM professionals to take on more tasks that focus on data collection, validation, analysis, and, overall, the ethical use of that data. HIM professionals who currently work in medical coding and who embrace automated coding have an opportunity to transition into a role that focuses on data validation to improve the quality of healthcare data. However, to emerge into these roles, these professionals will need technical training related to methods and tools for data storage, acquisition, and analytics. With advancements in technology, many professions are realizing the need for greater competence in computational thinking skills to better translate data into abstract concepts and understand data-based reasoning [57]. Although exact details on how AI technologies will impact the future of HIM are not yet known, current workforce studies suggest that HIM professionals are going to continue to work in more technical roles and will therefore support AI developments and use.
Conclusion
AI has influenced, and will continue to influence, the way decisions are made in healthcare. For example, decisions are informed by ML algorithms that predict future events, or by clinical decision support systems that aid in detecting anomalies in diagnostic images. The decisions that HIM professionals make are also being affected. For instance, CAC has supplemented a medical coder's role in selecting diagnostic and procedural codes for healthcare claims. The promise that AI can support a more efficient and more accurate decision-making process is certainly worth exploring. HIM professionals should participate in efforts to align medical coding standards and guidelines with evolving data types and standards. In addition, as AI technologies present new and varied types of source data, HIM professionals have an opportunity to influence the development of mechanisms to collect and integrate emerging data types, including, for example, non-episodic ongoing streams of patient data and algorithms recorded in product master data. The adoption of data standards and vocabularies that support semantic interoperability is part of the solution to the data integration challenge [40] and one that HIM professionals should participate in evaluating and testing.
HIM professionals should also participate in developing the data governance framework within healthcare organizations: establishing mechanisms to collect emerging data types from various sources, managing the policies and procedures related to the access and use of data, and developing methods to validate the reliability and impact of AI technology. This includes considering how evolving data structures affect the use and reuse of data and the related policy implications (e.g., data reporting requirements, payment policy). It also includes, for example, ensuring that data governance practices cover product master data (e.g., data about the algorithms deployed) to support efforts to audit, inspect, or certify AI applications. These endeavors will require HIM professionals to have the technical knowledge to analyze and monitor AI tools and the technical skills needed to collect and manage healthcare data in AI-enabled healthcare. To acquire such technical skills, HIM professionals may need to seek additional education or training.
There are significant data management practices as well as laws and regulations surrounding the use of healthcare data that have the potential to either impede or enable development of AI applications. HIM professionals can support future AI developments today by increasing data validation efforts and beginning to evaluate relevant policies and processes. HIM professionals should analyze coded data patterns and establish processes to validate coded data across large groups of cases. HIM professionals must focus on detecting, assessing, and fixing data defects in a systematic way in order to improve the quality of current healthcare data that is being used to develop AI applications. Other examples of steps HIM professionals can take now include ensuring the proper laws and regulations are being followed (e.g., ensuring only authorized personnel and technology accesses clinical data), beginning to explore current privacy practices in light of how they may apply to AI applications, and establishing collaborative relationships with data standards developers and informaticists involved in developing AI applications.
Although there is an emphasis on creating policies and procedures to accommodate AI technology, HIM professionals will also find that there are emerging opportunities for careers related to the greater adoption of AI. HIM professionals are well situated to proactively manage and monitor data governance, data sets, and data models related to the implementation and use of AI. AI technologies are not intended to replace healthcare workers, but individuals who are able to adapt to new workflows and processes may replace those who cannot. There are wonderful opportunities for career moves and advancements for those who continue to increase their knowledge of data analytical methods and tools.
The future of AI holds the promise of a more effective and efficient healthcare system built on a strong foundation of reliable and accurate data. HIM professionals manage and support the entire continuum of healthcare data, from collection through use and disposition. AI technology will continue to evolve, and so will the role that HIM professionals play in supporting it. The challenge for HIM professionals is to identify leading practices to achieve precision HIM and to develop practice standards for the management of healthcare data and information in an AI-enabled world.
Detection of promoter designed for transgenic plant in local soybean
The potential allergenic, toxic, and dietary risks of Genetically Modified Organisms (GMOs) have become critical issues. Transgenic soybean from the United States is commonly exported to countries around the world, including Indonesia. Unfortunately, no GMO label has yet been applied to packaged products, including soybean grain, sold in Indonesia. The aim of this study was to detect the Cauliflower Mosaic Virus (CaMV) 35S promoter, which potentially indicates transgenic plants. The samples used in this research were 12 different brands of soybean sold in 4 local markets. Screening for the CaMV 35S sequence was done by Polymerase Chain Reaction (PCR) using specific primers. The results indicated positive signals for the transgenic promoter in the local soybean grains, with an amplification product of 123 base pairs.
Introduction
An imbalance between production and consumption of soybean in Indonesia has led to a high import rate of soybean every year. As a result, Indonesia imports around 50% of its soybean needs from the United States [1]. However, more than 90 percent of soybean production in that country is transgenic, and transgenic soybean from the United States is routinely exported to countries around the world [2]. Unfortunately, no GMO label has yet been applied to packaged products, including soybean grain, sold in Indonesia. Distinguishing GMO from non-GMO products is therefore one of the problems in fulfilling consumer rights.
The potential allergenic, toxic, and dietary risks of Genetically Modified Organisms (GMOs) have become critical issues. All products derived from genetic modification technology, called transgenic products, must pass evaluation and assessment before entering the market in order to ensure food safety. In Indonesia, the requirements for products released to the market are based on Regulation No. 69/1999 on Labelling and Food Advertisement.
In genetically engineered plants, the CaMV 35S promoter is one of the most important sequences used in recombinant DNA constructs for almost all GMO crops. The function of this promoter is to drive strong, long-term expression of inserted genes. The promoter is derived from a virus, the Cauliflower Mosaic Virus (CaMV). The presence of this sequence can therefore be used as a signal of a transgene [3]. This promoter is reported as a first-category screening target for GMO detection, alongside the T-Nos (Nopaline Synthase terminator) region and the genes encoding resistance to the antibiotics ampicillin (bla) and kanamycin (nptII) [4].
The Cauliflower Mosaic Virus (CaMV) was first found in 1921 infecting Chinese cabbage, where it caused an abnormal, mosaic-like phenotype on the leaf surface. The disease is commonly found in the crucifer family (Cruciferae or Brassicaceae), including cauliflower, broccoli, cabbage, and Chinese cabbage, and the virus is transmitted by three different aphid vectors. Although mosaic diseases can affect many plants, this virus was given the specific name Cauliflower Mosaic Virus (CaMV). In 1960, the virus was identified as containing double-stranded deoxyribonucleic acid (DNA), indicating that the DNA might be transcribed in plant cells. Before the CaMV 35S promoter was identified in 1985, adding or deleting the expression of a specific gene in plants was difficult to study. The upstream 46 base pairs of the 35S promoter revealed expression, and the 343 base-pair sequence could be strongly expressed in plants. Around eighty percent of genetically modified organisms harbour the CaMV 35S promoter, including Roundup Ready soybean, Bt corn and cotton, and the "sunset" papaya resistant to ringspot virus [5].
In the plant transformation process, vector construction requires DNA sequences to be inserted into the target organism [6]. Commonly, such insertions are made within the T-DNA borders and include promoter and terminator sequences; these sequences enable the desired expression of the gene of interest. One such promoter comes from CaMV, a double-stranded DNA virus that can infect the Solanaceae and Cruciferae. The promoter is commonly used to design genetically engineered crops for commercial production, such as maize, soybean, canola and papaya [7]. The benefit of this promoter is that it is functional, well characterized and constitutively expressed [8]. The CaMV 35S promoter is 342 base pairs long, including the CAAT and TATA box regions; the nucleotide sequence of the promoter and the position of these regions are presented in Figure 1. The DNA band detecting the CaMV 35S promoter is expected to be around 100-150 base pairs. This appearance is a positive signal, as Wardani et al. [1] explained that the 35S detection primer pair has an amplification product of about 123 base pairs.
Investigation of the presence of the CaMV 35S promoter is commonly used to detect genetically engineered plant material [7]. A number of methods have been developed to detect genetically modified organisms. Polymerase Chain Reaction (PCR) is one approach for detecting transgenes [9]. The method was chosen because of its high specificity, efficiency and validity [10]. Primers designed against regulatory sequences are used to detect the gene target [11].
The purpose of this research was to detect the CaMV 35S promoter, which potentially indicates transgenic soybean. The detection results could help Indonesia implement the government regulation on labelling of GMO and non-GMO soybean grain sold in the market. Moreover, the results might reveal contamination between local and imported soybean grain sold in Indonesia.
Observation and Preparation of Samples
The soybean grains used in this research comprised 12 brands of soybean sold in 4 central markets in the Special Region of Yogyakarta: Beringharjo Market, Sentral Market, Prawirotaman Market and Gamping Market. Interviews were also conducted to support phenotype identification of imported and local soybean grain sold in the markets.
Extraction of DNA
Fifty milligrams of leaves from 14-day-old seedlings were used for genomic DNA extraction with a Genomic DNA Mini Kit according to the Geneaid Biotech Ltd. protocol. The DNA pellet was dried and re-suspended in 100 ml of de-ionized water (ddH2O). The extracted DNA was stored at 4 ºC.
Yield and Quantity of DNA
The quantity and purity of the isolated DNA samples were measured from optical densities (OD) at 260 nm and 280 nm using a GeneQuant 1300 spectrophotometer. The expected value for pure DNA is an A260/A280 ratio of around 1.8 and an A260/A230 ratio of around 2.0-2.2 [12]. The extracted DNA was adjusted by dilution to 200-400 ng/μl for the PCR reactions.
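As a point of reference, the purity ratios and concentrations quoted above follow from standard spectrophotometer arithmetic. The short sketch below is illustrative only, with made-up absorbance readings rather than values from this study; it shows the usual calculation of double-stranded DNA concentration and the A260/A280 and A260/A230 ratios.

# Illustrative sketch (not data from the paper): dsDNA concentration is
# commonly estimated as A260 x 50 ng/ul x dilution factor, and purity as the
# A260/A280 and A260/A230 ratios.
def dna_quality(a260: float, a280: float, a230: float, dilution: float = 1.0):
    conc_ng_per_ul = a260 * 50.0 * dilution   # 50 ng/ul per A260 unit for dsDNA
    return {
        "conc_ng_per_ul": conc_ng_per_ul,
        "A260/A280": a260 / a280,             # ~1.8 expected for pure DNA
        "A260/A230": a260 / a230,             # ~2.0-2.2 expected for pure DNA
    }

print(dna_quality(a260=0.35, a280=0.19, a230=0.17, dilution=20))
# -> concentration ~350 ng/ul, A260/A280 ~1.84, A260/A230 ~2.06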
Optimization of Primer Annealing Temperature and Detection of CaMV (Cauliflower Mozaic Virus) 35S Promoter
In this research, screening for the CaMV 35S promoter was conducted by Polymerase Chain Reaction (PCR) using specific primers. PCR is a method to amplify specific DNA using two oligonucleotide primers with the help of a polymerase enzyme [13]. The primers used and their target are listed in Table 1.
Table 1. Oligonucleotide primer pair sequences and their target [4]
Gene specificity: Promoter of CaMV 35S
Forward primer P35S-CF3 (f): 5'-CCA CGT CTT CAA AGC AAG TGG-3'
Reverse primer P35S-CR4 (r): 5'-TCC TCT CCA AAT GAA ATG AAC TTC C-3'
Amplicon: 123 base pairs
The PCR total volume of ten microliters consisted of 2.6 μl of extracted DNA sample (50-100 ng), 0.25 μl of forward primer, 0.25 μl of reverse primer, 2 μl of nuclease-free water, and 5 μl of GoTaq® Green Master Mix (Cat. 9PIM712). The temperature profiles used for primer optimization are presented in Table 2.
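For illustration, the expected 123 base-pair product can be checked by a simple in-silico PCR: locate the forward primer and the reverse complement of the reverse primer on a template and measure the spanned length. The sketch below uses a hypothetical placeholder template, not the real CaMV 35S sequence, and is not part of the study's protocol.

# Minimal in-silico PCR sketch (not the authors' method).
FWD = "CCACGTCTTCAAAGCAAGTGG"            # P35S-CF3 (f), spaces removed
REV = "TCCTCTCCAAATGAAATGAACTTCC"        # P35S-CR4 (r), spaces removed

def revcomp(seq: str) -> str:
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[b] for b in reversed(seq.upper()))

def predicted_amplicon(template: str, fwd: str = FWD, rev: str = REV):
    """Return (start, end, length) of the predicted product, or None."""
    t = template.upper()
    i = t.find(fwd.upper())              # forward primer binds the plus strand
    j = t.find(revcomp(rev))             # reverse primer binds the minus strand
    if i == -1 or j == -1 or j < i:
        return None
    end = j + len(rev)
    return i, end, end - i               # amplicon spans both primer sites

# Toy usage with a placeholder template (77 bases between the primer sites).
template = "N" * 50 + FWD + "N" * 77 + revcomp(REV) + "N" * 50
print(predicted_amplicon(template))      # -> (50, 173, 123)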
Gel Electrophoresis and DNA Visualization
Gels were prepared using 1.5% agarose dissolved in 1X Tris/Borate/EDTA (TBE) buffer. Electrophoresis was conducted at 60 V for 30 minutes. The total volume loaded per well was 15 μl, consisting of 10 μl of DNA sample and 5 μl of loading dye. After electrophoresis, the gel was soaked in ethidium bromide (EtBr) for 15 minutes. DNA bands in the gel were visualized with a UV transilluminator.
Percentage of Local and Imported Soybean Based on Observation and Grain Phenotype
The percentage of local soybean grains collected from the markets was about 30.77%, while the percentage of imported brands and soybean with an imported-like phenotype was about 69.23%. Based on the interviews and phenotype observation, local soybean brands have a dull yellow, large grain phenotype, represented by brand names such as Lokal, Wonosari, Galunggung and Anjasmoro. Meanwhile, imported and imported-like soybean could be recognized by white, oval grains; the white grain phenotype is represented by the brand name America No. Although traders could name the local and imported soybean types, verification by detection of the transgene target is also required. One approach is to verify the presence of the CaMV 35S promoter through DNA isolation and genotyping analysis using the Polymerase Chain Reaction.
Yield and Quantity of DNA
DNA was extracted from ten selected samples of local, imported, and imported-like soybean sold in the markets. DNA purity was then measured to ensure the quality of the DNA template for subsequent analysis. The quantitative results of DNA isolation are shown in Table 3. Good DNA purity corresponds to an A260/A280 ratio of around 1.8 and an A260/A230 ratio of around 2.0-2.2 [14]. In this research, DNA concentrations were around 200 to 400 ng/μl, which qualified the samples as DNA template for the Polymerase Chain Reaction (PCR).
Optimization of Primer Annealing Temperature and Detection of CaMV 35 S Promoter by using Polymerase Chain Reaction (PCR)
Optimization of the annealing step for PCR was conducted to maximize complementary binding of the primers to the target nucleotide sequences. The primers P35S-CF3 (f) and P35S-CR4 (r) used to detect the CaMV 35S promoter were successful over the temperature range of 55, 56, 58, and 59 ºC. The optimal annealing temperature was 56 ºC, indicated by the thickest band on the electrophoresis gel, as illustrated in Figure 3. The appearance of bands indicated that at the optimal temperature the primers could anneal to the extracted DNA of the samples. This optimal temperature of 56 ºC was then used as the annealing temperature in the PCR program. Figure 4 shows the DNA amplification of soybean (Glycine max L.) detected by PCR using the specific primers; positive signals, i.e. bands, appeared for all samples. The DNA bands detecting the CaMV 35S promoter were around 100-150 base pairs in size, consistent with the expected amplicon of about 123 base pairs and with previous research on detection of this promoter [4]. Meanwhile, another feature of the gel electrophoresis result was a faint band below every DNA band, called primer dimers (PDs), which are commonly present in amplification results. A possible reason is that template-independent primer interactions increase the production of nonspecific products called PDs. These dimers are favoured by high primer concentrations and weak interactions; also, complementarity between the 3' ends of the primers can increase primer-dimer formation after 30 cycles [1].
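As a rough cross-check of the empirically optimal 56 ºC, the primer melting temperatures can be estimated with a common rule-of-thumb formula. The sketch below is an assumption-based illustration, not a calculation reported by the authors.

# Rough Tm estimate (an assumption, not part of the study) using the common
# "basic" formula Tm = 64.9 + 41*(GC_count - 16.4)/length (in degrees C).
def basic_tm(primer: str) -> float:
    p = primer.upper()
    gc = p.count("G") + p.count("C")
    return 64.9 + 41.0 * (gc - 16.4) / len(p)

for name, seq in [("P35S-CF3", "CCACGTCTTCAAAGCAAGTGG"),
                  ("P35S-CR4", "TCCTCTCCAAATGAAATGAACTTCC")]:
    print(f"{name}: Tm ~ {basic_tm(seq):.1f} C")
# Both estimates fall in the mid-50s C, broadly consistent with the
# 55-59 C annealing range tested and the optimum of 56 C.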
In detail, bands with the 123 base-pair amplification product also appeared in lanes 7, 8, 9 and 10, indicating that the local soybean brands Lokal, Galunggung and Wonosari also contained the CaMV 35S promoter found in genetically engineered (GE) herbicide-tolerant transgenic soybean. Considering the identification of transgenic soya products in previous research, the findings of this study are similar, showing positive signals for the CaMV 35S promoter with an amplification product of around 123 base pairs [15]. Although the CaMV 35S promoter is not the only sequence that can indicate transgenic soybean, its presence can be detected accurately to identify the major genetically modified crop, Roundup Ready soybean [16]. In this research, the local soybean of Indonesia may have been contaminated because seed producers cultivated it together with imported soybean during seed production. A possible reason is that soybean farmers bear the main responsibility for contamination between GM and non-GM soybean through cultivation without observing isolation distances. Another explanation is that this promoter, which comes from a double-stranded DNA virus, might have been introduced through contamination from another plant family such as the Cruciferae. Commonly, this DNA fragment is used to design genetically engineered crops for commercial production such as maize, soy, canola and papaya [7]. A previous report described similar contamination cases in Europe, where seed producers may be responsible for segregation of genetically modified (GM) and non-GM seed; the Food and Veterinary Office (FVO) of the European Union (EU) conducted tests in 2007 during the processes of cleaning, sizing and packing [9]. One of the most frequent factors affecting contamination is isolation distance, which can vary from a couple of meters to kilometers depending on the crop and sometimes on regional characteristics; this measure can be partially or fully replaced by buffer zones between GM and non-GM crops [17]. The results of this study could inform government policy to assist farmers in managing the cultivation of GMO and non-GMO plants, and support the government's role in implementing the labelling system for market products in Indonesia, especially soybean.
On the other hand, Indonesia is among the countries classified as requiring mandatory labelling for many GE foods, with a labelling threshold higher than 1% or undefined; this includes laws with a threshold of 1% for the entire food item [18]. Indonesian farmers also have access to biotechnology products, and the technology was rapidly adopted by farmers following commercialization. Meanwhile, the information and general knowledge about biotechnology available to farmers in the field may not be sufficient [19]. Although there are legal provisions related to genetic engineering of food crops, they do not guarantee protection for the community, including farmers [20]. Presumably, transgenic imported soybean, or material from another plant family, has contaminated local soybean extensively through cultivation without attention to isolation and other factors.
Conclusion
Samples of local soybean (Glycine max L.) revealed positive signals for the CaMV (Cauliflower Mosaic Virus) 35S promoter characteristic of transgenic soybean.
Local Multiple Traces Formulation for Electromagnetics: Stability and Preconditioning for Smooth Geometries
We consider the time-harmonic electromagnetic transmission problem for the unit sphere. Appealing to a vector spherical harmonics analysis, we prove the first stability result of the local multiple trace formulation (MTF) for electromagnetics, originally introduced by Hiptmair and Jerez-Hanckes [Adv. Comp. Math. 37 (2012), 37-91] for the acoustic case, paving the way towards an extension to general piecewise homogeneous scatterers. Moreover, we investigate preconditioning techniques for the local MTF scheme and study the accumulation points of induced operators. In particular, we propose a novel second-order inverse approximation of the operator. Numerical experiments validate our claims and confirm the relevance of the preconditioning strategies.
Introduction
Developing efficient computational methods for modeling electromagnetic wave scattering by composite objects in unbounded space remains a challenging problem raising many technical and theoretical issues. Due to their rigorous account of radiation conditions, boundary integral representations are among the preferred choices despite the cumbersome electric and magnetic field integral operators. However, when dealing with many subdomains, the problem can become computationally daunting, leading to high memory and CPU requirements. Several boundary approaches have been proposed to tackle electromagnetic wave transmission problems by homogeneous scatterers in the frequency domain [1,2,3,4,5,6]. For instance, the Poggio-Miller-Chang-Harrington-Wu and Tsai formulation (PMCHWT), also referred to as the Single Trace Formulation (STF), has been shown to be useful for a number of applications and amenable to significant improvements in terms of preconditioning, when using iterative solvers, for the case of separated scatterers [7,8,9]. For Laplace, Helmholtz and Maxwell's equations, Multiple Traces Formulations (MTFs) [1,2] were introduced as a means to solve transmission problems by multiple connected scatterers while allowing the use of Calderón-based preconditioners. Though theoretical aspects for Maxwell scattering are now fully available for global MTFs [10,11], their implementation is extremely cumbersome. In contrast, local versions of the MTF [1,12,13,14,15] are easily implemented and parallelized. Although theoretical results for acoustic and static versions are available, similar results remain elusive in the electromagnetic case. Regarding Maxwell's equations, preconditioning is crucial as STF and MTFs incorporate electric field integral operators. These commonly generate highly ill-conditioned linear systems and lead to solver time being the bottleneck of such schemes, or even to stagnation of iterative solvers (refer e.g. to [16,9]). (This work received support from ECOS-Conicyt under grant C15E07. The first and second authors also acknowledge support from the French National Research Agency (ANR) under grant ANR-15-CE23-0017-01. C. Jerez-Hanckes thanks the support of Fondecyt Regular 1171491.)
In the present contribution, we investigate various theoretical aspects of the local MTF applied to Maxwell's equations. We will focus on a geometrical configuration made up by two smooth subdomains and one interface. A large part of this article actually assumes that the interface is a sphere, which is still relevant as conclusions for general smooth interfaces can be drawn from this particular case arguing by compact perturbation, see e.g. [17,Chap. 2] or [18,Chap.5]. In this canonical setting, one can directly use separation of variables via vector spherical harmonics. One of our main results is the derivation of a Gårding-type inequality for the local MTF in the electromagnetic context (see Theorem 2).
We also study in detail the essential spectrum of the local MTF operator and its preconditioned variants, which to our knowledge has not been thoroughly studied before. We exhibit surprisingly simple formulas, (33) and (49), for these accumulation points. Finally, we propose and analyze several preconditioning techniques for the local MTF, looking for strategies that (i) reduce as much as possible the number of accumulation points in the spectrum of the preconditioned operator; and (ii) lead to a second-kind Fredholm operator on smooth surfaces, that is, a compact perturbation of the identity operator.
The outline of this article is as follows. In Sections 2 and 3 we set the problem under study and introduce a necessary notation and definitions related to trace spaces and potential theory. In Section 4 we derive the local MTF for Maxwell's equations. In Section 5 we show that the kernel of this operator is trivial. In Section 6 we apply separation of variables to the local MTF operator and study the asymptotics of its spectrum. In Section 7 we use separation of variables to establish a Gårding-type inequality thus proving that the local MTF operator is an isomorphism that, under conforming Galerkin discretization, leads to quasi-optimal numerical methods. Section 8 describes several preconditioning strategies along with numerical tests to discuss their performances. Finally, concluding remarks are provided in Section 9.
We are interested in computing the scattering of an incident electromagnetic wave (E^inc, H^inc) propagating in time-harmonic regime at pulsation ω > 0. To simplify matters, we require that curl(E^inc) − iωµ_0 H^inc = 0 and curl(H^inc) + iωε_0 E^inc = 0 in R^3: the incident field may, for example, be a plane wave. The equations for the total electromagnetic field (E, H) under consideration form system (1). In (1), condition (1c) is referred to as the Silver-Müller radiation condition [19,20], wherein x̂ := x/|x|. In (1d)-(1e) the notation "E|_Γj" (resp. "H|_Γj") should be understood as the trace taken at Γ from the interior of Ω_j; precise definitions are provided in Section 3. For the sake of clarity, we represent the problem under consideration in Figure 1. One can reformulate (1) as a second-order transmission boundary value problem, which is the basis of the Stratton-Chu potential theory; this is system (2), with the effective wavenumber in each subdomain defined in (3). We study the solution of this problem by means of a boundary integral formulation. As mentioned before, several formulations are possible but we focus here on the local MTF. As a complete stability analysis of the local MTF for the electromagnetic case is not presently available, we concentrate on the following special case.
Assumption 1. Ω 1 is the unit ball and Γ is the unit sphere.
This will allow for explicit calculus by means of separation of variables which will help investigate and clarify the structure of operators associated with the local MTF.
Trace spaces and operators
We refer to [20] for a detailed survey of vector functional spaces for Maxwell's equations. We introduce three interior trace operators taken from the interior of Ω j and are defined, for all U ∈ [C ∞ (R 3 )] 3 , the space of infinitely differentiable volume vector fields, as By density, one can show γ j t , γ j r : refers to the space of tangential traces of volume-based vector fields. The space H − 1 2 (div, Γ j ) is put in duality with itself via the bilinear form: The trace operators γ j t,c (resp. γ j r,c , γ j c ) refer to exactly the same operators as (4) but with traces taken from the exterior along the same direction of n j . Then, we shall define jump and averages traces as and define {γ j } and [γ j ] accordingly. We also need to introduce duality pairings for MTFs will be written in a so-called multiple traces space and obtained as the Cartesian product of traces on the boundary of each subdomain. In the present context, it takes the simple form: with Σ the skeleton. This space will be equipped with a bilinear pairing [[·, ·]] : H(Σ) × H(Σ) → C defined as follows. For any tuples u = (u 0 , Note the identity u]] for any u, v ∈ H(Σ).
Local multiple traces operator
As expected, we heavily rely on potential theory in the context of electromagnetics, i.e. Stratton-Chu theory [21]. In the sequel, let G κ (x) := exp(iκ|x|)/(4π|x|) refer to the outgoing Green's kernel for the Helmholtz equation with wavenumber κ > 0.
Next, we define the boundary integral potentials: for u = (u, p) ∈ H − 1 2 (div, Γ) 2 , we set The potential operator G κ maps continuously H − 1 2 (div, Γ) 2 into H loc (curl, Ω 0 ) and satisfies (curlcurl − κ 2 0 )G κ (u) = 0 in Ω 0 as well as Silver-Müller's radiation condition at infinity, regardless of u ∈ H − 1 2 (div, Γ) 2 . A similar result also holds in Ω 1 . The potential operator plays a central role in the derivation of boundary integral equations as it can be used to represent solution to homogeneous Maxwell equations according to the Stratton-Chu integral representation theorem [21,Theorem 5.49].
On the other hand, the jumps of trace of the potential operator follow a simple and explicit expression.
In the forthcoming analysis, we shall make intensive use of the operator Standard choices of notation in the literature dealing with Calderón projectors usually consider the same definition as above but without the factor 2. Our convention is motivated by simplifications stemming from this choice in our subsequent calculations. It is clear that {γ j t } · DL κ = {γ j r } · SL κ . On the other hand, since the vector Helmholtz equation is satisfied by Γ G κ (x − y)u(y)dσ(y), we find that that {γ j r } · DL κ = κ 2 {γ j t } · SL κ . As a consequence, the operator A j κ can be represented in matrix form as Observe that, for a given κ we have A 0 κ = −A 1 κ due to the change in the orientation of the normals n 0 = −n 1 . The operators (9) can be used to caracterise solutions of Maxwell's equations in a given subdomain.
is a continuous projector as a mapping from An immediate consequence of the above proposition, combined with the notational convention (8) (in particular the mulitplicative factor 2 in there), is that (A j κ ) 2 = Id, known as Calderón's identity. As the incident field is a solution to Maxwell's equations with wavenumber κ 0 on R 3which includes Ω 1 -, then so that (A 1 κ0 − Id)γ 1 (E inc ) = 0 according to the proposition above. Since on the other hand, it holds that Using Proposition 2, we also see that equation (2) can be reformulated as (A 1 κ1 − Id)γ 1 (E) = 0 on the one hand, and Next, we need to reformulate the transmission conditions (2c)-(2d). Since these conditions are weighted by the permeability coefficients µ j , we introduce scaling operators: . By the definition of the effective wavenumber in (3), we see that ωµ/κ = µ/ and we can define the scaled operators: With this definition, we have (A j κ,µ ) 2 = Id. The transmission conditions then are rewritten as For the sake of conciseness, we will thus choose u j = τ −1 ωµj γ j (E) as unknowns of our problem. As a consequence, (2) can be cast as Now, let us rewrite (12) in a matrix form. We first introduce the continuous map A (κ,µ) : H(Σ) → H(Σ) as a block diagonal operator A (κ,µ) (u) := (A 0 κ0,µ0 (u 0 ), A 1 κ1,µ1 (u 1 )) for any u = (u 0 , u 1 ) ∈ H(Σ), with subscript (κ, µ) representing the dependence on (κ 0 , κ 1 , µ 0 , µ 1 ). The first two rows of (12) can be rewritten as where u = (u 0 , u 1 ) and f = (−2τ −1 ωµ0 γ 0 (E inc ), 0). To enforce transmission conditions, we also need to consider an operator Π : H(Σ) → H(Σ) whose action consists in swapping traces from both sides of the interface. It is defined by Π(u 0 , u 1 ) := (u 1 , u 0 ) for both u 0 and u 1 in H − 1 2 (div, Γ) 2 , so that transmission conditions simply rewrite u = −Π(u). Plugging the transmission operator into (13) leads to the local MTF of (2): where
Injectivity of local MTF for one subdomain
We now prove the injectivity of the operator MTF loc introduced above. Assume that u = In accordance with Theorem 1, we define the (radiating) solution U(x) := G κj (τ ωµj (u j ))(x) for x ∈ Ω j , j = 0, 1. Taking interior traces, scaling both formulas, and using that u solves (16) yields: hence the trace jump τ −1 ωµ0 γ 0 (U) + τ −1 ωµ1 γ 1 (U) = 0, leading to the conclusion that U is a Maxwell solution over the whole R 3 . By uniqueness of the Maxwell radiating solution [19,20], it holds that U ≡ 0, and so u 0 − u 1 = 0, j , j = 0, 1, and repeating the same arguments as above yields We see that U c is solution to a one-subdomain transmission problem with homogeneous source term. Such a problem admits zero as unique solution which We have established the following result.
Spectral analysis of the local MTF operator for Maxwell equations
We are interested in deriving an explicit expression of operator (15) and analyzing the eigenvalues of the preconditioned formulations. As the present geometrical setting is spherically symmetric, this can be obtained by means of separation of variables based on spherical harmonics.
Tangential spherical harmonics
Any tangential vector field can be decomposed as [22,23] u with X n,m := 1 where In the definition above, the functions P m n (t), m ≥ 0, t ∈ [0, 1] refer to the associated Legendre functions, see e.g. [25, §7.12]. The tangent fields X n,m , X × n,m form an orthonormal Hilbert basis of which maps a pair of scalar coefficients to a tangent vector field over Γ (17) can then be rewritten in the more compact form for a collection of coordinate vectors u n,m = [u n,m , u × n,m ] ∈ C 2 .
Local MTF operator over a sphere: separation of variables
The operators coming into play in the expression of the local multi-trace operator (15) are actually (block) diagonalized by this basis. Define J n (t) := πt/2J n+1/2 (t) where J ν (t) are Bessel functions of the first kind of order ν (see [24, §10.2]) and H n (t) := πt/2H [22] and using notations (9), we have Since . According to Lemma 1 in [22], we also have the explicit expression: Here again, defining K 1 (20) and (21) we deduce an explicit expression for the operators A j κ,µ . First of all define the function X # 2 n,m by the expression which we can also simply denote X # 2 n,m = diag(X n,m , X n,m ). Then X # 2 n,m should be understood as a linear operator that maps an element of C 4 to a pair of tangent vector fields over Γ. Any element u = (u, p) ∈ H − 1 2 (div, Γ) 2 decomposes as u(x) = n,m X # 2 n,m (x) · u n,m where u n,m ∈ C 4 are coordinate vectors that do not depend on x. In this basis, the operator A j κ,µ admits the following matrix form We can reiterate the notational process used above, and introduce the field X # 4 n,m := diag(X # 2 n,m , X # 2 n,m ) which also writes in matrix form .
Then, any element u = (u 0 , u 1 ) ∈ H − 1 2 (div, Γ) 2 × H − 1 2 (div, Γ) 2 can be decomposed as u(x) = n,m X # 4 n,m (x) · u n,m where u n,m ∈ C 8 are coordinate vectors that do not depend on x. Then the multi-trace operator (15) is reduced to matrix form in this basis
Accumulation points
We can now study in more detail the symbol of the boundary integral operators introduced in the previous section. To be more precise, we examine their behaviour for n → +∞. First of all, from the series expansion of spherical Bessel functions given by [24, §10.53], we deduce that, for any fixed t > 0, it holds that Since Bessel functions are expressed in terms of convergent series of analytic functions, we can derive the above asymptotics. This leads to the following behaviours for the derivatives, J n (t) = t n n! 2 n (2n 8 One can combine these asymptotics to obtain the predominant behaviour of the functions coming into play in the boundary integral operators expressions of the previous section. Specifically, we find − 2iJ n (t)H n (t) ∼ n→∞ −t/n, Define T n ∈ C 2×2 by T n (u 1 , u 2 ) := (u 1 , u 2 /n). From this, we conclude that, as n → +∞, we have are constant matrices independent of n given by Next, define T #2 n ∈ C 4×4 by T #2 n (u 1 , u 2 ) = (T n (u 1 ), T n (u 2 )) for any pair u 1 , u 2 ∈ C 2 i.e. T n = diag(T n , T n ). Then, using the above results, the asymptotic behaviour of the matrix On the other hand, we also have Finally, let us define T #4 n ∈ C 8×8 by T #4 n (u 1 , u 2 ) = (T #2 n (u 1 ), T #2 n (u 2 )) for any u 1 , u 2 ∈ C 4 i.e. T #4 n = diag(T #2 n , T #2 n ). Then we have the asymptotic behaviour Remark 1. It is important to observe that MTF ∞ loc does not depend on n. Since the eigenvalues of MTF loc [n] coincide with the eigenvalues of T #4 n · MTF loc [n] · (T #4 n ) −1 , this shows that the spectrum of MTF loc [n] converges toward the spectrum of MTF ∞ loc for n → ∞.
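The large-order behaviour quoted above can be checked numerically. The following sketch is an illustration under the assumption that, as defined in Section 6.2, J_n and H_n are the Riccati-type functions J_n(t) = t j_n(t) and H_n(t) = t (j_n(t) + i y_n(t)) built from spherical Bessel functions; it verifies that −2i J_n(t) H_n(t) approaches −t/n for fixed t and growing n.

# Numerical sanity check (a sketch, not from the paper) of the large-order
# asymptotics used in the accumulation-point analysis.
import numpy as np
from scipy.special import spherical_jn, spherical_yn

def riccati_J(n, t):
    # J_n(t) = sqrt(pi*t/2) * J_{n+1/2}(t) = t * j_n(t)
    return t * spherical_jn(n, t)

def riccati_H(n, t):
    # H_n(t) = sqrt(pi*t/2) * H^{(1)}_{n+1/2}(t) = t * (j_n(t) + i*y_n(t))
    return t * (spherical_jn(n, t) + 1j * spherical_yn(n, t))

t = 1.3  # arbitrary fixed argument
for n in (10, 40, 80, 120):
    product = -2j * riccati_J(n, t) * riccati_H(n, t)
    print(n, np.round(product, 6), -t / n)
# The real part tends to -t/n and the imaginary part vanishes as n grows.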
Now, let us investigate in detail the spectrum of the matrix MTF ∞ loc . First, as an intermediate step, we analyze the spectrum of the matrices: where S ∞ (κ,µ) is the Single-Trace Formulation (STF) operator [7]. A thorough examination shows that they take the form: Trying to compute directly the eigenvalues of the above matrices leads to the conclusion that any eigenvalue λ ± satisfies (λ ± ) 2 = α ± β ± = −( µ 0 /µ 1 ) 2 . Thus, the spectra of both matrices is given by Let us now return back to MTF ∞ loc . We recall that (A j,∞ κ,µ ) 2 = Id, and obtain directly the following identity Taking account of (31) in addition finally leads to the following expression for the accumulation points of MTF ∞ loc S MTF ∞ loc = ± 2 ± iΛ µ , ± 2 ± iΛ .
For numerical experiments and validation, we consider the lossless scattering of Teflon [26] and Ferrite [27], both immersed into vacuum. Their relative permeability and permittivity r , µ r are described in Fig. 2 (left). For each material, we introduce the "Low", "High" and "Very High" frequency regimes with their associate acronyms, corresponding to the excitation of a plane wave with frequency f and wavelength λ := 2π κ0 as represented in Fig. 2 (right). We examine numerically the spectrum of the operator MTF loc . An explicit expression of the eigenvectors is provided by the vector spherical harmonics X n,m and X × n,m , so that S(MTF loc ) = ∪ +∞ n=0 S( MTF loc [n]). Each S( MTF loc [n]) consists in 8 eigenvalues. On each figure below in Table 1, we plot ∪ N n=0 S( MTF loc [n]) (in red) along with the expected accumulation points (in black) for the cases mentioned previously in Fig. 2. We adapt the number of spherical harmonics to the frequency, i.e. we set N = 150, 200, 500 for the LF, HF and VHF cases, respectively. These plots clearly confirm that: (i) the spectrum has no more than 8 accumulation points that systematically admit a modulus greater than √ 2; (ii) the accumulation points do not depend on the wavenumber; and (iii) the expected values of the accumulation points coincide with the calculated one. We notice that the eigenvalues spread around the accumulation points and get closer to 0 with increasing frequency and is likely due to the propagative modes of the local MTF operator (see e.g. [28, Section 6] for acoustics). The latter induces deterioration of the condition number and of the iteration count for iterative solvers.
Stability of local MTF for Maxwell equations
We now establish a generalized Gårding inequality for the local MTF on the unit sphere, by means of separation of variables. First of all, let us derive an expression of the norm on H − 1 2 (div, Γ) in vector spherical harmonics. Such an expression can be obtained by noting that the dissipative counterpart of the EFIE operator (i.e. associated to a purely imaginary wavenumber) is continuous and coercive on H − 1 2 (div, Γ) so that the corresponding bilinear form yields a scalar product. Here G i (x) = exp(−|x|)/(4π|x|) as i = √ −1 is the imaginary unit. The vector fields X n,m and X × n,m form an orthogonal family with respect to this scalar product. As a consequence, to obtain an expression of a norm over H − 1 2 (div, Γ), one can rely on the decomposition of the dissipative EFIE on vector spherical harmonics. First observe that (u, v) −1/2,div = Γ (n 0 × γ 0 t · SL κ (u)) · vdσ. As a consequence, using (20) we obtain and u, v ∈ C 2 such that u(x) = X n,m (x) · u, and v(x) = X n,m (x) · v. From this we deduce the asymptotic behaviour D n ∼ D n := diag(1 + n, 1/(1 + n)) for n → ∞, which yields the expression of an equivalent norm which is explicit when decomposed in spherical harmonics From this we easily deduce the expression of an explicit norm for H(Σ), using the matrix D #4 n := diag(D n , D n , D n , D n ). Next we need to introduce intermediate notations for the predominant behaviour of two key matrices coming into play in the local MTF formulation, namely Since we need to rewrite this formulation variationally, we start by inspecting how the duality pairing decomposes on spherical harmonics. First of all, according to (17), observe that Γ (n j × X n,m )·X n,m dσ = Γ (n j ×X × n,m )·X × n,m dσ = 0 and Γ (n 0 ×X × n,m )·X n,m dσ = 1. As a consequence, considering the vector fields u( 12 To examine coercivity of local MTF on the sphere, we need to study the coercivity of the matrix M · MTF loc [n] as n → ∞. If we look at the asymptotic behaviour of this matrix, taking account of the results of Section 6.3, we obtain the expression Let us introduce a diagonal matrix θ ∈ R 2×2 defined by θ = diag(+1, −1), and denote Θ := diag(θ, θ) ∈ R 4×4 . From (39), it clearly follows that (−1) j M · A j κj ,µj [n] · Θ is a real valued diagonal positive definite matrix. On the other hand (M · Θ) = −M · Θ. As a consequence we finally conclude that there exists c > 0 independent of n such that Since the constant c > 0 is independent of n, summing this inequality over n, and taking account that MTF ∞ loc [n] is the asymptotic behaviour of MTF ∞ loc [n], we finally obtain the following coercivity statement.
Preconditioning the local MTF for Maxwell equations
In this section, we introduce a closed formula for the inverse of the multi-trace operator and propose robust preconditioners for the formulation. First, let us rewrite MTF loc as: and introduce the block diagonal STF operator of (29) with S (κ,µ) known to be invertible and (S (κ,µ) ) 2 a second-kind Fredholm operator for smooth surfaces. Finally, we also introduce K (κ,µ) := diag(K (κ,µ) , K (κ,µ) ) with K (κ,µ) := (A 0 κ0,µ0 − A 0 κ1,µ1 ) in (28) being a compact operator on smooth surfaces.
We recall the inverse formula for 2 × 2 block matrices, combining (i) and (ii) in [29, Theorem 2.1], which is also valid for bounded linear operators.
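The cited lemma is not reproduced in the text; for reference, a standard 2 × 2 block inverse identity of the kind presumably being invoked (stated under the assumption that the diagonal blocks and the Schur complements are invertible) reads:

\[
\begin{pmatrix} A & B \\ C & D \end{pmatrix}^{-1}
=
\begin{pmatrix}
(A - B D^{-1} C)^{-1} & -A^{-1} B \,(D - C A^{-1} B)^{-1} \\
-D^{-1} C \,(A - B D^{-1} C)^{-1} & (D - C A^{-1} B)^{-1}
\end{pmatrix}.
\]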
Theorem 3. The exact inverse of the multi-trace operator is given by Proof. This result is straightforward by application of Lemma 1 to and using identities (A 0 κ0,µ0 ) 2 = (A 0 κ1,µ1 ) 2 = Id, leading to .
From Theorem 3 we learn that the multi-trace operator can be closely related to the inverse of the single-trace operator. Now, remembering that S 2 (κ,µ) is a compact perturbation of the identity for smooth surfaces, it could be appropriate to replace S −1 (κ,µ) by S (κ,µ) . Using Theorem 3, we state the following important result: Proposition 4. Introduce the following operator: Then B (κ,µ) · MTF loc = S 2 (κ,µ) . Also, if κ 0 = κ 1 and µ 0 = µ 1 , then B (κ,µ) · MTF loc = 2Id. In parallel, we introduce the usual squared operator preconditioner and detail its properties: Theorem 4. The square of MTF loc is given by: In addition, in [30, §5.1] it was already pointed that, if κ 0 = κ 1 and µ 0 = µ 1 , then MTF 2 loc = 2Id. In [31] such properties were used to investigate the close relationship between local MTF and optimized Schwarz methods. A (κ,µ) , which provides
Remark 2. Sometimes, the MTF is preconditioned by
Similarly, one could use Π as a preconditioner, which is a cost-free alternative due to the sparse nature of this operator. Those solutions allow to obtain accumulation points with positive real part but are not a compact perturbation of identity. We did not incorporate these preconditioner in our analysis due to GMRes iteration counts close to MTF loc .
Preconditioning: Clustering properties
The novel preconditioner B (κ,µ) proposed here appears to be a second-order approximation of the inverse operator, while MTF 2 loc could be considered a first-order approximation of the inverse operator. We can expect this second-order property, relative to a first-order property, to imply: (i) faster convergence towards zero of the singular values that lie close to the cluster and (ii) increased spreading of the outlying singular values, with direct consequences for iterative solvers. We refer to [32, Section 5] for results concerning the convergence of iterative solvers applied to (operator) preconditioned schemes.
Notice that B (κ,µ) does not involve new operators and is straightforwardly computable from the knowledge of MTF loc. Still, it involves another operator product, which would yield a preconditioner requiring two matrix-vector products in case of discretization with adapted function spaces. Taking account of (31) finally leads to the expression for the accumulation points of all aforementioned operators. Accumulation points for the last formulation are surprisingly simple as they rewrite as Υ²_µ, Υ²_ε = {2 + µ_r + 1/µ_r, 2 + ε_r + 1/ε_r}.
Besides, we define Υ := min(Υ µ , Υ ) > 1. As stated before, the local MTF operator has no more than eight accumulation points, the latter being reduced to 4 accumulation points when using squared operator preconditioning, under the requirement of performing a two matrix-vector products at each iteration of iterative solvers. Their barycenter is located at 2.0 independently of the medium parameters, allowing for further clustering properties (see [33] for the analogy between one big cluster and several small ones). Finally, the novel approximate inverse has not more than two accumulation points, whose center is bounded away from zero, at the price of performing two additional matrix-vector products at each iteration of linear solvers. Notice that the two accumulation points of B (κ,µ) · MTF loc and their midpoints are parameter dependent. Still, these can be rescaled by a factor Υ 2 if needed to bring their values closer to one. In Fig. 3, we plot the eigenvalue distribution for the preconditioned operators for the Teflon and the LF case and remark that the accumulation points coincide with their expected values. Next, we decide to normalize the matrices for comparison purposes, namely, we compare √ 2 −1 MTF loc , MTF 2 loc /2 and Υ −2 B (κ,µ) · MTF loc . In Table. 2, we represent the eigenvalue distribution of all proposed operators for the three frequency ranges. These are significant, and show that from left to right: (i) the number of accumulation points diminishes and we observe stronger clustering close to the accumulation points while (ii) the outlying eigenvalues are more spread for increasing frequencies as expected.
Preconditioning: Numerical Experiments
We consider a partition of the sphere into disjoint planar triangles and assemble the local MTF operator and the preconditioners with the open-source Galerkin boundary element library Bempp 3.3 [34]. A Bempp notebook server is easily accessible through Docker 1. The meshes and the simulations are fully reproducible as a Python Notebook 2. Bempp allows for Calderón-based preconditioning through barycentric refinement and has the Buffa-Christiansen (BC) function basis implemented (refer to [35] and the references therein). We apply mass preconditioning to all formulations (the strong form in Bempp) in order to obtain matrices whose condition number is bounded with the meshsize, and we study the eigenvalues and condition numbers of the induced operators along with the restarted GMRes(20) convergence [36] and solver times. We set the relative tolerance of GMRes(20) to 10^-8 and perform simulations on a 64-bit Linux server with 4 GB RAM per core.
We decide to focus on the LF and HF Teflon scattering problem. For the LF (resp. HF) case, we consider two meshes corresponding to precisions of r 0 = 10 and r 1 = 15 elements per wavelength, referred to as cases N 0 and N 1 , with N 0 < N 1 the size of the induced linear systems for MTF loc (detailed in Table 3). The incident wave is a plane wave polarized along z-axis and traveling at a θ = π 4 -angle. The LF case is assembled in dense mode while we use hierarchical matrices for the HF case with ACA compression with relative tolerance 10 −3 . To begin with, we summarize the parameters for each case in Table 3. To provide an extension of the results, we introduce the HF scattering of a complex shape, namely the unit Fichera Cube-the unit cube with a reentrant corner, referred to as (HF) . For comparison purposes, we solve the problem with the preconditioned Single-Trace-Formulation (STF) as a reference [7]. We verify that both the local MTF and STF lead to the same current densities. For example, the LF case with N 0 leads to a 1.38% (resp. 0.563%) relative error in L 2norm for the exterior Dirichlet (resp. Neumann) trace. Similarly, we obtain 0.767% and 0.525% for (HF) with N 1 . These relative errors remain the same for all preconditioners.
Case
(LF) (HF) (HF) Parameter n iter cond. Table 4: Overview of the results for all cases. We detail n iter (resp. t solve ) the number of iterations (resp. total solver times in seconds) of GMRes(20) along with cond., the spectral condition number.
In Table 4, we provide an extensive summary of the results. To begin with, we focus on the rows corresponding to (LF). We remark that all formulations show excellent and mesh stable convergence for GMRes(20) (see n iter ). As a remark, in all cases considered in this section, the unpreconditioned GMRes (20) failed to converge. The condition number ("cond." row in Table 4) for all formulations has a relatively high magnitude but remains stable with mesh (it even slightly betters for the three first cases with increasing N ). Table 5 displays the eigenvalues of the resulting matrices for (LF) and both values of N . We obtain a similar distribution to that expected from spectral analysis (see Fig. 3). Furthermore, the eigenvalues distribution is highly independent of the meshwidth, confirming the results presented so far. Table 5: Eigenvalues distribution for the Teflon LF case of MTF loc (red), MTF 2 loc (blue), B (κ,µ) · MTF loc (green) and STF 2 (purple).
Next, in Fig. 4, we plot the GMRes (20) convergence for N 0 (dashed line) and N 1 (solid line). We remark that the convergence behavior are very similar for each color. Also, the "second-kind" preconditioners-STF 2 , MTF 2 loc and B (κ,µ) · MTF loc -outperform MTF loc . Afterwards, we consider both HF cases. First acknowledge that all remarks discussed before apply to those cases. In Table 4 (columns (HF) and (HF) ), we verify the mesh independence, as the number of iterations remains almost exactly the same with increasing mesh density. We represent the convergence of GMRes of (HF) (resp. (HF) ) in Fig. 5 (resp. Fig. 6). Again, the second-kind preconditioners outperform simple mass matrix preconditioning in terms of convergence of GMRes (20), as well as in solver time (refer to columns t solve in Table 4). Concerning mesh independence, the convergence curves for (HF) in Fig. 6 are almost superposed, evidencing a strong mesh independence for all cases. To finish, B (κ,µ) ·MTF loc appears to converge around twice as fast as MTF 2 loc (see n iter in Table 4) at the cost of 3 versus 2 matrix-vector products per iteration, benefiting a priori to MTF 2 loc . More surprisingly, B (κ,µ) · MTF loc converges at the same rate as STF 2 for the HF cases despite a two-fold increasing in degrees of freedom due to the use of multi-trace space.
Concluding remarks
This article paves the way to show well-posedness of the local MTF applied to electromagnetic wave scattering. As pointed out in Section 1, the results in this paper can be generalised to other smooth surfaces by compact perturbation. Further research includes theoretical and numerical results for multiple domains and domains with junction or triple points. Concerning the MTF linear system preconditioning, our research hints at applying the fast preconditioning technique of [16,32] to produce lower requirements of matrix-vector products for the preconditioners (applicable to both MTF 2 loc and B (κ,µ) · MTF loc ). Fast convergence for the Fichera cube showed the applicability to complex (non-smooth) shapes. Finally, the novel preconditioner B (κ,µ) proposed here paves a way toward high order inverse approximation of operators and Calderón-based polynomial preconditioners [37].
Global detection of human variants and isoforms by deep proteome sequencing
An average shotgun proteomics experiment detects approximately 10,000 human proteins from a single sample. However, individual proteins are typically identified by peptide sequences representing a small fraction of their total amino acids. Hence, an average shotgun experiment fails to distinguish different protein variants and isoforms. Deeper proteome sequencing is therefore required for the global discovery of protein isoforms. Using six different human cell lines, six proteases, deep fractionation and three tandem mass spectrometry fragmentation methods, we identify a million unique peptides from 17,717 protein groups, with a median sequence coverage of approximately 80%. Direct comparison with RNA expression data provides evidence for the translation of most nonsynonymous variants. We have also hypothesized that undetected variants likely arise from mutation-induced protein instability. We further observe comparable detection rates for exon–exon junction peptides representing constitutive and alternative splicing events. Our dataset represents a resource for proteoform discovery and provides direct evidence that most frame-preserving alternatively spliced isoforms are translated.
The left column of histograms shows the relative distribution of missed cleavages over six protease digests; the middle column, the relative distribution of detected peptides (the red line indicates the average value, which is also stated as a number); the right column, the occurrence of amino acids around the cleavage site (marked by a red line).
Figure S5. How to perform a variant extraction in MaxQuant. A, Open MaxQuant and follow the "Tools/Variant extraction" tab. B, Specify a list of BAM files with NGS data (RNA-seq, WGS, WES) and, if needed, change mutation-calling parameters. C, Specify the location of the genomic DNA sequence and genome annotation. Additionally, define folders for temporary and final files.
Figure S7. How to detect alternative splice events jointly for proteomics and transcriptomics data in Perseus. A, Open the Perseus software and go to the "Load/NGS data upload" activity. B, Specify the location of the peptide.txt files from MaxQuant, the transcriptomics BAM files, the genome DNA sequence, and the annotation.
Figure S9. Properties of MS-detected peptides spanning spliced exon-exon junctions. A-F, Percent of MS-identified splice junctions as a function of transcriptional coverage, measured as the logarithm of the read count (reads per million, RPM). Splice junctions are further subdivided into constitutive sites, i.e., present in all isoforms of specific genes, and exclusion/inclusion sites, involved in exon-skipping alternative splicing. Figures A-C show statistics for all exon-skipping events, and D-F for in-frame exon-skipping events. The percentage of identified splicing sites was calculated among events sorted by transcription coverage using sliding windows of various lengths: 100 (A and D), 500 (B and E), and 1000 (C and F) events. Note that panel E is identical to Figure 5D. G-I, the same as D-F, but for each protease used in this study, or all combined (Total). Note that panel H is identical to Figure 5E.
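The sliding-window percentages described in the caption can be reproduced with a few lines of analysis code. The sketch below is a generic illustration with assumed column names (log_rpm, ms_identified) and synthetic data; it is not the authors' pipeline.

# Sketch of a sliding-window detection rate: sort splice junctions by
# transcriptional coverage (log RPM) and, within each window of fixed size,
# compute the fraction identified by MS.
import numpy as np
import pandas as pd

def sliding_detection_rate(df: pd.DataFrame, window: int = 500) -> pd.DataFrame:
    ordered = df.sort_values("log_rpm").reset_index(drop=True)
    rate = ordered["ms_identified"].astype(float).rolling(window, center=True).mean() * 100
    cov = ordered["log_rpm"].rolling(window, center=True).mean()
    return pd.DataFrame({"mean_log_rpm": cov, "percent_identified": rate}).dropna()

# Toy usage with random data standing in for real junction tables.
rng = np.random.default_rng(0)
toy = pd.DataFrame({
    "log_rpm": rng.normal(1.0, 1.0, 5000),
    "ms_identified": rng.random(5000) < 0.3,
})
print(sliding_detection_rate(toy, window=500).head())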
Protein sequence coverage of all identified proteins. A,
Histogram showing the number of protein groups binned by observed sequence coverage.B, Pie chart showing the number of proteins observed in each of five 20% bins of sequence coverage.C, Series of violin plots for all measured combinations of cell lines, proteases, and fragmentation methods.D, A number of reported peptides and peptide spectral matches (PSMs) across large-scale proteomics studies.E, Cellular component gene ontology analysis of proteins with sequence coverage less than 25% are significantly enriched for membrane proteins accordingly to the Fisher's exact test... ... ... ...
Comparison with the neXtProt annotation. A,
The current release of neXtProt (October 2022) was downloaded and cross-mapped to peptides profiled in this study by first converting any proteins demarked by UniProt identifiers to Ensembl Protein identifiers.UniProt to ENSP mapping was obtained from BioMart.Next, Ensembl protein identifiers were mapped to neXtProt accession values via the mapping scheme provided in the October 2022 release.Finally, the number of peptides per neXtProt group were summed across all cell lines used in this study.B, Unique neXtProt proteins delineated by protein existence (PE) rank colored by the number of mapped peptides detected in this study.C, The relative proportion within each PE rank of neXtProt proteins with 0, 1, 2, or 3+ mapped peptides.
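The cross-mapping workflow described above (UniProt accession to Ensembl protein identifier to neXtProt accession, followed by summing peptides per group) can be illustrated with a short script. This is a minimal sketch, not the authors' pipeline; the file names and column names are hypothetical placeholders.

import pandas as pd

# Hypothetical input tables (names and columns are placeholders, not from the study):
#   peptides.tsv          one row per detected peptide, with a UniProt accession
#   uniprot_to_ensp.tsv   BioMart-style mapping UniProt -> Ensembl protein ID
#   ensp_to_nextprot.tsv  mapping Ensembl protein ID -> neXtProt accession
peptides = pd.read_csv("peptides.tsv", sep="\t")           # columns: peptide, uniprot
uni2ensp = pd.read_csv("uniprot_to_ensp.tsv", sep="\t")    # columns: uniprot, ensp
ensp2nxp = pd.read_csv("ensp_to_nextprot.tsv", sep="\t")   # columns: ensp, nextprot

# Chain the two mappings, then count unique peptides per neXtProt accession.
mapped = (peptides
          .merge(uni2ensp, on="uniprot", how="inner")
          .merge(ensp2nxp, on="ensp", how="inner"))
peptides_per_group = (mapped.groupby("nextprot")["peptide"]
                            .nunique()
                            .sort_values(ascending=False))
print(peptides_per_group.head())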
|
2023-03-25T06:17:33.491Z
|
2023-03-23T00:00:00.000
|
{
"year": 2023,
"sha1": "6e7dac372b773a59d85822289fb8785001ff638a",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41587-023-01714-x.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "85a0c98ba387e318358d9c5089107aa346bf027b",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
19409073
|
pes2o/s2orc
|
v3-fos-license
|
Chronic obstructive pulmonary disease in adults with human immunodeficiency virus infection: a systematic review
Objective: To determine the prevalence of chronic obstructive pulmonary disease (COPD) in adults with human immunodeficiency virus (HIV) infection. Design: Systematic review of Medline, Embase, CINAHL, PsycINFO and references from identified papers. Study selection: Studies determining the prevalence of COPD in adults with HIV infection. Independent duplicate data extraction. Study quality was assessed in terms of whether consecutive patients were enrolled, recruitment and follow-up periods were defined, <10% of subjects were lost to follow-up, and subjects with missing data, the method of COPD diagnosis and antiretroviral treatment were described. Data synthesis and results: Of the 911 citations identified, 8 North American studies conducted from 2005 to 2010 were reviewed. The demographics were: mean age 43 to 50.3 years, >60% males, <50% African Americans, 37.1% to 83.3% active smokers, >60% on antiretroviral therapy. COPD was diagnosed by post-bronchodilator FEV1/FVC < 0.7 in three studies, by International Classification of Diseases (ICD-9) codes in three studies, by FEV1/FVC below the 5% lower limit of age-adjusted normal in one, and by pre-bronchodilator FEV1/FVC < 0.7 in another study. The prevalence was 10% to 35%, except for one study that recorded a prevalence of 4% by post-bronchodilator FEV1/FVC < 0.7, but <38% of patients with pre-bronchodilator FEV1/FVC < 0.7 had post-bronchodilator spirometry in that study. Conclusion: COPD is becoming increasingly common in HIV-infected adults as they smoke and live longer due to effective antiretrovirals. However, definite conclusions cannot be drawn and more longitudinal studies are needed. In the meantime, health care providers should be vigilant in screening for undiagnosed COPD and hesitant to attribute respiratory symptoms solely to HIV infection.
INTRODUCTION
Effective antiretroviral therapies have improved the prognosis for patients infected with the human immunodeficiency virus (HIV) [1]. The Collaborations in HIV Research-US (CHORUS) cohort study reported a median survival of 20.4 years for HIV-positive subjects, with a mean age at death of 60.4 years and a cause of death not directly attributable to HIV in 41% [2]. Therefore, in the era of highly active antiretroviral therapy (HAART), patients with HIV infection live longer and experience increased mortality from causes not directly attributable to HIV. Recent data suggest that chronic obstructive pulmonary disease (COPD) is becoming increasingly common in HIV-infected patients on antiretrovirals [3]. However, even studies conducted before the introduction of HAART suggested that HIV-infected patients have increased susceptibility to COPD. Diaz et al. reported that the prevalence of emphysema in 114 HIV-infected patients was 15% compared to only 2% in 44 HIV-negative patients [4]. Poirier et al. reported that the prevalence of airway hyperresponsiveness was 30.1% in 151 HIV-positive smokers compared to 13.3% in 82 HIV-negative smokers [5]. As bronchial hyperresponsiveness can be a potential risk factor for progressive COPD in smokers, the data by Poirier et al. may indicate a potential link between smoking and HIV infection that could accelerate the development of COPD [6]. The fact that 40-70% of HIV-positive subjects smoke in the era of HAART should probably also be considered [7]. The rate of infectious pulmonary complications has been reduced with the new antiretrovirals [8]. However, Morris et al. suggested that colonization with Pneumocystis jirovecii can predict lower FEV1 and FEV1/FVC, as well as clinical airway obstruction, without necessarily having clinical evidence of chest infection [9]. These investigators reported that the presence of Pneumocystis in the lungs, even at low levels in clinically healthy HIV-positive subjects, can produce inflammatory changes similar to those seen in COPD, independent of smoking. This observation lends support to the hypothesis that Pneumocystis can be involved in the progression of airway obstruction in HIV-infected individuals [10]. Recent reports suggested that HIV infection may be an independent risk factor for COPD [11,12].
The prevalence of COPD in HIV-infected subjects has important clinical and policy implications for the future delivery of care. Although previously assessed, it has yet to be summarized in a systematic review. The purpose of this review is to systematically retrieve and appraise the available data on the prevalence of COPD in patients with HIV infection.
Search Strategy
The review followed the methodology of systematic reviews [13,14,15]. Two researchers independently searched the literature using the Ovid search platform of the CINAHL (1981 to October 2010), EMBASE (1980 to October 2010), Medline (1950 to October 2010) and PsycINFO (1967 to January 2010) databases. The following search terms were identified through the Medical Subject Headings dictionary [16]: HIV, COPD, Chronic Obstructive Pulmonary Disease, Chronic Obstructive Lung Disease, Chronic Airflow Obstruction, Chronic irreversible airflow obstruction, Chronic Bronchitis, Emphysema. All possible combinations of terms were used in the literature searches to maximize the references retrieved. The library of Cochrane reviews was consulted to verify whether a given review had already been made [17]. A hand search was also performed of the reference lists of the relevant review articles and of the articles identified in the electronic search, regardless of the initial publication language. Annals of relevant congresses were reviewed, searching for presented but not published studies.
Eligibilty Criteria
Potentially relevant studies were identified by title, then by abstract, then by full text. Studies were selected for review if they met the three following inclusion criteria: 1) adults (>16 years old); 2) HIV infection as defined by the World Health Organization [18]; and 3) COPD documented in patients diagnosed with HIV infection. All retrieved reports were checked for inclusion criteria in a blinded fashion by two authors. In the case of duplicate publication, the largest study, if applicable, was included. Any disagreement between the investigators was solved independently by a third experienced reviewer.
Exclusion Criteria
Studies were excluded first by title, then by abstract and finally by full text if they were: 1) case reports, as studies with small samples have a greater chance of publication bias [19]; 2) narrative pieces without data to support observations (editorials, policy recommendations, opinion surveys and commentaries); 3) enrolling patients with HIV exclusively from hospital inpatient wards, since the rate of COPD in hospital inpatients may be biased upwards compared to patients at a similar stage of disease who were not hospitalized, and hospital inpatients may progress faster to acquired immune deficiency syndrome and therefore be more susceptible to comorbidities, including COPD [20]; or 4) using International Classification of Diseases (ICD-9) codes to diagnose COPD without provision to improve the accuracy of this source of data by including inpatient codes at least once and outpatient codes at least twice [21].
Description of Methods to Diagnose COPD
The criteria for COPD diagnosis are as follows: 1) British Thoracic Society: FEV1/FVC < 0.70 and, if FEV1 > 80% of predicted, the presence of symptoms such as cough and dyspnea [22]. 2) European Respiratory Society/American Thoracic Society: post-bronchodilator FEV1/FVC < 0.70 [23,24]. 3) Global Initiative for Chronic Obstructive Lung Disease: post-bronchodilator FEV1/FVC < 0.70 [25]. For the use of ICD-9 codes to diagnose COPD, the relevant guideline suggests that the code is to be used when the medical record documentation substantiates obstructive lung disease, including, among other tests, spirometry according to the ATS protocol [26].
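As a concrete illustration of the fixed-ratio spirometric criterion used by the ERS/ATS and GOLD definitions above, the short sketch below flags airflow obstruction when the post-bronchodilator FEV1/FVC ratio falls below 0.70. The 0.70 threshold is taken from the text; the example values are hypothetical.

def has_airflow_obstruction(fev1_l: float, fvc_l: float,
                            post_bronchodilator: bool = True,
                            threshold: float = 0.70) -> bool:
    """Return True if the FEV1/FVC ratio is below the fixed 0.70 cut-off.

    Guideline definitions require post-bronchodilator spirometry; a
    pre-bronchodilator ratio alone tends to overestimate prevalence.
    """
    if fvc_l <= 0:
        raise ValueError("FVC must be positive")
    if not post_bronchodilator:
        raise ValueError("post-bronchodilator values are required")
    return (fev1_l / fvc_l) < threshold

# Hypothetical example: FEV1 = 2.1 L, FVC = 3.4 L -> ratio ~0.62 -> obstruction
print(has_airflow_obstruction(2.1, 3.4))  # True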
Data Extraction, Synthesis and Analysis
For each eligible study data were extracted on number of patients included, gender, age, race or ethnicity of participants, intravenous drug abuse, men who have sex with men, smoking status, history of Pneumocystis carinii or bacterial pneumonia, source of recruitment, CD4 and HIV viral load levels, treatment with HAART, study design and key exclusion criteria, study quality and inclusion criteria, method use to diagnose COPD and respiratory symptoms.Outcome assessed was the prevalence of COPD.Data extraction was performed by one author and cross-checked against extraction by another reviewer.
Due to qualitative and quantitative heterogeneity in patients diagnosed with HIV infection a quantitative synthesis was not possible.A descriptive and qualitative analysis was therefore undertaken.
The quality of eligible studies was assessed on the basis of the following features [27]: 1) consecutive patients were enrolled 2) the recruitment period was defined 3) the follow-up period was defined 4) subjects lost to follow-up were described 5) subjects known to be alive but lost to follow-up were < 10% 6) respiratory symptoms were assessed with a standardized questionnaire 7) the method to assess clinical data was clearly described (interview and standardized medical records versus survey and standardized medical records) 8) management of subjects with missing data was described 9) Antiretroviral treatment was described as standard, at least three antiretroviral agents from at least two different classes of medications, or otherwise, if applicable.
Search Results and Study Characteristics
The flow chart of the search results is presented in Figure 1. The search identified 911 citations, from which 836 abstracts and 75 full-text publications were retrieved. A total of 8 non-interventional studies conducted in the USA during the period 2005-2010 were eligible for review. The characteristics of the patients included in the studies assessing the prevalence of COPD in HIV-infected patients are presented in Tables 1 and 2. Three studies evaluated veteran outpatients [12,28,29], four evaluated outpatients of HIV or sexually transmitted disease clinics [9,30-32], and one evaluated intravenous drug users recruited from the community [33]. Five studies were prospective observational in design and three were cross-sectional surveys. The mean age of patients ranged from 43 to 50.3 years. The age range was 32 to 70 years, but in four out of eight studies it was 39-55 years. Men made up more than 60% of the population in all but one study. African Americans were less than 50% of the population in all but two studies. Illicit drug use was reported in up to 25% of the population in all but two studies. Men who have sex with men ranged from 38% to 49.7%. The prevalence of active smoking ranged from 37.1% to 83.3%, and in four out of eight studies it was above 50%.
History of previous Pneumocystis or bacterial pneumonia ranged from 1.3% to 44.3%, although in six out of eight studies it ranged from 5% to 15.5%. The mean CD4 count varied from 264 cells/mm3 to 484 cells/mm3. The mean HIV viral load was from 2.6 to 6.5 copies per ml. In five out of eight studies more than 60% of the population was on HAART, although the proportion of patients on HAART ranged from 28% to 83.3%. In three of the five prospective observational studies, the period of time from diagnosis of HIV infection to diagnosis of COPD was reported; it was 10.33 years (mean), with a range from 9 to 13 years [30-32].
The design and exclusion criteria for the studies assessing the prevalence of COPD in HIV infected patients are presented in Table 3.Three studies excluded subjects with cough, shortness of breath or fever in the last month before study enrolment [9,31,32].All studies provided data on smoking and illicit drug use for every subject included.This was so because four studies excluded subjects with missing data on smoking or illicit drug use and the other four had no subjects with missing data on smoking or illicit drugs.Similarly all studies provided CD4 level and HIV viral load for every subject included at the time of COPD diagnosis.This was so because four studies excluded subjects without CD4 or HIV viral load and the other four had no subjects with missing data.In three studies the subjects were on standard antiretroviral therapy and one study excluded subjects with other than standard antiretroviral therapy.
Quality Assessment
The quality assessment of the studies assessing the prevalence of COPD in HIV infected subjects is presented in Table 4.All studies enrolled consecutive patients.The recruitment period was described in all but two studies [9,30].The follow-up period was defined in four out of five prospective observational cohorts and it was not applicable for the three cross sectional surveys.Subjects lost to follow-up were fully described in four studies whereas three more studies had no subjects lost to follow-up.Of the four studies who had subjects lost to follow-up but known to be alive at the time of the study three confirmed that these subjects represented less than 10% of the population [11,12,29].Respiratory symptoms were assessed by a standardized respiratory questionnaire in all but one study [9].Data were obtained by interview and standardized medical records in four out of eight studies and by survey and standardized medical records in the other four studies.
Assessment of Prevalence of COPD
The method used to diagnose COPD and the relevant prevalence is presented in Table 5.In three studies COPD was diagnosed with post-bronchodilator FEV1/ FVC < 0.7, in three studies with ICD-9 codes, in one study with FEV1/FVC below the 5% lower limit of age adjusted normal and in one study with pre-bronchodilator FEV1/FVC < 0.7.The prevalence of COPD ranged from 4% to 35%.The highest COPD prevalence of 35% corresponded to clinically free from chest infection HIV positive patients colonized with Pneumocystis [9].In the same study the prevalence of COPD for HIV positive who were not colonized with Pneumocystis was less than 6%.In the study with the lowest recorded prevalence of 4%, obstructive lung function was diagnosed by prebronchodilator FEV1/FVC < 0.7 in 12%, but only 38% of them underwent post-bronchodilator spirometry [30].
In the same study logistic regression was used to demonstrate that COPD was associated with increased odds (OR = 2.25 CI: 1.43 to 3.54) of reporting MRC dyspnea scale > 3 and that HIV was also associated with increased odds (OR = 1.50 CI: 1.08-2.09) of reporting MRC dyspnea scale > 2 [30].Another four studies reported prevalence between 10% and 16% [12,28,29,33].One of the studies using ICD-9 codes, reported prevalence of COPD of 15% when patients were self reported to be examined for COPD and 10% when ICD-9 codes were used [12].The respiratory symptoms in the 8 studies included in this review ranged from 31% to 63%.In the three studies which excluded patients with cough, shortness of breath or fever in the last month before enrolment the respiratory symptoms ranged from 21% to 63%.In three of the eight studies the respiratory symptoms rate was > 40% [30][31][32][33].
Three studies compared the prevalence of COPD between HIV-positive and HIV-negative patients with similar characteristics [12,29,33]. Two of them adjusted for known risk factors of COPD and found that HIV-positive subjects, compared to HIV-negative subjects, were 50% more likely to be diagnosed with COPD by ICD-9 codes (OR = 1.47, CI: 1.01 to 2.13) and 60% more likely to have a diagnosis of COPD by self-report (OR = 1.58, CI: 1.14 to 2.19). The third study found a prevalence of COPD of 15.7% in HIV-positive and 15.5% in HIV-negative subjects [33]. In that study the population was composed entirely of intravenous drug users, and intravenous drug use is a well-known risk factor for COPD [34]. The same authors demonstrated that in HIV patients, in whom the mental health summary score is already adversely affected by HIV, the presence of COPD is associated with a 2.43-unit further decrease in the mental health summary score, which can be translated to an increase in the likelihood of death by approximately 8% [33].
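The odds ratios quoted above come from adjusted logistic regression models in the original studies. As a simplified illustration of how an unadjusted odds ratio and its 95% confidence interval are derived from counts, the sketch below uses a purely hypothetical 2x2 table; it does not reproduce any study's data or adjustments.

import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio and 95% CI from a 2x2 table:
             COPD+   COPD-
      HIV+     a       b
      HIV-     c       d
    Uses the standard log-OR standard error sqrt(1/a + 1/b + 1/c + 1/d).
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts only, for illustration
print(odds_ratio_ci(a=60, b=340, c=45, d=380))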
Discussion
With the efficacy of HAART, the life expectancy of HIV-infected individuals has extended such that they are becoming increasingly susceptible to chronic debilitating diseases like COPD. HIV-positive subjects have reported rates of respiratory symptoms comparable to those observed in elderly smokers; they are diagnosed with COPD at a younger age and smoke more than the general population. HIV and COPD have been reported to exert additive effects on dyspnea and reduction of quality of life. Although no definite conclusion about the prevalence of COPD in HIV can be drawn from the available data, as the reported prevalence varies considerably, it is evident that as HIV patients become an aging population, health care providers should consider COPD when evaluating HIV-infected patients, especially those who present with respiratory symptoms or impairment.
There are comparability issues that should be considered in the interpretation of our findings. Determining the prevalence of COPD in HIV-infected patients requires rigorous longitudinal studies. However, the studies available consisted of 5 prospective cohorts and 3 cross-sectional studies. Most research has been conducted in North American tertiary care university centers in the period 2009-2010. Therefore, the recorded prevalence may not reflect the situation in other continents or at another level of care. The majority of patients included were male smokers, which prevents us from drawing conclusions about the prevalence of COPD in HIV-positive females, who have been reported to have high smoking rates [34]. However, although predominantly male, the cohorts included in our study were ethnically diverse. The case mix varied between the studies, since three of them studied veterans, two studied intravenous drug users and three studied outpatients with HIV. Although the demographics of the studies included in this review were those of patients at risk for HIV and COPD, it could be argued that our results may not necessarily reflect other populations. However, the Veterans Health Care System is the largest provider of healthcare to HIV-infected individuals in the USA, and demographics similar to those included in our review have been described in other studies of HIV-infected non-veterans [35]. The method used to diagnose COPD was ICD-9 codes in three of the eight studies and spirometry in the rest. It has been previously reported that ICD-9 codes are more likely to underestimate than overestimate the prevalence of chronic conditions, and the authors addressed this potential limitation by using a second method to measure prevalence, patient self-report [12,36-38]. The consistency of the prevalence measured with these two methods strengthens the validity of their results on prevalence with ICD-9 codes. Additional strength is added to their findings by the fact that the authors assessed the prevalence of COPD in HIV-infected patients compared to demographically matched HIV-uninfected controls and found it to be 50% to 60% more common. However, spirometry remains the gold standard for the diagnosis of COPD. The previously reported effect of using pre-bronchodilator FEV1/FVC < 0.7 rather than post-bronchodilator FEV1/FVC < 0.7 on the estimated prevalence of COPD should also probably be considered [39].
The question of whether HIV-infected individuals are at sufficient risk to benefit from screening for COPD remains unanswered by the available data. However, the studies included in this review have highlighted potential mechanisms that may explain, at least in part, the pathogenesis of COPD in HIV. Multiple interacting factors have been described. HIV-infected individuals demonstrate elevated systemic and topical inflammatory cytokines, including those involved in the pathogenesis of COPD [40-42]. Episodes of clinical pneumonia and colonization with respiratory organisms may contribute to airway obstruction in subjects with HIV infection [43,44]. Colonization with Pneumocystis jirovecii, the organism associated with human Pneumocystis carinii pneumonia, has been associated with the presence and severity of COPD in HIV-negative subjects [9]. Pneumocystis jirovecii has been identified in the sputum of HIV-positive smokers compared to non-smokers in the absence of active PCP [9]. HIV has been suggested to accelerate premature frailty and aging-related changes in the immune system [45-48]. As a result, HIV may render the lung more susceptible to diseases such as COPD, which has also been suggested to be a disease of accelerated lung aging [49,50]. Smoking, a habit highly prevalent in the HIV population, exacerbates cellular senescence [51]. We cannot but observe that, of the suggested potential links between HIV and COPD, smoking and infection-colonization may be amenable to intervention. Considering that there are approximately 31.3 million adults living with HIV today and 2.3 million newly infected every year, the majority of whom are smokers with a mean age at death of 60 years, the relevant research should probably be considered money well spent [52].
In this systematic review we found that, although definite conclusions about the prevalence of COPD in HIV cannot be drawn from the data available so far, COPD is becoming increasingly common among HIV-infected individuals, as they now smoke and live longer due to highly effective antiretrovirals. We clearly need more longitudinal studies to assess the prevalence of COPD in HIV-infected individuals and the potential links between the two conditions. In the meantime, healthcare providers should be vigilant in screening for undiagnosed COPD and hesitant to attribute respiratory symptoms solely to HIV infection.
Figure 1. Flow chart of literature search results.
Table 1. Characteristics of the studies assessing the prevalence of COPD in adults with HIV. Abbreviations: COPD, chronic obstructive pulmonary disease; HIV, human immunodeficiency virus; MSM, men who have sex with men; PCP, Pneumocystis carinii pneumonia; STD, sexually transmitted diseases; IDU, intravenous drug users. *Data for this study are presented as available so far on www.vacohort.org, accessed 07/11/2010.
Table 2. Assessment of the characteristics of HIV infection in the reports evaluating the prevalence of COPD in adults with HIV infection.
Table 3. Design and key exclusion criteria for studies assessing the prevalence of COPD in adults with HIV infection. Abbreviations: COPD, chronic obstructive pulmonary disease; HIV, human immunodeficiency virus infection; SOB, shortness of breath; NA, not applicable, as there are no missing data. *At least three antiretroviral agents from at least two classes of medications.
Table 4. Assessment of the quality of studies reporting the prevalence of COPD in adults with HIV infection.
Table 5. Assessment of respiratory symptoms, method to evaluate the prevalence, and prevalence of COPD in adults with HIV infection.
|
2017-06-15T02:39:03.384Z
|
2011-04-29T00:00:00.000
|
{
"year": 2011,
"sha1": "56e78b7dea0dccda3fb01ef00e6fb6b7fc21a383",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=4746",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "56e78b7dea0dccda3fb01ef00e6fb6b7fc21a383",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
212961343
|
pes2o/s2orc
|
v3-fos-license
|
Experimental Measurement of Bulk Thermal Conductivity of Activated Carbon with Adsorbed Natural Gas for ANG Energy Storage Tank Design Application
The development of adsorptive natural gas storage tanks for vehicles requires the synthesis of many technologies. The design of an effective Adsorbed Natural Gas (ANG) tank requires that the tank be filled isothermally within a five-minute charge time. The heat generated within the activated carbon is on the order of 150 MJ/m3 of storage volume. The tank can be effectively buffered using Phase Change Material (PCM) to absorb the heat. The effective design of these tanks requires knowledge of the thermal properties of activated carbon with adsorbed methane. This paper discusses experimental measurements of the thermal conductivity of activated carbon with adsorbed methane. It was found that the thermal conductivity within the tank remains almost constant over the temperature and pressure ranges in which ANG tanks will operate.
Introduction
High-pressure gas storage vessels are one of the fastest-growing markets for advanced materials such as composites. Changing and escalating emissions standards are driving a 10 percent annual growth in alternative fuel pressure vessel sales [1]. Composite-reinforced pressure vessels are used for Compressed Natural Gas (CNG) products, buses, and trucks dependent on CNG and hydrogen alternatives to gasoline and diesel. Since they work under high pressures, a failure of a pressure vessel can be very dangerous, causing gas leaks, fires, and even explosions. To address this problem and increase the performance of high-pressure hydrogen storage tanks, a multi-layered pressure vessel design was proposed featuring a dynamic wall capable of absorbing hydrogen [2,3].
The use of natural gas as a transportation fuel is limited mainly by its low volumetric energy density. CNG vehicles use natural gas that has been stored in heavy-walled steel or carbon-fiber/epoxy pressure tanks. Sometimes CNG vehicles have reduced cargo space because of the design and placement of the tank. CNG vehicles use methane stored at 3000 psia (20.7 MPa) to achieve densities of 10 lb/ft3 (160 kg/m3). High cost, limited CNG refueling stations, and the space constraints that CNG pressure vessels impose inside the vehicle make them an impractical alternative to petroleum fuels [4].
Adsorbed Natural Gas (ANG) has become competitive with the CNG method because of the high energy densities it can achieve [5][6][7]. ANG storage tanks use activated carbon as an adsorbent and can achieve storage densities as high as 24 lb/ft3 (384 kg/m3) at 500 psia [8]. The disadvantage of ANG storage comes from the heat generated during the adsorption process. When natural gas is adsorbed on the activated carbon, the gas changes to a semi-liquid phase on the surface of the carbon. The heat transfer through activated carbon with adsorbed methane is the focus of this paper.
The heat of adsorption is equal to or greater than the heat of vaporization. During tank charging, the tank heats up, which limits the amount of gas that can be adsorbed. The heat of adsorption is approximately 150 MJ/m3 of tank volume. A five-minute fast charge of a 0.3 m3 ANG tank would therefore require a heat transfer rate of about 150 kW to maintain isothermal conditions in the tank. A typical vehicle with 10 cubic feet (0.3 m3) of storage has a range of 60 miles (100 km). The bulk thermal properties of the activated carbon and methane gas under charging conditions need to be known so that an optimal design of ANG tanks can be achieved.
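The required cooling power follows directly from the numbers above; this short calculation, using only the figures quoted in the text, shows why a five-minute isothermal fill of a 0.3 m3 tank needs roughly 150 kW of heat removal.

# Heat of adsorption per unit storage volume (from the text)
q_ads_MJ_per_m3 = 150.0   # MJ/m^3
tank_volume_m3 = 0.3      # m^3 (~10 cubic feet)
charge_time_s = 5 * 60    # five-minute fast charge

total_heat_MJ = q_ads_MJ_per_m3 * tank_volume_m3          # 45 MJ released
required_power_kW = total_heat_MJ * 1e3 / charge_time_s   # MJ -> kJ, divided by seconds = kW
print(f"Total heat released: {total_heat_MJ:.0f} MJ")
print(f"Required heat-removal rate: {required_power_kW:.0f} kW")  # ~150 kW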
Phase Change Materials (PCM) used to buffer the tank thermally have been investigated at the Institute of Gas Technology (IGT). They have developed and patented this method [9]. The design that IGT uses is based on a computer model which simulated the adsorption process. Their model does consider changes in thermal conductivity and specific heat of the activated carbon with adsorbed methane. The aim of this research is to discuss experimental measurements of the thermal conductivity of activated carbon with adsorbed methane.
Activated Carbon
The adsorption phenomenon has been intensely studied since World War I, when activated carbons were first used in gas masks. Activated carbon is produced from carbonaceous raw material by carbonization and activation. During carbonization, most non-carbon elements such as oxygen and hydrogen are eliminated by pyrolytic decomposition. The residual carbon atoms group themselves into sheets of condensed aromatic ring systems with a certain degree of planar structure [10]. Figure 1 [11] shows the activated carbon structure. The aromatic sheets are intertwined and give rise to interstices and pore areas. The slit-shaped interstices, called micro-pores, have a width of 0.5-2 nm. The pore areas, which are called macro-pores, are 2-50 nm in width. After carbonization, the material is activated. The activation process enhances the pore structure by removing tarry residues blocking the pores. Adsorption occurs in the slit-shaped spaces between the aromatic sheets. Activated carbon provides an exceptional surface area and a fine pore structure which adsorbs and traps gas particles. Environmental pollution concerns have led to an increase in the demand for activated carbon, which is expected to continue rising in the near future. Activated carbon is used in a broad spectrum of applications, including methane and hydrogen storage, removal of water and air pollutants, food and beverage processing, solvent recovery, decaffeination, industrial pollution control, medicine, sewage treatment, teeth whitening, and biomedical applications, among various others.
Carbon-based materials, such as activated carbons, have shown excellent potential for widespread applications. Activated carbons are carbonaceous adsorbents with an extremely crystalline form and high internal pore structure. A broad range of activated carbon products are available depending on the used raw material and activation method.
Many research studies have been reported in the literature about the adsorption of activated carbons for different applications.
To adsorb methane for storage, upgraded activated carbons using potassium hydroxide was investigated by Jung Eun Park et al. [12]. Improved Polyacrylonitrile-based activated carbon for carbon dioxide adsorption was investigated by Yu-Chun Chiang et al. [13]. They used potassium hydroxide (KOH) to modify activated carbon which provided extra pore volume for adsorption. The adsorption of salicylic acid, acetaminophen, and methylparaben using activated carbons were investigated by Bernal et al. [14]. Their findings indicated that the pharmaceutical compounds have a low level of adsorption abilities in the activated carbon. Aloysius [15] used activated carbon samples at two pyrolysis temperatures and evaluated their adsorption capacity in aqueous solution. Research results showed that activated carbon adsorptive properties were influenced to a great extent by the pyrolysis temperature.
Adsorption Theory
A large variety of adsorbents, including activated carbon, have been investigated for adsorption studies. Karatza et al. [16] investigated the adsorption and desorption characteristics of the mercury vapor with silver nitrate impregnated commercially available activated carbon. It was shown that employed carbon was successful in capturing the mercury and obtained kinetic and thermodynamic parameters were crucial for designing a full-scale unit. Vorokhta et al. [17] compared the CO 2 capture by three-dimensionally ordered micromesoporous carbon with carbon materials, carbon nanotubes, carbon nanohorns, and activated carbon. The capture performance of the proposed carbon was shown to be successful with high thermal stability and lower energy demand. Santonastaso et al. [18] investigated the use of permeable adsorptive barriers with activated carbon to mend the thallium contaminated aquifer. The finite element simulations showed that proposed barrier design can be considered as a successful tool. The proposed design was shown to be a better alternative to a continuous barrier. Hernandez-Monje et al. [19] presented a study in which the energy product from the interaction between three different organic solvents with three activated carbon samples with different physiochemical properties was measured. The highest interaction energy was found with benzene and toluene mixtures with activated carbon, thermally treated at 750 • C.
Adsorption on microporous carbon is best predicted by the Dubinin and Radushkevich equation, known as the DR equation. The modern form of the DR equation is

W = W0 exp[ -(A/(βE0))^n ], with the adsorption potential A = RT ln(Ps/P),

where W0 is the total volume of micropores (mg g-1), β is the affinity coefficient (β for CH4 is equal to 0.5 (mol2 kJ-2) [20]), E0 is the characteristic adsorption energy (kJ/mol), n is a homogeneity exponent which varies from 1.5 to 3, Ps is the saturation pressure of the gas, and Pcr is the gas critical pressure (atm), from which Ps is estimated for supercritical methane.
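A minimal numerical sketch of the Dubinin-Astakhov form given above follows. The parameter values (W0, E0, n) are illustrative placeholders rather than fitted values for the carbons tested here, and the pseudo-saturation pressure Ps = (T/Tcr)^2 * Pcr is one common convention for supercritical methane, not a detail confirmed by this paper.

import math

R = 8.314e-3  # kJ/(mol*K)

def da_uptake(P_atm, T_K, W0=300.0, E0=18.0, beta=0.5, n=2.0,
              Pcr_atm=45.4, Tcr_K=190.6):
    """Dubinin-Astakhov uptake W (same units as W0, e.g. mg/g).

    The pseudo-saturation pressure for supercritical methane is taken as
    Ps = (T/Tcr)^2 * Pcr, an assumed convention (see lead-in).
    """
    Ps = (T_K / Tcr_K) ** 2 * Pcr_atm
    A = R * T_K * math.log(Ps / P_atm)        # adsorption potential, kJ/mol
    return W0 * math.exp(-((A / (beta * E0)) ** n))

for P in (1.4, 6.8, 17.0, 34.0):  # roughly 20, 100, 250, 500 psia in atm
    print(f"P = {P:5.1f} atm  ->  W = {da_uptake(P, 298.0):6.1f} (illustrative units)")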
The DR equation is applicable for a wide range of gases and activated carbons [10]. The better-known Langmuir equation is based on a constant heat of adsorption regardless of coverage; it does not accurately predict adsorption on the highly activated micro-porous carbons used in gas storage systems [21]. The adsorption of methane on ABG-40 and CALGON PCB carbons is shown in Figure 2a,b. The data used in these graphs came from experimental data compiled in reference [22]. Typical characteristic heats of adsorption for methane on activated carbon range from 17 to 18 kJ/mol [23]. Micro-porous carbons tend to have higher characteristic heats of adsorption. The isosteric heat of adsorption for CALGON PCB 12 × 30 is 17.85 kJ/mol [22]. The heat of adsorption limits the amount of gas adsorbed. During the fast charging of ANG storage tanks, this heat must be removed to enhance storage efficiency.
A 300-liter automotive storage tank requires 150 kilowatts of heat transfer during a five-minute isothermal charge. The temperature in the tank during charging will rapidly reach 100 °C [9]. A parametric study on the effect of the heat of adsorption was performed at Michigan State University using a computer model. It was determined that the thermal conductivity and heat capacity of the activated carbon had the largest effect on storage efficiency [24]. The heat capacity of the storage system can be increased by the use of PCM.
Thermal Conductivity
Numerous theoretical and experimental methods are reported in the literature to estimate materials' thermophysical properties. As far as porous media such as activated carbon are concerned, thermal conductivity has been a major research interest, and there have been several investigations into calculating the thermal conductivity of packed beds. Kunii and Smith studied how to predict the effective thermal conductivity of beds of unconsolidated particles containing stagnant fluid [25]. Luikov et al. suggested a formulation to determine the effective thermal conductivity of powdered and solid porous materials over a wide temperature range and in various gas media [26]. Cheng et al. examined a method to estimate the effective thermal conductivity from the packing structure of a packed bed of mono-sized spheres in the presence of a stagnant fluid [27]. Composite adsorbents containing activated carbon and expanded natural graphite have been created, and their adsorption performance characteristics were tested by Wang et al. [28]. The thermal conductivity of a small cryopanel, which consists of a copper panel coated with activated carbon adsorbent, was studied by Verma et al. [29].
The use of activated carbon has made it possible to store the same amount of gas contained in CNG tanks at a much lower pressure. Activated carbon alone is an excellent thermal insulator and, thus, a poor heat conductor. The thermal conductivity of activated carbon ranges from 0.05 to 0.10 W/m·K [30]. The thermal conductivity of methane gas is nearly independent of pressure. Figure 3 shows the thermal conductivity of methane measured at 101.35 kPa. The thermal conductivity of methane at 300 K is 0.0344 W/m·K [10]. The thermal conductivity of saturated liquid methane ranges from 0.089 to 0.189 W/m·K [31]. The thermal conductivity of solids is directly related to the number of free electrons available. During adsorption, the activated carbon's interstices become filled with methane gas at near liquid densities, thus decreasing the mean free path between electron shells. This phenomenon alone should increase the thermal conductivity of the activated carbon. The effect of a transport phenomenon caused by desorption and adsorption was disproved in the computer model of reference [32]: the energy required to liberate the methane molecules from the micro-pores is not available within the temperature gradients present in an ANG storage tank during charge and discharge. The effect of contact resistance between carbon grain particles should also decrease, since surface wetting of the activated carbon is also present. The net effect is an activated carbon with increased thermal conductivity.
Thermal Conductivity Measurement Method
The bulk thermal conductivity of the activated carbon with adsorbed methane was measured by determining the temperature distribution through the test tank from thermocouple probes and then curve-fitting the data. Fourier's law was used to find the mean thermal conductivity for each test. The development of this method follows that of reference [33]. Starting with Fourier's law for heat conduction,

Q = -k A ∇T,

where Q is the heat flow rate (W), k is the thermal conductivity (W/m·K), A is the area through which the heat flows (m2), and ∇T is the temperature gradient (K/m).

In 1D cylindrical coordinates, the radial temperature gradient is ∇T = dT/dr, and the heat conduction equation becomes Q = -kA (dT/dr). Assuming the thermal conductivity is a function of temperature and substituting the area A with 2πrL for heat transfer in the radial direction of a hollow cylinder gives

Q = -k(T) 2πrL (dT/dr),

where L is the height of the cylinder (m) and r is the radius of the cylinder (m).

Solving this expression for k(T), and defining the mean thermal conductivity Km(Tm) and the mean temperature Tm between the two surface temperatures T1 and T2 as

Km(Tm) = [1/(T1 - T2)] ∫ k(T) dT (from T2 to T1),   Tm = (T1 + T2)/2,

the mean thermal conductivity is obtained by integrating across the annulus between the inner radius r1 and the outer radius r2:

Km(Tm) = Q ln(r2/r1) / [2πL (T1 - T2)],

which can be used to calculate Km(Tm) when T(r) is known. It is evident that, after simplification, Km depends only on the two surface temperatures. To reduce the error associated with the individual thermocouple readings used in calculating Km, a least-squares curve-fit of the temperature distribution data was made [34], and the fitted profile was then used to calculate the thermal conductivity. T(r) is found from the experimental measurements of the temperature distribution through the carbon sample in the test tank; it is fitted by least squares to a function with constants A and B and a nonlinearity exponent n of the temperature distribution (n = 1 for constant k). The derivative ∂T/∂r of the fitted function, developed from the test data through logarithmic regression of the temperature and radius data, is then substituted into the expression for Km. The heater power was determined by measuring the impedance of the resistive film and was verified to be constant within the voltage range of 100-130 Volts AC. Since the applied power is AC, the RMS value is the actual power developed by the heater.
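The data reduction described above can be sketched in a few lines: fit the measured radial temperatures to a logarithmic profile and evaluate the mean conductivity from the two surface temperatures. The profile form, the regression routine, and all numbers below are illustrative assumptions, not the authors' actual data or code.

import math
import numpy as np

# Hypothetical steady-state measurements (radius in m, temperature in K)
r = np.array([0.020, 0.030, 0.045, 0.065, 0.090])
T = np.array([323.0, 318.1, 313.2, 308.8, 304.9])

# Least-squares fit of T(r) to A + B*ln(r), the profile expected for
# (approximately) constant conductivity in 1D radial conduction.
B, A = np.polyfit(np.log(r), T, 1)
T1, T2 = A + B * math.log(r[0]), A + B * math.log(r[-1])

# Mean conductivity from Fourier's law integrated across the annulus:
#   Km = Q * ln(r2/r1) / (2*pi*L*(T1 - T2))
Q = 3.0   # heater power through the test section, W (illustrative)
L = 0.15  # heated test-section length, m (illustrative)
Km = Q * math.log(r[-1] / r[0]) / (2.0 * math.pi * L * (T1 - T2))
print(f"Km ≈ {Km:.3f} W/(m·K)")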
Experimental Test Setup
The experimental test setup consisted of the following equipment: data acquisition and control computer, thermocouple signal conditioner, heater power control, gas measurement and control equipment, and an annular thermal conductivity test tank. Figure 4 shows the test setup used to measure the bulk thermal conductivity of the activated carbon with adsorbed methane (note: manual valves not shown in the schematic).
Heater Power Control
The pulse burst method controls the power to the heaters by regulating the amount of time for which the power is switched "on" during a given cycle. The computer program was able to control the power to the heaters from 0 to 100% power in 1% increments with a power cycle of five seconds.
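The pulse-burst scheme amounts to time-proportioning the AC supply over a fixed cycle. The sketch below computes the on-time for a requested power level, assuming the 5-second cycle and 1% steps described above; the hardware interfacing is omitted.

def pulse_burst_on_time(power_percent: int, cycle_s: float = 5.0) -> float:
    """On-time (seconds) per control cycle for a requested power level.

    Power is commanded in whole-percent steps from 0 to 100; the heater is
    switched fully on for that fraction of each cycle and off for the rest.
    """
    if not 0 <= power_percent <= 100:
        raise ValueError("power must be 0-100%")
    return cycle_s * power_percent / 100.0

for p in (1, 10, 50, 100):
    print(f"{p:3d}% power -> heater on {pulse_burst_on_time(p):.2f} s of every 5 s")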
Gas Measurement and Control
The gas measurement and control equipment consisted of the DASH-16 board, refrigeration vacuum pump, pressure regulators, pressure transducers, solenoid valves, manual valves, CNG tank, bellows type gas meter, 1/2 high pressure stainless steel tubing, ANG gas control box, and an MKS mass flow meter controller. The DASH-16 was interfaced to the ANG gas control box to control and monitor gas flow rate and pressures. The ANG Gas Control box was designed to operate in manual and computer-controlled modes. The charge and discharge solenoid valves controlled flow out of the CNG tank and test tank, respectively. The discharge flow rate was controlled and measured by an MKS 558A mass flow meter and an MKS 1559A mass flow controller calibrated for methane. The MKS unit uses an electromagnetic proportioning valve for flow control and a laminar heated tube sensor for mass flow determination. During discharge of the test tank, the amount of gas in the tank can be determined from the discharge flow rate integrated by the computer. A bellows type gas flow meter was modified with digital counter and used downstream of the MKS unit to measure the gas volume.
Test Tank
Testing the bulk thermal conductivity of the activated carbon with adsorbed methane was performed by constructing a test tank in which the radial heat flow could be controlled and temperature profile measured. The American Society for Testing and Materials (ASTM) method for measuring thermal conductivity of granular media used a guarded hot plate system; however, an annular design was necessitated by the pressure requirements of the methane on the activated carbon. The test tank shown in Figure 5 consisted of two circular plates, a heater pipe insert, and a tank body pipe section. The tank body section was seated in the o-ring grooves of the plates on the top of vinyl o-rings. Welded to the center of the top plate on the inside of the tank was the heater pipe insert. Kapton film heaters matching the inner circumference of the heater pipe insert were centered and attached with adhesive to the inside of the pipe. Two guard heaters were placed outside of the heater test area with power ratings of 28 watts per inch. After electrical connections were made, the heater pipe insert was filled with urethane foam insulation. Type K thermocouples were used to control the power to the heaters. Thermocouples were also located on the inside surface of the tank body section duplicating the vertical location of the heater thermocouples. These thermocouples were used to monitor axial heat flow across the test section. The radially strung thermocouples were used to measure the temperature profile of the activated carbon.
Heater Pipe Insert
The heater pipe insert was tested to verify the assumption of 1D heat transfer in the test section of the tank. A polyvinyl-chloride (PVC) pipe was used as the tank body. The heaters in the heater pipe insert were controlled by the computer system to maintain a 122 °F (50 °C) surface temperature. After steady state was reached, the radial temperatures were measured at 1/4 inch (6.35 mm) increments for each vertical hole. The temperatures were recorded by inserting a 1/16 inch (1.59 mm) thermocouple probe into the holes and moving the probe inward 1/4 inch (6.35 mm) after each measurement until the heater pipe insert was reached. The results show that the axial temperature variation in the test section area is negligible, thus verifying the assumption of 1D radial heat transfer.
The computer code also performed data logging. All temperatures, pressures, and line voltage were monitored continuously and displayed. The calculations for the thermal conductivity were performed and also displayed. Additional subroutines were written for charging and discharging of the test tank.
Carbons Tested
Two granular activated carbons were tested. The first carbon tested was Alamo brand ABG-40 activated carbon. The second carbon was CALGON PCB 12×30. Table 1 shows the data for these granular carbons provided by the manufacturers.
Steady State Tests
Each carbon was tested as follows. The activated carbon was poured into the test tank. The test tank was then agitated and refilled with more carbon. The refilling process was repeated until no further settling of the activated carbon occurred. The test tank was then sealed and evacuated using the refrigeration vacuum pump. The vacuum pump was operated for several days while the tank heaters maintained a 100 °C temperature to fully outgas the activated carbon. After the outgassing process, the thermal conductivity of the activated carbon was measured. To measure the thermal conductivity, a fixed power setting was maintained at the heater test section. The guard heaters, under PID control, matched the test section heater's temperature. The steady-state temperature profile was then recorded. The thermal conductivity was measured under vacuum and with methane adsorbed. The methane was maintained at constant pressure using the computer control program. Tests were performed with methane at 20, 100, 250, 350, and 500 psia. Power to the heater test section was varied from 1% to 10%. Each test was run for 24 hours to ensure steady-state conditions. When test results were questionable, tests were repeated under the same conditions.
Results and Discussion
Vacuum tests found the thermal conductivity of the activated carbon samples to be about 0.033 W/m·K. Typical values from the literature range from 0.05 to 0.10 W/m·K [30]. The experimental values are slightly lower than the values listed in the literature. The lower values may be the result of the complete out-gassing of the activated carbon, which may not have been performed in the literature studies. Thermal contact resistance at the interface between the heater insert pipe and the activated carbon could also cause the thermal conductivity measurements to be slightly lower. The thermal contact resistance is considered to be negligible and its effect is averaged out by the least-squares curve fit of the temperature distribution. Tables 2-4 show the test results; the tests shown in Table 4 were performed with a modified test tank. The results for the bulk thermal conductivity of activated carbon with adsorbed methane were surprising. The bulk thermal conductivity increased dramatically with the addition of methane and remained almost constant for all the tests. The measured bulk thermal conductivity was found to be approximately 0.200 W/m·K.
The means and standard deviations of the data given in Tables 2-4 are listed in Table 5. As seen from the table, the calculated standard deviations are relatively small, which indicates that the data values are clustered closely around the mean. In calculating the standard deviations, data values corresponding to P = 0 psia were ignored.
Other researchers have also studied the thermal conductivity of activated carbon. Wang et al. [35] studied the adsorption performance of activated carbon-methanol systems and stated that while the thermal conductivity of a granular activated carbon bed was 0.017 W/m·K, the thermal conductivity of a solidified activated carbon bed ranged from 0.27 to 0.34 W/m·K. Kuwagaki et al. [36] reported estimated thermal conductivity values between 0.17 and 0.28 W/m·K. Py et al. [37] stated that the thermal conductivity ranges from 0.1 to 0.2 W/m·K for unconsolidated and consolidated activated carbon beds. The number of thermocouples was increased and they were spaced 1/4 inch (0.635 cm) apart. The tests were run based on the hypothesis that the bulk thermal conductivity would be proportional to the amount of methane adsorbed; one would therefore expect the data to follow the adsorption curves for methane. Based on this hypothesis, the thermal conductivity would increase rapidly between 0 and 100 psia and gradually increase to a maximum at 500 psia. This is evident from the data shown in Tables 2-4. As shown in Table 2, at high pressures the thermal conductivity shows a slight increase with increasing temperature. However, the presented results show that the bulk thermal conductivity measurements remained approximately constant over both the temperature and pressure ranges. The measured bulk thermal conductivity was higher than what could be produced by a combination of the activated carbon and gas-phase methane alone. Figure 6, a 3D representation of the bulk thermal conductivity, is generated from Tables 2-4; since some of the data points are very close to each other, not all are visible in this figure. Using Tables 2-4, results are also presented in Figures 7-9, respectively. With the exception of P = 0 psia, Figure 7 clearly shows that, at all temperatures and pressures, the bulk thermal conductivity of activated carbon is approximately 0.2 W/m·K. As shown in Figures 8 and 9, a similar argument can be made for CALGON PCB 12X30 and CALGON-2 PCB 12X30. The data were also analyzed on the assumption that free convection was dominating the heat transfer through the test tank. The correct measurement and characterization of the thermal conductivity of bulk materials such as activated carbon can pose a number of problems; for example, the loss of heat input power, resulting in the generation of heat for heating the activated carbon samples, can be very difficult to quantify [38]. The bulk thermal conductivity for the tests did not dramatically increase with increased power to the test section heater, even though the increased power should have resulted in increased free convection. Free convection therefore does not appear to be a significant mechanism for the increased heat transfer; it was not observed, probably because the resistance to gas flow through the activated carbon is greater than the buoyancy effects on the gas. A transport phenomenon caused by desorption and adsorption from the hot side to the cold side also did not seem to be present: phase-change heat transfer rates are typically orders of magnitude larger and are also temperature sensitive. The increase in bulk thermal conductivity is most likely the result of methane adsorbed in the micro-pores of the activated carbon. The test results from this research showed that, for ANG storage tank design, a bulk thermal conductivity of 0.200 W/m·K should be used.
Heat transfer in the storage tank is much greater than what is predicted when using the thermal conductivity of activated carbon alone. Using the data from the parametric study at Michigan State, the storage efficiency of the ANG tank will increase by 20%. The effect on the design of PCM-buffered tanks will be even greater, since the heat transfer rate is almost 2.5 times greater than the value used in the computer models at IGT.
Conclusions
This investigation to experimentally measure the bulk thermal conductivity of activated carbon was performed by developing a test setup which simulated ANG vehicle storage tanks. The test setup design was computer controlled and allowed for flexibility in the type of experiments that can be performed. The thermal conductivity of the activated carbon with adsorbed methane in the test tank was measured. It was found that the bulk thermal conductivity is six times larger than that for activated carbon alone and 2.5 times larger than the value used in computer simulations. The data from the thermal conductivity measurements resulted in the following conclusions:
• The bulk thermal conductivity will be fairly constant for ANG vehicle applications.
• The heat transfer in the tank is not dominated by free convection in the activated carbon bed.
• The bulk thermal conductivity appears to be the result of methane adsorbed in the micro-pores of the carbon at near liquid densities.
• Transport phenomena caused by adsorption and desorption do not appear to be a mechanism for heat transfer in ANG tank applications.
• The bulk thermal conductivity for CALGON PCB and ABG-40 was found to be approximately 0.200 W/m·K for the pressure range of 20 to 500 psia.
Design of ANG storage systems may be improved using the information found by this research.
|
2020-02-13T09:11:38.013Z
|
2020-02-05T00:00:00.000
|
{
"year": 2020,
"sha1": "a1b1112e71e4c2fb0a5ddfd257b854470d2fe2ff",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/energies/energies-13-00682/article_deploy/energies-13-00682.pdf",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "a1b1112e71e4c2fb0a5ddfd257b854470d2fe2ff",
"s2fieldsofstudy": [
"Environmental Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science"
]
}
|
6461365
|
pes2o/s2orc
|
v3-fos-license
|
Transcriptional profiling of the postnatal brain of the Ts1Cje mouse model of Down syndrome
The Ts1Cje mouse model of Down syndrome (DS) has partial trisomy of mouse chromosome 16 (MMU16), which is syntenic to human chromosome 21 (HSA21). It develops various neuropathological features demonstrated by DS patients such as reduced cerebellar volume [1] and altered hippocampus-dependent learning and memory [2], [3]. To understand the global gene expression effect of the partially triplicated MMU16 segment on mouse brain development, we performed the spatiotemporal transcriptome analysis of Ts1Cje and disomic control cerebral cortex, cerebellum and hippocampus harvested at four developmental time-points: postnatal day (P)1, P15, P30 and P84. Here, we provide a detailed description of the experimental and analysis procedures of the microarray dataset, which has been deposited in the Gene Expression Omnibus (GSE49050) database.
Experimental approach
Three main brain regions, the cerebral cortex, cerebellum and hippocampus, were targeted in the study. Transcriptomes of these brain regions from 3 Ts1Cje mice and 3 disomic littermate controls were compared at each of the following time-points: P1, P15, P30 and P84. The tissue samples were randomised prior to the RNA extraction, total RNA quantitation and quality/integrity assessment, cRNA preparation and microarray hybridisation steps (Table 1). Fig. 1(A) is a simplified diagram of the experimental design and the data processing flow/criteria used for the study.
Ts1Cje mouse breeding, ethics statement and genotyping
Ts1Cje and disomic mice were generated by mating Ts1Cje males (originally obtained from The Jackson Laboratory, Bar Harbour, USA) with C57BL/6 female mice for over 10 generations. All mice were kept in a controlled environment with a 12-h light/12-h dark cycle and unlimited access to a standard pellet diet and water. Breeding procedures, husbandry and all experiments were performed under the approval of the Walter and Eliza Hall Institute Animal Ethics Committee (Project numbers 2001.45, 2004.041 and 2007.007). Genomic DNA was extracted from mouse tails and genotyping was performed using multiplex PCR with primers for neomycin (neo) and the glutamate receptor, ionotropic, kainate 1 (Grik1) gene as an internal control, as described previously [4].
Tissue procurement
Three female Ts1Cje mice at four time-points (P1, P15, P30 and P84) with sex and age matched disomic littermates were used to avoid the effects of Y-linked genes such as Sry (sex-determining region of the Y chromosome), which contribute to neural sexual differentiation of the brain [5]. All mice were euthanized via cervical dislocation. Procurement of the cerebral cortex, cerebellum and hippocampus was conducted according to a method described previously [6].
RNA extraction and microarray hybridisation
The Qiagen RNeasy Micro kit (Qiagen) with a DNase I digestion step was used to extract total RNA from each tissue according to the manufacturer's instructions. All 72 tissues were randomised prior to RNA extraction to avoid biases ( Table 1). The quality and quantity of each RNA sample were assessed using an Agilent 2100 Bioanalyzer (Agilent). The RNA Integrity Number (RIN) ranged from 7.0 to 10. Six micrograms of total RNA was used to prepare biotinylated cRNA according to the standard Affymetrix protocol (Expression Analysis Technical Manual, 2001, Affymetrix). Hybridisation of labelled RNA samples onto Affymetrix GeneChip Mouse Genome 430 2.0 Arrays was performed according to the Australian Genome Research Facility (AGRF) protocol. A probe cocktail (cRNA at 0.05 μg/μl), which included 1× Hybridisation Buffer (100 mM MES, 1 M NaCl, 20 mM EDTA, 0.01% Tween-20), 0.1 mg/ml Herring Sperm DNA, 0.5 mg/ml BSA, and 7% DMSO was prepared to a total of 300 μl for each sample and 200 μl was hybridised onto a single GeneChip. The chips were incubated at 45°C for 16 h in an oven with a rotating wheel at 60 rpm, washed and stained with streptavidin-phycoerythrin (SAPE) using the appropriate fluidics script on the Affymetrix Fluidics Station 450 (Affymetrix). The GeneChips were scanned using a GeneChip Scanner 3000® (Affymetrix) with GeneChip® Operating Software (GCOS). Fig. 1(B) shows a simplified diagram of the sample preparation.
Microarray data normalisation and analysis
The microarray data were analysed using R (www.r-project.org) and Bioconductor (www.bioconductor.org) [7]. The probe-level intensities for the 72 arrays were background corrected, normalised and summarised using the GC Robust Multi-array Average (GC-RMA) algorithm [8] to obtain gene (probe-set) level summaries (see Supplementary File 1 for the GC-RMA script used). Differential expression between Ts1Cje mice and their disomic littermates at different time-points and in different brain regions was assessed using the limma package [9]. A linear model was fitted for multiple contrasts (corresponding to the Ts1Cje vs disomic comparisons) for each gene using the lmFit procedure, and differential expression was assessed using empirical Bayes moderated t-statistics [10]. P-values corresponding to the moderated t-statistics were adjusted for multiple testing using the false discovery rate (FDR) procedure of Benjamini and Hochberg [11]. Fig. 1(C) shows a simplified diagram of the microarray analysis. Stringent criteria were applied to identify differentially expressed genes (DEGs) from the datasets: t-statistic values of ≥ 4 or ≤ −4 and an FDR of ≤ 0.05. As reported in Ling et al. [12], a total of 317 DEGs were identified from all spatiotemporal comparisons. A top-down screening approach was then used to analyse the 317 DEGs in order to identify any disrupted molecular pathways. Initially, a functional ontology clustering analysis based on all 317 DEGs collectively was performed using the Database for Annotation, Visualisation and Integrated Discovery (DAVID) [13]. The functional clustering analysis was performed under stringent classification criteria (a kappa similarity threshold of 0.85, a minimum term overlap of three, initial and final group memberships of two with a 0.50 multiple linkage threshold, and a modified Fisher-exact P-value or enrichment threshold of 0.05) using the following databases: Biological Biochemical Image Database (BBID), BioCarta database, EC_number, Kyoto Encyclopedia of Genes and Genomes (KEGG) database, PANTHER pathway database and Reactome pathway database [13]. Subsequently, a more refined analysis was carried out involving the DEGs identified from the comparisons based on a specific time-point or brain region. Finally, the significant ontologies identified through all analyses were manually curated based on common genes found to be involved in the ontologies, leading to the identification of 7 significant functional clusters. Fig. 1(D) shows a simplified diagram of the functional clustering analysis.
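The multiple-testing step can also be illustrated outside of R. The sketch below is not the original GC-RMA/limma script (that script is provided in Supplementary File 1); it is a minimal Python illustration, on hypothetical per-gene statistics, of the Benjamini-Hochberg FDR adjustment and of the |t| ≥ 4, FDR ≤ 0.05 filter described above.

```python
import numpy as np

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR adjustment of a 1-D array of p-values."""
    p = np.asarray(pvals, dtype=float)
    n = p.size
    order = np.argsort(p)                          # ascending p-values
    scaled = p[order] * n / np.arange(1, n + 1)    # p * n / rank
    adjusted = np.minimum.accumulate(scaled[::-1])[::-1]  # enforce monotonicity
    out = np.empty(n)
    out[order] = np.clip(adjusted, 0, 1)
    return out

# Hypothetical per-gene moderated t-statistics and raw p-values
t_stats = np.array([5.2, -4.6, 1.1, -0.3, 3.9])
p_vals  = np.array([1e-6, 4e-5, 0.27, 0.81, 2e-4])

fdr = benjamini_hochberg(p_vals)
is_deg = (np.abs(t_stats) >= 4) & (fdr <= 0.05)   # criteria used in the study
print(fdr, is_deg)
```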
Discussion
Here we provide a detailed description of the generation of a 72-array microarray dataset comprising transcriptome profiling data derived from three brain regions at four postnatal time-points from the Ts1Cje mouse model of DS and disomic littermates. The strategy used to identify DEGs between the Ts1Cje and disomic littermate data, together with the functional clustering analysis, is also described. This comprehensive and well-controlled microarray dataset encompasses postnatal developmental stages from P1 to P84 in the cerebral cortex, cerebellum and hippocampus, providing a platform to understand the differences between the Ts1Cje and disomic mouse brain in these regions at the transcriptome level. The analysis of the dataset was fully described and discussed in the study by Ling et al. [12], which demonstrated that interferon-related pathways were significantly dysregulated in the Ts1Cje brain as compared to their disomic littermates.
Gamification Elements used in Employee Retention and Enhancing Employee Productivity
Nowadays, in the IT industry, retention of employees is a crucial task. Companies try hard to create a win-win situation so that both the employees and the companies benefit. Many employees tend to change companies regularly because there is no growth in their careers, no availability of challenging roles, and a lack of recognition. Another important problem is increasing employee productivity. Most roles are saturated because employees keep doing the same work daily and no new challenges are introduced to motivate them. This paper helps to understand and compare the elements IT companies are using to retain their employees and increase the productivity of the company. This study considers some reputed companies in the IT industry, comparing their gamification elements with each other and identifying the effective ones.
Introduction
IT companies have been spending heavily on research to control the attrition rate of employees, which in turn increases the productivity of the organization. In sports, athletes are self-motivated because of their passion for the game, and they perform extremely well to achieve their goals and rewards. In sports there is no need for any extra motivation; people are fully committed towards achieving their rewards. IT organizations therefore use this strategy of gamification to motivate their employees at work, which reduces the attrition rate and in turn increases the productivity of the organization. The gamification elements used differ between organizations. The gamification elements considered here are learning and development, rewards and recognition, employee engagement, and one-on-one meetings with every employee.
Literature Review
Gamification elements are used to make the process well understood and to obtain maximum effort from people towards achieving their goals. "In this study, gamification elements are used in real-time scenarios, and it is confirmed that they motivate people to achieve their goals" (Michael Sailer, 2017). The word gamification is now used in many organizations. "Gamification elements such as leaderboards, rewards and promotions enhance intrinsic motivation and also the productivity of the organization" (Elisa D Mekler, 2017). The spiritual values of employees also play an important role in organizational commitment: "When employees are spiritually experienced at work, they feel more committed to their organization, experience a sense of responsibility and loyalty towards it, and feel less materially committed" (Morteza Raei Dehaghi, 2012). In today's real-life scenario, organizations need talented people to improve productivity. "Improving talent management, such as strategic leadership, enhancing employee quality, performance evaluation and evaluation of expectations, leads to organizational commitment" (Yalcin Vural, 2012). In recent years, job satisfaction and organizational commitment have become more important in working life. "If a job gives satisfaction to talented employees, they ensure that they will stay committed to the organization" (Mehmet Altinoz, 2012). In this technological era, organizations find it difficult to satisfy employees in order to cope with the evolving environment. "In order to increase productivity, effectiveness and job commitment of the employees, the organization must satisfy the needs of employees by providing good working conditions" (Abdul Raziq, 2015). Not only satisfaction but also emotions play a major role in organizational commitment: "Negative emotions in employees tend to make the employees leave the organization" (Oya Erdil, 2014). To improve the retention of employees in small and medium industries, where the attrition rate is high, some HR practices must be made available for the employees: "The major HR practices that must be given to the employees are compensation and benefits, performance management, training and employee relations" (Choi Sang Long, 2014). Learning and development is a process through which employees can learn and develop their skills so that they can be efficient in future roles. "The employees can be retained if the organization can provide for the wants, needs and expectations of the employee" (Roya Anvari, 2013). In every organization, productivity plays an important role. "If the customer information system has been made easy in usage, content and format, it helps the employees to increase the productivity of the organization" (Norfazlina G, 2016). In an organization, the productivity of each and every employee must be equally captured so that appraisal is fair. "Employee-based monitoring devices are used to monitor the employee's performance, productivity, work-related behavior, and learning and development, and, most importantly, this monitoring is transparent to the employees" (David L Tomczak, 2017). This provides fair appraisal, which increases productivity and the retention of employees.
Conceptual Model
This model shows the factors that affect employee productivity and the retention of employees. The factors in the model are Learning & Development, Rewards and Recognition, Employee Engagement and Taking Feedback; these are the important variables considered to be responsible for employee retention and enhanced employee productivity. A pilot study was conducted on enhancing productivity and retention of employees. The questionnaires were then modified and in-depth interviews were conducted to identify feasible solutions.
Questions
1. What are the gamification elements used by the company?
2. Are the gamification elements used monthly, quarterly or yearly?
3. After the introduction of gamification elements, have employee absenteeism and boredom reduced, and by what percentage?
4. By what percentage have retention and productivity increased or decreased after gamification elements were introduced?
5. Are there any drawbacks of gamification that can be resolved in the future?
6. Does gamification provide equal opportunities for all the employees in the organization?
7. Does the introduction of gamification reduce conflicts between employees and managers?
Analysis
To increase productivity and retain employees, the following elements play an important role:
Optimized Hiring
Many organizations hire people who are a mismatch for the organization, which in turn increases the attrition rate. To avoid this, many organizations provide a process called employee referrals, where employees within the organization can refer their colleagues after sharing information about the organization, its work culture and what is expected from new employees. This process decreases recruitment cost, and employees are rewarded with incentives for each new recruit. The process is mainly used for experienced candidates to make sure they need fewer training programs. Incentives are used as a gamification element to engage employees in finding suitable candidates, improving productivity, retaining employees and reducing the cost to the organization.
Effective Induction program
Most organizations now take induction seriously and do their best to educate employees about the organization and what is expected from them in return. Some companies schedule the first one or two months as an induction program for employees to understand the company well. In the induction program, employees are given exposure to the roles available in the project to make sure they are aware of the projects and their functional roles. Some organizations even cross-train their employees during the induction program, and when vacancies arise in the future they use these employees to make sure the project is delivered on time. Employees are given challenging roles, which reduces the stagnation of work in their careers. This motivates employees to work for the organization and stay committed.
Learning and Development
IT companies are investing heavily to make sure all employees are trained on the current application on which they are working and are also cross-trained on other available tools. Many companies have added learning and development to every employee's KRA, and ratings are given accordingly. In some companies it has been made mandatory to complete two to three learning and development courses in order to be eligible for appraisal. In many IT companies, learning and development courses are made available on internal portals to all employees free of cost. The courses differ by position, and they are made available to employees accordingly. Employees can use these courses from anywhere, at any time, to increase their knowledge in their own domain and to prepare for future roles. This decreases boredom in the organization, which in turn increases employee productivity.
Maintaining good relationship with employees
Teamwork is essential in an IT organization; employees cannot complete projects alone, and success can only be achieved through teamwork. Organizations have started providing HR games where employees are put into different groups and must work together as one to resolve issues. Organizations also provide team outings and team dinners where employees get to know each other better and relationships between employees improve. In this way there is less conflict between employees and top-level management.
Improve the work conditions
Organizations must make sure good working conditions are available for every employee. Most organizations have replaced desktop computers with laptops, and each employee is given a laptop so that they can work wherever they are comfortable. Some organizations help employees by providing work-from-home policies, especially for women employees working night shifts or during pregnancy. This also applies to employees who have broken legs or are otherwise unable to travel to the office for work. Employees are also given flexible work schedules to help promote a work-life balance.
Rewards and Recognitions
Employees are rewarded for their performance quarterly and yearly to make sure their productivity is maintained; this also motivates other employees to improve their productivity. The rewards take the form of incentives, gift vouchers and international tours. Some companies provide flash points, which can in turn be used for online shopping. Some rewards are given to the team as a whole for the successful completion of a project, such as a team dinner or incentives for all the employees in that team. Employees are recognized with mementos and internal promotions according to their performance. These recognitions are listed on the organization's leaderboard to motivate other employees to work towards recognition.
Employee Engagement
Organizations have started to concentrate more on this aspect by celebrating all the national festivals in the office and making sure employees do not feel they have been ill-treated. This reduces employee absenteeism and also provides relaxation for the employees. They have also started introducing tech talks, which stimulate employees' imagination to come up with new ideas that are taken into account by the organization. If an idea is a genuine solution, it is applied in the organization and the employees are rewarded for the extra effort they have made.
One-on-One with Every Employee
In this process the organization tries to understand the needs and wants of the employees, such as training, future roles, higher education, whether there is any conflict with managers, new ideas for the project, etc. This helps the organization to know more about its employees and to help them in their areas of weakness. Organizations have started following this process and have made it mandatory for all employees to go through it. This increases employees' trust in the organization.
Taking Feedback
IT companies have started accepting feedback from each and every employee in the organization to make sure that they really grow together as one. The organization as a whole may find it difficult to identify problems in a team or a project, but employees can easily identify problems in such scenarios even though they are not able to resolve the issues alone. Employees use this feedback as a platform to share their views about the company, their manager, the project on which they are working, the wants and needs of the company, and conflicts with team members or managers. Feedback has been made mandatory for every employee, and they must submit it every 6 months or 1 year. A separate team is allocated to understand the views of the employees, and within a limited period of time the problems are resolved.
Inclusion Complexes of Magnesium Phthalocyanine with Cyclodextrins as Potential Photosensitizing Agents
In this work, the preparation of inclusion complexes (ICs) of magnesium phthalocyanine (MgPc) with various cyclodextrins (β-CD, γ-CD, HP-β-CD, Me-β-CD), using the kneading method, is presented. Dynamic light scattering (DLS) indicated that the particles in dispersion possessed mean size values between 564 and 748 nm. The structural characterization of the ICs by infrared spectroscopy (FT-IR) and nuclear magnetic resonance (NMR) spectroscopy provides evidence of the formation of the ICs. The release study of the MgPc from the different complexes was conducted at pH 7.4 and 37 °C, and indicated that a rapid release ("burst effect") of ~70% of the phthalocyanine occurred in the first 20 min. The kinetic model that best describes the release profile is the Korsmeyer–Peppas. The photodynamic therapy studies against the squamous carcinoma A431 cell line indicated a potent photosensitizing activity of MgPc (33% cell viability after irradiation for 3 min with 18 mW/cm2), while the ICs also presented significant activity. Among the different ICs, the γ-CD-MgPc IC exhibited the highest photokilling capacity under the same conditions (cell viability 26%). Finally, intracellular localization studies indicated the enhanced cellular uptake of MgPc after incubation of the cells with the γ-CD-MgPc complex for 4 h compared to MgPc in its free form.
Introduction
Phthalocyanines (PCs) are large, aromatic, organic molecules that consist of a large internal porphyrazine ring, which connects four isoindole units linked together by nitrogen atoms. PCs display a wide range of applications due to their chemical, electronic, structural, and optical properties [1,2]. There is also the possibility for incorporation of a metal atom (Mg, Zn etc.) in the center of their ring to form molecules known as metallophthalocyanines, which results in improved properties. Phthalocyanines and metallophthalocyanines have been initially used as pigments and coloring substances as well as chemical sensors, semiconductors, etc. [3]. PCs are now being studied as promising photosensitizing agents for photodynamic therapy (PDT) of both cancer and non-cancer related diseases [4][5][6][7].
Nowadays, PDT is being investigated thoroughly as an alternative to conventional treatment of both cancer and non-cancer related diseases. During PDT, a non-toxic dye (the photosensitizer) is administered and then exposed to a light source of a specific wavelength depending on the absorption properties of the molecule used [8]. The interaction of light with the photosensitizer leads to the death of the target cells by oxidative stress, which is induced after the stimulation of the photosensitizer and subsequent production of reactive oxygen species (ROS). PCs are considered 2nd generation photosensitizers, along with porphyrins and chlorins [3,4]. Their structure and properties render them excellent candidates for PDT applications, while their synthesis is relatively simple. PCs can also be structurally modified in order to tune their properties, such as their hydrophobicity.

The purpose of the present work was initially to study the inclusion of magnesium phthalocyanine (MgPc) (Figure 1) in natural (β-CD, γ-CD) and chemically modified cyclodextrins (HP-β-CD, Me-β-CD) in order to increase the solubility of MgPc and potentially enhance their photodynamic activity. The ICs were characterized, and the release profile and kinetic modeling of MgPc release from the inclusion complexes were determined. The resulting inclusion complexes were then studied in terms of their optical properties and their ability to produce reactive oxygen species (ROS). PDT activity of free MgPc and its ICs was evaluated in vitro against an A431 skin cancer cell line, and the intracellular localization of the most potent photosensitizing nanosystem was also studied.
Preparation of Magnesium Phthalocyanine Inclusion Complexes (ICs) with Various Cyclodextrins
Magnesium phthalocyanine was incorporated in various cyclodextrins using the kneading method [19,20]. MgPc and the respective cyclodextrin (β-CD, γ-CD, HP-β-CD, Me-β-CD) were added to the mortar in molar ratio 1:1. To create a homogeneous paste, a solution of water:ethanol in ratio 3:2 was added dropwise. The paste was ground for at least 45 min. The final blue solid powder was dried using a high vacuum pump and stored under refrigeration, for further analysis and characterization. The same procedure (kneading method) was applied under solvent-free conditions for MgPc and HP-β-CD, while the corresponding physical mixture of the components was also prepared for the better assessment of the ICs' formation (prepared samples were used for comparison purposes of the FT-IR and/or NMR analysis).
Evaluation of the Stoichiometry of the ICs Using Job's Plot
To determine the stoichiometry of the inclusion complex, a Job's plot was indicatively constructed for the β-CD/MgPc mixture. Phosphate buffer solutions (pH 7.4) with known concentrations of both compounds (MgPc and β-CD) were prepared. The total sum of the two masses was kept constant while their molar ratio varied from 0 to 1 (Equation (1)). The absorbance of each sample was determined using UV-Vis. With these measurements, a graph of (Abs(MgPc) − Abs(MgPc + β-CD)) versus molar ratio was constructed. The stoichiometry was evaluated by identifying the maximum or minimum point in the graph.
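As an illustration of how the stoichiometry is read off such a plot, the short Python sketch below uses hypothetical absorbance-difference data; for a simple host-guest equilibrium the extremum position r_max relates to a guest:host ratio of m:n through r_max = m/(m + n), so an extremum near 0.33 corresponds to 1:2.

```python
import numpy as np

# Hypothetical Job's-plot data: molar ratio r = [MgPc] / ([MgPc] + [beta-CD])
# and the absorbance difference dA = Abs(MgPc) - Abs(MgPc + beta-CD).
r  = np.array([0.10, 0.20, 0.30, 0.33, 0.40, 0.50, 0.60, 0.70, 0.80, 0.90])
dA = np.array([0.05, 0.11, 0.16, 0.17, 0.15, 0.12, 0.09, 0.06, 0.04, 0.02])

r_max = r[np.argmax(dA)]             # molar ratio at the extremum
cd_per_mgpc = (1 - r_max) / r_max    # host molecules per guest molecule
print(f"extremum at r = {r_max:.2f}, beta-CD per MgPc = {cd_per_mgpc:.1f}")
```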
Process Yield
The process yield (PY) is used to determine the suitability of a process chosen for a specific protocol and is calculated by dividing the final mass of the dried inclusion complexes by the total amount of CD and MgPc initially used. For each IC that was produced, the PY was calculated from the following equation (Equation (2)):

%PY = 100 × [mass of the prepared inclusion complex (mg)] / [initial mass of β-CD (mg) + initial mass of MgPc to be encapsulated (mg)]
Inclusion Efficiency of the CD-Phthalocyanines ICs
The inclusion efficiency (IE) describes the percentage of the encapsulated MgPc in the CD-MgPc ICs, relative to the total starting amount of MgPc used (Equation (3)):

%IE = 100 × [mass of the encapsulated MgPc (mg)] / [initial mass of MgPc to be encapsulated (mg)]

The inclusion efficiency of the MgPc in the CD-MgPc ICs was calculated using UV-Vis by determining the quantity of the encapsulated MgPc. All of the UV-Vis measurements were performed on a V-770 UV-Vis Jasco spectrophotometer. For each dried inclusion complex, 10 mL of dimethyl sulfoxide (DMSO) was added into 10 mg of IC and stirred for at least 24 h at room temperature. The samples were then filtered, diluted appropriately, and their absorbance was measured in the range of 500-800 nm. All of the measurements were performed in triplicate.
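Equations (2) and (3) are straightforward to evaluate; the short sketch below (with hypothetical masses chosen purely for illustration) shows the two calculations side by side.

```python
def process_yield(mass_ic_mg, mass_cd_mg, mass_mgpc_mg):
    """%PY: recovered complex mass relative to the total starting mass (Equation (2))."""
    return 100.0 * mass_ic_mg / (mass_cd_mg + mass_mgpc_mg)

def inclusion_efficiency(mass_encapsulated_mgpc_mg, mass_mgpc_mg):
    """%IE: encapsulated MgPc relative to the starting MgPc (Equation (3))."""
    return 100.0 * mass_encapsulated_mgpc_mg / mass_mgpc_mg

# Hypothetical numbers, for illustration only
print(process_yield(85.0, 90.0, 10.0))        # 85.0 %
print(inclusion_efficiency(7.5, 10.0))        # 75.0 %
```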
Dynamic Light Scattering (DLS)
In order to determine the size, polydispersity index, and the ζ-potential of the CD-MgPc ICs by the DLS method, measurements were performed on the Zetasizer Nano ZS Malvern. The samples were prepared by dispersing 1 mg of the CD-MgPc ICs in 20 mL of double deionized water. The final solutions were then stirred and vortexed directly before the measurement. For the measurements, cuvettes of type U (DTS1070) were used. For each sample, the measurements were performed at 25 ± 1 • C and in triplicate. The results are reported as mean ± standard deviation. All of the measurements were conducted for pH 7.4.
Fourier Transform Infrared Spectroscopy (FT-IR Spectroscopy)
FT-IR measurements were performed in order to study the formation of the ICs as well as the interaction between the MgPc and the cyclodextrins. A JASCO FT/IR-4200 spectrometer (Japan Spectroscopic Company, Tokyo, Japan) was used. All the measurements for MgPc, CDs, the final dried ICs, the physical mixture, and the solvent-free kneading product were conducted in the form of KBr pellets, in the scanning range of 650-4000 cm −1 .
Nuclear Magnetic Resonance Spectroscopy (NMR Spectroscopy)
1H-NMR spectra of the CDs, the final dried CD-MgPc ICs, and the physical mixture of HP-β-CD and MgPc were obtained using a Varian 600 MHz spectrometer (Varian, Palo Alto, CA, USA) located at the Institute of Chemical Biology, National Hellenic Research Foundation. The samples were dissolved in deuterated DMSO (DMSO-d6). The chemical shifts are expressed in parts per million (ppm) and the coupling constants (J) in hertz (Hz).
In Vitro Release Studies of the MgPc from the CD-MgPc ICs
The release profile of the MgPc was evaluated under specified conditions. For each IC, 5 mg of the dried final product was added to different glass vials, each one corresponding to a specific time of incubation. In each vial, 2 mL of phosphate buffer (pH 7.4) was added, and the samples were kept in an incubator at 37 °C. At predetermined time intervals, each vial was removed, filtered, and diluted with DMSO in order to determine the MgPc concentration. All of the measurements were done in duplicate.
Kinetic Modeling of the MgPc Release from the CD-MgPc ICs

The kinetic models that are widely applied to describe the release of bio-active substances from drug delivery systems are the zero-order model, first-order model, Higuchi model, and Korsmeyer-Peppas model [21,22]. From the Korsmeyer-Peppas equation, the diffusion exponent (n) can be calculated; once n has been calculated, the mechanism of the release can be described. The Korsmeyer-Peppas equation is

Mt/M∞ = k·t^n (Equation (4))

where Mt/M∞ is the fraction of MgPc released at time t and k is the release rate constant, which can also be written as

log(Mt/M∞) = log k + n·log t (Equation (5))

Based on the aforementioned equations (Equations (4) and (5)), the slope of the log(Mt/M∞) versus log(t) graph is equal to the diffusion exponent n [21,22].

The absorption spectra of MgPc in DMSO and CD-MgPc ICs in PBS (1% v/v DMSO) were recorded at the concentration of 2 µM using a Perkin-Elmer Lambda 35 UV/VIS spectrometer. The samples were freshly prepared just before measurements. All measurements were carried out at room temperature.
ROS Production
CM-H2DCFDA is a chloromethyl derivative of 2′,7′-dichlorodihydrofluorescein diacetate (H2DCFDA). This substance works as a fluorescent tracer: after hydrolysis, it becomes fluorescent when it reacts with free radicals and can therefore be detected. The hydrolysis of CM-H2DCFDA was carried out using sodium hydroxide (NaOH).
Subsequently, the hydrolyzed substance was added to a PBS buffer solution (1% v/v DMSO) containing MgPc either in its free form or as ICs at the concentration of 5 µM. The solution was irradiated at 661 nm using a laser with a power density output of 14 mW/cm2. During irradiation, the samples were constantly stirred using a magnetic stirrer. For the evaluation of the ROS production, the fluorescence spectrum of CM-H2DCFDA was recorded every 2 min for 10 min using a Perkin-Elmer LS45 luminescence spectrometer.
Cell Viability Assessment, MTT Assay
Cell viability was evaluated by the MTT colorimetric assay, which measures the capacity of mitochondrial dehydrogenase to reduce MTT to purple formazan crystals. The MTT test assesses the number of surviving cells.
Cells were seeded in 96-well plates (6000 cells/well) and grown overnight at 37 • C in a 5% CO 2 incubator. Exponentially growing cells were treated accordingly (dark toxicity, light toxicity, photodynamic treatment). Twenty-four hours after their treatment, the medium was removed and the MTT solution (0.65 mg/mL) was added to each well. The cells were kept in the incubator for 3 h to allow the metabolism of MTT and then the medium was removed and 200 µL of dimethyl sulfoxide (DMSO) was added, resulting in the solubilization of the formazan crystals. The absorbance was recorded at 570 nm (peak of formazan absorption spectrum) using an Epoch 2 microplate reader (Bio Tek Instruments, Winooski, VT, USA). The results were expressed as % cell viability = (mean optical density (OD) of treated cells/mean OD of untreated cells) × 100. All measurements were carried out in triplicate and all data were expressed as means ± standard deviation.
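For clarity, the viability calculation itself reduces to a ratio of mean optical densities; a minimal sketch with hypothetical triplicate A570 readings is given below.

```python
import numpy as np

def percent_viability(od_treated, od_untreated):
    """% viability = (mean OD of treated cells / mean OD of untreated cells) x 100."""
    return 100.0 * np.mean(od_treated) / np.mean(od_untreated)

# Hypothetical absorbance readings at 570 nm from triplicate wells
od_control    = [0.92, 0.88, 0.95]
od_irradiated = [0.30, 0.28, 0.33]
print(f"{percent_viability(od_irradiated, od_control):.0f}% viable")   # ~33%
```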
Dark Toxicity Studies
After seeding of the cells in the 96-well plates and incubating for 24 h, the cells were treated with different concentrations (0.05 µM, 0.5 µM, 1 µM, 3 µM, and 5 µM) of MgPc in its free form or in the ICs for 24 h. Finally, cellular survival was measured with the MTT assay.
Light Toxicity Studies
Twenty-four hours after seeding the cells in 96-well plates, the culture medium was removed from the wells and PBS was added (40 µL) so as to slightly cover the cells' monolayer. The cells were then irradiated at 661 nm with power density at cellular level of 18 mW/cm 2 . Exposure times were 60 s, 120 s, and 180 s resulting in fluence rates of 1.08, 2.16, and 3.24 J/cm 2 , respectively. Following irradiation, fresh medium was added, and the cells were maintained in the humidified incubator for 24 h. Finally, cellular viability was assessed via MTT method as described previously.
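The quoted fluences follow directly from the power density and the exposure time (fluence = power density × time), as the short check below confirms.

```python
# Fluence (J/cm^2) delivered at constant power density: E = P * t
power_density = 0.018                    # 18 mW/cm^2 at the cellular level
for t_s in (60, 120, 180):               # exposure times in seconds
    print(t_s, "s ->", round(power_density * t_s, 2), "J/cm^2")   # 1.08, 2.16, 3.24
```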
Photodynamic Treatment
After their seeding for 24 h, cells were incubated with 0.5 µM of freshly prepared solutions of MgPc and its cyclodextrin complexes in enriched medium for 4 h. In continuation, the medium containing the photosensitizers was removed, 40 µL of PBS were added in each well, and the cells were irradiated with fluence rates of 1.08, 2.16, and 3.24 J/cm 2 as indicated for the light toxicity studies. Following irradiation, fresh medium was added, and the cells were incubated for another 24 h. Finally, cell viability was measured with the MTT assay.
Irradiation Device
Irradiation was performed using a 661 nm diode laser system coupled to an optical fiber and a light diffuser (GCSLS-10-1500m, China Daheng Group, Beijing, China) in order to provide a uniform circular illumination spot. At each experimental condition, three wells were irradiated, and the laser spot was centered to them. Before and after cellular irradiation, laser power was measured at the cellular level using a power meter. Irradiance variability in different points of the irradiated area was less than 2%.
Evaluation of the Stoichiometry of the ICs Using Job's Plot
The Job's plot diagram is presented in Figure 2. The maximum for the molar ratio (r) is determined to be 0.33. This value indicates that the stoichiometry of this IC is 1:2 MgPc/β-CD ( Figure S1). This result is also compliant with experiments conducted in similar studies [24] for β-CD ICs with chemically modified zinc phthalocyanines.
Process Yield and Inclusion Efficiency
The process yield values as well as the inclusion efficiency are presented in Table 1. For the determination of %IE, the calibration curve for MgPc in DMSO was used. The absorption was measured at 673.4 nm. The %PY and the %IE for each IC are shown in Table 1. Based on the Table 1, β-CD displays the highest %PY, while HP-β-CD displays the highest inclusion efficiency followed by β-CD. Me-β-CD and γ-CD display lower encapsulation efficiency values.
Size, Polydispersity Index (PDI), and ζ-Potential
For the ICs that were produced in this study, the size, the PDI, and the ζ-potential are shown in Table 2. Using different CDs, the size and the stability of the system vary. The hydrodynamic diameter of the prepared ICs ranges from 564.5 ± 52.6 (for the β-CD-MgPc complex) to 748.7 ± 52.0 nm (for the γ-CD-MgPc complex). The PDI ranges from 0.522 to 0.566 in all cases and indicates moderately uniform distribution of the particles' sizes. It is well known that CDs have the tendency to aggregate in aqueous solutions at room temperature merely because of the lack of sufficient charge on their surface. Usually, the aggregation does not take place in a uniform manner, thus affecting the PDI of the ICs. In regard to the ζ-potential, it is lower (absolute value) in the case of HP-β-CD-MgPc and γ-CD-MgPc (−17.7 ± 0.5 mV and −14.9 ± 4.0 mV, respectively) and higher in the case of β-CD-MgPc and Me-β-CD-MgPc (−29.8 ± 1.18 mV and −23.0 ± 1.6 mV, respectively).
These values indicate moderate to high stability of the particles in an aqueous dispersion [25,26]. More specifically, the ICs consisting of HP-β-CD and γ-CD, display moderate stability while the ICs with β-CD and Me-β-CD display higher stability.
Fourier Transform Infrared Spectroscopy (FT-IR Spectroscopy)
FT-IR spectroscopy is widely used to investigate the interactions between the host molecule and the guest molecule [27,28]. In the FT-IR spectrum of pure MgPc, the most characteristic absorption bands that are observed are at 1525, 1483, 1333, 1057, 888, and 728 cm −1 . The band at 1525 cm −1 is attributed to the C=C-C stretching of the aromatic ring, whereas the band at 1483 cm −1 is owed to the C-C stretching vibration of the isoindole structure. The bands at 1333 and 1057 cm −1 are owed to the C-C stretching and the C-N stretching vibrations of the pyrrole ring, respectively. At 888 cm −1 the absorbance can be attributed to the Mg-N stretching vibration, whereas the band at 727 cm −1 is owed to out of plane deformation of the C-H bond [29]. In Table 3, the characteristic absorption bands of the pure CDs as well as the peaks for the different ICs, the prepared physical mixture, and the solvent free kneading product are reported.
For all of the spectra of the prepared CD-MgPc ICs, a similarity with the spectrum of the pure CD used in each case is observed. In all of the ICs, there is a shift of the characteristic bands of the pure CDs, especially for the stretching vibration of the O-H group which presents the largest shift. Furthermore, there is a significant shift of the bands that were attributed to the antisymmetric stretch of the C-H bond of the CH 2 group as well as to the bending vibration of the O-H group, which is an indication of the interaction between the different CDs and MgPc.
In addition, the characteristic absorption band of pure MgPc at 1525 cm −1 is shifted in the spectra of the ICs (β-CD-MgPc IC: 1521 cm −1 , HP-β-CD-MgPc IC: 1528 cm −1 , Me-β-CD-MgPc IC: 1521 cm −1 ), which also indicates an interaction between the two components. Finally, the characteristic band of pure MgPc at 728 cm −1 is still noticeable in the spectra for the ICs, which indicates that only part of the phthalocyanine is located in the interior part of the CDs. This result has also been noted in other studies [11,30], which concluded that only part of the phthalocyanine is inserted in the hydrophobic cavity of cyclodextrins.
The FT-IR spectra of the physical mixture of HP-β-CD and MgPc and of the corresponding solvent-free kneading product present significant similarities. It is noteworthy that all the characteristic absorption bands of MgPc are present in both spectra, whereas in the spectra of the ICs only a few of them are evident, which is indicative of the successful formation of the ICs when the appropriate preparation method is applied.
Nuclear Magnetic Resonance Spectroscopy (NMR Spectroscopy)
NMR spectroscopy is an extremely useful technique for studying the structure of organic compounds, especially in the case of ICs, since it provides essential information on the structure of supramolecular host-guest complexes [31]. The NMR analysis for the ICs of CDs with MgPc and of the HP-β-CD and MgPc physical mixture was performed in DMSO-d6 at 600 MHz. The structure of the β-CD monomer and its 3D depiction can be seen in Figure 3.

In the 1H-NMR spectra, the changes of the chemical shifts of the protons inside and outside of the cavity of the CDs can be identified, which can be informative regarding the inclusion mode and the affinity between the different CDs and MgPc.
The changes of the chemical shifts for β-CD and the β-CD-MgPc IC are presented in Table 4. A notable change of the chemical shift was identified for H-5, which is located inside the cavity of β-CD. Similar changes of the chemical shifts are observed for protons H-1, H-2, H-3, and H-6, of which H-3 and H-6 are located within the cavity while H-1 and H-2 are located outside the cavity (Figure 3). More specifically, all of the aforementioned protons displayed a downfield shift after the formation of the ICs, which is an indication of the proximity of β-CD to an electronegative atom, such as nitrogen in the structure of MgPc. Moreover, the observed shielding of H-5, H-3, and H-6 indicates that a part of the MgPc molecule interacts with the protons in the hydrophobic region of the host molecule, while another part of the MgPc molecule is located outside of the cavity, forming non-inclusion complexes or aggregates. This is in accordance with the FT-IR analysis results.

Based on the data of Table 5, significant changes in chemical shifts are observed for H-4 and H-1 of Me-β-CD, both located on the outer surface of the cavity. It is also noteworthy that no significant differences in the chemical shifts are observed for the H-3 and H-5 protons that are located inside Me-β-CD's hydrophobic cavity. Moreover, the downfield shifts are an indication of the proximity of Me-β-CD to an electronegative atom, such as nitrogen in the structure of MgPc [32]. These data indicate that MgPc is retained at the exterior surface of Me-β-CD, forming non-inclusion complexes or aggregations and interacting with the exterior oxygen atoms of Me-β-CD.

Based on the data of Table 6, significant changes in chemical shifts were observed for H-2, H-4, and H-7 of HP-β-CD. These upfield shifts indicate that the MgPc molecule interacts with the protons that are located outside the cavity, as well as with the H-7 of the methylene group, which is also located outside the cavity. As far as the protons located inside the cavity are concerned, a significant downfield shift is noted for H-3 (∆δ = 0.024 ppm), while for H-5 the ∆δ is significantly smaller (∆δ = 0.008 ppm), indicative of the partial inclusion of MgPc inside the cavity and the orientation of the MgPc molecules towards the largest rim of HP-β-CD. The downfield shift is possibly attributed to the proximity of HP-β-CD to an electronegative atom, such as nitrogen in the structure of MgPc [11,33]. However, an upfield shift is observed for protons H-3, H-5, and H-6 in the spectra of the physical mixture of HP-β-CD and MgPc (Table 7), along with a lower ∆δ compared to the HP-β-CD-MgPc IC. Moreover, in the 1H NMR spectrum of the physical mixture, a better signal splitting of the MgPc proton peaks is clearly observed, whereas in the inclusion complex spectrum a broadening of the peaks occurred (Figure S2). Thus, these observations revealed the successful formation of the HP-β-CD-MgPc IC when the kneading method is used [33].

Table 7. Chemical shift changes (∆δ) of the 1H NMR (DMSO-d6, 600 MHz) signals of HP-β-CD and of HP-β-CD in the physical mixture with MgPc.
In Vitro Release Studies of the MgPc from the CD-MgPc ICs
The determination of the release profile of a drug delivery system is of great significance since it provides vital information concerning its use for specific applications. The release profile of MgPc from the different ICs is shown in Figure 4. As observed in Figure 4, the release profiles of all the ICs display similar behavior. More specifically, all of the CD-MgPc ICs display a rapid release ("burst effect") of MgPc in the first 20 min. In this timeframe, 72%, 76%, and 85% of the MgPc is released for β-CD-MgPc, HP-β-CD-MgPc, and Me-β-CD-MgPc, respectively. This rapid release is attributed to the diffusion of MgPc molecules located on the outer surface of the CDs due to weak interactions. After the burst effect there is a decrease in the rate of release ("lag effect"), and after 1 h there is a "plateau" indicating the sustained release of MgPc.
The IC that displays the fastest release of MgPc is the one with Me-β-CD, followed by β-CD, and lastly HP-β-CD. Especially in the case of the Me-β-CD-MgPc ICs, the fastest release profile is also in accordance with the 1 H-NMR results which indicated that MgPc is mostly located on the surface of the Me-β-CD.
Kinetic Modeling of the MgPc Release from the CD-MgPc ICs
In Table 8, the equations for each model used to fit the data derived from the in vitro release analysis are presented. The fitting (R2) can be compared for each model in order to determine which one describes the release more efficiently. The most effective fitting of the data is noted when the Korsmeyer-Peppas model is used, for all the different ICs. The diffusion exponent n, as derived from the graphs for the different ICs, is presented in Table 9. The values of n indicate that, for all the ICs, the release mechanism is non-Fickian anomalous transport (Figure 5) [21,22].

Table 9. Diffusion exponent n based on the Korsmeyer-Peppas equations.
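A minimal sketch of how the diffusion exponent can be estimated from release data is given below; it simply performs a linear regression on the log-transformed Korsmeyer-Peppas form (Equation (5)). The release fractions used are hypothetical and serve only to illustrate the fitting step, not to reproduce the values in Table 9.

```python
import numpy as np

# Hypothetical cumulative-release data: time (min) and released fraction Mt/Minf,
# restricted (as is customary) to the early portion of the release curve.
t        = np.array([5, 10, 15, 20, 30])
fraction = np.array([0.25, 0.42, 0.55, 0.65, 0.78])

# Linear regression on log(Mt/Minf) = log(k) + n*log(t)   (Equation (5))
slope, intercept = np.polyfit(np.log10(t), np.log10(fraction), 1)
n, k = slope, 10 ** intercept
print(f"diffusion exponent n = {n:.2f}, rate constant k = {k:.3f}")

# For cylindrical/spherical matrices, 0.45 < n < 0.89 is commonly taken to
# indicate non-Fickian (anomalous) transport, as reported for the CD-MgPc ICs.
```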
Figure 6 displays the absorption spectra of pure MgPc and the corresponding ICs. The dominant peak of MgPc is observed at approximately 670 nm, while this peak seems to have shifted to longer wavelengths for all ICs except β-CD-MgPc. This is due to the interaction of MgPc and the different CDs in forming the supramolecular structure of the ICs. The dominant peak of the phthalocyanine was also less visible after the complexation, which has been noted in other studies as well [34]. It is also worth mentioning that Me-β-CD-MgPc and HP-β-CD-MgPc also exhibited a new peak at a longer wavelength, which was not visible in any of the other CD-MgPc ICs.
ROS Production Evaluation
For studying the production of ROS from the ICs and the pure MgPc, the samples underwent laser exposure for specific timeframes and then fluorescent spectrophotometry was used at 490 nm, which is the excitation wavelength for the fluorescent tracer. In Figure 7, ROS production ability of MgPc and the different ICs is displayed for different time intervals. Data are presented as percentage of fluorescein intensity prior to irradiation.
Based on the above diagram, all the substances tested are able to produce ROS. More specifically, the largest ROS production, even higher than the one from pure MgPc, occurs from the IC with HP-β-CD. ICs with Me-β-CD and γ-CD present similar ROS production ability, while β-CD appear to be less active than the others. Based on that, the ICs consisting of modified CDs (Me-β and HP-β-CD) produce ROS faster in aqueous solutions when compared to the ICs prepared with the unmodified, natural β-CD and γ-CD. The increased ROS production capacity of the MgPc in the ICs with HP-β-CD could be attributed to the degradation of the host molecule, leading to the release of free MgPc in the solution which in turn causes higher ROS production after laser treatment [34].
Photodynamic Treatment
In order to proceed with the photodynamic treatment studies, dark and light toxicity experiments were performed. The appropriate concentration of the photosensitizer should not exhibit cytotoxicity without irradiation, while the applied light energy dose in the absence of photosensitizer should not present cytotoxicity as well. The cytotoxicity results of different concentrations of MgPc against an A431 squamous carcinoma cell line, both in its free form and in complexation with the different cyclodextrins, are presented in Figure 8.
The examined concentrations of MgPc between 1 µM and 5 µM exhibited significant cytotoxicity, leading to their exclusion from further studies. However, cell treatment with a concentration of 0.5 µM of MgPc for 24 h did not affect the cell viability. In addition, light irradiation at 661 nm with a power density of 18 mW/cm2 and fluence rates of 1.08, 2.16, and 3.24 J/cm2 was also found to be non-cytotoxic. Consequently, the photodynamic effect of MgPc and its cyclodextrin complexes against A431 cells was evaluated at the concentration of 0.5 µM and with a power output density of 18 mW/cm2 for various irradiation times. The results obtained on cell survival are presented in Figure 9.

MgPc exhibited potent photosensitizing activity at the concentration of 0.5 µM under all irradiation conditions, reducing the cell viability to approximately 33% after 3 min of irradiation with 18 mW/cm2. All the examined CD-MgPc ICs presented significant PDT efficacy as well. The γ-CD-MgPc complex was the most promising photosensitizing nanosystem, presenting higher PDT activity than free MgPc and reducing the cell viability to 26% after 3 min of irradiation. This could possibly be attributed to the enhanced cellular uptake of MgPc after its encapsulation in the γ-CD cavity, improving its aqueous solubility and thus reducing its agglomeration [35]. On the other hand, complexation with the less water-soluble β-CD [36] led to an IC which presented the lowest phototoxicity under all the examined irradiation conditions (46% viability after 3 min of irradiation). This observation is in accordance with the ROS production studies, in which the β-CD-MgPc IC presented significantly lower ROS production ability than free MgPc and all the other examined ICs.
Overall, complexation of different phthalocyanines with cyclodextrins can be considered as an effective strategy for PDT efficiency improvement [37,38].
Intracellular Localization Studies
Intracellular localization was studied using fluorescence microscopy. Indicative images of A431 cells incubated with 0.5 µM of MgPc and γ-CD-MgPc IC for 4 h are shown in Figure 10. None of the substances used affected the cell structure, and no nuclear localization was observed after 4 h of incubation. The fluorescence images presented higher fluorescence values in the case of cells incubated with γ-CD-MgPc compared to pure MgPc, indicating augmented cellular uptake of MgPc after its complexation with γ-CD. These findings are in accordance with, and can also explain, the photodynamic efficiency results, which showed that γ-CD-MgPc is more phototoxic than pure MgPc.
Conclusions
The current research work demonstrated the successful preparation of inclusion complexes of natural (β-CD, γ-CD) and chemically modified cyclodextrins (HP-β-CD, Me-β-CD) with magnesium phthalocyanine (MgPc) via the kneading method. MgPc was successfully encapsulated with satisfactory inclusion efficiency values between 59% and 81%. The average size of the ICs ranged from 564 to 748 nm, while the zeta potential ranged from −15 to −30 mV, indicating mild to high stability for the formed ICs. Moreover, the ICs were structurally characterized via FT-IR and NMR spectroscopy, which demonstrated the successful formation of the ICs and indicated that, most likely, only part of the phthalocyanine molecule is located inside the respective cyclodextrin's cavity. The Job's plot study indicated that the stoichiometry of the produced ICs is cyclodextrin:MgPc = 2:1. Release studies of MgPc from the ICs at 37 °C and pH 7.4 showed a burst effect in the first 20 min, in which approximately 70% of the encapsulated MgPc was released. The release data were best fitted by Korsmeyer-Peppas kinetics, and the release mechanism is described as anomalous transport.
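To make the release-kinetics summary above concrete, the sketch below fits the Korsmeyer-Peppas model, Mt/M∞ = k·tⁿ, to a small set of hypothetical cumulative-release points (invented numbers that only mimic the reported ~70% burst within 20 min, not the measured data); an intermediate exponent n is conventionally interpreted as anomalous (non-Fickian) transport.

```python
# Illustrative fit of the Korsmeyer-Peppas model, Mt/Minf = k * t**n, to
# hypothetical cumulative-release data (the numbers below are invented and
# only mimic a burst release of ~70% within 20 min).
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    return k * np.power(t, n)

t_min = np.array([5.0, 10.0, 15.0, 20.0])        # time (min)
released = np.array([0.30, 0.45, 0.58, 0.70])    # fraction Mt/Minf

(k, n), _ = curve_fit(korsmeyer_peppas, t_min, released, p0=(0.1, 0.5))
print(f"k = {k:.3f} min^-n, n = {n:.2f}")
# An intermediate exponent n (roughly 0.45-0.89 for common matrix geometries)
# is conventionally read as anomalous (non-Fickian) transport.
```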
The ICs were tested for their ability to produce ROS, presenting satisfactory ROS production when irradiated with a laser at 661 nm. The photodynamic therapy studies against a squamous carcinoma A431 cell line indicated potent photosensitizing activity of MgPc (33% cell viability after irradiation for 3 min at 18 mW/cm²), as well as of the γ-CD-MgPc IC, which was the most promising photosensitizing nanosystem, presenting the best PDT activity (26% cell viability after 3 min of irradiation). Finally, intracellular localization studies indicated the enhanced cellular uptake of MgPc when encapsulated in the γ-CD cavity. Overall, CD-MgPc ICs can be considered promising nanosystems for the sustained release of MgPc and for effective PDT-based cancer treatment.
THE HOMERIC CENTO IN IRENAEUS (ADV. HAER. 1, 9, 4)
The article studies the Homeric cento quoted by Irenaeus in Adversus Haereses (1, 9, 4) in the section directed against Valentinus and his followers: Irenaeus illustrates that their piecing together of disjoint passages from the Scriptures is similar to the technique used by authors of centones. The cento in question is of great interest as one of the earliest surviving examples of the genre that became popular in IV–V centuries AD. The article proposes a detailed linear commentary of the cento, with special attention to the interplay between the original context of the line and the way it is used in the cento. It is shown that the poetic technique, the sophisticated use of Homeric lines, and the subtle irony show that author of the cento was a cultivated, witty, and most probably pagan author who composed the cento for the mere enjoyment of the form.
In the section of his book Against Heresies 1 against Valentinus and his followers, criticizing their practice of bringing together disjoint passages from the Scriptures and thus, in his view, subverting the teaching, Irenaeus of Lyon quotes a Homeric cento to illustrate how the pastiche technique disrespects the original context, creating a story that Homer had never told, just as the Valentinians create a new teaching by piecing together excerpts from the Bible. This short cento is of great interest as one of the earliest examples of the genre that had not yet reached the popularity it would later enjoy 2 . This article proposes not only to analyze the way the cento is assessed and used by Irenaeus in his argumentation, but also to study it as an independent poem with an intricate web of intertextual play with the Homeric epics. It will be shown that the poem quoted by Irenaeus is a work of an erudite, witty and extremely skilled author.
Protesting against the practice of Valentinus and his followers of assembling quotations from the Scriptures with disregard to their context (ἔπειτα λέξεις καὶ ὀνόματα σποράδην κείμενα συλλέγοντες, μεταφέρουσι, καθὼς προειρήκαμεν, ἐκ τοῦ κατὰ φύσις εἰς τὸ παρὰ φύσιν, "and then, gathering together expressions and words that are disseminated, they transpose them, as we have said before, from a natural context to an unnatural one"), Irenaeus goes on to compare this practice to the composition of centos from Homeric lines: Ὡς ὁ τὸν Ἡρακλέα ὑπὸ Εὐρυσθέως ἐπὶ τὸν ἐν τῷ Ἅδῃ κύνα πεμπόμενον διὰ τῶν Ὁμηρικῶν στίχων γράφων οὕτως· (οὐδὲν γὰρ 1 Against Heresies is believed to have been written between 180 and 189 (Fedchenkov 2009: 359). A complete Latin translation of the work dating to ca. 380 survives, as well as excerpts in Armenian and in the original Greek (Osborn 2001: 1;Fedchenkov 2008: 339). The passage containing the cento is preserved in the Panarion of Epiphanius of Salamis whose discussion of the Valentinian sect depends almost entirely on Irenaeus (Williams 2009: xxv). It is lucky that the passage in Greek is preserved: although we would have probably been able to identify the Homeric lines used in the cento from its Latin translation (see Harvey 1857: 86-87), it might have been difficult to recognize the modifications introduced by the author of the cento into Homer's text in lines 8 and 9. 2 It is generally accepted that the high point in the development and popularity of the cento as a literary genre was IV-V century AD (see e.g. recently Garambois-Vasquez 2017: 9). On the genre in general, see the classical overview by O. Crusius 1899;Bright 1984: 80-82; for the problematization and assessment of intertextuality in the cento, see the excellent article by Hinds 2014, as well as Bažil 2017. κωλύει παραδείγματος χάριν ἐπιμνησθῆναι καὶ τούτων, ὁμοίας καὶ τῆς αὐτῆς οὔσης ἐπιχειρήσεως τοῖς ἀμφοτέροις.) Ὡς εἰπὼν, ἀπέπεμπε δόμων βαρέα στενάχοντα Φῶθ' Ἡρακλῆα, μεγάλων ἐπιΐστορα ἔργων, Εὐρυσθεὺς, Σθενέλοιο πάϊς Περσηϊάδαο Ἐξ Ἐρέβευς ἄξοντα κύνα στυγεροῦ Ἀΐδαο. Βῆ δ' ἴμεν, ὥστε λέων ὀρεσίτροφος ἀλκὶ πεποιθὼς, Καρπαλίμως ἀνὰ ἄστυ· φίλοι δ' ἀνὰ πάντες ἕποντο, Νύμφαι τ' ἠΐθεοί τε, πολύτλητοί τε γέροντες, Οἶκτρ' ὀλοφυρόμενοι, ὡσεὶ θάνατόνδε κίοντα. Ἑρμείας δ' ἀπέπεμπεν, ἰδὲ γλαυκῶπις Ἀθήνη· Ἤιδεε γὰρ κατὰ θυμὸν ἀδελφεὸν, ὡς ἐπονεῖτο, "Just as the one who writes in the following way about Heracles being sent by Eurystheus for the hound of Hades by using Homeric verses (it is not amiss to recall them exempli gratia, as the procedure is the same for both): Having said this, he sent him away from his home, moaning deeply, The man Heracles, experienced in many deeds, Eurystheus, the son of Sthenelus son of Perseus, To bring from Erebus the hound of the hateful Hades. And he went on his way, as a mountain-bred lion, haughty in his might, Rapidly through the city: and all his friends followed, Maidens and youths, and elderly men who had suffered much, Weeping pitifully, as if he were going to his death. And Hermes led him on his way, and the owl-eyed Athena: For she knew in <her> heart, how besieged by cares her brother was" (Iren. Adv. Haer. 1, 1, 20 = Epiphan. Panar. 1, 29, 5-8).
It is worth noting how Irenaeus accompanies the quotation by an excuse for introducing it, οὐδὲν γὰρ κωλύει παραδείγματος χάριν ἐπιμνησθῆναι καὶ τούτων: this parenthesis seems to reflect his need to justify the fact that he does quote a poem that as a sensitive and well-read reader he does appreciate, although it belongs to a genre that he, on the whole, disapproves of 3 . He goes on to explain the impression that this cento might create in an inadvertent reader 4 : Τίς οὐκ ἂν τῶν ἀπανούργων συναρπαγείη ὑπὸ τῶν ἐπῶν τούτων, καὶ νομίσειεν οὕτως αὐτὰ Ὅμηρον ἐπὶ ταύτης τῆς ὑποθέσεως 3 The comparison of the cento technique to the indiscriminate use of the Scriptures will later be echoed by Jerome in his letter to Paulinus of Nola: ad sensum suum incongrua aptant testimonia, quasi grande sit et non vitiosissimum docendi genus, depravare sententias, et ad voluntatem suam Scripturam habere re pugnantem (Hieron. Ep. 53,7;cf. Tertul. Praescr. 39,3;4;6). 4 The importance of this point is rightly emphasized by Sowers (2020: 101). πεποιηκέναι; Ὁ δ' ἔμπειρος τῆς Ὁμηρικῆς ὑποθέσεως ἐπιγνώσεται, [suppl. μὲν τὰ ἔπη, τὴν δ' ὑπόθεσιν οὐκ ἐπιγνώσεται,] εἰδὼς ὅτι τὸ μέν τι αὐτῶν ἐστι περὶ Ὀδυσσέως εἰρημένον, τὸ δὲ περὶ αὐτοῦ τοῦ Ἡρακλέος, τὸ δὲ περὶ Πριάμου, τὸ δὲ περὶ Μενελάου καὶ Ἀγαμέμνονος. Ἄρας δὲ αὐτὰ, καὶ ἓν ἕκαστον ἀποδοὺς τῇ ἰδίᾳ, ἐκποδὼν ποιήσει τὴν ὑπόθεσιν, "Who among the guileless would not be captured by these verses, and would not consider that Homer had composed them on this storyline 5 ? But he who is experienced in Homer's composition would recognize <the verses as Homer's, and would not recognize the story plot as Homer's>, knowing that one of them was said of Odysseus, and one of Heracles himself, and one on Priamus, and one on Menelas and Agamemnon. And taking them, and one by one placing them back where they belong, he would make the story plot disappear" (Adv. haer. 1,9,4). This remark refers to the game that the author of the cento plays with the reader, challenging him to recognize the original contexts of the Homeric verses that had been used to weave together this text that is cardinally new, and on a topic that Homer had never treated (cf. Keaney, Lamberton 1996: 310-311;Usher 1998: 29). However, Irenaeus emphasizes and explicitly disapproves of the deceptive nature of this game, suggesting that it is worthy of πανοῦργοι.
The authorship of the cento has been discussed. Among the earliest authors to bring up the question of the cento's authorship was Heinrich Ziegler, who suggested that Irenaeus might have composed it himself (or might be citing another author): in both cases Ziegler argued that the quotation of the cento demonstrates Irenaeus' familiarity with classical Greek literature 6 . 5 It is not easy to give an adequate translation of the word ὑπόθεσις here, as the works where this passage is analyzed show: Wilken translates it as "sense" or "system", making it refer to "the meaning of the Christian faith" (Wilken 1967: 33); Usher contests this interpretation, stressing that ὑπόθεσις applies more to the composition / performance of centones (Usher 1998: 29 n. 15); Sowers also interprets ὑπόθεσις as a term taken from the Classical paideia, in particular, from the tradition of rhetorical declamation (Sowers 2020: 96). We would suggest that the term is used in the sense "subject", or even more exactly "storyline" or "plot of the story", a meaning that was developed in particular with regard to summaries of plays (LSJ 1996: 1882cf. Holwerda 1976). 6 "Mag Irenäus diese Zusammenstellung selbst gemacht oder aus dem Buche irgend eines derjenigen Schriftsteller entlehnt haben, von denen er sagt, dass sie es sich zur Aufgabe machten, irgend welchen Inhalt durch Zusammenstellung Homerischer Verse als uralt zu erweisen, jedenfalls zeugt die Bekanntschaft mit Homer und mit seinem Gebrauch bei den Schriftstellern seiner Zeit von griechischer Bildung" (Ziegler 1871: 17).
In his 1961 book on Classical Greek culture in the first centuries of Christianity, Jean Daniélou briefly discussed the cento on Heracles quoted by Irenaeus: judging from the broader context of Irenaeus' passage (discussion of Gnosticism), and from the popularity of centones among the Gnostics, Daniélou surmised that the cento must have been composed by Valentinus and must thus be interpreted not as a poem on a pagan myth, but as an allegorical rendering of the Christian doctrine 7 . Daniélou's hypothesis of Valentinus' authorship was rejected by Robert L. Wilken who argued that Irenaeus does not designate Valentinus as author anywhere, nor does he seem to view the poem as a Christian (or Gnostic) allegory: his only point in using the cento is to illustrate the subversion of the original text through the pastiche technique 8 . In recent studies, the question of authorship of the cento is usually qualified as unanswerable, and only the high culture of its author is emphasized: thus, Brian Sowers viewed the cento quoted by Irenaeus as a product of classical paideia that should be interpreted independently of Irenaeus' polemics 9 ; similarly, Oscar Prieto Domínguez argues that the cento did not originally carry any religious associations, and that its allegorical interpretation is secondary 10 . 7 "This is a cento of lines from Homer, composed by Valentinus and given an allegorical meaning by him" (Daniélou 1973(Daniélou [1961: 85). This allegorical meaning is interpreted as follows: "It seems therefore that what Valentinus was seeking to describe was the mission of Christ, sent by the Father into the realm of death to deliver those who were death's prisoners, a mission of immense labour in which Christ figures as hero" (ibid. 86). On centones in Gnostics, see also Prieto Domínguez 2011: 102. 8 "Consequently we conclude that the cento does not reproduce a gnostic allegory of Homer, and that there is no link between he content of the cento and the content of gnostic teaching […] Irenaeus used the cento simply as an illustration of how men misunderstand and pervert writings when they rewrite them to suit their purposes. Perhaps such an interpretation is more prosaic than that offered by Daniélou, but it is certainly not less interesting. For it shows us something of Irenaeus' familiarity with classical authors, his awareness of how they were used in his own day, and his skill at putting such knowledge to work in a theological argument" (Wilken 1967: 31). 9 "In my view, Irenaeus' Herculaean cento, written during the earliest stages of early Christianity's engagement with Graeco-Roman poetry, should be read and interpreted on its own before being placed within the context of Irenaeus' polemical agenda" (Sowers 2020: 99). 10 "este poema en origen debió de ser un centón no-religioso cristianizado por los primeros creyentes y dogmatizado por algunas partes de la Iglesia primitiva: el mito de Heracles pasa a representar el misterio de Cristo" (Prieto Domínguez 2011: 102). This cautiousness in speaking of the cento's author seems appropriate. We propose a detailed linear commentary of the cento, with special attention to the interplay between the original context of the line and the way it is used in the cento. We will then summarize what can be gleaned about its author from its poetic technique, and briefly return to Irenaeus' use and evaluation of the cento.
The line used for the cento follows this speech, briefly describing Odysseus' return to his comrades before they resume their journey. Not much is remarkable about Od. 10, 76, and other ancient authors do not seem to have used it 11 . While the author of the cento seems to have chosen his subject based on the possibility of combining lines mentioning Heracles, Eurystheus and Cerberus (vv. 2-4), the first verse of the cento was taken as the starting point primarily because of the verb ἀπέπεμπε 12 . It should be noted that in this new context the formulaic ending βαρέα στενάχοντα 13 seems to acquire a delicate irony, highlighting an unexpected emotionality in Heracles, as he is depicted lamenting the task that will become the greatest of his labors (see note on v. 2); one can also imagine that, as Odysseus had pleaded with Aeolus, Heracles might have pleaded 11 See West's apparatus criticus (West 2017: 204, ad Od. 10, 76). The passage itself was, of course, well-known: e.g. verses 74-75 are cited by emperor Julian in one of his letters (Iul. Epist. 49, 432a). 12 There is a subtle play on the two meanings of ἀποπέμπειν in Homer: in Aeolus' speech it is used of official send-off (πομπή), whereas in the capping formulaic verse it appears in the sense "to chase someone away from the house" (cf. Heubeck, Hoekstra 1989: 47, ad Od. 10, 76). 13 Besides the current passage, βαρέα στενάχοντα appears (invariably at the end of the verse) in Il. 8,334;13,423;13,538;14,432;Od. 4,516;5,420;23,317. with Eurystheus to spare him 14 . The introductory ὡς εἰπών leaves the impression of an extract from a larger narrative (rather than an account commencing in medias res), and the reader is left guessing what Eurystheus' words might have been.
2. φῶθ' Ἡρακλῆα, μεγάλων ἐπιΐστορα ἔργων. The first of the three lines that directly mention the main characters of the myth (Heracles, Eurystheus, Cerberus) is taken from Od. 21, 26, where Homer recounts the story of the encounter of young Odysseus with Iphytus, and the bow and arrows that Iphytus had given him as a gift (Od. 21, 12-41); later in the book the same bow will be used by Odysseus to slay the suitors. Heracles' role in this story is ambivalent, as he killed Iphytus, who was staying as a guest at his house, in total violation of the laws of xenia (see especially Od. 21, 27-29). It should be noted that the passage as a whole has been suspected of being an interpolation because of the confusing details of the myth and the unstraightforward, convoluted structure of the narrative 15 . However, it was included in Alexandrian editions of Homer, and the author of the cento obviously did not consider it spurious. Due to the mention of Heracles' name and to the hapax legomenon ἐπιΐστωρ, the line was used by several authors: thus, Strabo in the beginning of his Geography recalls the expression μεγάλων ἐπιΐστορα ἔργων (Strab. 1, 1, 16) as highly suitable for Heracles because of his experience; Clement of Alexandria cites the verse in his catalogue of authors who considered Heracles a mortal (Clem. Protr. 2, 30, 7; in this case, it was certainly the expression φῶθ' Ἡρακλῆα that drew his attention). There is evidence that for ancient scholars the interpretation of μεγάλων ἐπιΐστορα ἔργων presented a problem: it could be taken to refer to Heracles' experience in undertaking difficult labors, or to his indirect implication in the abduction of the mares of Eurytus 16 . However, the author of the cento seems to have 14 Cf. scholia (ad Od. 10, 76) that comment on βαρέα στενάχοντα that it is appropriate of someone whose plea was ignored (ὅπερ οἰκεῖον εἰπεῖν ἐπὶ τοῦ ἐν ἱκεσίᾳ μὴ ἐλεηθέντος). 15 Russo, Fernández-Galiano, Heubeck 1992: 150 (ad Od. 21, 13) and 151 (ad Od. 21,26), with references to earlier studies. 16 Most lexicographers seem to have understood ἐπιΐστωρ as reference to Heracles' experience: Hesychius cites the word in the same case in which it stood in Homer (ἐπιΐστορα· ἔμπειρον "experienced", Hsch. ε 4826; similarly, schol. in Od. 21,26); the other interpretation, "accomplice", is mentioned in Eustathius (in Od. 21, 25 = vol. 2, 247 Dindorf), and is alluded to in poetry (see especially Quint. 13, 373; cf. Lehrs 1882: 109). understood it as a simple reference to his labors. As regards the combination of this verse with the previous line (Od. 10, 76), there is a subtle subversion of the character of Heracles that is brought out by the intertext: a well-versed reader would have noticed that in Odyssey 21 Heracles is specifically characterized by his courage: ἐπεὶ δὴ Διὸς υἱὸν ἀφίκετο καρτερόθυμον, / φῶθ' Ἡρακλῆα, μεγάλων ἐπιίστορα ἔργων... "but when he reached Zeus' son of mighty spirit, the man Heracles, experienced in great labors…" (Od. 21,(25)(26). The reader who remembered that Heracles had been presented as καρτερόθυμος in the Odyssey would have enjoyed the ironic contrast with his emotional reaction to the task assigned to him by Eurystheus in the cento (βαρέα στενάχοντα).
3. Εὐρυσθεὺς, Σθενέλοιο πάϊς Περσηϊδάο. The line comes from Here's speech quoted within Agamemnon's speech to Achilles: Agamemnon, saying that he is willing to make amends, recounts, as an example of the powers of Ate who had beguiled him, the story of Here using her to trick Zeus and to make Eurystheus king instead of Heracles. In Here's short and malicious announcement of the birth of Eurystheus to Zeus the name of the newborn is postponed 17 , in order to taunt Zeus' expectations that Heracles will be born first, and adroitly combined with the apposition σὸν γένος "your progeniture" 18 . Eurystheus' name and lineage (which in this case is equally important) markedly occupy a whole verse: Ζεῦ πάτερ ἀργικέραυνε, ἔπος τί τοι ἐν φρεσὶ θήσω· ἤδη ἀνὴρ γέγον' ἐσθλὸς ὃς Ἀργείοισιν ἀνάξει Εὐρυσθεὺς Σθενέλοιο πάϊς Περσηϊάδαο M. Schmidt in his entry in LfgrE sides with the first interpretation (see LfgrE, vol. 2, col. 638, s.v. ἐπιΐστωρ). 17 Cf. Edwards 1991: 251, ad Il. 19, 121-124: "Here's revelation is crafted with immense skill; first comes the birth of a future king, then the surprise of his name (in the prominent position […]) and his lineage, and finally the triumphant σὸν γένος, which again begins the verse". 18 Here's malice in her use of the apposition σὸν γένος lies in its ambiguity: Zeus is expecting his own son, Heracles, to be born first and become king, whereas Here is evoking the fact that Sthenelos was the son of Perseus, and thus grandson of Zeus. Cf. Eustathius' succinct explanation of what Here's words to Zeus meant: οἷα δηλαδὴ κατηγμένῳ ἐκ σοῦ διὰ τὴν σήν ποτε Δανάην, ἐξ ἧς ὁ πατὴρ τούτου Περσεύς "obviously as he is your descendent from Danae, who was yours at the time and from whom [Sthenelus'] father was born" (Eustath. in Il. 19, 96-133 = 4, 293 van der Valk). σὸν γένος· οὔ οἱ ἀεικὲς ἀνασσέμεν Ἀργείοισιν, "Father Zeus, lord of bright thunder, I will announce some news (lit. put a word in your mind): he is born already, the beautiful man who will rule over the Argives, Eurystheus, son of Sthenelus son of Perseus, your progeniture: it is not unfit for him to rule over the Argives" (Il. 19, 121-124).
The line seems to have been evoked fairly regularly in scholarly contexts, dealing with mythography: thus, the line is quoted by scholia to Thucydides' ἀρχαιολογία (schol. in Thuc. 1, 9, 2), and M. L. West, following K. Latte, suggested that Hesychius' entry on Eurystheus was based on this verse 19 ; among literary texts, the verse does not seem to have been cited much, although there may be an allusion to it in Pindar fr. 169a 44-45. Given the subject of the cento, this was the one Homeric verse the author of the poem could not really forgo. Aristides 20 . The ingenuity of the author of the cento is evident in his use of this line. On the one hand, it contains a straightforward description of Heracles' labor (incidentally, it is the only one that Athena mentions specifically in her speech, although she states that she had helped out with other labors as well) 21 , and its use was warranted by the subject of the cento. On the other hand, for a reader who remembered the original context of the line, the fact that it was spoken by Athena would anticipate the mention of the goddess in line 9 (= Od. 11, 626).
βῆ δ' ἴμεν ὥς τε λέων ὀρεσίτροφος, ἀλκὶ πεποιθώς.
This line is among stock Homeric quotations; it is taken from the opening of one of the most well-known Homeric similes, comparing naked, covered in filth, hungry Odysseus to a mountain lion: βῆ δ' ἴμεν ὥς τε λέων ὀρεσίτροφος, ἀλκὶ πεποιθώς, ὅς τ' εἶσ' ὑόμενος καὶ ἀήμενος, ἐν δέ οἱ ὄσσε δαίεται· αὐτὰρ ὁ βουσὶ μετέρχεται ἢ ὀΐεσσιν ἠὲ μετ' ἀγροτέρας ἐλάφους· κέλεται δέ ἑ γαστὴρ μήλων πειρήσοντα καὶ ἐς πυκινὸν δόμον ἐλθεῖν… "He advanced as a mountain lion, confident in his might, who walks in rain and wind, and his eyes are ablaze: and he comes unto oxen or sheep, or on wild deer; for his hunger (lit. stomach) drives him to attack cattle, and even to enter a well-defended homestead…" (Od. 6,(130)(131)(132)(133)(134) Probably it was the opening, βῆ δ' ἴμεν, that first directed the attention of the cento's author to it (at this point in the poem, the beginning of Heracles' journey had to be highlighted), but ultimately the insertion of an extended simile, typical trait of Homer's style, seems to have been of equal importance. While there is a large number of lion similes in the Homeric poems, and there is even another simile that starts with the same βῆ δ' ἴμεν ὥς τε λέων ὀρεσίτροφος in the Iliad (12, 299) 22 , the author seems to have chosen Od. 6, 130 for several reasons: (a) the line is syntactically 20 For a full list of references, see M. L. West's apparatus criticus (ad loc.). The passage (Il. 8,(366)(367)(368)(369) is quoted directly by Pausanias (8, 18, 3) and Aelius Aristides (Or. 3, 377). 21 οὐδέ τι τῶν μέμνηται, ὅ οἱ μάλα πολλάκις υἱὸν / τειρόμενον σώεσκον ὑπ' Εὐρυσθῆος ἀέθλων, "nor does he remember, how I saved many times for him his son under duress of Eurystheus' tasks" (Il. 8,(362)(363). 22 On the relationship of Il. 12, 299 and Od. 6, 130-134, see Hainsworth 1993: 351, ad Il. 12, 299-306; more generally on lion similes as a traditional element of the poetic diction, see Fränkel 1921: 69-70;Scott 1974: 58-62;Friedrich 1981: 120-125. autonomous, constituting a full phrase, which made it easier to combine it with the next verse; (b) there seems to be a subtle irony in applying Odysseus' comparison to a hungry lion on the prowl (which concerned his general appearance, not to predatory intentions 23 ) to Heracles whose actual task is to abduct the dog Cerberus. There is thus a tongue-in-cheek change of applicability of the epic simile (as concerns the tertium comparationis), and a reader who remembered the Homeric simile might have even associated ἐς πυκινὸν δόμον ἐλθεῖν at the end of the passage with the Hades 24 . 6-8. καρπαλίμως ἀνὰ ἄστυ· φίλοι δ' ἀνὰ πάντες ἕποντο… οἶκτρ' ὀλοφυρόμενοι, ὡσεὶ θανάτονδε κιόντα. Verses 6 and 8 of the cento both come from a single passage in the Iliad 24, describing Priam's progress on his chariot through Troy on his way to the Greek camp, where he goes to plead with Achilles to give him back Hector's corpse 25 . In view of the mortal danger he is putting himself in, his friends and close ones follow Priam on his way, and leave him at the gates of Troy (vv. 329-331). The text of v. 6 as given in the quotation from Irenaeus in the Panarion diverges from the Homeric vulgate: (a) instead of the Homeric κατὰ ἄστυ it reads ἀνὰ ἄστυ, and (b) instead of Homeric ἅμα the text reads ἀνά (producing a tmesis ἀνὰ… ἕποντες). 
However, a comparison with the Latin translation (where the line is rendered as urbem per mediam: noti simul omnes abibant, see Harvey 1857: 87) shows that, at least in the case of the second divergence, the error must have crept into the 23 Cf. de Jong 2001: 158, who argues that appearance is only the secondary function of the simile, while the primary tertium comparationis is that both Odysseus and the lion are in need (Odysseus of clothing and guidance, the lion of prey): "The secondary function of this simile is to give expression to the way in which the girls focalize Odysseus: in their eyes, Odysseus is as frightening as a lion, not only because he is a man and might harm them […] but especially because, like that animal, he is disfigured through exposure to the elements (the lion is 'rained on and blown by the wind', Odysseus is 'befouled with brine')". On the double point of this comparison, see also Heubeck, West, Hainsworth 1988: 302, ad Od. 6, 130-7. 24 Cf. the formular expression "house of Hades" (Ἀΐδαο δόμοι, usually in dative or in the accusative) in Il. 15,251;22,52;22,482;23,19;23,103;23,179;Od. 10,175,491 and 564;12,21;14,208;15,350;20,208;24,204;24,264. 25 The passage leads up to one of the best-known scenes from the Homeric epics (Priam's night visit to Achilles to claim Hector's body). While not often quoted per se, there is little doubt that these lines (and preceding verses) were regularly included in discussions of the scene. text of the cento after Irenaeus (simul undoubtedly renders ἅμα, not ἀνά). It is worthwhile to quote the passage in full, not only to quote the Homeric original reading, but also because it allows us to show how the author of the cento had to adapt syntactically the verses for his poem: πρόσθε μὲν ἡμίονοι ἕλκον τετράκυκλον ἀπήνην, τὰς ᾿Ιδαῖος ἔλαυνε δαΐφρων· αὐτὰρ ὄπισθεν ἵπποι, τοὺς ὃ γέρων ἐφέπων μάστιγι κέλευε καρπαλίμως κατὰ ἄστυ· φίλοι δ' ἅμα πάντες ἕποντο πόλλ' ὀλοφυρόμενοι ὡς εἰ θάνατον δὲ κιόντα, "in front of him mules drew the four-wheeled wagon that the skillful Idaeus drove forward; and behind them the horses, that the elderly man [Priam] whipped on swiftly through the city: and all his friends (or loved ones) followed together with him, much lamenting him, as if he were heading to his death" (Il. 24, 324-328).
The author of the cento took only the last part of the description of Priam's progress which did not form a full sentence from the point of syntax, joining it adroitly with Od. 6, 130: by the same dint he began a new phrase in the middle of the verse, and, had he followed Homer without modification, the sentence would have ended on the very next verse: however, the use of two consecutive Homeric lines in a row would have been contrary to the "rules of the game", as the very idea of cento presupposed combining disjoint lines 26 . To eschew this, the author of the cento expanded the subject of the sentence (φίλοι) by introducing (as verse 7) an equally famous line νύμφαι τ' ἠΐθεοί τε, πολύτλητοί τε γέροντες from the 26 This is mentioned as a rule by Prieto Domínguez 2011: 108; Sowers 2020: 100. Indeed, ancient writers of centones also state that they did their best to avoid using two consecutive lines in a row: Ausonius in the preface to Cento nuptialis says nam duos iunctim locare ineptum est, et tres una serie merae nugae, "for it is inappropriate to place together two joint lines, and to place three in a row is a mere joke"; and Eudocia in the preface to her own Homerocentones felt it necessary to apologize for using "double lines": εἰ δέ τις αἰτιόῳτο καί ἡμέας ἐς ψόγον ἕλκοι, / δοιάδες οὕνεκα πολλαὶ ἀρίζηλον κατὰ βίβλον / εἰσὶν Ὁμηρείων τ' ἐπέων θ' ὅπερ οὐ θέμις ἐστίν, / ἴστω τοῦθ', ὅτι πάντες ὑποδρηστῆρες ἀνάγκης, "and should someone accuse and condemn us, because there are multiple double lines from Homeric epic poems in this conspicuous book, which is not allowed, let him know that all they were abettors of necessity" . Incidentally, she did indeed use the two successive lines Il. 24, 327-328 in her cento (vv. 1734-1735). catalogue of souls from Odysseus' account of his descent to Hades in the Odyssey: […] αἱ δ' ἀγέροντο ψυχαὶ ὑπὲξ ᾿Ερέβευς νεκύων κατατεθνηώτων· νύμφαι τ' ἠΐθεοί τε πολύτλητοί τε γέροντες παρθενικαί τ' ἀταλαὶ νεοπενθέα θυμὸν ἔχουσαι, πολλοὶ δ' οὐτάμενοι χαλκήρεσιν ἐγχείῃσιν, ἄνδρες ἀρηΐφατοι, βεβροτωμένα τεύχε' ἔχοντες… "and they, the souls of the departed dead, gathered from the depths of Erebos: young women and young men, and the elderly who had known much suffering, and gentle maidens with their hearts bearing recent woes, and many warriors who had been wounded by bronze spears, <still> wearing their armor covered in blood" (Od. 11,(36)(37)(38)(39)(40)(41).
Incidentally, the insertion of Od. 11, 38 between lines 6 and 8 of the cento is done with great delicacy towards the original context of Iliad 24, for it echoes the expanded designation of φίλοι who followed Priam in the apposition παῖδες καὶ γαμβροί (Il. 24,331). At the same time, the original context of the line contributes to the intricate play that the author of the cento had created through a clever combination of the verses on the ambiguity of descent to the Underworld through dying (the normal way for mortals) and while still living, as a heroic (superhuman) feat. It is also worthwhile to question whether there might not be a subtle irony in νύμφαι τ' ἠΐθεοί τε with regard to Heracles who, in mythology, was well known to be a great lover both of girls and boys 27 ; this type of sly allusion to Heracles' amorous reputation would be in keeping with the irony of comparing him to a lion as he is on his way to abduct a dog.
The beginning of v. 8 (Il. 24, 328) differs from the Homeric vulgate: instead of the generic πόλλ' ὀλοφυρόμενοι "weeping greatly" the cento reads οἶκτρ' ὀλοφυρόμενοι "weeping pitifully". Now, this expression is used elsewhere in the fixed formulaic position at the beginning of the verse, but almost exclusively of women's lament 28 : it is only once applied by Odysseus to his comrades, as they weep (οἴκτρ' ὀλοφυρομένους,Od. 10,409), strengthening the impression of overemotional and unmanly behavior. Most Homeric manuscripts to Il. 24, 328 give πόλλ' ὀλοφυρόμενοι as the only reading (see West 2000West -2006, and Eudocia in her cento had used the vulgate reading (lines 1734-1735); οἶκτρ' ὀλοφυρόμενοι seems to have appeared only in Irenaeus' cento, but M. L. West, judging from his apparatus criticus, seems to be open to treating it as an ancient variant reading. However, a look at the Latin translation of the cento suggests that the variant reading was introduced into the text of the cento after Irenaeus, possibly at the point when the passage was included in the Panarion: the line is translated plorantes multum, ac si mortem iret ad ipsam (see Harvey 1857: 87), where plorantes multum surely renders πόλλ' ὀλοφυρόμενοι, not οἶκτρ' ὀλοφυρόμενοι.
The second part of the verse, ὡς εἰ θάνατόνδε κιόντα, that in the Iliad reflected the feelings and apprehensions of those who were seeing Priam off is exquisitely reapplied by the author of the cento, for Heracles is indeed going to the realm of death 29 . 9. Ἑρμείας δ' ἀπέπεμπεν, ἰδὲ γλαυκῶπις Ἀθήνη. The line is taken from Heracles' address to Odysseus as they meet in the Underworld. Asking Odysseus what brought him to the realm of the dead, Heracles remembers his own descent to Hades to abduct Cerberus: καί ποτέ μ' ἐνθάδ' ἔπεμψε κύν' ἄξοντ'· οὐ γὰρ ἔτ' ἄλλον φράζετο τοῦδέ γέ μοι κρατερώτερον εἶναι ἄεθλον. τὸν μὲν ἐγὼν ἀνένεικα καὶ ἤγαγον ἐξ Ἀΐδαο· Ἑρμείας δέ μ' ἔπεμπεν ἰδὲ γλαυκῶπις Ἀθήνη, "and once he [Eurystheus] sent me here, to bring back the hound: for he thought that no other task would be harder for me than this. And I carried him back up <to earth>, and led him out of Hades, and Hermes escorted me <on my way> and the owl-eyed Athena" (Od. 11, 623-626).
The cento is the only source to quote the line directly; but the choice is masterly. True, the author did need to modify δέ μ' ἔπεμπεν to δ' ἀπέπεμπεν as the line had to depict Heracles on his way to accomplish the labor. But the very choice of the verse (the original context) points to the fact that he was successful in his enterprise and was able to recount it to Odysseus (while Heracles of the cento is apprehensive). Explicitly, the line is linked to v. 1, by the use of the verb ἀποπέμπω: by introducing a minor correction into Homer's text, the author of the cento ingeniously contrives to use the verb ἀποπέμπω in the two meanings ("send away, turn away" and "send off, lead, escort") in which it appeared in Od. 10, 73 and 76 (cf. n. 12). However, a reader who remembered the broader original context of v. 9 would have noticed the implicit connection between this verse and v. 4 (ἐξ ᾿Ερέβευς ἄξοντα κύνα στυγεροῦ Ἀΐδαο; cf. κύν ' ἄξοντ' in Od. 11,632), and the mention of Athena (ἰδὲ γλαυκῶπις Ἀθήνη) strengthens the connection with Athena's account of her role in Cerberus' abduction Iliad 8 (cf. commentary on n. 4).
10. ᾔδεε γὰρ κατὰ θυμὸν ἀδελφεὸν, ὡς ἐπονεῖτο. For the last line of the cento, the author chose a verse from a different passage where siblings support one another. The line was taken from Iliad 2, as Menelas, understanding the pressure his brother is under, stands by his side, showing support during the sacrifice to Zeus before the battle: αὐτόματος δέ οἱ ἦλθε βοὴν ἀγαθὸς Μενέλαος· ᾔδεε γὰρ κατὰ θυμὸν ἀδελφεὸν ὡς ἐπονεῖτο "Menelas good at war cry joined him (lit. came to him) of his own accord: for he knew in his heart, how hard pressed <his brother> was" (Il. 2, 408-409).
The verse seems to have been known: it is cited by Plutarch in the Quaestiones convivales (Plut. Mor. 706f), where the crucial point for his insertion of the quotation is the adjective αὐτόματος in v. 408, which he interprets as "of his own accord, i.e., without invitation" 30 . In the context of the cento the subject of the phrase is obviously Athena. By using this line to close his poem, the author of 30 Plutarch does not directly quote Il. 2, 408, but refers to it by using αὐτόματος in the authorial text: τὸν Μενέλαον Ὅμηρος πεποίηκεν αὐτόματον ἑστιῶντι τοὺς ἀριστεῖς τῷ Ἀγαμέμνονι παραγινόμενον· 'ᾔδεε γὰρ κατὰ θυμὸν ἀδελφεὸν ὡς ἐπονεῖτο', "Homer presented Menelas as coming without invitation (lit. of his own accord) to Agamemnon entertaining the chiefs of the army: 'for he knew in his heart, how besieged by troubles his brother was' " (Plut. Mor. 706f). the cento presented the relationship between Athena and Heracles as being closer than in Homer: a reader who remembered the original context of Il. 2, 409 would imagine Athena helping Heracles of her own accord, whereas in Homer she had been sent to his aid by Zeus (Il. 8,(364)(365).
A careful analysis of the cento shows that its author knew his Homer extremely well, not merely stitching together disjoint lines, but also subtly evoking their original context. He shows great mastery in choosing and fitting together Homeric lines, so that the pastiche required minimal alterations in the Homeric text: he was obliged to modify δέ μ' ἔπεμπεν to δ' ἀπέπεμπεν in v. 9, but this seems to be the only intervention (as we have shown, οἶκτρ' ὀλοφυρόμενοι instead of Homer's πόλλ' ὀλοφυρόμενοι in v. 8 is probably an error that was introduced into the text of the cento after Irenaeus, either by Epiphanius himself, or alternatively, it might have been present in his copy of Irenaeus; similarly for ἀνά instead of κατά, and ἀνά instead of ἅμα in v. 6). The author of the cento also masterfully eschewed breaking the rule that required using only disjoint lines in the cento, by introducing Od. 11, 38 between Il. 24, 327 and 328 (v. 6 and 8 respectively). It also seems important to note the subtle, but, in our view, unmistakable irony with which the author of the cento treats his subject (curiously, this has not been emphasized in the previous works on the cento). The irony appears as early as v. 1, as Heracles' intense emotional reaction is highlighted by βαρέα στενάχοντα; then follows the lion simile, presenting Heracles as a lion on his way to capture a dog; finally, the seems to be a tongue-in-cheek allusion to Heracles' amorousness in νύμφαι τ' ἠΐθεοί τε (v. 6), especially as it follows φίλοι of v. 5. This subtle and erudite irony would suggest a witty, cultivated, extremely well-educated and, most probably, pagan author who wrote the cento for the mere enjoyment of the form, with no serious (e.g. allegorical) intent. We would thus agree with Sowers and Prieto Domínguez who suggest that Irenaeus was using the cento as an example of a poetic technique that he could not fully approve of. True, in his comments following the cento he focuses on listing the original contexts of the lines that the author had used, and speaks of the danger that the less sophisticated readers might be fooled into believing that Homer had written about Heracles' descent into Hades. However, the words οὐδὲν γὰρ κωλύει παραδείγματος χάριν ἐπιμνησθῆναι καὶ τούτων seem to show that as a cultivated reader he was personally able enjoy the cento's playful and witty reworking and reinterpretation of the Homeric verses.
Distortion of tRNA upon Near-cognate Codon Recognition on the Ribosome*
The accurate decoding of the genetic information by the ribosome relies on the communication between the decoding center of the ribosome, where the tRNA anticodon interacts with the codon, and the GTPase center of EF-Tu, where GTP hydrolysis takes place. In the A/T state of decoding, the tRNA undergoes a large conformational change that results in a more open, distorted tRNA structure. Here we use a real-time transient fluorescence quenching approach to monitor the timing and the extent of the tRNA distortion upon reading cognate or near-cognate codons. The tRNA is distorted upon codon recognition and remains in that conformation until the tRNA is released from EF-Tu, although the extent of distortion gradually changes upon transition from the pre- to the post-hydrolysis steps of decoding. The timing and extent of the rearrangement is similar on cognate and near-cognate codons, suggesting that the tRNA distortion alone does not provide a specific switch for the preferential activation of GTP hydrolysis on the cognate codon. Thus, although the tRNA plays an active role in signal transmission between the decoding and GTPase centers, other regulators of signaling must be involved.
Proteins are synthesized from aminoacyl-tRNAs (aa-tRNAs) 2 that are delivered to the ribosome in ternary complexes with elongation factor Tu (EF-Tu) and GTP. The ribosome selects aa-tRNAs according to the sequence of codons in the mRNA template and rejects the bulk of aa-tRNAs with anticodons that do not match the given codon in each round of elongation. Correct base pairing between the mRNA codon and the anticodon of the tRNA on the 30S subunit of the ribosome provides a signal that is then transmitted to the GTPase center of EF-Tu on the 50S subunit and results in the activation of GTP hydrolysis by EF-Tu. Mismatches in the codon-anticodon complex impair GTPase activation, thereby allowing the ribosome to reject incorrect ternary complexes prior to GTP hydrolysis. Deciphering the mechanism and the specificity of signal transmission between the decoding center and the GTPase center of EF-Tu is one of the central questions in understanding the fidelity of translation.
Decoding entails a number of elemental steps. Initial binding of the ternary complex EF-Tu·GTP·aa-tRNA to the ribosome takes place codon-independently, mainly through contacts of EF-Tu with ribosomal protein L7/12, and is followed by rapid and reversible codon reading (Fig. 1A; reviewed in Refs. 1-4). The formation of the fully complementary codon-anticodon duplex induces local and global conformational changes at the decoding center of the ribosome, which lock the aa-tRNA in the codon-bound state and activate EF-Tu for rapid GTP hydrolysis (5-8). Binding of near-cognate ternary complexes that entail single mismatches between codon and anticodon does not induce these structural rearrangements or rapid GTP hydrolysis, explaining why initial tRNA selection is more accurate than can be accounted for by the energetic differences between fully matched and mismatched codon-anticodon pairs alone. Hydrolysis of GTP and dissociation of inorganic phosphate (Pi) leads to a conformational rearrangement of EF-Tu, which is followed by the release of aa-tRNA from EF-Tu·GDP and the dissociation of the factor from the ribosome (9). Aa-tRNA is then either accommodated in the peptidyl transferase center or rejected in a proofreading mechanism.
Following codon recognition and prior to the release from EF-Tu, the aa-tRNA is bound in the so-called A/T state in which it transiently assumes a conformation that is more open or distorted compared with the unbound tRNA (10 -15). The timing or the exact step at which the tRNA changes the conformation is not known. The important role of the tRNA distortion is to pull EF-Tu into its productive, GTPase-activated conformation (12,13). It is attractive to speculate that the tRNA distortion might be a key regulator of signaling between the decoding site and the GTP binding site of EF-Tu. Mismatches in the codon-anticodon complex might impair or abrogate the tRNA distortion; as a result, the GTPase conformation of EF-Tu would not be induced; hence the slow GTP hydrolysis in a near-cognate complex. In fact, the physical properties of the tRNA body are important for accurate decoding (16 -19), which would be in line with the model.
The structural details of the tRNA distortion and the concomitant rearrangements of EF-Tu that take place upon reading a correct codon on the ribosome are well-documented (10-15). However, the conformation of the tRNA reading a near-cognate codon is not known. Here we compare the formation of the transient distorted tRNA intermediate upon reading cognate or near-cognate codons. We took advantage of a rearrangement in the D stem of aa-tRNA resulting in the ~5 Å displacements in the distorted tRNA in the A/T state on a cognate codon (12). A fluorescence reporter group, proflavin, inserted at positions 16/17 in the D loop produces a fluorescent signal upon codon recognition (20). The effect can be explained by partial unstacking of proflavin from the neighboring guanines at positions 15 and 18 (20), which releases the fluorophore from static quenching (21), probably caused by photoinduced electron transfer between the fluorophore and the guanines (22). The distortion increases the accessibility of the proflavin reporter group for fluorescence quenchers such as iodide ions (20), consistent with a more open structure in the D arm region of the aa-tRNA (10-13). Here we study the time-resolved distortion of aa-tRNA at different stages of decoding on cognate and near-cognate codons by monitoring the transient fluorescence quenching in real time.
The preparation of proflavin-labeled yeast tRNA Phe proceeds in two steps: reduction of the dihydroU base at position 16/17 in the D loop by borohydride treatment followed by the attachment of proflavin (21). tRNA Phe (10 A260 units/ml in 0.2 M Tris-HCl (pH 7.5)) was mixed with NaBH4 solution (100 mg in 1 ml of KOH). After incubation for 30 min at 0 °C in the dark, the reaction was stopped by the addition of acetic acid to pH 4-5, and the tRNA was precipitated with cold ethanol and 0.3 M potassium acetate (pH 4.5). Ethanol precipitation was repeated 3-4 times to remove traces of borohydride. Proflavin labeling was carried out by adding borohydride-treated tRNA Phe to 3 mM proflavin in 0.1 M sodium acetate (pH 4.3). After incubation for 2 h at 37 °C in the dark, the reaction was stopped by the addition of 1 M Tris-HCl (pH 9) to pH 7.5. Kinetic Measurements-Fluorescence stopped-flow experiments were performed using an SX-20MV apparatus (Applied Photophysics, Leatherhead, UK), monitoring proflavin fluorescence. Excitation was at 463 nm and the fluorescence was measured after passing a 500-nm cutoff filter (KV 500, Schott). Time courses were measured at pseudo-first-order conditions with an excess of initiation complexes (1 µM) over ternary complexes (0.2 µM) and were evaluated by fitting an exponential function, F = F∞ + A × exp(−kapp × t). If necessary, additional exponential terms were included. The differential amplitudes obtained at different KI concentrations were fitted according to the Stern-Volmer equation in the form of Equation 1, to yield Stern-Volmer constants, KSV, for collisional quenching. Steady-state fluorescence measurements were carried out in a Fluorolog-3 (Horiba Jobin Yvon) spectrofluorimeter. Excitation was at 463 nm and the emission was measured at 502 nm; the KSV values were calculated as described (25).
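For orientation, the exponential evaluation described above amounts to a routine non-linear least-squares fit. The sketch below applies it to a synthetic stopped-flow trace; the amplitude, rate constant and noise level are invented purely for illustration and do not correspond to any measured trace.

```python
# Sketch of the single-exponential evaluation of a stopped-flow time course,
# F(t) = Finf + A * exp(-kapp * t). The trace below is synthetic (model data
# plus noise), generated only to illustrate the fitting step.
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, f_inf, amp, k_app):
    return f_inf + amp * np.exp(-k_app * t)

rng = np.random.default_rng(seed=1)
t = np.linspace(0.0, 0.5, 200)                               # time (s)
trace = single_exp(t, 1.0, 0.4, 30.0) + rng.normal(0, 0.01, t.size)

(f_inf, amp, k_app), _ = curve_fit(single_exp, t, trace, p0=(1.0, 0.3, 10.0))
print(f"kapp = {k_app:.1f} s^-1, amplitude = {amp:.2f}")
# Additional exponential terms would be added for multiphasic traces,
# as stated in the text.
```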
Calculations were performed using TableCurve (Jandel Scientific) or Prism (GraphPad Software). Modeling of reaction intermediates was performed in Scientist (Micromath) based on the kinetic scheme in Equation 2, where the ternary complex (TC) and the initiation complex (IC) form the initial binding complex A, which is converted to the codon recognition complex B. C represents the state after GTPase activation and GTP hydrolysis, D the state after phosphate release from EF-Tu, and E the state after accommodation of the aa-tRNA into the A site. F represents the proofreading pathway, in which aa-tRNA is rejected from the ribosome. For the cognate ternary complex, the following rate constants were used: k1 = 140 µM⁻¹ s⁻¹ and k−1 = 85 s⁻¹ for initial binding, k2 = 190 s⁻¹ and k−2 = 0.23 s⁻¹ for codon recognition, k3 = 260 s⁻¹ for GTPase activation and GTP hydrolysis, and k5 = 23 s⁻¹ for accommodation and peptide bond formation (5). The rate of Pi release (k4 = 10 s⁻¹, data not shown) was measured for the present conditions as described (9). For the near-cognate ternary complexes,
RESULTS
Transient Fluorescence Quenching Approach-To assess the extent of tRNA distortion at the D loop, the binding of the ternary complex EF-Tu·GTP·Phe-tRNA Phe (Prf) to the ribosome was followed in a stopped-flow apparatus, monitoring proflavin fluorescence in the presence of increasing concentrations of the fluorescence quencher KI, while keeping the ionic strength constant. Changes of proflavin fluorescence report the transient formation of several intermediates of decoding (20, 26). If the fluorescence in all intermediates were quenched to the same extent, then the relative amplitudes of the various kinetic steps would be expected to be the same in the presence or the absence of the quencher (Fig. 1B). The resulting differential curves (I0 − I) can be deconvoluted into exponential terms which are characterized by the apparent rate constants (kapp) and amplitudes (A0 − A) of the respective steps. To determine the Stern-Volmer quenching constant, KSV, which depends on the accessibility of the fluorophore for the quencher and, therefore, is a measure for the "openness" of the tRNA, the differential amplitudes of each step were plotted against the concentration of KI, and the plots were evaluated according to the Stern-Volmer relationship (Equation 1 in "Experimental Procedures"). The transient fluorescence quenching approach is particularly suitable for the analysis of transient intermediates in rapid, forward-committed reactions, such as the EF-Tu-dependent aa-tRNA binding to the A site. Other advantages of the transient quenching approach are the possibilities (i) to isolate a distorted tRNA intermediate from the coexisting ensemble of states that are not distorted as the reaction proceeds and (ii) to selectively monitor those tRNAs that bind to the ribosome, because only those contribute to fluorescence changes.
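To make the amplitude analysis concrete, the sketch below fits a Stern-Volmer relationship to differential amplitudes at several quencher concentrations. The exact form of Equation 1 is given in "Experimental Procedures" and is not reproduced here, so the common hyperbolic form for differential amplitudes is assumed, and the data points are invented for illustration only.

```python
# Sketch of a Stern-Volmer evaluation of differential amplitudes (A0 - A)
# obtained at increasing quencher (KI) concentrations. The hyperbolic form
# A0 - A = A0 * Ksv * [Q] / (1 + Ksv * [Q]) is assumed here, and the
# amplitude values are invented for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def diff_amplitude(q, a0, ksv):
    return a0 * ksv * q / (1.0 + ksv * q)

q_ki = np.array([0.05, 0.1, 0.2, 0.3, 0.4])        # [KI] in M
d_amp = np.array([0.17, 0.25, 0.33, 0.38, 0.40])   # A0 - A (arbitrary units)

(a0, ksv), _ = curve_fit(diff_amplitude, q_ki, d_amp, p0=(0.5, 10.0))
print(f"Ksv = {ksv:.1f} M^-1 (compare the Ksv values reported under Results)")
```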
Transient Distortions-We first monitored the changes of the tRNA conformation that take place upon reading a cognate codon. The ternary complex, EF-Tu·GTP·Phe-tRNA Phe (Prf), was mixed with the ribosomal initiation complex, 70S·mRNA·fMet-tRNA fMet, exposing a cognate UUC codon in the A site, and the changes in Prf fluorescence were monitored (Fig. 2A). As observed previously, the proflavin fluorescence transiently increased during the time course of the reaction. According to our previous detailed step assignment, the fluorescence increase reflects all steps starting from ternary complex binding to the ribosome up to GTPase activation (5, 20, 26) (Fig. 1A).
The decrease in fluorescence coincides with P i release from EF-Tu following GTP hydrolysis, the release of the aa-tRNA from EF-Tu, and the subsequent accommodation of the aa-tRNA in the A site (Fig. 1A). When the stopped-flow experiments were carried out in the presence of KI, the fluorescence of Phe-tRNA Phe (Prf) in both initial and final states was decreased due to quenching. In comparison, the fluorescence of the transient intermediate was decreased to a larger extent, indicating a higher accessibility of the fluorophore for the quencher and thus a more open tRNA intermediate (the scenario depicted in the right panel of Fig. 1B). Notably, at the highest KI concentration used, the transient fluorescence increase was no longer observed, indicating complete quenching and suggesting that the majority of the aa-tRNA assumes the distorted, more open state during the decoding of a cognate codon (see amplitudes in Table 1).
The differential curves could be accurately evaluated with a two-exponential function, accounting for the distortion of the tRNA and the relaxation back into the undistorted conformation (Fig. 2B). The KSV values determined from the amplitudes of the distortion and relaxation steps (Fig. 2C) were in the range of 11-17 M⁻¹, much higher than the value of 5 M⁻¹ obtained for the tRNA free in solution or in the ternary complex with EF-Tu·GTP (Table 1). Given that all individual rate constants are known (5, 27), the elemental step (Fig. 1A) at which the tRNA changes its conformation can be identified (Fig. 2D). The formation of the distorted intermediate proceeds with the same rate as codon recognition, about 30 s⁻¹ at the ligand concentrations used. The distorted intermediate accumulates through the early steps of decoding, i.e. codon recognition, GTPase activation, and GTP hydrolysis (Fig. 1A). The tRNA relaxation takes place at the same rate, about 10 s⁻¹, as the steps following GTP hydrolysis (Fig. 1A), which in the following are collectively denoted as the "late steps" of decoding, in contrast to the "early steps" at which the distorted tRNA intermediate is formed. Thus, the tRNA is distorted upon codon recognition and remains in an open conformation through the pre- and post-hydrolysis steps (Fig. 1A) until it is released from EF-Tu. On the other hand, the difference in the degree of tRNA distortion at the early and late states (KSV values of 11 versus 17 M⁻¹) may suggest that the structural details of the conformational rearrangement gradually change upon transition from the early to the late steps of decoding (Fig. 2C and Table 1).
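To visualise what this two-exponential description implies, the sketch below evaluates the textbook expression for the intermediate of an irreversible two-step scheme, using the approximate formation (~30 s⁻¹) and relaxation (~10 s⁻¹) rates quoted above; the closed-form expression is a simplification of the full decoding scheme and is shown only to illustrate the rise-and-decay of the distorted-tRNA population.

```python
# Rise-and-decay of a transient intermediate in an irreversible two-step
# scheme A -> B -> C, evaluated with the approximate rate constants quoted
# above (formation ~30 s^-1, relaxation ~10 s^-1). This closed-form solution
# is a rough illustration only, not the authors' model.
import numpy as np

k_form, k_relax = 30.0, 10.0  # s^-1

def intermediate_fraction(t):
    return k_form / (k_form - k_relax) * (np.exp(-k_relax * t) - np.exp(-k_form * t))

for t in (0.01, 0.05, 0.1, 0.2, 0.5):
    print(f"t = {t:4.2f} s  distorted fraction ~ {intermediate_fraction(t):.2f}")
# The population peaks near ln(k_form/k_relax)/(k_form - k_relax) ~ 0.05 s and
# then relaxes, mirroring the transient fluorescence increase described above.
```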
When analogous experiments were carried out with ribosome complexes exposing a near-cognate CUC codon, a fluorescence increase of Phe-tRNA Phe (Prf) was observed followed by a very slow decrease (Fig. 3A). Previous analysis suggested that most of the amplitude of the fluorescence increase is due to the formation of the codon-recognition complex (Ref. 5 and below). Subsequent GTPase activation is impaired and GTP hydrolysis proceeds with a rate of about 0.1 s⁻¹, which is rate-limiting for the following steps of accommodation and proofreading that are observed as a fluorescence decrease (Fig. 3A). Because of slow GTPase activation, the codon-recognition complex accumulates as a high-fluorescence intermediate, and the addition of KI predominantly decreases the fluorescence of that intermediate (Fig. 3B). As determined from the KI dependence of the differential amplitudes (Fig. 3C), the KSV values of the early and late steps are 14 M⁻¹ (Table 1), which indicates the formation of an open tRNA intermediate also when the codon-anticodon complex contains a mismatch. Thus, upon reading a near-cognate codon, tRNA is distorted at the same step (upon codon recognition) and to an extent similar to that on a cognate codon.
Kinetic modeling based on the known kinetic constants (5) suggests that 53% of the ribosome-bound tRNA is in the codon-recognition complex (Fig. 3D), which is in excellent agreement with the 56% fluorescence amplitude of the intermediate that is strongly quenched by KI (Table 1). [Notes to Table 1: … (25); the calculation of the transient differential amplitudes A0 − A is not applicable (n.a.). The KSV values for the free tRNA Phe (Prf) and free proflavin are 5.3 ± 0.1 M⁻¹ and 70 ± 1 M⁻¹, respectively. The KSV value for the initial binding complex was measured in a model system with non-programmed ribosomes (28). Early steps include pre-hydrolysis and GTP hydrolysis intermediates; late steps reflect post-hydrolysis steps (Fig. 1A). The distribution of intermediates is shown in Figs. 2D and 3D. n.d., complex not detectable.] From the distribution of the decoding intermediates, about 25% of aa-tRNA remains in the initial binding complex throughout the reaction (Fig. 3D), which may reduce the overall KSV value observed for an ensemble represented by a mixture of states. Therefore, the true value of KSV of either the early or the late steps is likely to be even higher than 14 M⁻¹. This would suggest that the tRNA distortion in the near-cognate pre-hydrolysis state may be even more extensive than in the cognate case (>14 M⁻¹ compared with 11 M⁻¹). The KSV values of the post-hydrolysis states are not significantly different (>14 M⁻¹ and 17 M⁻¹ on near-cognate and cognate codons, respectively), suggesting that the conformation of the tRNA is similar in those states.
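The kind of kinetic modelling referred to above can be reproduced in outline by integrating the rate equations of the scheme in Equation 2. The sketch below does this for the cognate pathway using the rate constants listed under "Kinetic Measurements"; treating initial binding as pseudo-first-order at an assumed 1 µM initiation-complex concentration and omitting the rejection branch F are simplifications made only for this illustration, and the near-cognate parameters needed to reproduce Fig. 3D are not included.

```python
# Numerical integration of a simplified, linearised version of the decoding
# scheme TC + IC <-> A <-> B -> C -> D -> E (Equation 2), using the cognate
# rate constants quoted in the Methods. Pseudo-first-order initial binding
# at an assumed ~1 uM initiation-complex concentration and the omission of
# the rejection branch F are simplifications made for this sketch only.
import numpy as np
from scipy.integrate import solve_ivp

k1 = 140.0 * 1.0   # s^-1, pseudo-first-order: 140 uM^-1 s^-1 x ~1 uM IC
k_1, k2, k_2 = 85.0, 190.0, 0.23
k3, k4, k5 = 260.0, 10.0, 23.0

def rhs(t, y):
    tc, a, b, c, d, e = y
    return [
        -k1 * tc + k_1 * a,                      # free ternary complex
        k1 * tc - (k_1 + k2) * a + k_2 * b,      # initial binding complex A
        k2 * a - (k_2 + k3) * b,                 # codon-recognition complex B
        k3 * b - k4 * c,                         # after GTP hydrolysis (C)
        k4 * c - k5 * d,                         # after Pi release (D)
        k5 * d,                                  # accommodated (E)
    ]

sol = solve_ivp(rhs, (0.0, 1.0), [1, 0, 0, 0, 0, 0], method="LSODA",
                dense_output=True)
for t in (0.01, 0.05, 0.2, 1.0):
    tc, a, b, c, d, e = sol.sol(t)
    print(f"t = {t:4.2f} s  B = {b:.2f}  E = {e:.2f}")
```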
Conformation of the tRNA in the Isolated Intermediates-To further substantiate the assignment of early and late states, we have chosen conditions at which several intermediates of A-site binding can be stalled. As a model for the initial, codon-independent binding complex, we have used EF-Tu·GTP·Phe-tRNA^Phe(Prf) bound to vacant ribosomes (28). The K_SV value of that complex, as determined by transient fluorescence quenching, 7 M⁻¹ (Table 1), is in excellent agreement with the value determined at steady-state conditions.³ The value is not much different from that of unbound or EF-Tu-bound tRNA, suggesting that in the initial binding complex the tRNA preferentially assumes a non-distorted conformation.
To stall the codon-recognition complex, GTP hydrolysis had to be inhibited. For this purpose, we utilized a mutant of EF-Tu which was impaired in GTP hydrolysis by replacing the catalytic His-84 with Ala (29). Upon binding to the cognate codon, the ternary complex EF-Tu(H84A)·GTP·Phe-tRNA^Phe proceeds through all steps up to codon recognition with rates similar to the wild-type complex, but is stalled prior to GTP hydrolysis (29) (Fig. 4A). The GTPase-activated state may be transiently sampled in this complex (29), but cannot be stabilized by the interactions of His-84 with the sarcin-ricin loop of the ribosome, as seen with the wild-type EF-Tu (13), and therefore does not accumulate. The overall amplitude of the early steps of decoding appears smaller with mutant compared with the wild-type EF-Tu (Fig. 4A); this can be explained by small structural differences between the A/T complexes in the pre- and post-hydrolysis states (12,13). The K_SV value of the pre-hydrolysis state, as determined by the transient fluorescence quenching approach, is about 10 M⁻¹, which again indicates an aa-tRNA conformation that is more open than in the ternary complex or in free tRNA. The value is similar to that estimated for the early steps, but is somewhat lower than for the late steps of the uninterrupted decoding with wild-type EF-Tu. The contribution of the undistorted initial binding state should be relatively small in this case, as the tRNA is predominantly present in the codon-recognition state (Fig. 4B). This suggests that the K_SV = 10 M⁻¹ reflects the distortion of aa-tRNA during decoding of a cognate codon. When a near-cognate codon was used, the aa-tRNA bound in the A/T complex with EF-Tu(H84A) was also distorted (Fig. 4C); the K_SV value of 11 M⁻¹ is similar to that in the cognate complex (Table 1). However, because in the near-cognate case about 25% of aa-tRNA remains in the initial binding step (Fig. 4D), which is characterized by a lower K_SV, the true K_SV value for the near-cognate codon-recognition complex is likely >11 M⁻¹. The data suggest that in the pre-hydrolysis complex the tRNA assumes a distorted conformation, regardless of whether the A-site codon is cognate or near-cognate. Finally, for better comparison with the structures of the complexes obtained by cryo-EM and crystallography, we also determined the K_SV value for aa-tRNA stalled in the A/T state by kirromycin. The antibiotic does not affect the early steps of decoding up to GTP hydrolysis and P_i release from EF-Tu, but blocks the conformational change of EF-Tu that leads to the release of aa-tRNA from the factor (9,20). In this state, the cognate aa-tRNA is strongly distorted, with K_SV = 16 M⁻¹ (Table 1), consistent with the value obtained for the late decoding state in full decoding. This finding further substantiates the notion that the conformation of the tRNA gradually changes toward a more open structure upon progressing from the pre- to the post-hydrolysis states. As kirromycin did not stabilize the binding of aa-tRNA on the near-cognate codon (data not shown), analogous experiments with the near-cognate A/T state were not feasible.
DISCUSSION
The Timing of tRNA Distortion-The present results suggest the following timing of the conformational rearrangements of aa-tRNA during decoding. In the ternary complex bound to the ribosome before codon reading (Fig. 1), aa-tRNA retains its undistorted, closed conformation, although transient excursions into an open conformation may occur. Upon cognate codon recognition, the tRNA assumes a conformation which is significantly more open at the D loop than in the preceding initial binding complex or in the free ternary complex with EF-Tu·GTP (Table 1). The solvent exposure of the D loop increases even further in the post-hydrolysis state up to the step when the tRNA is released from EF-Tu and accommodated in the A site (Fig. 1), which allows the tRNA to relax into the undistorted conformation. The gradual change in the tRNA arrangement between the early and late steps of decoding may reflect structural differences between the pre- and post-hydrolysis states, although the crystal structures of the tRNA in the two states are very similar (12,13). Alternatively, as the codon-recognition step presumably entails a number of substeps and the tRNA rapidly samples different conformations and substates (6,7,30), the observed solvent exposure of a given state may be a global value that represents a mixture of undistorted and distorted states. The open tRNA conformation may be favored by contacts with the ribosome as soon as codon recognition takes place and may be further stabilized in the post-hydrolysis state.
Distortion of tRNA in the Near-cognate A/T State-The tRNA plays an important role in activation of GTP hydrolysis. An intact aa-tRNA is required for the GTPase activation of EF-Tu (18) and mutations in the D arm affect GTP hydrolysis (16,31). The structures suggest how the tRNA distortion on a cognate codon may result in the GTPase activation of EF-Tu (12,13). This raises the question whether the tRNA distortion provides a switch that signals the formation of a correctly matched cognate codon-anticodon complex to the GTPase center. In the simplest case, a mismatch would impair the formation of the distorted tRNA intermediate, thus precluding the structural rearrangements that are induced by cognate codon recognition and are required for GTPase activation of EF-Tu. The present data demonstrate that the formation of the open tRNA intermediate does not depend on cognate codon recognition (Table 1): When a near-cognate codon is recognized, the tRNA is distorted as well, and the timing of the rearrangements is similar to the one on the cognate codon; yet GTP hydrolysis is more than 2000-fold slower in the near-cognate compared with the cognate complex. Thus, the tRNA distortion alone does not seem to provide the specific signal for the preferential activation of GTP hydrolysis by EF-Tu in the cognate case. Additional regulators, presumably ribosome elements, must play a role in sensing and transmitting the signals that communicate the decoding to GTPase activation (see below).
In a more complicated scenario the details of the distortion may be different on the cognate and near-cognate codons, thereby affecting the GTPase activation in different ways. Given the similarity of the quenching constants of the distorted intermediates formed on cognate and near-cognate codons, large differences in the respective tRNA conformations seem unlikely, although small differences cannot be excluded. The D loop of the tRNA bound on the near-cognate codon may be even slightly more open than on the cognate one (see "Results"). Structural differences in tRNA regions distant from the D loop cannot be ruled out, but seem unlikely, given the rigidity of the molecule and the coupling in rearrangements of the tRNA elbow region and the acceptor stem interacting with EF-Tu (12,32,33).
Consequences for GTP Hydrolysis-GTP hydrolysis proceeds through the attack of the hydrolytic water molecule on the γ-phosphate of GTP in EF-Tu. His-84 in E. coli EF-Tu is the active-site residue that stabilizes the GTPase transition state (13,29,34). Upon GTPase activation on the ribosome, His-84 has to move toward the γ-phosphate, and this movement should be induced only when a correct codon-anticodon complex is formed. The tRNA distortion affects the relative orientation between tRNA and EF-Tu (10-13), which leads to a subtle rearrangement in EF-Tu and stabilization of the catalytically active orientation of His-84 by A2662 of the sarcin-ricin loop of 23S rRNA (13), thereby ultimately resulting in GTPase activation. The mechanism of activation must be precisely tuned for each cognate aa-tRNA, as all of them exhibit similar kinetic properties despite a wide variety of structural features (35). Similarly, any mismatch in the codon-anticodon complex impairs GTPase activation, regardless of the thermodynamic stability of the respective codon-anticodon complexes or their docking partners at the decoding site (27). The uniformity of mismatch recognition suggests a global response mechanism, which would be consistent with the idea that all conformational changes that occur upon cognate codon recognition, including domain closure of the 30S subunit, distortions of the tRNA, and rearrangements in EF-Tu, are essential for the precise positioning of the GTPase center of EF-Tu at the sarcin-ricin loop. Although the tRNA is distorted also in the near-cognate A/T state, even subtle changes in the orientation of tRNA and EF-Tu could cause defects in GTPase activation by preventing A2662 from properly placing His-84 into the active site (13). In this framework, the tRNA mutants that activate GTP hydrolysis on a near-cognate codon (16,19) appear to have found their own unique conformational solution to dock EF-Tu on the sarcin-ricin loop. However, other contacts in the decoding complex may specifically affect the stringency of decoding, e.g. helix 14 and helix 8 of 16S rRNA that negatively regulate GTP hydrolysis (36), or the interactions between helix 5 and domain 2 of EF-Tu (37). The structural basis for the very strong effect of ribosomal protein L7/12 on the GTPase activity of EF-Tu (38,39) remains to be clarified. Finally, the ribosome may play an active role in monitoring the correct codon-anticodon interaction using a network of rRNA and proteins from both ribosomal subunits, as suggested by the recent crystal structure of the proofreading complex (40). Further experiments will be necessary to determine the role of each interaction element in the A/T state in the codon-specific control of the GTPase activation of EF-Tu.
The curvature perturbation at second order
We give an explicit relation, up to second-order terms, between scalar-field fluctuations defined on spatially-flat slices and the curvature perturbation on uniform-density slices. This expression is a necessary ingredient for calculating observable quantities at second order and beyond in multiple-field inflation. We show that traditional cosmological perturbation theory and the `separate universe' approach yield equivalent expressions for superhorizon wavenumbers, and in particular that all nonlocal terms can be eliminated from the perturbation-theory expressions.
Cosmological gauges
The unperturbed cosmology is taken to be described by a spatially flat Robertson-Walker metric ds² = −dt² + a(t)² dx², where a(t) is the scale factor and H = ȧ/a is the Hubble parameter. An overdot denotes a derivative with respect to cosmic time t.
Choice of slicing.-In the unperturbed universe, spatial hypersurfaces of fixed time t are associated with a number of physical properties: they are slices of uniform energy density, uniform Hubble parameter, zero intrinsic Ricci curvature, and so on. Once we add perturbations these hypersurfaces continue to exist but typically no longer coincide. To compare the value of some physical quantity such as the density ρ between the perturbed and unperturbed universes we pick one set of hypersurfaces to use as a reference. This is said to be a choice of slicing. The perturbation in a physical quantity is defined to be the difference between its value on the same hypersurface in the perturbed and unperturbed universes. A choice of slicing, together with a rule for determining the spatial coordinates on each slice, is called a choice of gauge. In principle we can fix the slicing and use whatever coordinate system we like to describe it, but in practice it is convenient to choose coordinates so that slices of constant t coincide with the slicing. We describe coordinates with this property as adapted to the slicing. Having chosen a slicing, the metric can be written in adapted coordinates using Arnowitt-Deser-Misner (ADM) quantities,

ds² = −N² dt² + h_ij (dx^i + N^i dt)(dx^j + N^j dt), (2.2)

where N is the lapse function and N^i the shift vector. The spatial metric h_ij is used to raise and lower spatial indices, e.g. N_i = h_ij N^j. The curvature perturbation associated with this slicing, denoted ψ, is defined by²

e^{6ψ} ≡ det(h_ij/a²). (2.3)

• Spatially flat slicing. This has det(h_ij/a²) = 1 and therefore ψ is identically zero. In the absence of gravitational waves there exist coordinates for which h_ij = a² δ_ij. The Ricci curvature of each spatial hypersurface is zero.
If gravitational waves are present then h_ij = a² (e^γ)_ij, where γ_ij is transverse and traceless. This preserves the condition det(h_ij/a²) = 1, but the Ricci curvature is no longer zero. In this context we should more properly speak of a 'uniform Hubble slicing'.
• Comoving slicing. This is chosen so that there is no net energy flux measured on a fixed slice. Applied to the energy-momentum tensor in a holonomic basis adapted to the slicing this implies T_{0i} = 0. The curvature perturbation defined by this slicing is conventionally denoted R.³

² This definition is not the same as that of the review article by Malik & Wands [6]. It agrees with the quantity used at the nonlinear level by Maldacena [1]. The definition (2.3) was used to prove conservation of ψ = ζ in the uniform density gauge at a classical level by Shellard & Rigopoulos [7], Lyth, Malik & Sasaki [8] and Weinberg [9,10]. More recently the proof has been strengthened to an operator statement in quantum mechanics by Assassi, Baumann & Green [11].

³ There are differing sign conventions for R. Our definition gives ζ = R + O(k/aH)² on superhorizon scales, but other definitions reverse this to ζ = −R + O(k/aH)².
• Uniform density slicing. The density ρ is constant on a fixed slice. The curvature perturbation is conventionally written ζ. In the absence of gravitational waves, coordinates exist in which the spatial metric can be written h_ij = a² e^{2ζ} δ_ij.
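As a one-line consistency check of this parametrization against the definition (2.3) (a worked step added here for clarity):

```latex
% With h_{ij} = a^2 e^{2\zeta}\delta_{ij} (no vector or tensor modes),
\det\!\left(h_{ij}/a^2\right) = \det\!\left(e^{2\zeta}\delta_{ij}\right) = e^{6\zeta},
% so the curvature perturbation of the uniform-density slicing is
\psi = \tfrac{1}{6}\ln\det\!\left(h_{ij}/a^2\right) = \zeta .
```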
To first order in fluctuations it is known that R and ζ agree on superhorizon scales, in the sense that R − ζ = O(k/aH)² [6]. We will reproduce this result by direct calculation in §2.3 below. In this paper we focus on ζ because it is known to be conserved to all orders in perturbation theory (including quantum effects) when the dynamics are adiabatic [7][8][9][10][11][12]. To our knowledge the equivalence between ζ and R, and conservation of R in an adiabatic regime, have been explicitly demonstrated only to second order [13].
In the absence of isocurvature perturbations, ζ can be used to set initial conditions for the CMB anisotropy. Therefore it represents a convenient way to express observable quantities. But inflationary calculations are often technically simplest in the spatially flat gauge, where the curvature perturbation is zero and fluctuations are measured by the scalar field perturbations δφ_α. If we take advantage of this simplicity then a rule is needed to connect the δφ_α to ζ. As explained in §1, our objective is to compute this rule to second order in the δφ_α. This approach was used by Guth & Pi [14] and Bardeen, Steinhardt & Turner [15] in the earliest estimates of the density perturbation. These calculations exploited the technical simplicity of the flat gauge to compute the amplification of quantum effects, after which a variety of arguments were used to estimate the first-order, single-field result ζ ∼ −Hδφ/φ̇ [14]. The relation between these methods was clarified by Lyth [16]. Later, the first-order result was extended to multiple-field scenarios by Salopek & Bond [17], who used it to generate numerical results. Formulae for more complex models were given by Sasaki & Stewart using the 'separate universe approach' [18][19][20]. More recently, Maldacena computed the relationship between ζ and δφ in a single-field model and discussed its application to higher n-point functions [1].
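To make the logic of such first-order estimates concrete, here is a compressed version of the standard single-field argument (added for clarity; the overall sign depends on the convention chosen for ψ in (2.3)):

```latex
% Shift from flat to uniform-density slices: require
% \delta\rho(t') = \delta\rho - \dot\rho\,\xi^0 = 0, hence
\xi^0 = \frac{\delta\rho}{\dot\rho}, \qquad
\zeta_1 = H\,\xi^0 = \frac{H\,\delta\rho}{\dot\rho},
% and with \dot\rho = -3H\dot\phi^2 (exact for a single scalar) and the
% slow-roll estimate \delta\rho \simeq V'\,\delta\phi = -3H\dot\phi\,\delta\phi,
\zeta_1 \simeq \frac{H\,\delta\phi}{\dot\phi}\,.
```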
Changing slicing
To connect quantities defined by different slicings, such as ζ and δφ α , we must change the gauge. In the literature this is sometimes described as a coordinate transformation. If not interpreted correctly this description is confusing because under a coordinate transformation any tensor transforms covariantly, and we shall see that this is not the same as the transformation law under a change of gauge. The difference arises because to change gauge we first change the slicing and then change the coordinates to adapt to it.
Begin with some initial slicing and adapted coordinates x^µ. Suppose we wish to switch to a different set of slices which are slightly displaced. At any point p the displacement to the matching point p′ on the new surface is written

x^µ(p′) = e^{L_ξ} x^µ(p). (2.4)

The Lie derivative L_ξ is understood to act on the coordinates x^µ(p) as if they were the components of a contravariant vector field. This abuse of notation is unfortunate but conventional. The vector ξ^µ associated with the Lie derivative is called the gauge parameter. Given two slicings our task will usually be to solve for an appropriate gauge parameter. Now introduce a second set of coordinates x^µ′ adapted to the new slicing, with the time coordinate adjusted so that the numerical value of time agrees on both slices. By a 'gauge transformation', we mean a map from tensors at p expressed in the basis dx^µ to tensors at p′ expressed in the basis dx^µ′. This is both a change of evaluation point and a change of basis. Writing T = T_{µ···ν···}(p) dx^µ ⊗ ··· ⊗ dx^ν ⊗ ··· for a generic tensor at p, and T′ for its gauge transform, the required map is

T′_{µ′···ν′···}(p′) = [e^{L_ξ} T]_{µ···ν···}(p), (2.5)

where on the right-hand side the Lie derivative is understood to mean its action on the components of T in the original basis.⁴ In the context of perturbation theory, the displacement between hypersurfaces is small and therefore so is the gauge parameter ξ^µ. In this paper we are interested in computing the relationship between quantities defined on different slicings up to second order in amplitude. Hence, we must work to the same order in powers of ξ^µ. We break ξ^µ into temporal and spatial gauge parameters ξ⁰ and ξ^j, corresponding to the time and space components of ξ^µ.
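Written out explicitly to the second order used throughout this paper (an expansion added here for reference; it follows directly from the exponential map above):

```latex
% Second-order expansion of the gauge map generated by \xi^\mu:
x^\mu(p') = e^{\mathcal{L}_\xi} x^\mu(p)
          = x^\mu(p) + \xi^\mu + \tfrac{1}{2}\,\xi^\nu \partial_\nu \xi^\mu + O(\xi^3),
% and, acting on the components of a generic tensor T,
e^{\mathcal{L}_\xi} T = T + \mathcal{L}_\xi T + \tfrac{1}{2}\,\mathcal{L}_\xi^2 T + O(\xi^3).
```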
Transformation of field fluctuations.-Using Eq. (2.5) we can compute how each quantity of interest transforms between slicings. A field fluctuation δφ_α transforms according to the rule

δφ_α(t′) = δφ_α(t) + φ̇_α ξ⁰ + ξ⁰ δφ̇_α + ξ^j ∂_j δφ_α + ½ ξ⁰ (ξ⁰ φ̈_α + ξ̇⁰ φ̇_α) + ½ φ̇_α ξ^j ∂_j ξ⁰ + ··· . (2.6)

On the right-hand side, φ̇_α, φ̈_α (and so on) represent derivatives of the background field with respect to time. Because we adjusted the time coordinates of the slices to agree it is not necessary to specify whether the derivatives are with respect to t or t′. The symbols δφ_α(t) and δφ_α(t′) denote, respectively, field fluctuations defined on the first slicing of constant t, and the second slicing of constant t′.
The time derivative of a field fluctuation transforms according to the analogous rule, Eq. (2.7), obtained by applying (2.5) to δφ̇_α.

Transformation of metric components.-We also require transformation rules for the metric components N, N^i and h_ij. Bearing in mind that we intend to compute ζ in terms of the flat-gauge perturbations δφ_α we simplify these expressions by assuming that the initial slicing corresponds to the flat gauge where h_ij = a² δ_ij. We do not yet impose any restriction on the final slicing. Instead of working with the lapse directly it is more convenient to work in terms of its perturbation α, defined by N ≡ 1 + α. The transformation rule for α is given in Eq. (2.8).⁵

⁴ Note that this map must be phrased carefully. The statement T → T′ = e^{L_ξ} T sometimes seen in the literature is not correct, and does not agree with (2.5). It yields a tensor T′ in the tangent space at p, rather than p′.

⁵ Contraction of repeated indices in the lowered position implies summation with the Euclidean metric δ_ij.
Likewise, the transformation rule for the shift vector is given in Eq. (2.9).
Finally, the spatial metric transforms according to (2.10). From (2.3) and (2.10) we can compute the curvature perturbation in the new slicing; the result is Eq. (2.11). The definition ψ ∼ det(h/a²) implies that the curvature perturbation measures modulation in proper volume from place to place on a fixed slice. Eq. (2.11) exhibits the expected invariance under volume-preserving transformations of the spatial coordinates which do not change the slicing. These are generated by gauge transformations with ξ⁰ = 0 and divergenceless ξ^j, viz. ∂_j ξ^j = 0. They include the spatial rotations.
Gauge transformations with ξ⁰ ≠ 0 change the slicing. For such transformations there is a small second-order volume modulation even if ξ^j is divergenceless, provided it is time-dependent and ξ⁰ is spatially dependent. This arises from the second-to-last term in the first line of (2.11). If ξ^j is time-independent there is no modulation, and no contribution to the curvature perturbation. Eq. (2.9) shows that a time-independent transformation of this kind negligibly perturbs the shift vector N^j when all k-modes are associated with superhorizon scales for which k/aH ≪ 1.⁶

Restriction to diagonal metric.-Normally only ξ⁰ is needed to select the slicing of interest, leaving ξ^j undetermined. As described above, this ambiguity is irrelevant if ξ^j becomes time-independent and volume-preserving when all modes are superhorizon. More generally we could choose ξ^j to bring h_{i′j′} to a diagonal form. This requires the first-order perturbation to satisfy ∂_i ξ^j_1 = 0, which forces ξ^j_1 to be spatially homogeneous (but perhaps time-dependent) and therefore volume-preserving. At second order the diagonal constraint is more complex, but entails the condition (2.12). When ξ^j is chosen to satisfy (2.12) it can be checked that ψ′ becomes independent of its precise value; we find the simplified expression (2.13). The right-hand side of (2.12) decays when all wavenumbers are associated with superhorizon scales. Therefore, on these scales, any rigid volume-preserving spatial gauge transformation leaves h_{i′j′} diagonal and allows ψ′ to be computed using the simplified expression (2.13). Conversely, because different possibilities for ξ^j change ψ′ when k/aH ≳ 1 there is no unique value of the curvature perturbation associated with subhorizon scales. In practice this is harmless because on these scales ψ′ has no clear significance.
Spatially flat slicing
Now we apply this formalism to translate between the spatially flat slicing and the uniform-density slicing. In the language of §2.1, slices of constant t correspond to the flat gauge and slices of constant t′ correspond to the uniform density gauge. The transformed curvature perturbation ψ′ will be ζ.
We begin from coordinates in which the flat-gauge spatial metric is diagonal, viz. h ij = a 2 δ ij . We choose ξ 0 to select an appropriate final slicing and assume that the spatial gauge transformation is chosen to satisfy (2.12).
Lapse and shift.-Before embarking on the calculation, we use this section to collect formulae for the lapse and shift in the spatially flat gauge. Eq. (2.11) shows that these are not directly required to compute ζ-this expression does not contain α, and its N j dependence drops out when all wavenumbers are associated with superhorizon scales. However, they are required indirectly because the density perturbation which will be used to determine ξ 0 depends on the metric. Moreover, the lapse and shift are elements in an important constraint equation-the Hamiltonian constraint-which we will use later to simplify our results.
We work perturbatively in the scalar field fluctuation δφ_α. We break the shift vector N^j into irrotational and solenoidal components ϑ and β, where ∂_j β_j = 0. Then ϑ, β_j and the lapse perturbation α can be expanded in powers of δφ_α, giving

α = α₁ + α₂ + ···, ϑ = ϑ₁ + ϑ₂ + ···, β_j = β_{1|j} + β_{2|j} + ···, (2.14)

where the term α_n contains exactly n factors of δφ, and likewise for ϑ_n and β_{n|j}. We neglect tensor perturbations, which correspond to gravitational waves. These could be kept but because they are represented by transverse traceless tensors γ_ij they can enter a scalar quantity such as ζ only at third order or above. With these choices the lapse perturbations satisfy the constraints quoted in Refs. [1,21,22] (Eqs. (2.15)-(2.16b)). The expression for α₂ already signals a potential difficulty because it involves the nonlocal inverse Laplacian ∂⁻², defined as multiplication by −1/k² in Fourier space. Terms of this nature cannot arise in the separate universe approach because it corresponds to an expansion in purely positive powers of k. To demonstrate that a perturbation-theory expression involving such terms is compatible with a separate-universe calculation we must show carefully how all nonlocal pieces disappear from the result. We will do this explicitly in §2.3. The first-order component of the scalar shift satisfies the constraint (2.17a) [1,21], where φ̇² ≡ φ̇_α φ̇_α and V_α ≡ ∂_α V (and likewise for higher derivatives). At second order the analogous constraint (2.17b) holds [22]. At linear order β_{1|j} = 0. The second-order component β_{2|j} can appear in scalar quantities only at third order or above because it is divergenceless, and therefore is not needed.
Hamiltonian constraint.-Eqs. (2.17a)-(2.17b) are the first- and second-order parts of the 'Hamiltonian constraint', so called because in Einstein gravity it is enforced by the lapse N acting as its Lagrange multiplier. Because the lapse is associated with time reparametrization invariance the Hamiltonian constraint plays a role analogous to the Hamiltonian in conventional theory.
We are primarily interested in the case where all k-modes are associated with superhorizon scales. In this limit, ∂²ϑ_n/a² decays [9, 10, 23] and the Hamiltonian constraint reduces to its superhorizon form, Eq. (2.18).
The uniform-density curvature perturbation
In this section we compute the gauge transformation parameter ξ 0 . To simplify the calculation we take ξ j = 0 from the outset. On superhorizon scales this will satisfy (2.12), giving a diagonal spatial metric and trivial lapse. In §3 we will see that this statement (promoted to all orders in perturbation theory) is the basis of the separate universe approach.
Density perturbation.-Each slicing defines a field of normal vectors n^µ which are orthogonal to the slices. We normalize so that n_µ n^µ = −1. The density measured by an observer on a fixed spatial slice is ρ = T_{µν} n^µ n^ν, where T_{µν} is the energy-momentum tensor. In a holonomic basis of coordinates adapted to the slicing, this gives

ρ = −T^{00}/g^{00}. (2.19)

Therefore, up to second order, the perturbation in the density will be

δρ = δT^{00} + ρ δg^{00} + (δT^{00} + ρ δg^{00}) δg^{00}. (2.20)

Eqs. (2.19) and (2.20) apply for any slicing. Our interest lies in the uniform-density slicing, for which the density perturbation δρ(t′) on slices of constant t′ is identically zero. Using the gauge-transformation formulae collected in §2.1 it is possible to express δρ(t′) in terms of quantities defined on the original flat slices of constant t; the result is Eq. (2.21). The combination T_{ij} + ρ g_{ij} in the final bracket depends only on background quantities. Setting the left-hand side equal to zero, Eq. (2.21) represents an equation for the gauge parameter ξ⁰ which can be solved to find the transformation between flat and uniform-density slices.
Curvature perturbation.-The solution is given in Eq. (2.22). In this expression, all perturbative quantities on the right-hand side are evaluated on spatially flat slices. After substitution in (2.11) with ξ^j = 0, we find the result (2.23). Eq. (2.23) is one of our central results. It gives the curvature perturbation on uniform-density slices in terms of the flat-gauge density perturbation, the 0i components of the flat-gauge energy-momentum tensor and metric, and the scalar part of the flat-gauge shift vector encoded in ϑ₁. It applies for any matter content. For applications to inflation the matter theory is given by an arbitrary number of scalar fields interacting via a potential V. The energy-momentum tensor is given in Eq. (2.24); it yields a background density ρ = φ̇²/2 + V. The density perturbation on flat slices is given in Eq. (2.25) and the 0i component in Eq. (2.26).

Explicit expressions.-We can now give explicit expressions for the first- and second-order components of ζ. We define these to satisfy ζ = ζ₁ + ζ₂ + ···, and as above ζ_n contains terms with exactly n powers of the field perturbations. Dropping terms which decay when all wavenumbers correspond to superhorizon scales, we find Eqs. (2.27a)-(2.27b). These expressions are exact, except for the neglect of decaying terms. In deriving them we have made no use of the slow-roll approximation. Eq. (2.27b) shows that, when derived using this method, the second-order curvature perturbation contains α₂ and therefore apparently depends on the nonlocal combination which appears in (2.16b). If true this would be perplexing. The explicit single-field expression given by Maldacena contains no such terms [1]. The resolution is that, in Maldacena's calculation, the second-order lapse was removed entirely by the Hamiltonian constraint (2.18).
The existence of constraints means that Eqs. (2.27a)-(2.27b) can be written in a number of superficially different ways. One reason for doing so is that, because these rewritten formulations contain different terms, their numerical properties can differ even though they are mathematically equivalent. If we choose to exploit this freedom, however, we must remember that the Hamiltonian constraint mixes terms of different orders in the field fluctuations δφ_α. Therefore, in quantities which depend on both ζ₁ and ζ₂, we must use expressions which have been simplified in the same way. Failure to do so will lead to a mismatch. In particular this applies when computing the three-point function ⟨ζ(k₁)ζ(k₂)ζ(k₃)⟩ from n-point functions of the field fluctuations.
One option is to remove α₂ entirely. This will leave a purely local expression comparable to the one obtained by Maldacena. This choice gives Eqs. (2.28a)-(2.28b). Different forms for ζ₂ can be obtained by further use of the first-order Hamiltonian constraint. For example, the cross-term δφ̇_α δφ_β could be eliminated entirely at the expense of a more complex coefficient for the δφ_α δφ_β term. If we are prepared to tolerate residual nonlocal terms, we could alternatively use the Hamiltonian constraint to simplify ζ₁ and ζ₂ as much as possible; one such choice gives Eqs. (2.29a)-(2.29b). As above the δφ̇_α δφ_β terms can be removed, if desired, using the first-order constraint. This form of ζ₁ is especially simple, being the multiple-field generalization of the estimate ζ ∼ Hδφ/φ̇ obtained in early calculations [14][15][16]. It coincides with the first-order expression obtained by direct calculation of the comoving-gauge curvature perturbation R, and therefore reproduces the first-order relation ζ₁ = R₁ on superhorizon scales which was discussed in §2.
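For concreteness, the multiple-field generalization referred to here plausibly takes the standard form below; this is a hedged sketch consistent with the surrounding text, not a quotation of the original display (2.29a), and the overall sign depends on the conventions adopted in (2.3):

```latex
% Sketch of the 'simple' first-order expression, generalizing
% \zeta \sim H\,\delta\phi/\dot\phi to several fields:
\zeta_1^{\rm simple} = \frac{H\,\dot\phi_\alpha\,\delta\phi_\alpha}{\dot\phi^2},
\qquad \dot\phi^2 \equiv \dot\phi_\alpha \dot\phi_\alpha .
```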
Because it requires the constraint equations this relationship is a consequence of Einstein gravity and need not hold more generally.
Eqs. (2.28a)-(2.28b) and (2.29a)-(2.29b) are exactly equivalent. Neither involves any form of approximation, except that because we have neglected terms which decay when all wavenumbers are associated with superhorizon scales they are valid only in this limit. Which we use is a matter of our own convenience. The only thing we cannot do is mix (for example) the simple first-order expression ζ₁^simple with the local second-order result ζ₂^local, or vice versa. Which set is most convenient will depend on the problem at hand. Eq. (2.29b) shows that it is possible to compute the curvature perturbation knowing only φ̇_α, H and V_α from the background, provided we are prepared to tolerate the nonlocal terms. In contrast with (2.28b) it is not necessary to know the second derivative V_αβ and we do not need a term quadratic in the derivatives δφ̇_α. When used to obtain correlation functions of ζ this last property reduces the number of n-point functions of the fields and their derivatives which must be computed.
With the guarantee provided by (2.28a)-(2.28b) that it is possible to write a purely local formula for ζ, the nonlocal terms in (2.29b) are harmless. For computations of n-point functions, which naturally take place in Fourier space, they merely become constant factors of k. Our numerical experiments suggest that Eqs. (2.29a)-(2.29b) may even be preferable to (2.28a)-(2.28b) because there are fewer cancellations between large contributions. This is especially noticeable in models where ζ is conserved at or after the end of inflation. Conservation relies on a delicate interplay between separate terms in ζ which may themselves be varying quite rapidly.
Comparison with the separate universe picture
The flat-gauge results for ϑ quoted in Eqs. (2.17a)-(2.17b) show that-up to second order in fluctuations, and in coordinates where the spatial metric is diagonal-the shift vector N j approaches zero on superhorizon scales [9,10,23]. In these coordinates the only surviving perturbation to the metric on superhorizon scales is the lapse α which can be absorbed into a shift of time.
After making this shift the metric is unperturbed. Therefore the equations for each matter species must be those of the homogeneous, unperturbed universe, up to corrections of order (k/aH) 2 , except with initial conditions displaced by the time shift necessary to remove α. When promoted to all orders in fluctuations this argument constitutes the separate universe approach [7,8,18,19,24,25]. The necessary decay of the shift vector N j on superhorizon scales to all orders in perturbation theory was shown by Weinberg [9,10] and later strengthened by Sugiyama, Futamase & Komatsu [23]. The conclusion is that superhorizon-sized regions evolve individually like an unperturbed universe.
This formalism can be used to study the behaviour of superhorizon-scale perturbations by comparing the behaviour of each quantity of interest on fixed spatial hypersurfaces drawn from our choice of slicing. To do so we must know how the background solutions, parametrized in terms of this slicing, change under a shift of their initial conditions [4]. Therefore, in the separate universe approach, choice of gauge is encoded as the choice of time variable [20].
Gauge transformations in the separate universe approach.-In this section we use the separate universe approach to compute the gauge transformation between δφ α and ζ. Versions of this calculation have been given before. Anderson et al. collected formulae valid to third-order on superhorizon scales, invoking the slow-roll expansion [3]. A derivation of the second-order gauge transformation was given in Ref. [4] using purely geometrical methods on the phase space of solutions to the background equations.
The flat slicing corresponds to hypersurfaces separated by equal amounts of expansion N, where N(t₁, t₂) = ln[a(t₂)/a(t₁)] measures the growth of the scale factor between times t₁ and t₂. The uniform-density slicing corresponds to hypersurfaces separated by equal intervals of ρ. In the separate universe approach, changing gauge from the flat to uniform density slicings corresponds to changing time variable from N to ρ.
Consider an initial spatially flat hypersurface on which the density can be written ρ(φ_α, φ̇_α). Define some fixed value ρ* which is smaller than ρ everywhere on the hypersurface of interest, and write Δρ = ρ* − ρ. At each point p on the hypersurface we evolve the background equations of motion (with initial conditions taken from their values at p) until the density reaches the constant value ρ*, and record the expansion ΔN which is accumulated. Because ρ varies over the slice ΔN will vary from point to point. Its variation δ(ΔN) represents a modulation det h ∼ e^{6δ(ΔN)} of the proper volume on the final slice of fixed density, and therefore we can identify ζ = δ(ΔN).
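As an illustration, the following is a minimal numerical sketch of this separate-universe recipe for a single field. The quadratic potential, initial data and tolerances are illustrative assumptions (units M_Pl = 1), not the numerical setup used in this paper.

```python
# Minimal delta-N sketch: evolve the background from perturbed initial
# conditions until the density falls to a fixed rho_star, record the
# accumulated expansion N, and read off zeta = delta(N).
import numpy as np
from scipy.integrate import solve_ivp

m = 1e-5
V = lambda phi: 0.5 * m**2 * phi**2
dV = lambda phi: m**2 * phi

def rho(phi, phidot):
    return 0.5 * phidot**2 + V(phi)

def rhs(t, y):
    phi, phidot, N = y
    H = np.sqrt(rho(phi, phidot) / 3.0)
    return [phidot, -3.0 * H * phidot - dV(phi), H]

def expansion_to_rho_star(phi0, rho_star):
    phidot0 = -dV(phi0) / (3.0 * np.sqrt(V(phi0) / 3.0))  # slow-roll initial velocity
    hit = lambda t, y: rho(y[0], y[1]) - rho_star          # stop on the uniform-density slice
    hit.terminal, hit.direction = True, -1
    sol = solve_ivp(rhs, (0.0, 1e9), [phi0, phidot0, 0.0],
                    events=hit, rtol=1e-10, atol=1e-12)
    return sol.y[2, -1]                                    # N accumulated to rho = rho_star

phi0, dphi = 16.0, 1e-3
rho_star = rho(15.0, 0.0)                                  # any value below rho on the slice
zeta = expansion_to_rho_star(phi0 + dphi, rho_star) - expansion_to_rho_star(phi0, rho_star)
print(f"zeta = delta(N) ~ {zeta:.3e}")
```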
Uniform-density gauge curvature perturbation.-If Δρ is not too large the expansion accumulated during this evolution can be written as the Taylor series

ΔN = (dN/dρ) Δρ + ½ (d²N/dρ²) (Δρ)² + ··· .

It varies over the initial slice because each term is a function of position p. If the variation δρ under changes of p is also not too large, then the variation in ΔN under a change of initial location follows by varying each term of this series. Because our interest lies in the gauge transformation at a fixed time we have neglected terms which vanish in the limit Δρ → 0, which corresponds to coincidence of the initial and final slices.
To obtain explicit expressions we require the derivatives dN/dρ and d²N/dρ², together with their variations. Up to this point our expressions apply for an arbitrary matter theory. Specializing to the case of canonical scalar fields appropriate for inflation, the variation δ(dN/dρ) can be evaluated explicitly. We also have the exact expression ρ = V/(1 − ε/3), from which the variation δρ can be computed. The result is the pair of expressions ζ₁^δN and ζ₂^δN. The first-order term ζ₁^δN agrees immediately with the local expression ζ₁^local given in Eq. (2.28a). Although ζ₂^δN is superficially different to ζ₂^local they can be made to agree using the first-order Hamiltonian constraint and the equation of motion for the background scalar field. This gives an explicit demonstration (assuming Einstein gravity) that the gauge transformation derived from the separate universe approach agrees with the one derived from traditional cosmological perturbation theory. In practice, if a local expression is required, the more compact form (2.28a)-(2.28b) is likely to be preferable.
Conclusions
In this paper we give a formula for the uniform-density gauge curvature perturbation written explicitly in terms of the scalar field fluctuation δφ_α defined on spatially-flat slices. This formula is needed to compute observable quantities from second-order perturbation theory, including the bispectrum ⟨ζ(k₁)ζ(k₂)ζ(k₃)⟩.
Our results can be written in different ways using the Hamiltonian constraint. In particular, although the expressions obtained directly from cosmological perturbation theory involve 'nonlocal' terms which depend on the inverse Laplacian ∂⁻²-and are therefore naïvely incompatible with the separate universe approach-we show that these terms can be removed using the constraints. After doing so the results of perturbation theory and the separate universe approach agree. Our final results, especially Eqs. (2.29a)-(2.29b), are compact, simple and can be used directly in numerical calculations. We have tested their validity using integrations of the two- and three-point functions ⟨δφ_α(k₁)δφ_β(k₂)⟩ and ⟨δφ_α(k₁)δφ_β(k₂)δφ_γ(k₃)⟩. Using these gauge transformations we confirm the expected behaviour of ⟨ζ(k₁)ζ(k₂)⟩ and ⟨ζ(k₁)ζ(k₂)ζ(k₃)⟩, including accurate conservation when all isocurvature modes become quenched.
Comparison with Christopherson et al.-While this paper was in preparation, a preprint was released by Christopherson, Nalson & Malik which also gives an explicit expression for ζ in terms of δφ α up to second order [5].
To aid comparison, we briefly list the similarities and differences between our calculations. First, Christopherson et al. adopt a different definition of density. Our definition, T_ab n^a n^b, gives ρ = π²/2N² + (∂φ)²/2a² + V expressed in coordinates adapted to the slicing, where π_α ≡ φ̇_α − N^m ∂_m φ_α. It corresponds to what Hwang & Noh called the normal frame [26]. Christopherson et al. define the density in what Hwang & Noh call the energy frame, giving ρ = π²/2N² − (∂φ)²/2a² + V, again in coordinates adapted to the slicing. When all wavenumbers correspond to superhorizon scales the spatial gradients decay and these expressions agree. Therefore, under the same circumstances, our definitions of the uniform density slicing will also agree.
Second, our definitions of the curvature perturbation are different. Christopherson et al. adopt the definition of Malik & Wands [6], in which the spatial metric is written (including all orders in perturbation theory) in the form g_ij = a²[(1 − 2ψ_MW)δ_ij + 2∂_i∂_j E + 2∂_(i F_j) + h_ij], where F_i is divergenceless, and h_ij is transverse and tracefree. Malik & Wands define the curvature perturbation to be ψ_MW. Our definition is ψ = (1/6) ln det(h_ij/a²), because it is this quantity which is known to be conserved on superhorizon scales [2,7,11]. The Malik-Wands definition ψ_MW is not equivalent to the determinant of h_ij unless E = F_i = h_ij = 0.
In that case the first-order parts of ψ and ψ_MW agree, and the second-order parts are related by ψ_{MW|2} = ψ₂ + 2(ψ₁)² [27]. Finally, we simplify our expressions using the Hamiltonian constraint, which Christopherson et al. refer to as the momentum equation. Christopherson et al. work only with cosmological perturbation theory, not the separate universe approach, and do not eliminate the nonlocal terms which appear in their expressions.
Homotopy invariant presheaves with framed transfers
The category of framed correspondences $Fr_*(k)$, framed presheaves and framed sheaves were invented by Voevodsky in his unpublished notes [12]. Based on the theory, framed motives are introduced and studied in [7]. The main aim of this paper is to prove that for any $\mathbb A^1$-invariant quasi-stable radditive framed presheaf of Abelian groups $\mathcal F$, the associated Nisnevich sheaf $\mathcal F_{nis}$ is $\mathbb A^1$-invariant whenever the base field $k$ is infinite of characteristic different from 2. Moreover, if the base field $k$ is infinite perfect of characteristic different from 2, then every $\mathbb A^1$-invariant quasi-stable Nisnevich framed sheaf of Abelian groups is strictly $\mathbb A^1$-invariant and quasi-stable. Furthermore, the same statements are true in characteristic 2 if we also assume that the $\mathbb A^1$-invariant quasi-stable radditive framed presheaf of Abelian groups $\mathcal F$ is a presheaf of $\mathbb Z[1/2]$-modules. This result and the paper are inspired by Voevodsky's paper [13].
To formulate two further theorems relating to the étale excision property, we need some preparation. Let S ⊂ X and S′ ⊂ X′ be closed subsets. Consider an elementary distinguished square formed by an étale morphism Π : X′ → X and an open subset V ⊂ X with V′ = Π⁻¹(V), where X and X′ are affine and k-smooth. Let S = X − V and S′ = X′ − V′ be closed subschemes equipped with reduced structures. Let x ∈ S and x′ ∈ S′ be two points such that Π(x′) = x. Let U = Spec(O_{X,x}) and U′ = Spec(O_{X′,x′}). Let π : U′ → U be the morphism induced by Π. We are now in a position to prove the following Theorem 2.15. For any A¹-invariant quasi-stable ZF_*-presheaf of abelian groups F the following statements are true: (1) under the assumptions of Theorem 2.9 the map i* : F(U) → F(V) is injective; (2) under the assumptions of Theorem 2.10 the corresponding map is an isomorphism; (3) the map induced by η is injective, where η : Spec(k(X)) → U is the canonical morphism; (3') the corresponding map is an isomorphism.
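For orientation, the shape of an elementary distinguished (Nisnevich) square assumed throughout is recalled below; this is a sketch of the standard definition, added here for the reader's convenience rather than taken from the original text.

```latex
% Elementary distinguished (Nisnevich) square: \Pi is etale, V \subset X is
% open, V' = \Pi^{-1}(V), and \Pi restricts to an isomorphism
% (X' - V')_{red} \to (X - V)_{red}.
\begin{array}{ccc}
V' & \longrightarrow & X' \\
\big\downarrow & & \big\downarrow{\scriptstyle \Pi} \\
V  & \longrightarrow & X
\end{array}
```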
Proof of Theorem 2.1. Firstly, (1) and (2) show that F|_{A¹} is a Zariski sheaf. Using (5) applied to X = A¹, one shows that for any open V in A¹ one has F_Nis(V) = F(V). Now consider the following Cartesian square of schemes. Evaluating the Nisnevich sheaf F_Nis on this square, we get a square of abelian groups whose corners include F_Nis(Spec(k(X))) and F_Nis(X). The map i*_{0,X} is plainly surjective. It remains to check its injectivity. The map (η × id)* is injective (apply (3')). As already mentioned in this proof, F_Nis(A¹_{k(X)}) = F(A¹_{k(X)}). Since F_Nis(Spec(k(X))) = F(Spec(k(X))), we see that the map i*_{0,k(X)} is an isomorphism. Thus the map i*_{0,X} is injective.
NOTATION AND CONVENTIONS
Notation 3.1. Given a morphism a ∈ Fr_n(Y, X), we will write a for the image of 1 · a in ZF_n(Y, X) and write [a] for the class of a in ZF_n(Y, X). Given a morphism a ∈ Fr_n(Y, X), we will write Z_a for the support of a (it is a closed subset in Y × Aⁿ which is finite over Y and determined by a uniquely). Also, we will often write (V_a, ϕ_a : V_a → Y × Aⁿ; g_a : V_a → X) for a representative of the morphism a (here (V_a, ρ : V_a → Y × Aⁿ, s : Z_a ֒→ V_a) is an étale neighborhood of Z_a in Y × Aⁿ).
Lemma 3.2.
If the support Z_a of an element a = (V, ϕ; g) ∈ Fr_n(X, Y) is a disjoint union of Z₁ and Z₂, then the element a determines two elements a₁ and a₂ in Fr_n(X, Y). It is easy to see that if there is a morphism a′ satisfying condition (1), then it is unique. In this case the pair (a, a′) is an element of ZF_n((Y, Y′), (X, X′)). For brevity we will write a for (a, a′). Lemma 3.4. Let i_Y : Y′ ֒→ Y and i_X : X′ ֒→ X be open embeddings. Let a ∈ Fr_n(Y, X). Let Z_a ⊂ Y × Aⁿ be the support of a. Set Z′_a = Z_a ∩ Y′ × Aⁿ. Then the following are equivalent, and one has an obvious equality in ZF_n((Y, Y′), (X, X′)). Lemma 3.6 (A disconnected support case). Let i_Y : Y′ ֒→ Y and i_X : X′ ֒→ X be open embeddings.
Then for any integer n ≥ 1 one has an equality as displayed. Proof. Let m ≥ 1 be an integer. The first equality follows from Corollary 4.2, the third one follows from Corollary 4.6, and the middle one is obvious.
There is a chain of equalities in ZF₁((U, U′), (U, U′)): here the first equality holds by Corollary 4.6, the second one holds by Lemma 3.6, the third one holds by Corollary 4.4, and the fourth one is obvious (replacement of neighborhoods). Continue the chain of equalities in ZF₁((U, U′), (U, U′)) as follows: here the first equality holds by Corollary 4.6, the second one holds by the definition of σ_U (see Notation 2.7), and the third one holds by Corollary 4.2. This proves the equality stated above. Combining that with the equality (5) for m = 2n + 1 we get the desired equality in ZF₁((U, U′), (U, U′)). Whence the proposition.
INJECTIVITY AND EXCISION ON AFFINE LINE
The aim of this section is to prove Theorems 2.9 and 2.10.
(2) both are unitary in Y and the leading coefficients equal one; then the stated equality holds in ZF₁(U, U).
Proof. One has a homotopy h_θ given by the polynomial G_θ. Its restriction to 0 × U and to 1 × U coincides with the morphisms (U × U, G₀; p₂) and (U × U, G₁; p₂) respectively. Whence the lemma.
Proof of Theorem 2.9. Under the assumptions of this theorem set
Then one has a chain of equalities in ZF₁(U, U): here the first equality is obvious, the second one holds by Lemma 5.1, and the third one holds by Proposition 4.7. Whence the theorem.
Then one has an equality
Proof of the corollary. The support Z_θ of the homotopy h_θ from the proof of Lemma 5.1 coincides with the vanishing locus of the polynomial G_θ. This yields the displayed equalities in ZF₁((U, U − S), (U, U − S)). In fact, the second equality here holds by Corollary 3.5. The first and the third equalities hold since for i = 1, 2 one has h_i = (U × U, G_i; p₂) in Fr₁(U, U).
To this end set
Recall that S ⊂ V is a closed subset. Take any big enough integer m ≥ 1 and find a unitary polynomial F_m(Y) of degree m satisfying the properties listed above. For that morphism one has equalities: here the first equality is obvious, and the second one follows from Corollary 5.2. Take a big enough integer n. The first equality was proven a few lines above, and the second one follows from Proposition 4.7. Whence equality (7) holds. Now find morphisms l ∈ ZF₁((U, U − S), (V, V − S)) and g ∈ ZF₁((V, V − S), (U, U − S)). Claim 5.3. Equality (8) holds for the morphisms l and g defined above.
Note firstly that one has a chain of equalities: the first equality holds by condition (iv′) and Corollary 4.4; the second one is obvious; the third one is equality (5) for m = 1 from the proof of Proposition 4.7; the fourth one is the definition of σ_V (see Definition 2.2 and Notation 2.7). Combining all of these, we get a chain of equalities which proves the claim. Whence the theorem.
EXCISION ON RELATIVE AFFINE LINE
Proof of Theorem 2.12.
INJECTIVITY FOR LOCAL SCHEMES
The main aim of this section is to prove Theorem 2.11. Let X ∈ Sm/k, let x ∈ X be a point, U = Spec(O_{X,x}), and let i : D ֒→ X be a closed subset. Under the notation of Theorem 2.11 we will construct an integer N and a morphism r ∈ ZF_N(U, X − D) satisfying the required equality. Let X′ ⊂ X be an open subset containing the point x and let D′ = X′ ∩ D. Clearly, if we solve a similar problem for the triple U, X′ and X′ − D′, then we solve the problem for the original triple U, X and X − D. So, we may shrink X appropriately. In particular, we may assume that X is irreducible and the canonical sheaf ω_{X/k} is trivial, i.e. isomorphic to the structure sheaf. Shrinking X more (and replacing D with its trace), we can find a commutative diagram of the form (30), where p : X → B is an almost elementary fibration in the sense of [6], B is an affine open subset of the projective space P^{d−1}_k, π is a finite surjective morphism, and p|_D is a finite morphism. The canonical sheaf ω_{X/k} remains trivial. Since p is an almost elementary fibration, it is in particular a smooth morphism. The base change of diagram (30) gives a commutative diagram of the same form. Now regard X as an affine A¹ × U-scheme via the morphism Π, and also regard X as an X-scheme via p_X.
Remark 7.2. By Lemma 7.1 the class [O_X] of the structure sheaf of the subscheme X defines a morphism in the category Kor.
Below we lift these elements, and the equalities between them, to the category ZF_*(k).
and a morphism r : V → X such that: (iii) the morphism r is a U -scheme morphism if V is regarded as a U -scheme via the morphism pr U • ρ and X is regarded as a U -scheme via the morphism p U .
Let A be the unique matrix which converts the second free basis to the first one and let J := det(A) be its determinant. Replacing ϕ₁ by J⁻¹ϕ₁, we may and will assume below in this section that J = 1 ∈ k[W]. This is useful for applying Theorem ?? below.
Claim 7.10. One has an equality
By Remark 7.4 and Theorem 12.1, for the second summand one has the asserted equality. Thus one obtains a chain of equalities. Whence the claim, and whence Theorem 2.11.
PRELIMINARIES FOR THE INJECTIVE PART OF THE ÉTALE EXCISION
Let S ⊂ X and S′ ⊂ X′ be closed subsets. To prove Theorem 2.13, it suffices to find morphisms a• and b•_G as below; then the morphisms a = in′ • a• and b_G = in • b•_G satisfy property (12). Thus if we shrink X and X′ in such a way that properties (1)-(4) are fulfilled and find appropriate morphisms a_Y and b_Y^G, then we find a and b_G satisfying condition (12). Remark 8.1. One way of shrinking X and X′ such that properties (1)-(4) are fulfilled is as follows. Replace X by an affine open X• containing x and then replace X′ by (X′)• := Π⁻¹(X•). The shrunk scheme X′ will be regarded below as a B-scheme via the morphism q • Π.
and a morphism r : V → X satisfying the properties listed there. Applying item (c), we get another inclusion. Below we lift these elements, and the equalities between them, to the category ZF_*(k).
9. REDUCING THEOREM 2.13 TO PROPOSITIONS 8.6 AND 8.9

To construct a morphism b ∈ Fr_N(U, X), we first construct its support in U × A^N for an integer N, then we construct an étale neighborhood of the support in U × A^N, then one constructs a framing of the support in the neighborhood, and finally one constructs b itself. In the same manner we construct a morphism a ∈ Fr_N(U, X′) and a homotopy H ∈ Fr_N(A¹ × U, X) between Π • a and b. Using the supports, one checks that the relevant restrictions factor through X − S. Moreover, we are able to work with morphisms of pairs. We will use systematically in this section the data from Proposition 8.6. The details are given below in this section. Under the assumptions and notation of Proposition 8.6 and Remark 8.3, set V′ = X′ ×_B V. So we have a Cartesian square where r′ and Π′ are the projections to the first and second factors respectively. The section s : X → V defines a section s′ = (id, s) : X′ → V′ of r′. For brevity, we will use shortened notation below. Let X ⊂ B × A^N be the closed inclusion from Proposition 8.6. Taking the base change of the latter inclusion by means of the morphism U → B, we get a closed inclusion U × X ⊂ U × A^N.
Under the notation from Proposition 8.6 and Proposition 8.9, we now construct a morphism b ∈ Fr_N(U, X). Let Z₀ ⊂ U × X be the closed subset from Proposition 8.9. Then one has the closed inclusions Z₀ ⊂ U × X ⊂ U × A^N. Let in₀ : Z₀ ⊂ U × X be the closed inclusion. Define an étale neighborhood of Z₀ in U × A^N as follows. We will sometimes write below (Z₀, U × V, p*_V(ϕ), (id × r)*(h₀); pr_X • (id × r)) to denote the morphism b′.
For brevity, we will sometimes write
; pr_X • (id × r)). Under the notation of Propositions 8.6 and 8.9, let us now construct a morphism H_θ ∈ Fr_N(A¹ × U, X). Let Z_θ ⊂ A¹ × U × X be the closed subset from Proposition 8.9. Then one has closed inclusions Z_θ ⊂ A¹ × U × X ⊂ A¹ × U × A^N. Let in_θ : Z_θ ⊂ A¹ × U × X be the closed inclusion. Define an étale neighborhood of Z_θ in A¹ × U × A^N as follows. Definition 9.4. Under the notation of Propositions 8.6 and 8.9 we set the corresponding frame; we will sometimes write below (Z_θ, …
Lemma 9.5. One has equalities H
Proof. The first equality is obvious. To check the second one, consider the refinement map, regarded as a morphism of étale neighborhoods. Refining the étale neighborhood of Z₁ in the definition of H₁ by means of that morphism, we get an N-frame H′₁ = H₁, which has the displayed form. Note that H′₁ ∈ Fr_N(U, X). The following lemma follows from Lemma 3.4 and Remark 8.10. Lemma 9.6. The morphisms a|_{U−S}, b|_{U−S}, H_θ|_{A¹×(U−S)} and Π|_{X′−S′} run inside X′ − S′, X − S, X − S and X − S respectively.
By the preceding lemma the morphisms a, b, H_θ and Π define morphisms of pairs, where j : (X − S, X − S) ֒→ (X, X − S) is the natural inclusion. By the latter comments and Corollary 9.7 one gets an equality of the required form.
PRELIMINARIES FOR THE SURJECTIVE PART OF THE ÉTALE EXCISION
Let S ⊂ X and S′ ⊂ X′ be closed subsets. Consider an elementary distinguished square with affine k-smooth X and X′. Let S = X − V and S′ = X′ − V′ be closed subschemes equipped with reduced structures. Let x ∈ S and x′ ∈ S′ be two points such that Π(x′) = x. To prove Theorem 2.14 it suffices to find morphisms a ∈ ZF_N((U, U − S), (X′, X′ − S′)) and a companion morphism b as constructed below. Replace X by an affine open neighborhood in : X• ֒→ X of the point x. Replace X′ by (X′)• := Π⁻¹(X•) and write in′ : (X′)• ֒→ X′ for the corresponding inclusion. In this section we use the conventions and notation from Section 8.
If, furthermore, j : X′ ֒→ B × A^N is a closed embedding of B-schemes, then one has [N(j)] = (N − 1)[O_{X′}] in K₀(X′), where N(j) is the normal bundle to X′ associated with the embedding j.
Thus increasing the integer N, we may assume that the normal bundle N(j) is isomorphic to the trivial bundle O^{N−1}_{X′}. Definition 10.3. Let x ∈ S, x′ ∈ S′ be such that Π(x′) = x. We put U = Spec(O_{X,x}). There is an obvious morphism ∆′ = (id, can) : U′ → U′ ×_B X′. It is a section of the projection p_{U′} : U′ ×_B X′ → U′.
an equality of closed subschemes) and G
(e) the morphism (pr_U, F) : U × X′ → U × A¹ is finite surjective, and hence the closed subscheme Z₁ := F⁻¹(0) ⊂ U × X′ is finite flat and surjective over U. Remark 10.6. Item (d) yields the inclusions displayed above. Below we will lift these elements, and the relations between them, to the category ZF_*(k).
11. REDUCING THEOREM 2.14 TO PROPOSITIONS 10.1 AND 10.5

We suppose in this section that S ⊂ X is k-smooth. To construct a morphism a ∈ Fr_N(U, X′), we first construct its support in U × A^N for an integer N, then we construct an étale neighborhood of the support in U × A^N, then one constructs a framing of the support in the neighborhood, and finally one constructs a itself. In the same fashion we construct a morphism b ∈ Fr_N(U′, X′) and a homotopy H ∈ Fr_N(A¹ × U′, X′) between a • π and b. Using the supports, one checks that the relevant restrictions factor as required. Moreover, we are able to work with morphisms of pairs. In this section we will use systematically the data from Propositions 10.1 and 10.5 and Notation 10.4. Details are given below in this section.
Let X′ ⊂ B × A^N be the closed inclusion from Proposition 10.1. Taking the base change of the latter inclusion along the morphism U → B, we get a closed inclusion
Under the notation of Proposition 10.1 and Proposition 10.5, let us now construct a morphism b ∈ Fr_N(U′, X′). Let Z′_0 ⊂ U′ × X′ be the closed subset from Proposition 10.5. Then one has closed inclusions
Define an étale neighborhood of Z′_0 in U′ × A^N as follows:
We will write

Definition 11.1. Under the notation of Propositions 10.1 and 10.5 set
To construct the desired morphism b ∈ Fr_N(U′, X′), we need to slightly modify the function p^*_{V′′}(ϕ′_1) in the framing of Z′_0. By Proposition 10.1 and item (b) of Proposition 10.5, the functions
are two free bases of the k[((id × s′′) • ∆′)(U′)]-module I/I^2. Let J ∈ k[U′]^× be the Jacobian (i.e. the determinant) of the unique matrix A ∈ M_N(k[U′]) changing the first free basis to the second one. There is an element λ ∈ k[U] such that λ|_{S∩U} = J|_{S′∩U′} (we identify here S′ ∩ U′ with S ∩ U via the morphism π|_{S′∩U′}). Clearly, λ ∈ k[U]^×.
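To unwind why J and λ are units (this explanation is ours): write e_1, …, e_N and f_1, …, f_N for the two free bases of I/I^2 in question; then

f_i = Σ_j A_{ij} e_j,   i = 1, …, N,

for a unique matrix A ∈ M_N(k[U′]), and since A carries one free basis to another it is invertible, so J = det(A) ∈ k[U′]^×. The element λ is a unit because U is local with closed point x ∈ S ∩ U and λ does not vanish at x: its restriction λ|_{S∩U} = J|_{S′∩U′} is invertible.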
Under the notation of Proposition 10.1 and Proposition 10.5, let us now construct a morphism a ∈ Fr_N(U, X′). Let Z_1 ⊂ U × X′ be the closed subset from Proposition 10.5. Then one has closed inclusions Z_1 ⊂ U × X′ ⊂ U × A^N. Let in_1 : Z_1 ⊂ U × X′ be the closed inclusion. Define an étale neighborhood of Z_1 in U × A^N as follows:

Definition 11.3. Under the notation of Proposition 10.1 and Proposition 10.5 set
We will sometimes write (Z_1, U × V′′, ψ, (id × r′′)^*(F); pr_{X′} • (id × r′′)) to denote a.
Proof. The first equality is obvious. Let us prove the second one. By Proposition 10.5 one has h′_1 = (π × id)^*(F). Thus one has a chain of equalities in Fr_N(U′, X′):

The following lemma follows from Lemma 3.4 and Remark 10.6.
Lemma 11.6. The morphisms a|_{U−S}, b|

By the preceding lemma the morphisms a, b, H_θ and π define morphisms
Corollary 11.7. There is a relation [[a]] • [[π]] = [[b]] in ZF
Proof of Corollary 11.7. In fact, by Corollary 3.5 one has a chain of equalities

Reducing Theorem 2.14 to Propositions 10.1 and 10.5. The support Z′_0 of b is the disjoint union ∆′(U′) ⊔ G′. Thus, by Lemma 3.6 one has
where j is the natural inclusion. By the latter comments and Corollary 11.7 one gets
To prove equality (18), and hence to prove Theorem 2.14, it remains to check that
Let U′′ be the henselization of U′ at S′ ∩ U′ and let π′ : U′′ → U′ be the structure morphism. Recall that S′ ∩ U′ is essentially k-smooth. Thus the pair (U′′, S′ ∩ U′) is a henselian pair with an essentially k-smooth closed subscheme S′ ∩ U′. Recall that one has equality (21). Thus by Theorem 12.2 one has an equality
Applying Theorem 2.13 to the morphism π′ : U′′ → U′, we see that for an integer M ≥ 0 one has an equality
Thus,
With these in hand the following equality holds:
The latter equality is of the form (18). Whence Theorem 2.14.
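A remark on the additivity used above (this recollection is ours; the precise statement is that of Lemma 3.6): for linear framed correspondences we expect the defining relation to say that a framed correspondence whose support is a disjoint union decomposes as a sum,

(Z ⊔ Z′, W, φ_1, …, φ_N; g) = (Z, W − Z′, φ; g) + (Z′, W − Z, φ; g) in ZF_N,

where the two summands are obtained by restricting the étale neighborhood. Applied to the decomposition Z′_0 = ∆′(U′) ⊔ G′, this writes the class of b as a sum of a term supported on ∆′(U′) and a term supported on G′.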
12. APPENDIX
Theorem 12.1. Let W be an essentially k-smooth local k-scheme and let N ≥ 1 be an integer. Let s : W → W × A^N be a section of the projection pr_W : W × A^N → W. Let ρ be the canonical morphism to W × A^N from the henselization of W × A^N at s(W), and let s^h be the section of it induced by s (in particular, s = ρ • s^h). Let X be a k-smooth scheme.
If W^• ⊂ W is Zariski open and X^• ⊂ X is Zariski open and g(s^h
To prove these two theorems, we need some technical lemmas.
a commutative diagram (30) of morphisms over B satisfying the following conditions:
(i) j is an open immersion dense at each fibre of q, and X = X̄ − X_∞ (we write X̄ for the source of q);
(ii) q is smooth and projective, and all of its fibres are geometrically irreducible of dimension one;
(iii) q_∞ is a finite flat morphism all of whose fibres are non-empty;